Volume 29, Issue 4

Airiti Library

Pages:

191-210

Title

Evaluating Recovery Rate Differences between Vegetation Structure and Spectral Indices in Large-Scale Landslides Using Time Series Remote Sensing Data

Author

Cheng-En Song, Su-Fen Wang, Yi-Chin Chen

Abstract

This study developed a canopy-structure estimation model based on time-series vegetation spectral variables to detect vegetation recovery on large-scale landslides and to compare recovery rates between vegetation indices and canopy structure. The analysis showed that the machine-learning model simulated canopy structure effectively, achieving an R² above 0.9 between simulated and observed values, which enables prediction of vegetation structural change across broad spatial and temporal scales. Recovery trajectories were highly variable, and only about 14% of the landslide surface is expected to recover to a mature forest state. Vegetation-index recovery is prone to saturation effects and therefore tends to overestimate recovery rates, suggesting that well-recovered vegetation could reach a mature-forest level within 15 years, whereas canopy structure may require several decades to centuries to develop fully. Vegetation indices are thus suited to assessing early successional stages; long-term restoration monitoring must also consider structural change, and integrating spectral and structural information will enable a more comprehensive evaluation of restoration dynamics.
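
The modelling step described in this abstract can be sketched as follows, assuming a scikit-learn random-forest regressor as a stand-in for the paper's unspecified machine-learning model; the arrays, sample sizes, and variable names are hypothetical placeholders, not the authors' data or pipeline.

```python
# Sketch: regress a canopy-structure metric on per-pixel time series of a
# spectral index, then report R2 between simulated and observed values.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels, n_epochs = 2000, 12                  # hypothetical sample size
X = rng.random((n_pixels, n_epochs))           # e.g. NDVI time series per pixel
y = 30.0 * X.mean(axis=1) + rng.normal(0.0, 0.5, n_pixels)  # synthetic canopy height

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print(f"R2 = {r2_score(y_te, model.predict(X_te)):.2f}")  # the paper reports > 0.9
```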

Keywords

Time-series, Vegetation index, Canopy structure, Machine learning, Landslide restoration

Attachment

Airiti Library

https://www.airitilibrary.com/Article/Detail/10218661-N202501080014-00001/

Remarks

N/A

Pages:

211-230

Title

Deep Learning-based Image Feature Matching for UAV Visual Positioning

Author

Lai-Han Zou, Chao-Hung Lin

Abstract

When the positioning and orientation equipment on an unmanned aerial vehicle (UAV) is unavailable, visual positioning can derive the vehicle's exterior orientation by spatial resection using only conjugate points between images. This study proposes a visual positioning workflow and addresses the sharp drop in matching success that deep-learning feature matching suffers under in-plane rotation between images. Data augmentation with randomly rotated images is incorporated: feature points are extracted with a feature-extraction model and then fed to the matching model for training. In addition, interpolation and learnable-parameter methods are proposed to replace the descriptors originally used for matching with traditional feature descriptors, giving the matching rotational invariance. After feature points are extracted and matched, conventional photogrammetric spatial resection solves for the six exterior-orientation elements of the camera mounted on the vehicle, thus positioning the vehicle. With the proposed workflow, the best achieved horizontal position error is 3 m and the best achieved attitude-angle error is 1.3°.
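
The closing step of this workflow, recovering the six exterior-orientation elements by space resection from matched points, can be sketched as below using OpenCV's PnP solver as one possible stand-in for the photogrammetric adjustment; the learned matching itself is omitted, and every coordinate, intrinsic value, and pose here is a hypothetical placeholder.

```python
# Sketch: space resection from 3D-2D correspondences via cv2.solvePnP.
import cv2
import numpy as np

# Hypothetical ground points (3D object coordinates, metres).
object_pts = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0],
                       [50, 50, 10], [25, 10, 5], [10, 40, 8]], dtype=np.float64)
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 480.0],
              [0.0, 0.0, 1.0]])                 # assumed camera intrinsics

# Fabricate "matched" image points by projecting through a known pose.
rvec_true = np.array([0.10, -0.05, 0.20])
tvec_true = np.array([-20.0, 10.0, 100.0])
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)

# Resection: recover the six exterior-orientation elements
# (three rotations + three translations) from the correspondences.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
print("perspective centre:", (-R.T @ tvec).ravel())  # should match the true pose
```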

Keywords

Deep Learning, Feature Extraction, Image Matching, Visual Positioning, Rotational Invariance

Attachment

Airiti Library

https://www.airitilibrary.com/Article/Detail/10218661-N202501080014-00002/

Remarks

N/A

Pages:

231-240

Title

Land-use and Land-cover Classification from Sentinel-2 Imagery Using Deep Learning Algorithms

Author

Ming-Lun Lu

Abstract

Land use and land cover (LULC) maps are essential foundational data for various landscape planning and resource management applications. Convolutional neural networks (CNNs), a deep learning method, can automatically extract features from remote sensing imagery and efficiently generate LULC maps. In recent years, CNNs have emerged as a widely recognized technique for image classification. This study utilized Sentinel-2 satellite imagery to construct a CNN model with a seven-layer architecture for LULC classification and compared its performance with the random forest (RF) machine learning algorithm. The results indicate that the CNN model outperformed RF, achieving an overall accuracy of 89% and a kappa coefficient of 0.84, compared to 87% and 0.81, respectively. Among the nine LULC categories, most classifications reached acceptable levels, with the exception of grasslands, fallow rice fields, and agricultural facilities. Overall, these findings demonstrate the potential of combining CNNs with satellite imagery for large-scale LULC mapping.
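
As an illustration only, a small Keras patch classifier of the same general kind is sketched below; the paper's actual seven-layer architecture is not specified in the abstract, so the layer sizes, the 16 × 16 patch size, and the 13-band input are assumptions.

```python
# Sketch: a compact CNN that classifies multispectral patches into 9 LULC classes.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(16, 16, 13)),          # hypothetical patch, 13 S2 bands
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(9, activation="softmax"),     # nine LULC categories
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Overall accuracy and the kappa coefficient reported in the abstract would then be computed from the resulting confusion matrix, e.g. with sklearn.metrics.cohen_kappa_score.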

Keywords

Remote sensing, Convolutional neural networks, Random forest, Mapping

Attachment

Airiti Library

https://www.airitilibrary.com/Article/Detail/10218661-N202501080014-00003/

Remarks

N/A

Pages:

241-254

Title

Discussion on the Relationship between the Distribution of Yushan Cane and Topography in the Hehuan Mountain Area

Author

Chieh-Fang Cheng, Hsin-Ying Feng

Abstract

The Hehuan Mountain area is extensively covered by Yushan Cane, whereas neighboring areas at similar altitudes do not necessarily show widespread cane distribution. Unsupervised classification of land cover was therefore performed on Formosat-5 satellite images of the Hehuan Mountain area, the classification results were evaluated, and terrain analysis was carried out with a 20 m digital terrain model. The study found that the altitude range of Yushan Cane agrees with previous research, mainly between 3100 and 3600 m, and that cane-covered areas have gentler slopes than coniferous forest. The analysis also showed that Yushan Cane in the study area is mainly distributed on eastern and southeastern slopes, differing from previous studies that reported a southern-slope preference. Yushan Cane occurs from about 1800 m upward in the Hehuan Mountain area, but the proportion of cane-covered area increases sharply above 3000 m, mainly on gentler terrain such as mountain tops and ridgelines. In summary, besides altitude, slope is a key factor influencing the distribution of Yushan Cane in the Hehuan Mountain area.
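
A minimal sketch of the two processing steps named above, assuming k-means stands in for the unspecified unsupervised classifier and that slope and aspect are derived from the 20 m DTM by finite differences; all arrays are random stand-ins for the Formosat-5 bands and the terrain model, and the aspect formula follows one common convention.

```python
# Sketch: unsupervised land-cover classification plus slope/aspect analysis.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
bands = rng.random((400, 400, 4))                  # stand-in Formosat-5 bands
labels = (KMeans(n_clusters=6, n_init=10, random_state=0)
          .fit_predict(bands.reshape(-1, 4))
          .reshape(400, 400))                      # unsupervised cover classes

dtm = rng.random((400, 400)) * 1000.0              # stand-in 20 m DTM (metres)
dzdy, dzdx = np.gradient(dtm, 20.0)                # gradients at 20 m spacing
slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
aspect_deg = (np.degrees(np.arctan2(dzdx, -dzdy)) + 360.0) % 360.0

# Cross-tabulate cover classes against slope/aspect to relate the cane
# distribution to terrain, as in the analysis above.
for c in range(6):
    print(c, round(float(slope_deg[labels == c].mean()), 1))
```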

Keywords

Hehuan Mountain, Yushan Cane, Satellite Images, Digital Terrain Model, Unsupervised Classification

Attachment

Airiti Library

https://www.airitilibrary.com/Article/Detail/10218661-N202501080014-00004/

Remarks

N/A
