Vol. 28, No. 4

Airiti Library

Pages:

209-226

Title

Applying Deep Learning to Automatically Detect Building Changes from True Orthoimages in Different Periods

Author

Chia-Chang Hsu, Shih-Hong Chio

Abstract

The change of urban buildings is an important factor influencing urban development, so it is particularly important for urban planners to understand building changes in the urban environment efficiently and quickly. However, most building monitoring operations still rely heavily on manual image interpretation, which is both time-consuming and labor-intensive. Therefore, this study uses the MS-FCN and U-Net deep learning models to assist in detecting building change information in the Shezi Island area of Taipei City from true orthoimages of different periods. In the first stage, building recognition with the MS-FCN model, a DSM (digital surface model) and a DHM (digital height model) were added to explore the benefit of elevation information. The results show that, compared with using only the true orthoimages, adding elevation information from the DSM or DHM improves the model's building recognition, reaching F1-scores of 87.16% and 87.65%, respectively. In the second stage, building change detection, the two epochs of true orthoimages may contain small registration errors, so the training data were randomly shifted to train a U-Net model that resists such errors; the resulting model achieves an F1-score of 71.63%. The results demonstrate the feasibility of applying deep learning with high-resolution true orthoimages and the DHM to assist building change detection operations.
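The two ideas in this abstract that generalize well are the random shifting of training samples to tolerate registration error and the F1-score used to grade both stages. A minimal NumPy sketch, not the authors' code: the function names and the ±3-pixel shift range are our assumptions.

```python
import numpy as np

def random_shift(image, label, max_shift=3, rng=None):
    """Shift the image a few pixels while leaving the label in place,
    simulating the registration error between two orthoimage epochs."""
    rng = rng or np.random.default_rng()
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(image, (dy, dx), axis=(0, 1)), label

def f1_score(pred, truth):
    """F1-score for binary building masks (1 = building pixel)."""
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Training on such shifted pairs forces the network to score a prediction as correct even when the two epochs are a few pixels out of alignment.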

Keywords

Building Recognition, Building Change Detection, Deep Learning, Digital Surface Model, Digital Height Model

Link

Airiti Library

https://www.airitilibrary.com/Article/Detail/10218661-N202312280009-00001/

Remarks

N/A

Pages:

227-238

Title

Development of Stereo Visual Odometry Based on Stepwise Bundle Adjustment

Author

Guan-Ming Huang, Yi-Hsing Tseng

Abstract

In this research, we develop stereo visual odometry based on a stepwise bundle adjustment algorithm, using a self-built mobile platform that carries a calibrated dual-camera system to capture stereo image pairs. The coplanarity and collinearity conditions from photogrammetry are used to eliminate erroneous feature points after image matching. To improve the stability of the feature points, the concept of circular matching is added to the algorithm: only the feature points common to all four images of the previous and current stereo pairs are kept. Finally, the stepwise bundle adjustment solves the relative motion between the four images captured at consecutive epochs, and combining the results of every station reconstructs the whole trajectory of the mobile system. The experiments include an indoor and an outdoor scenario: the indoor site is the ground floor of the Department of Geomatics building at NCKU, and the outdoor site is the square in front of the NCKU Museum. The results show that the drift ratios of the two scenarios are smaller than 1% and 1.6%, respectively.
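The coplanarity condition used above to reject wrong matches reduces to a scalar triple product: for a correct match, the stereo baseline and the two image rays lie in one plane. A minimal sketch under assumed conventions (rays in camera coordinates, `R` the rotation between the cameras, `b` the baseline; the function names are ours):

```python
import numpy as np

def coplanarity_residual(ray_left, ray_right, R, b):
    """Scalar triple product b . (ray_left x R @ ray_right).
    It vanishes when the baseline and the two image rays are coplanar,
    i.e. when both rays point at the same object point."""
    return float(np.dot(b, np.cross(ray_left, R @ ray_right)))

def filter_matches(matches, R, b, tol=1e-3):
    """Keep only the matches whose coplanarity residual is below tol."""
    return [(xl, xr) for xl, xr in matches
            if abs(coplanarity_residual(xl, xr, R, b)) < tol]
```

In practice the threshold depends on image noise and calibration quality; surviving matches then feed the stepwise bundle adjustment.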

Keywords

Stereo Visual Odometry, Coplanarity Condition, Collinearity Condition, Stepwise Bundle Adjustment, Circular Matching

Link

Airiti Library

https://www.airitilibrary.com/Article/Detail/10218661-N202312280009-00002/

Remarks

N/A

Pages:

239-254

Title

Using Binary Logistic Regression and Machine Learning Approach to Model Injury Severity in Autonomous Vehicle Crashes

Author

Pei-Fen Kuo, Wei-Ting Hsu, Hung-Ruei Lin

Abstract

Autonomous vehicles have recently gained increasing popularity, and autonomous driving can reduce crashes caused by human error and improve traffic safety. However, most existing studies on autonomous driving safety used a single method and rarely considered off-road environmental factors related to crash severity. Therefore, four research models (association rules, decision tree, random forest, and logistic regression) were used and compared in this study. The study data are the California self-driving accident reports from 2019 to 2021 (266 cases); the variables include the self-driving car manufacturer, location, collision type, vehicle movement (going straight, stopped, turning, accelerating or decelerating), and severity. In addition, the number of various points of interest (POIs) near each crash location was summarized from OpenStreetMap. The results show that crashes involving autonomous cars from emerging manufacturers tend to be less severe, and that the specific damaged area of the vehicle, the movement preceding the collision, and the density of POIs (commercial, traffic) are also related to the severity level. Future research could analyze the safety of different manufacturers' self-driving modes, reinforce vehicle body design, or explore the influence of specific POIs on autonomous vehicle safety.
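Of the four models compared, association rules are the simplest to illustrate: a rule antecedent → consequent is scored by its support, confidence and lift over the crash records. A small self-contained sketch; the item names in the example are invented for illustration, not taken from the California dataset.

```python
def rule_metrics(records, antecedent, consequent):
    """Support, confidence and lift of the rule antecedent -> consequent,
    where each record is the set of items describing one crash."""
    n = len(records)
    n_ante = sum(1 for r in records if antecedent <= r)   # records matching the antecedent
    n_cons = sum(1 for r in records if consequent <= r)   # records matching the consequent
    n_both = sum(1 for r in records if (antecedent | consequent) <= r)
    support = n_both / n
    confidence = n_both / n_ante if n_ante else 0.0
    lift = confidence / (n_cons / n) if n_cons else 0.0   # >1 means positive association
    return support, confidence, lift
```

A lift above 1 indicates the antecedent makes the consequent more likely than its base rate, which is how factor/severity associations are read off.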

Keywords

Autonomous Vehicle Crash Severity, Point of Interest (POI), Association Rules, Decision Tree, Random Forest

Link

Airiti Library

https://www.airitilibrary.com/Article/Detail/10218661-N202312280009-00003/

Remarks

N/A

Pages:

255-271

Title

Research on Multiple Stitching Methods for Crack Image Comparison

Author

Ching-Wei Chuang, Szu-Pyng Kao, Feng-Liang Wang, Jhih-Sian Lin

Abstract

The use of image processing for crack detection has become common in recent years, and drones are frequently employed to assist with capturing images. By stitching these images together, a complete high-resolution image of the cracks can be obtained. Stitching methods can be broadly categorized into traditional and learning-based approaches. Traditional methods rely heavily on the selected feature points; in scenes with few features or low resolution, stitching performance may degrade and even lead to stitching failures. In recent years, learning-based methods have gained popularity by harnessing the powerful feature extraction capabilities of convolutional neural networks (CNNs). In this study, we adopted the depth-aware multi-grid deep homography estimation network proposed by Nie et al. (2021) and trained it specifically on cracks. Through experiments, we identify the stitching method best suited to continuous images of concrete cracks against monotonous backgrounds.
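Whatever estimates the homography, traditional matching or a deep network, applying it is the same projective algebra: a 3×3 matrix H maps pixels of one image into the frame of the other before blending. A NumPy sketch (the function name is ours, not from the paper):

```python
import numpy as np

def warp_points(H, points):
    """Map Nx2 pixel coordinates through the 3x3 homography H."""
    pts = np.asarray(points, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coordinates
    mapped = homog @ H.T                               # apply H to every point
    return mapped[:, :2] / mapped[:, 2:3]              # divide by third coordinate
```

Multi-grid variants such as Nie et al.'s estimate one such H per mesh cell instead of a single global matrix, which is what lets them cope with depth variation across the scene.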

Keywords

Bridge Inspection, Drones, Traditional Stitching Methods, Deep Learning Stitching Methods

Link

Airiti Library

https://www.airitilibrary.com/Article/Detail/10218661-N202312280009-00004/

Remarks

N/A
