Landing Zone Identification for Autonomous UAV Applications Using Fused Hyperspectral Imagery and LIDAR Point Clouds
Times Cited: 0
Authors:
Lane, Sarah [1]; Kira, Zsolt [2]; James, Ryan [1]; Carr, Domenic [1]; Tuell, Grady [3]
Affiliations:
[1] Georgia Tech Res Inst, Electroopt Syst Lab, 925 Dalney St, Atlanta, GA 30332 USA
[2] Georgia Tech Res Inst, Aerosp Transportat & Adv Syst Lab, 250 14th St, Atlanta, GA 30332 USA
[3] 3D Ideas LLC, 651 North Main St, Madison, GA 30650 USA
Source:
Keywords:
multi-modal data fusion;
hyperspectral imagery;
LIDAR;
autonomous UAV;
DOI:
10.1117/12.2305136
Chinese Library Classification (CLC):
TP7 [Remote Sensing Technology]
Subject Classification Codes:
081102; 0816; 081602; 083002; 1404
Abstract:
Multi-modal data fusion for situational awareness is of interest because fused data can provide more information than the individual modalities alone. However, many questions remain, including which data are beneficial, which algorithms perform best or fastest, and where in the processing pipeline data should be fused. In this paper, we explore some of these questions through a processing pipeline designed for multi-modal data fusion in an autonomous UAV landing scenario, assessing landing zone identification methods using two data modalities: hyperspectral imagery and LIDAR point clouds. Using hyperspectral imagery and LIDAR data from two datasets, one of Maui and one of a university campus, we assess the accuracies of different landing zone identification methods, compare rule-based and machine learning based classifications, and show that, depending on the dataset, fusion does not always increase performance. However, we show that machine learning methods can be used to ascertain the usefulness of the individual modalities and their resulting attributes when used to perform classification.
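Note: the abstract's point that machine learning can be used to gauge the usefulness of each modality's attributes can be illustrated with a minimal sketch. The code below is not the authors' pipeline; it assumes feature-level fusion by concatenating hypothetical hyperspectral-derived attributes (NDVI-like band ratios) with hypothetical LIDAR-derived attributes (height, slope, roughness) on synthetic data, and stands in a scikit-learn random forest whose feature importances indicate which modality's attributes drive the landing-zone classification.

# Hypothetical sketch only (not the authors' method): feature-level fusion of
# hyperspectral and LIDAR attributes, with a random forest used both to classify
# candidate landing zones and to gauge each modality's contribution.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000  # synthetic samples standing in for co-registered pixels/points

# Hypothetical hyperspectral-derived attributes (e.g., NDVI-like band ratios).
hsi_features = rng.normal(size=(n, 3))
# Hypothetical LIDAR-derived attributes (e.g., height above ground, slope, roughness).
lidar_features = rng.normal(size=(n, 3))

# Feature-level fusion: concatenate attributes from both modalities per sample.
X = np.hstack([hsi_features, lidar_features])
names = ["ndvi", "band_ratio_1", "band_ratio_2", "height", "slope", "roughness"]

# Synthetic "safe landing zone" label, driven mostly by the LIDAR-style attributes,
# purely to make the example runnable.
y = ((lidar_features[:, 1] < 0.2) & (lidar_features[:, 2] < 0.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("held-out accuracy:", clf.score(X_te, y_te))
# Per-attribute importances suggest which modality's attributes carry the classification.
for name, imp in sorted(zip(names, clf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")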
Pages: 12