Generating Bird's Eye View from Egocentric RGB Videos
Cited by: 2
Authors: Jain, Vanita [1]; Wu, Qiming [2]; Grover, Shivam [1]; Sidana, Kshitij [1]; Chaudhary, Gopal [1]; Myint, San Hlaing [3]; Hua, Qiaozhi [4]
Affiliations:
[1] Bharati Vidyapeeths Coll Engn, New Delhi, India
[2] China Mobile Hangzhou Informat Technol Co Ltd, Hangzhou, Peoples R China
[3] Waseda Univ, Global Informat & Telecommun Inst, Tokyo, Japan
[4] Hubei Univ Arts & Sci, Comp Sch, Xiangyang, Peoples R China
Keywords: RANDOM-ACCESS; INTERNET
DOI: 10.1155/2021/7479473
Chinese Library Classification: TP [Automation Technology, Computer Technology]
Discipline code: 0812
Abstract:
In this paper, we present a method for generating bird's-eye-view video from egocentric RGB videos. Working with egocentric views is difficult because such a view is highly warped and prone to occlusions; a bird's eye view, by contrast, has consistent scaling in at least the two dimensions it shows. Moreover, most state-of-the-art systems for tasks such as path prediction are built for bird's eye views of the subjects. We present a deep learning-based approach that transfers egocentric RGB images captured from a car's dashcam to a bird's eye view. This is a view-translation task, and we perform two experiments: the first uses an image-to-image translation method, and the second uses video-to-video translation. We compare our results with homographic transformation; our SSIM values are better by margins of 77% and 14.4%, and our RMSE errors are lower by 40% and 14.6%, for image-to-image and video-to-video translation, respectively. We also visually demonstrate the efficacy and limitations of each method, with insights for future research. Compared to previous works that use homography or LIDAR-based 3D point clouds, our approach is more generalizable and requires no expensive equipment.
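The abstract evaluates the generated bird's eye views with SSIM and RMSE against a homography baseline. The paper's actual evaluation code is not reproduced in this record; as an illustration only, a minimal NumPy sketch of the two metrics (using a simplified whole-image SSIM rather than the usual sliding-window variant) might look like:

```python
import numpy as np

def rmse(pred, gt):
    """Root-mean-square error between two images with values in [0, 1]."""
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

def global_ssim(pred, gt, L=1.0):
    """Simplified single-window SSIM computed over the whole image.
    L is the dynamic range of the pixel values (1.0 for [0, 1] images)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # standard SSIM stabilizers
    mx, my = pred.mean(), gt.mean()            # luminance terms
    vx, vy = pred.var(), gt.var()              # contrast terms
    cov = np.mean((pred - mx) * (gt - my))     # structure term
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

# A predicted BEV frame would be compared against its ground-truth frame:
img = np.random.default_rng(0).random((64, 64))
print(rmse(img, img), global_ssim(img, img))  # identical images: 0.0 and 1.0
```

A per-window (e.g. 11x11 Gaussian) SSIM, as in the original Wang et al. formulation, would weight local structure more faithfully than this global version.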
Pages: 11