AR Tips: Augmented First-Person View Task Instruction Videos

Cited by: 3
Authors
Lee, Gun A. [1 ]
Ahn, Seungjun [1 ]
Hoff, William [2 ]
Billinghurst, Mark [1 ]
Affiliations
[1] Univ South Australia, Adelaide, SA, Australia
[2] Colorado Sch Mines, Golden, CO 80401 USA
Keywords
Human-centered computing; Human computer interaction (HCI); Interaction paradigms; Mixed / augmented reality;
DOI
10.1109/ISMAR-Adjunct.2019.00024
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology];
Discipline code
0812 ;
Abstract
This research investigates applying Augmented Reality (AR) visualisation of spatial cues to first-person view task instruction videos. Instructional videos are becoming popular, used not only in formal education and training but also in everyday life, as more people search for how-to videos when they need help with a task. However, video clips are 2D visualisations of the task space, which can make it hard for viewers to match objects in the video to those in the real-world task space. We propose augmenting task instruction videos with 3D visualisations of spatial cues to overcome this problem, focusing on creating and viewing first-person view instruction videos. As a proof of concept, we designed and implemented a prototype system, called AR Tips, which allows users to capture and watch first-person view instructional videos on a wearable AR device, augmented with 3D visual cues shown in situ in the task environment. Initial feedback from potential end users indicates that the prototype system is very easy to use and could be applied to various scenarios.
Pages: 34-36
Page count: 3
Related Papers
50 records
  • [1] Enhancing First-Person View Task Instruction Videos with Augmented Reality Cues
    Lee, Gun A.
    Ahn, Seungjun
    Hoff, William
    Billinghurst, Mark
    2020 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY (ISMAR 2020), 2020, : 498 - 508
  • [2] Future Person Localization in First-Person Videos
    Yagi, Takuma
    Mangalam, Karttikeya
    Yonetani, Ryo
    Sato, Yoichi
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 7593 - 7602
  • [3] CHARACTERIZING DISTORTIONS IN FIRST-PERSON VIDEOS
    Bai, Chen
    Reibman, Amy R.
    2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2016, : 2440 - 2444
  • [4] Vibrotactile Rendering of Camera Motion for Bimanual Experience of First-Person View Videos
    Gongora, Daniel
    Nagano, Hikaru
    Konyo, Masashi
    Tadokoro, Satoshi
    2017 IEEE WORLD HAPTICS CONFERENCE (WHC), 2017, : 454 - 459
  • [5] Video saliency prediction for First-Person View UAV videos: Dataset and benchmark
    Cai, Hao
    Zhang, Kao
    Chen, Zhao
    Jiang, Chenxi
    Chen, Zhenzhong
    NEUROCOMPUTING, 2024, 594
  • [6] First-person Hyper-lapse Videos
    Kopf, Johannes
    Cohen, Michael F.
    Szeliski, Richard
    ACM TRANSACTIONS ON GRAPHICS, 2014, 33 (04):
  • [7] Pooled Motion Features for First-Person Videos
    Ryoo, M. S.
    Rothrock, Brandon
    Matthies, Larry
    2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2015, : 896 - 904
  • [8] Viewing Experience Model of First-Person Videos
    Ma, Biao
    Reibman, Amy R.
    JOURNAL OF IMAGING, 2018, 4 (09)
  • [9] Personal Object Discovery in First-Person Videos
    Lu, Cewu
    Liao, Renjie
    Jia, Jiaya
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2015, 24 (12) : 5789 - 5799
  • [10] Image quality assessment in first-person videos
    Bai, Chen
    Reibman, Amy R.
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2018, 54 : 123 - 132