Recognizing Personal Locations From Egocentric Videos

Cited by: 28
Authors
Furnari, Antonino [1 ]
Farinella, Giovanni Maria [1 ]
Battiato, Sebastiano [1 ]
Affiliation
[1] Univ Catania, Dept Math & Comp Sci, I-95124 Catania, Italy
Keywords
Context-aware computing; egocentric dataset; egocentric vision; first person vision; personal location recognition; CONTEXT; CLASSIFICATION; RECOGNITION; SCENE; SHAPE;
DOI
10.1109/THMS.2016.2612002
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Contextual awareness in wearable computing allows for the construction of intelligent systems that are able to interact with the user in a more natural way. In this paper, we study how personal locations arising from the user's daily activities can be recognized from egocentric videos. We assume that only a few training samples are available for learning purposes. Considering the diversity of the devices available on the market, we introduce a benchmark dataset containing egocentric videos of eight personal locations acquired by a user with four different wearable cameras. To make our analysis useful in real-world scenarios, we propose a method to reject negative locations, i.e., those not belonging to any of the categories of interest to the end user. We assess the performance of the main state-of-the-art representations for scene and object classification on the considered task, as well as the influence of device-specific factors such as the field of view and the wearing modality. Regarding the device-specific factors, experiments revealed that the best results are obtained with a head-mounted wide-angle device. Our analysis shows the effectiveness of representations based on convolutional neural networks, combined with basic transfer learning techniques and an entropy-based rejection algorithm.
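The abstract mentions an entropy-based rejection algorithm applied on top of CNN representations adapted with basic transfer learning. As an illustration only (not the authors' exact procedure; the class-score source, threshold value, and function names are assumptions), a minimal Python sketch of entropy-based rejection over softmax class probabilities could look like this:

```python
import numpy as np

def shannon_entropy(probs, eps=1e-12):
    """Shannon entropy (in nats) of a discrete probability distribution."""
    probs = np.clip(probs, eps, 1.0)
    return float(-np.sum(probs * np.log(probs)))

def classify_with_rejection(class_scores, threshold):
    """
    Assign a personal-location label from per-class scores, or reject the
    frame as a 'negative' location when the prediction is too uncertain.

    class_scores : 1-D array of unnormalized scores (e.g., from a classifier
                   trained on CNN features of the user's personal locations).
    threshold    : entropy value above which the frame is rejected; in
                   practice it would be tuned on validation data.
    """
    # Softmax to obtain a probability distribution over the known locations.
    exp_scores = np.exp(class_scores - np.max(class_scores))
    probs = exp_scores / exp_scores.sum()

    # High entropy -> the classifier is uncertain -> treat as a negative location.
    if shannon_entropy(probs) > threshold:
        return None  # rejected: not one of the user's personal locations
    return int(np.argmax(probs))

# Example: a confident prediction is accepted, an ambiguous one is rejected.
print(classify_with_rejection(np.array([8.0, 1.0, 0.5, 0.2]), threshold=1.0))  # -> 0
print(classify_with_rejection(np.array([1.1, 1.0, 0.9, 1.0]), threshold=1.0))  # -> None
```

The intuition is that frames from locations outside the training categories tend to spread probability mass across classes, yielding high entropy, while frames from known personal locations produce peaked, low-entropy distributions.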
Pages: 6-18
Number of pages: 13
Related papers
50 records in total
  • [31] Predicting Group Convergence in Egocentric Videos
    Nigam, Jyoti
    Rameshan, Renu M.
    ICPRAM: PROCEEDINGS OF THE 8TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION APPLICATIONS AND METHODS, 2019, : 773 - 777
  • [32] Curriculum Learning with Infant Egocentric Videos
    Sheybani, Saber
    Hansaria, Himanshu
    Wood, Justin N.
    Smith, Linda B.
    Tiganj, Zoran
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [33] Compact CNN for Indexing Egocentric Videos
    Poleg, Yair
    Ephrat, Ariel
    Peleg, Shmuel
    Arora, Chetan
    2016 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2016), 2016,
  • [34] Recognizing flu-like symptoms from videos
    Tuan Hue Thi
    Wang, Li
    Ye, Ning
    Zhang, Jian
    Maurer-Stroh, Sebastian
    Cheng, Li
    BMC BIOINFORMATICS, 2014, 15
  • [36] Modeling, Recognizing, and Explaining Apparent Personality From Videos
    Jair Escalante, Hugo
    Kaya, Heysem
    Salah, Albert Ali
    Escalera, Sergio
    Gucluturk, Yagmur
    Guclu, Umut
    Baro, Xavier
    Guyon, Isabelle
    Jacques, Julio C. S., Jr.
    Madadi, Meysam
    Ayache, Stephane
    Viegas, Evelyne
    Gurpinar, Furkan
    Wicaksana, Achmadnoer Sukma
    Liem, Cynthia C. S.
    van Gerven, Marcel A. J.
    van Lier, Rob
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2022, 13 (02) : 894 - 911
  • [37] Joint Hand Motion and Interaction Hotspots Prediction from Egocentric Videos
    Liu, Shaowei
    Tripathi, Subarna
    Majumdar, Somdeb
    Wang, Xiaolong
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 3272 - 3282
  • [38] Instance Tracking in 3D Scenes from Egocentric Videos
    Zhao, Yunhan
    Ma, Haoyu
    Kong, Shu
    Fowlkes, Charless
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 21933 - 21944
  • [39] My View is the Best View: Procedure Learning from Egocentric Videos
    Bansal, Siddhant
    Arora, Chetan
    Jawahar, C. V.
    COMPUTER VISION, ECCV 2022, PT XIII, 2022, 13673 : 657 - 675
  • [40] First-Person Animal Activity Recognition from Egocentric Videos
    Iwashita, Yumi
    Takamine, Asamichi
    Kurazume, Ryo
    Ryoo, M. S.
    2014 22ND INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2014, : 4310 - 4315