PlaceAvoider: Steering First-Person Cameras away from Sensitive Spaces

Cited by: 19
|
Authors
Templeman, Robert [1 ,2 ]
Korayem, Mohammed [1 ]
Crandall, David [1 ]
Kapadia, Apu [1 ]
Affiliations
[1] Indiana Univ, Sch Informat & Comp, Bloomington, IN 47401 USA
[2] Naval Surface Warfare Ctr, Crane Div, Bethesda, MD USA
Source
21ST ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2014) | 2014
Funding
US National Science Foundation;
Keywords
DOI
10.14722/ndss.2014.23014
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Cameras are now commonplace in our social and computing landscapes and embedded into consumer devices like smartphones and tablets. A new generation of wearable devices (such as Google Glass) will soon make 'first-person' cameras nearly ubiquitous, capturing vast amounts of imagery without deliberate human action. 'Lifelogging' devices and applications will record and share images from people's daily lives with their social networks. These devices that automatically capture images in the background raise serious privacy concerns, since they are likely to capture deeply private information. Users of these devices need ways to identify and prevent the sharing of sensitive images. As a first step, we introduce PlaceAvoider, a technique for owners of first-person cameras to 'blacklist' sensitive spaces (like bathrooms and bedrooms). PlaceAvoider recognizes images captured in these spaces and flags them for review before the images are made available to applications. PlaceAvoider performs novel image analysis using both fine-grained image features (like specific objects) and coarse-grained, scene-level features (like colors and textures) to classify where a photo was taken. PlaceAvoider combines these features in a probabilistic framework that jointly labels streams of images in order to improve accuracy. We test the technique on five realistic first-person image datasets and show it is robust to blurriness, motion, and occlusion.
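The abstract's idea of a probabilistic framework that "jointly labels streams of images" can be illustrated with a minimal sketch. This is not the authors' code: it assumes a simple HMM in which each image has per-room classifier probabilities, a "stay" prior encourages consecutive images to share a label, and Viterbi decoding recovers the most likely room sequence. The room names and probability values are illustrative assumptions.

```python
import math

def viterbi(per_image_probs, stay_prob=0.9):
    """Jointly label an image stream.

    per_image_probs: list of dicts mapping room label -> classifier
    probability for each image.
    stay_prob: prior probability that consecutive images come from
    the same room (the remainder is split among the other rooms).
    """
    labels = list(per_image_probs[0])
    switch_prob = (1.0 - stay_prob) / (len(labels) - 1)

    # Log-probability of the best path ending in each label.
    score = {l: math.log(per_image_probs[0][l]) for l in labels}
    back = []

    for probs in per_image_probs[1:]:
        new_score, pointers = {}, {}
        for l in labels:
            # Best predecessor under the transition prior.
            best_prev = max(
                labels,
                key=lambda p: score[p]
                + math.log(stay_prob if p == l else switch_prob),
            )
            trans = stay_prob if best_prev == l else switch_prob
            new_score[l] = (
                score[best_prev] + math.log(trans) + math.log(probs[l])
            )
            pointers[l] = best_prev
        score = new_score
        back.append(pointers)

    # Backtrack the most likely label sequence.
    last = max(labels, key=score.get)
    path = [last]
    for pointers in reversed(back):
        last = pointers[last]
        path.append(last)
    return list(reversed(path))

# Noisy per-image classifier output: the middle frame is ambiguous,
# but temporal smoothing keeps the whole run labeled 'bathroom'.
stream = [
    {"bathroom": 0.8, "kitchen": 0.2},
    {"bathroom": 0.45, "kitchen": 0.55},
    {"bathroom": 0.9, "kitchen": 0.1},
]
print(viterbi(stream))  # ['bathroom', 'bathroom', 'bathroom']
```

Joint decoding is why stream labeling beats per-image classification here: a single blurry or occluded frame whose classifier score narrowly favors the wrong room is overridden by its confidently labeled neighbors.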
Pages: 15
Related Papers
50 records
  • [21] Temporal Segmentation and Activity Classification from First-person Sensing
    Spriggs, Ekaterina H.
    De La Torre, Fernando
    Hebert, Martial
    2009 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPR WORKSHOPS 2009), VOLS 1 AND 2, 2009, : 17 - 24
  • [22] Disability Visibility: First-Person Stories from the Twenty-First Century
    Sendaula, Stephanie
    LIBRARY JOURNAL, 2020, 145 (06) : 100 - 100
  • [23] Spatial knowledge acquired from first-person and dynamic map perspectives
    van der Kuil, M. N. A.
    Evers, A. W. M.
    Visser-Meily, J. M. A.
    van der Ham, I. J. M.
    PSYCHOLOGICAL RESEARCH-PSYCHOLOGISCHE FORSCHUNG, 2021, 85 (06): : 2137 - 2150
  • [24] Unsupervised Workflow Extraction from First-Person Video of Mechanical Assembly
    Truong-An Pham
    Xiao, Yu
    HOTMOBILE'18: PROCEEDINGS OF THE 19TH INTERNATIONAL WORKSHOP ON MOBILE COMPUTING SYSTEMS & APPLICATIONS, 2018, : 31 - 36
  • [25] Believer, Beware: First-Person Dispatches from the Margins of Faith.
    Christian, Graham
    LIBRARY JOURNAL, 2009, 134 (08) : 67 - 67
  • [26] Summarizing First-Person Videos from Third Persons' Points of Views
Ho, Hsuan-I
    Chiu, Wei-Chen
    Wang, Yu-Chiang Frank
    COMPUTER VISION - ECCV 2018, PT 15, 2018, 11219 : 72 - 89
  • [27] Spatial knowledge acquired from first-person and dynamic map perspectives
    M. N. A. van der Kuil
    A. W. M. Evers
    J. M. A. Visser-Meily
    I. J. M. van der Ham
    Psychological Research, 2021, 85 : 2137 - 2150
  • [28] Looking from within: Comparing first-person approaches to studying experience
    Lumma, Anna-Lena
    Weger, Ulrich
    CURRENT PSYCHOLOGY, 2023, 42 (12) : 10437 - 10453
  • [29] Desktop Action Recognition From First-Person Point-of-View
    Cai, Minjie
    Lu, Feng
    Gao, Yue
    IEEE TRANSACTIONS ON CYBERNETICS, 2019, 49 (05) : 1616 - 1628
  • [30] Egocentric Basketball Motion Planning from a Single First-Person Image
    Bertasius, Gedas
    Chan, Aaron
    Shi, Jianbo
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 5889 - 5898