Tracking Without Re-recognition in Humans and Machines

Cited by: 0
Authors
Linsley, Drew [1]
Malik, Girik [2]
Kim, Junkyung [3]
Govindarajan, Lakshmi N. [1]
Mingolla, Ennio [2]
Serre, Thomas [1]
Affiliations
[1] Brown Univ, Carney Inst Brain Sci, Providence, RI 02912 USA
[2] Northeastern Univ, Boston, MA USA
[3] DeepMind, London, England
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021) | 2021
Keywords
OBJECT; ATTENTION
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Imagine trying to track one particular fruit fly in a swarm of hundreds. Higher biological visual systems have evolved to track moving objects by relying on both their appearance and their motion trajectories. We investigate whether state-of-the-art spatiotemporal deep neural networks are capable of the same. For this, we introduce PathTracker, a synthetic visual challenge that asks human observers and machines to track a target object in the midst of identical-looking "distractor" objects. While humans effortlessly learn PathTracker and generalize to systematic variations in task design, deep networks struggle. To address this limitation, we identify and model circuit mechanisms in biological brains that are implicated in tracking objects based on motion cues. When instantiated as a recurrent network, our circuit model learns to solve PathTracker with a robust visual strategy that rivals human performance and explains a significant proportion of their decision-making on the challenge. We also show that the success of this circuit model extends to object tracking in natural videos. Adding it to a transformer-based architecture for object tracking builds tolerance to visual nuisances that affect object appearance, establishing the new state of the art on the large-scale TrackingNet challenge. Our work highlights the importance of understanding human vision to improve computer vision.
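For intuition, PathTracker can be pictured as a clip of visually identical dots wandering over a frame, where one dot is cued as the target and only its motion history distinguishes it from the distractors. Below is a minimal, hypothetical Python sketch of such a stimulus generator; the random-walk motion model, frame size, and function name are illustrative assumptions, not the authors' released code.

import numpy as np

def make_pathtracker_clip(n_distractors=8, n_frames=64, size=32, step=1.5, seed=0):
    # Toy PathTracker-style clip: identical dots performing random walks.
    # Dot 0 is the cued target; appearance alone cannot identify it.
    rng = np.random.default_rng(seed)
    n = n_distractors + 1
    pos = rng.uniform(2.0, size - 2.0, (n, 2))   # initial (x, y) of each dot
    frames = np.zeros((n_frames, n, 2))
    for t in range(n_frames):
        pos += rng.normal(0.0, step, (n, 2))     # Brownian-style displacement
        pos = np.clip(pos, 0.0, size - 1.0)      # keep dots inside the frame
        frames[t] = pos
    return frames  # shape (n_frames, n_dots, 2); dot 0 is the target

A model receives the rendered frames plus a start cue marking the target's initial position and must report where the target ends up; because all dots look the same, re-recognizing appearance cannot solve the task.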
Pages: 14
Related Papers
50 results in total
  • [21] Cross-modal pedestrian re-recognition based on attention mechanism
    Zhao, Yuyao; Zhou, Hang; Cheng, Hai; Huang, Chunguang
    The Visual Computer, 2024, 40: 2405-2418
  • [22] Re-recognition of the development trend of the modern track and field sports training
    Fang, Wenli; Cao, Yecheng
    BASIC & CLINICAL PHARMACOLOGY & TOXICOLOGY, 2020, 126: 33-33
  • [23] Re-recognition of Tieshan “Syenite” and its Geological Significance in Zhenghe, Fujian Province
    Chen, Shizhong; Xing, Guangfu; Li, Yanan; Xi, Wanwan; Zhu, Xiaoting; Zhang, Xiaodong
    Acta Geologica Sinica (English Edition), 2017, (S1): 72-73
  • [24] Re-recognition of the aging precipitation behavior in the Mg-Sm binary alloy
    Xie, Hongbo; Liu, Boshu; Bai, Junyuan; Guan, Changli; Lou, Dongfang; Pang, Xueyong; Zhao, Hong; Li, Shanshan; Ren, Yuping; Pan, Hucheng; Yang, Changlin; Qin, Gaowu
    JOURNAL OF ALLOYS AND COMPOUNDS, 2020, 814
  • [25] Super-resolution video target re-recognition based on joint training
    Chen, Jinhuang; Chen, Zhaoqi; Liu, Zhiyang; Tu, Peiqi
    2024 4TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND INTELLIGENT SYSTEMS ENGINEERING, MLISE 2024, 2024: 152-158
  • [26] Re-Recognition of Ride-Sourcing Service: From the Perspective of Operational Efficiency and Social Welfare
    Zhang, Zipeng; Zhang, Ning
    SUSTAINABILITY, 2021, 13 (15)
  • [27] Re-recognition of design and safety evaluation of high-grade steel mountain pipeline
    Hu, Wenjun; Zhao, Yu; Hu, Kaiheng; Chen, Guiyu; Fu, Kaiwei; Zhang, Xiaopeng
    He Jishu/Nuclear Techniques, 2023, 46 (05): 83-92
  • [28] Speech recognition in adverse conditions by humans and machines
    Patman, Chloe; Chodroff, Eleanor
    JASA EXPRESS LETTERS, 2024, 4 (11)
  • [29] Pedestrian Re-Recognition Algorithm Based on Optimization Deep Learning-Sequence Memory Model
    An, Feng-Ping
    COMPLEXITY, 2019, 2019
  • [30] Intelligent re-recognition algorithm for specific ship target in busy waters under the actual scene
    Lv, Jinwen; Chen, Xianqiao; Salah, M.
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2018, 35 (04): 4433-4443