FairScene: Learning unbiased object interactions for indoor scene synthesis

Cited by: 0
Authors
Wu, Zhenyu [1 ]
Wang, Ziwei [2 ]
Liu, Shengyu [2 ]
Luo, Hao [1 ]
Lu, Jiwen [2 ]
Yan, Haibin [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Intelligent Engn & Automat, Beijing 100876, Peoples R China
[2] Tsinghua Univ, Dept Automat, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Indoor scene synthesis; Graph neural networks; Causal inference
DOI
10.1016/j.patcog.2024.110737
CLC number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
In this paper, we propose an unbiased graph neural network learning method called FairScene for indoor scene synthesis. Conventional methods directly apply graphical models to represent the correlations between objects for subsequent furniture insertion. However, due to the category imbalance in collected datasets and the complex entanglement of objects with implicit confounders, these methods usually generate significantly biased scenes, and their performance varies greatly across different indoor scenes. To address this, we propose a framework named FairScene that fully exploits unbiased object interactions through causal reasoning, achieving fair scene synthesis by calibrating the long-tailed category distribution and mitigating confounder effects. Specifically, we remove the long-tailed object priors by subtracting the counterfactual prediction obtained from a default input, and we intervene on the input features by cutting off the causal link to confounders, based on the causal graph. Extensive experiments on the 3D-FRONT dataset show that our proposed method outperforms state-of-the-art indoor scene generation methods and enhances vanilla models on a wide variety of vision tasks, including scene completion and object recognition.
Pages: 13
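The abstract describes two logit-level corrections: subtracting a counterfactual prediction obtained from a default (content-free) input, and calibrating the long-tailed category distribution. Below is a minimal PyTorch sketch of how such corrections are commonly applied; the `model`, `default_graph`, and the log-prior subtraction are illustrative assumptions, not the paper's actual implementation, which uses a graph neural network over the partial scene.

```python
# Minimal sketch of the two debiasing steps summarized in the abstract.
# All names here are hypothetical stand-ins for illustration only.
import torch
import torch.nn.functional as F


def debiased_category_logits(model, scene_graph, default_graph, class_counts):
    """Return bias-corrected log-probabilities for the next object category.

    model         -- callable mapping a scene representation to logits
                     (stand-in for the paper's GNN)
    scene_graph   -- features of the observed partial scene
    default_graph -- a content-free "default" input (e.g. zeroed features),
                     used to read out what the model predicts from bias alone
    class_counts  -- per-category frequencies in the training set
    """
    # Factual prediction from the real scene.
    logits = model(scene_graph)

    # Counterfactual prediction from the default input captures what the
    # model would say with no scene evidence; subtracting it removes the
    # contribution that does not depend on the actual scene.
    logits = logits - model(default_graph)

    # Calibrate the long-tailed category distribution by subtracting the
    # log-prior (standard logit adjustment; an assumption here, not stated
    # in the abstract).
    log_prior = torch.log(class_counts.float() / class_counts.sum())
    logits = logits - log_prior

    return F.log_softmax(logits, dim=-1)
```

Both corrections operate purely on the output logits, so a sketch like this can wrap any pretrained scene-synthesis model without retraining it.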