FairScene: Learning unbiased object interactions for indoor scene synthesis

Cited: 0
Authors
Wu, Zhenyu [1 ]
Wang, Ziwei [2 ]
Liu, Shengyu [2 ]
Luo, Hao [1 ]
Lu, Jiwen [2 ]
Yan, Haibin [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Intelligent Engn & Automat, Beijing 100876, Peoples R China
[2] Tsinghua Univ, Dept Automat, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Indoor scene synthesis; Graph neural networks; Causal inference;
DOI
10.1016/j.patcog.2024.110737
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
In this paper, we propose an unbiased graph neural network learning method called FairScene for indoor scene synthesis. Conventional methods directly apply graphical models to represent the correlation of objects for subsequent furniture insertion. However, due to the object category imbalance in dataset collection and complex object entanglement with implicit confounders, these methods usually generate significantly biased scenes. Moreover, the performance of these methods varies greatly across different indoor scenes. To address this, we propose a framework named FairScene which fully exploits unbiased object interactions through causal reasoning, so that fair scene synthesis is achieved by calibrating the long-tailed category distribution and mitigating confounder effects. Specifically, we remove the long-tailed object priors by subtracting the counterfactual prediction obtained from a default input, and we intervene on the input feature by cutting off the causal link to confounders based on the causal graph. Extensive experiments on the 3D-FRONT dataset show that our proposed method outperforms state-of-the-art indoor scene generation methods and enhances vanilla models on a wide variety of vision tasks, including scene completion and object recognition.
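The debiasing step described in the abstract — subtracting the prediction obtained from a "default" (content-free) input so that dataset priors such as long-tailed category frequencies are removed — can be sketched with a toy example. This is a minimal illustration of the general counterfactual-subtraction idea, not the authors' implementation; the linear model, its bias term, and the all-zeros default input are assumptions made for demonstration only:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def debiased_prediction(model, x, x_default):
    """Subtract the logits the model produces from a default input.

    The counterfactual logits capture input-independent priors (e.g. a
    long-tailed category distribution memorized in the bias term), so
    removing them calibrates the factual prediction.
    """
    factual = model(x)                  # logits for the real scene features
    counterfactual = model(x_default)   # logits from the default input
    return softmax(factual - counterfactual)

# Toy linear "model" whose bias term encodes a long-tailed prior
# favouring head class 0 regardless of the input features.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
b = np.array([3.0, 0.0, -1.0])          # strong prior for class 0
model = lambda x: W @ x + b

x = np.array([0.2, 0.9])                # features actually favouring class 1
x_default = np.zeros(2)                 # content-free default input

biased = softmax(model(x))
fair = debiased_prediction(model, x, x_default)
print(biased.argmax(), fair.argmax())   # prints "0 1"
```

With the prior left in, the head class wins; after subtracting the counterfactual logits, the class supported by the actual input features is predicted instead.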
Pages: 13
Related Papers
50 records in total
  • [11] BORM: Bayesian Object Relation Model for Indoor Scene Recognition
    Zhou, Liguang
    Cen, Jun
    Wang, Xingchao
    Sun, Zhenglong
    Lam, Tin Lun
    Xu, Yangsheng
    2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 39 - 46
  • [12] Learning guidelines for automatic indoor scene design
    Yuan Liang
    Song-Hai Zhang
    Ralph Robert Martin
    Multimedia Tools and Applications, 2019, 78 : 5003 - 5023
  • [13] Learning robust features for indoor scene recognition
    Nuhoho, Raphael Elimeli
    Chen Wenyu
    Baffour, Adu Asare
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2023, 44 (03) : 3681 - 3693
  • [14] Learning guidelines for automatic indoor scene design
    Liang, Yuan
    Zhang, Song-Hai
    Martin, Ralph Robert
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (04) : 5003 - 5023
  • [15] Deep Convolutional Priors for Indoor Scene Synthesis
    Wang, Kai
    Savva, Manolis
    Chang, Angel X.
    Ritchie, Daniel
    ACM TRANSACTIONS ON GRAPHICS, 2018, 37 (04):
  • [16] ATISS: Autoregressive Transformers for Indoor Scene Synthesis
    Paschalidou, Despoina
    Kar, Amlan
    Shugrina, Maria
    Kreis, Karsten
    Geiger, Andreas
    Fidler, Sanja
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [17] Unsupervised learning of scene and object planar parts
    Mele, Katarina
    Mayer, Jasna
    ELEKTROTEHNISKI VESTNIK, 2007, 74 (05): : 297 - 302
  • [18] Learning Scene Context for Multiple Object Tracking
    Maggio, Emilio
    Cavallaro, Andrea
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2009, 18 (08) : 1873 - 1884
  • [19] Inter-object discriminative graph modeling for indoor scene recognition
    Song, Chuanxin
    Wu, Hanbo
    Ma, Xin
    KNOWLEDGE-BASED SYSTEMS, 2024, 302
  • [20] An Object Recognition Approach based on Structural Feature for Cluttered Indoor Scene
    Yuan, Wenbo
    Cao, Zhiqiang
    Zhao, Peng
    Tan, Min
    Yang, Yuequan
    2014 IEEE 11TH INTERNATIONAL CONFERENCE ON NETWORKING, SENSING AND CONTROL (ICNSC), 2014, : 92 - 95