NeurNCD: Novel Class Discovery via Implicit Neural Representation

Cited: 0
Authors
Wang, Junming [1 ]
Shi, Yi [2 ]
Affiliations
[1] The University of Hong Kong, Hong Kong, People's Republic of China
[2] Beijing Jiaotong University, Beijing, People's Republic of China
Keywords
Neural Radiance Field; Visual Embedding Space; Novel Class Discovery; Feature Fusion; Novel View Synthesis
DOI
10.1145/3652583.3658073
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Discovering novel classes in open-world settings is crucial for real-world applications. Traditional explicit representations, such as object descriptors or 3D segmentation maps, are constrained by their discrete, hole-prone, and noisy nature, which hinders accurate novel class discovery. To address these challenges, we introduce NeurNCD, the first versatile and data-efficient framework for novel class discovery. It employs the meticulously designed Embedding-NeRF model combined with KL divergence, as a substitute for traditional explicit 3D segmentation maps, to aggregate semantic embeddings and entropy in the visual embedding space. NeurNCD also integrates several key components, including feature query, feature modulation, and clustering, facilitating efficient feature augmentation and information exchange between the pre-trained semantic segmentation network and the implicit neural representation. As a result, our framework achieves superior segmentation performance in both open- and closed-world settings without relying on densely labelled datasets for supervised training or on human interaction to generate sparse label supervision. Extensive experiments demonstrate that our method significantly outperforms state-of-the-art approaches on the NYUv2 and Replica datasets.
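The abstract describes distilling a pre-trained 2D segmentation network into the implicit representation by aggregating semantic embeddings under a KL-divergence objective, with per-pixel entropy as an uncertainty signal. The NumPy sketch below is purely illustrative of that loss structure; the shapes, the `teacher_logits`/`rendered_logits` names, and the use of random data are assumptions for demonstration, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the class dimension.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def kl_divergence(p, q, eps=1e-8):
    # Per-pixel KL(p || q), summed over the class dimension.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

# Hypothetical shapes: an H x W image with C candidate classes.
rng = np.random.default_rng(0)
H, W, C = 4, 4, 5
teacher_logits = rng.normal(size=(H, W, C))   # stand-in for the 2D segmentation network
rendered_logits = rng.normal(size=(H, W, C))  # stand-in for NeRF-rendered semantic logits

p = softmax(teacher_logits)
q = softmax(rendered_logits)
per_pixel_kl = kl_divergence(p, q)                  # distillation term
entropy = -np.sum(p * np.log(p + 1e-8), axis=-1)    # per-pixel uncertainty

loss = per_pixel_kl.mean()
```

Minimizing such a KL term pulls the rendered semantic distribution toward the teacher's, while the entropy map indicates where the teacher is uncertain and candidate novel classes may appear.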
Pages: 257-265
Page count: 9