3D Semantic Novelty Detection via Large-Scale Pre-Trained Models

Cited by: 0
Authors
Rabino, Paolo [1 ,2 ]
Alliegro, Antonio [1 ]
Tommasi, Tatiana [1 ]
Affiliations
[1] Polytechnic University of Turin, Department of Control and Computer Engineering, Turin,10129, Italy
[2] Italian Institute of Technology, Genoa,16163, Italy
Keywords
Compendex; 3D modeling; Adversarial machine learning; Contrastive Learning; Deep learning; Metadata; Three dimensional computer graphics
DOI
10.1109/ACCESS.2024.3464334
Pages: 135352 - 135361
Related Papers (50 total)
  • [31] A comprehensive exploration of semantic relation extraction via pre-trained CNNs
    Li, Qing
    Li, Lili
    Wang, Weinan
    Li, Qi
    Zhong, Jiang
    KNOWLEDGE-BASED SYSTEMS, 2020, 194
  • [32] An Interactive Display System for Large-Scale 3D Models
    Liu, Zijian
    Sun, Kun
    Tao, Wenbing
    Liu, Liman
    NINTH INTERNATIONAL CONFERENCE ON GRAPHIC AND IMAGE PROCESSING (ICGIP 2017), 2018, 10615
  • [33] Hollowed-Out Icon Colorization with Pre-trained Large-Scale Image Generation Model
    Miyauchi, Koki
    Orihara, Ryohei
    Sei, Yuichi
    Tahara, Yasuyuki
    Ohsuga, Akihiko
    Transactions of the Japanese Society for Artificial Intelligence, 2024, 39 (06):
  • [34] Text Detoxification using Large Pre-trained Neural Models
    Dale, David
    Voronov, Anton
    Dementieva, Daryna
    Logacheva, Varvara
    Kozlova, Olga
    Semenov, Nikita
    Panchenko, Alexander
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 7979 - 7996
  • [35] ON THE USE OF MODALITY-SPECIFIC LARGE-SCALE PRE-TRAINED ENCODERS FOR MULTIMODAL SENTIMENT ANALYSIS
    Ando, Atsushi
    Masumura, Ryo
    Takashima, Akihiko
    Suzuki, Satoshi
    Makishima, Naoki
    Suzuki, Keita
    Moriya, Takafumi
    Ashihara, Takanori
    Sato, Hiroshi
    2022 IEEE SPOKEN LANGUAGE TECHNOLOGY WORKSHOP, SLT, 2022, : 739 - 746
  • [36] Probing Toxic Content in Large Pre-Trained Language Models
    Ousidhoum, Nedjma
    Zhao, Xinran
    Fang, Tianqing
    Song, Yangqiu
    Yeung, Dit-Yan
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (ACL-IJCNLP 2021), VOL 1, 2021, : 4262 - 4274
  • [37] 3D-VisTA: Pre-trained Transformer for 3D Vision and Text Alignment
    Zhu, Ziyu
    Ma, Xiaojian
    Chen, Yixin
    Deng, Zhidong
    Huang, Siyuan
    Li, Qing
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 2899 - 2909
  • [38] Compressing Pre-trained Models of Code into 3 MB
    Shi, Jieke
    Yang, Zhou
    Xu, Bowen
    Kang, Hong Jin
    Lo, David
    PROCEEDINGS OF THE 37TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE 2022, 2022,
  • [39] Multilingual Translation via Grafting Pre-trained Language Models
    Sun, Zewei
    Wang, Mingxuan
    Li, Lei
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 2735 - 2747
  • [40] Compression of Generative Pre-trained Language Models via Quantization
    Tao, Chaofan
    Hou, Lu
    Zhang, Wei
    Shang, Lifeng
    Jiang, Xin
    Liu, Qun
    Luo, Ping
    Wong, Ngai
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 4821 - 4836