MuralDiff: Diffusion for Ancient Murals Restoration on Large-Scale Pre-Training

Cited by: 3
Authors
Xu, Zishan [1 ]
Zhang, Xiaofeng [2 ]
Chen, Wei [1 ,3 ,4 ]
Liu, Jueting [1 ]
Xu, Tingting [1 ]
Wang, Zehua [1 ,5 ]
Affiliations
[1] China Univ Min & Technol, Sch Comp Sci & Technol, Xuzhou, Peoples R China
[2] Shanghai Jiao Tong Univ, Sch Elect Engn & Elect Informat, Shanghai 200240, Peoples R China
[3] China Univ Min & Technol Beijing, Sch Mech Elect & Informat Engn, Beijing 100083, Peoples R China
[4] Minist Emergency Management, Key Lab Intelligent Min & Robot, Beijing 100083, Peoples R China
[5] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC, Canada
Funding
National Natural Science Foundation of China
Keywords
Image restoration; Image edge detection; Task analysis; Generative adversarial networks; Cultural differences; Adaptation models; Training; Mural restoration; diffusion model; crack detection; image inpainting; large-scale pretraining
DOI
10.1109/TETCI.2024.3359038
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper focuses on crack detection and digital restoration of ancient mural cultural heritage, proposing a comprehensive method that combines the U-Net network structure with a diffusion model. First, the U-Net network is used for efficient crack detection in murals: an ancient-mural image dataset is constructed for training and validation, achieving accurate identification of mural cracks. Next, an edge-guided optimized masking strategy is adopted for mural restoration, effectively preserving mural information and reducing damage to the original murals during the restoration process. Finally, a diffusion model is employed for the digital restoration of murals, with restoration performance improved through parameter tuning to achieve natural repair of mural cracks. Experimental results show that the comprehensive method based on the U-Net network and the diffusion model has significant advantages in the tasks of mural crack detection and digital restoration, providing a novel and effective approach for the protection and restoration of ancient murals. In addition, this research has significant implications for technological development in mural restoration and cultural heritage preservation, contributing to advancement and innovation in related fields.
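The three-stage pipeline described in the abstract (U-Net crack detection, edge-guided mask optimization, diffusion-based inpainting) can be sketched with simple stand-ins: thresholding in place of the trained U-Net detector, one-pixel dilation as a minimal form of edge-guided mask growth, and heat diffusion (iterative neighbour averaging) in place of the generative diffusion model. All function names and parameters below are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def detect_cracks(img, thresh=0.2):
    # Stand-in for the U-Net crack detector: mark dark, crack-like
    # pixels. A trained segmentation network would produce this mask.
    return (img < thresh).astype(np.uint8)

def edge_guided_mask(mask):
    # Minimal edge-guided mask optimization: dilate the crack mask by
    # one pixel so the inpainting region covers crack borders while
    # leaving intact paint untouched.
    padded = np.pad(mask, 1)
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def inpaint(img, mask, iters=50):
    # Stand-in for the diffusion-model restorer: iteratively replace
    # masked pixels with the mean of their 4-neighbours (heat
    # diffusion), which converges to a smooth fill from the edges.
    out = img.astype(float).copy()
    for _ in range(iters):
        avg = (np.roll(out, -1, 0) + np.roll(out, 1, 0) +
               np.roll(out, -1, 1) + np.roll(out, 1, 1)) / 4.0
        out = np.where(mask == 1, avg, out)
    return out

# Toy mural: uniform paint (0.8) with a dark crack column (0.0).
mural = np.full((8, 8), 0.8)
mural[:, 4] = 0.0
mask = edge_guided_mask(detect_cracks(mural))
restored = inpaint(mural, mask)
```

After 50 iterations, the masked crack column is pulled back toward the surrounding paint value; in the real pipeline a pretrained diffusion model would synthesize texture-consistent content in the masked region instead of a smooth fill.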
Pages: 2169-2181 (13 pages)