A unified pre-training and adaptation framework for combinatorial optimization on graphs

Cited by: 0
Authors
Zeng, Ruibin [1 ]
Lei, Minglong [2 ]
Niu, Lingfeng [3 ]
Cheng, Lan [4 ]
Affiliations
[1] Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100049, Peoples R China
[2] Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
[3] Univ Chinese Acad Sci, Sch Econ & Management, Beijing 100190, Peoples R China
[4] Cent South Univ, Sch Math & Stat, Changsha 410075, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
combinatorial optimization; graph neural networks; domain adaptation; maximum satisfiability problem; STEINER TREE PROBLEM; ALGORITHMS; BRANCH; SALESMAN; SEARCH;
DOI
10.1007/s11425-023-2247-0
Chinese Library Classification
O29 [Applied Mathematics];
Discipline code
070104;
Abstract
Combinatorial optimization (CO) on graphs is a classic topic that has been extensively studied across many scientific and industrial fields. Recently, solving CO problems on graphs with learning-based methods has attracted great attention. Advanced deep learning models, e.g., graph neural networks (GNNs), have been used to effectively assist the solution of CO problems. However, current GNN-based frameworks are mainly designed for specific CO problems and therefore neglect transferability and generalization across different COs on graphs. Moreover, modeling a CO problem directly on its original graph captures only the direct correlations among objects and ignores the logical structure and mathematical properties of the problem. In this paper, we propose a unified pre-training and adaptation framework for COs on graphs built on the maximum satisfiability (Max-SAT) problem. We first use Max-SAT to bridge different COs on graphs, since each of them can be converted into a Max-SAT instance represented by standard formulas and clauses that carry logical information. We then design a pre-training and domain-adaptation framework to extract transferable and generalizable features from which different COs can benefit. In the pre-training stage, generated Max-SAT instances are used to initialize the parameters of the model. In the fine-tuning stage, instances from both CO and Max-SAT problems are used for adaptation, further improving transferability. Numerical experiments on several datasets show that the features extracted by our framework exhibit superior transferability and that Max-SAT boosts the ability to solve COs on graphs.
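The abstract's key idea is that graph CO problems can be rewritten as Max-SAT instances, so one model can share features across problems. A minimal sketch of such a conversion, assuming a standard textbook encoding (not taken from the paper itself): minimum vertex cover becomes weighted partial Max-SAT, with one hard clause per edge and one soft unit clause per vertex. The tiny brute-force solver below is purely illustrative, for very small graphs only.

```python
from itertools import product

def vertex_cover_to_maxsat(n, edges):
    """Encode minimum vertex cover as weighted partial Max-SAT.
    Variable i (1..n) true means 'vertex i is in the cover'.
    Hard clauses (x_u OR x_v) force every edge to be covered; soft unit
    clauses (NOT x_i) reward leaving vertices out, so maximizing the
    number of satisfied soft clauses minimizes the cover size."""
    hard = [(u, v) for (u, v) in edges]
    soft = [(-i,) for i in range(1, n + 1)]
    return hard, soft

def brute_force_maxsat(n, hard, soft):
    """Exhaustive Max-SAT check, only feasible for small n (illustration)."""
    def sat(clause, assign):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)
    best = None
    for bits in product([False, True], repeat=n):
        assign = {i + 1: bits[i] for i in range(n)}
        if all(sat(c, assign) for c in hard):           # hard clauses must hold
            score = sum(sat(c, assign) for c in soft)   # count satisfied soft clauses
            if best is None or score > best[0]:
                best = (score, assign)
    return best

# Triangle graph: the minimum vertex cover has 2 of the 3 vertices.
hard, soft = vertex_cover_to_maxsat(3, [(1, 2), (2, 3), (1, 3)])
score, assign = brute_force_maxsat(3, hard, soft)
cover = sorted(v for v, used in assign.items() if used)
print(score, cover)  # 1 soft clause satisfied -> cover of size 2
```

Once every CO instance is expressed in this shared clause format, a single GNN operating on the clause-variable graph can, in principle, be pre-trained on generated Max-SAT instances and fine-tuned per problem, which is the pipeline the abstract describes.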
Pages: 1439-1456
Number of pages: 18
Related papers
50 records
  • [21] Pre-training to Match for Unified Low-shot Relation Extraction
    Liu, Fangchao
    Lin, Hongyu
    Han, Xianpei
    Cao, Boxi
    Sun, Le
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 5785 - 5795
  • [22] UniXcoder: Unified Cross-Modal Pre-training for Code Representation
    Guo, Daya
    Lu, Shuai
    Duan, Nan
    Wang, Yanlin
    Zhou, Ming
    Yin, Jian
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 7212 - 7225
  • [23] Unified Vision-Language Pre-Training for Image Captioning and VQA
    Zhou, Luowei
    Palangi, Hamid
    Zhang, Lei
    Hu, Houdong
    Corso, Jason J.
    Gao, Jianfeng
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 13041 - 13049
  • [24] A Quantum Framework for Combinatorial Optimization Problem over Graphs
    Shi, Meng
    Wu, Sai
    Li, Ying
    Yuan, Gongsheng
    Yao, Chang
    Chen, Gang
    DATA SCIENCE AND ENGINEERING, 2025,
  • [25] An Adaptive Graph Pre-training Framework for Localized Collaborative Filtering
    Wang, Yiqi
    Li, Chaozhuo
    Liu, Zheng
    Li, Mingzheng
    Tang, Jiliang
    Xie, Xing
    Chen, Lei
    Yu, Philip S.
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2023, 41 (02)
  • [26] ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding
    Sun, Yu
    Wang, Shuohuan
    Li, Yukun
    Feng, Shikun
    Tian, Hao
    Wu, Hua
    Wang, Haifeng
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 8968 - 8975
  • [27] A unified framework for modeling and solving combinatorial optimization problems: A tutorial
    Kochenberger, GA
    Glover, F
    MULTISCALE OPTIMIZATION METHODS AND APPLICATIONS, 2006, 82 : 101 - +
  • [28] Unsupervised Video Domain Adaptation with Masked Pre-Training and Collaborative Self-Training
    Reddy, Arun
    Paul, William
    Rivera, Corban
    Shah, Ketul
    de Melo, Celso M.
    Chellappa, Rama
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 18919 - 18929
  • [29] UPPAM: A Unified Pre-training Architecture for Political Actor Modeling based on Language
    Mou, Xinyi
    Wei, Zhongyu
    Zhang, Qi
    Huang, Xuanjing
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 11996 - 12012
  • [30] Unified building change detection pre-training method with masked semantic annotations
    Quan, Yujun
    Yu, Anzhu
    Guo, Wenyue
    Lu, Xuanbei
    Jiang, Bingchun
    Zheng, Shulei
    He, Peipei
    INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2023, 120