A unified pre-training and adaptation framework for combinatorial optimization on graphs

Times cited: 0
Authors
Zeng, Ruibin [1 ]
Lei, Minglong [2 ]
Niu, Lingfeng [3 ]
Cheng, Lan [4 ]
Affiliations
[1] Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100049, Peoples R China
[2] Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
[3] Univ Chinese Acad Sci, Sch Econ & Management, Beijing 100190, Peoples R China
[4] Cent South Univ, Sch Math & Stat, Changsha 410075, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
combinatorial optimization; graph neural networks; domain adaptation; maximum satisfiability problem; STEINER TREE PROBLEM; ALGORITHMS; BRANCH; SALESMAN; SEARCH;
DOI
10.1007/s11425-023-2247-0
Chinese Library Classification (CLC)
O29 [Applied Mathematics]
Discipline code
070104
Abstract
Combinatorial optimization (CO) on graphs is a classic topic that has been extensively studied across many scientific and industrial fields. Recently, solving CO problems on graphs with learning-based methods has attracted great attention. Advanced deep learning methods, e.g., graph neural networks (GNNs), have been used to effectively assist the process of solving COs. However, current GNN-based frameworks are mainly designed for specific CO problems and thus fail to account for transferability and generalization across different COs on graphs. Moreover, modeling COs directly on the original graphs captures only the direct correlations among objects and ignores the logical structure and mathematical properties of COs. In this paper, we propose a unified pre-training and adaptation framework for COs on graphs with the help of the maximum satisfiability (Max-SAT) problem. We first use Max-SAT to bridge different COs on graphs, since they can all be converted to Max-SAT problems represented by standard formulas and clauses carrying logical information. We then design a pre-training and domain adaptation framework to extract transferable and generalizable features from which different COs can benefit. In the pre-training stage, generated Max-SAT instances are used to initialize the parameters of the model. In the fine-tuning stage, instances from both CO and Max-SAT problems are used for adaptation so that transferability is further improved. Numerical experiments on several datasets show that the features extracted by our framework exhibit superior transferability and that Max-SAT boosts the ability to solve COs on graphs.
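To make the bridging step concrete, the sketch below (illustrative only, not the authors' code) shows how one graph CO problem, minimum vertex cover, can be encoded as a weighted partial Max-SAT instance: hard clauses force every edge to be covered, and unit soft clauses of weight 1 reward leaving each vertex out of the cover, so maximizing satisfied soft clauses minimizes the cover size. All function and variable names are hypothetical.

# Minimal Python sketch: reduce minimum vertex cover to weighted partial Max-SAT.
# Boolean variable i+1 means "vertex i is in the cover"; literals use DIMACS-style
# signed integers. This is an assumed encoding, not the paper's implementation.

def vertex_cover_to_maxsat(num_vertices, edges):
    """Return (hard_clauses, soft_clauses) for a weighted partial Max-SAT instance."""
    # Hard clauses: each edge (u, v) must be covered -> clause (x_u OR x_v).
    hard = [[u + 1, v + 1] for (u, v) in edges]
    # Soft clauses: prefer excluding each vertex -> unit clause (NOT x_v) with weight 1.
    soft = [(1, [-(v + 1)]) for v in range(num_vertices)]
    return hard, soft

if __name__ == "__main__":
    # Triangle graph: the optimal cover has size 2, so exactly one soft clause is satisfied.
    hard, soft = vertex_cover_to_maxsat(3, [(0, 1), (1, 2), (0, 2)])
    print(hard)  # [[1, 2], [2, 3], [1, 3]]
    print(soft)  # [(1, [-1]), (1, [-2]), (1, [-3])]

Once a CO instance is expressed this way, its clauses and variables form a shared Max-SAT representation on which a GNN can be pre-trained and later adapted to individual CO problems, which is the kind of transfer the abstract describes.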
Pages: 1439-1456
Page count: 18