A unified pre-training and adaptation framework for combinatorial optimization on graphs

Cited by: 0
Authors
Zeng, Ruibin [1 ]
Lei, Minglong [2 ]
Niu, Lingfeng [3 ]
Cheng, Lan [4 ]
Affiliations
[1] Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100049, Peoples R China
[2] Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
[3] Univ Chinese Acad Sci, Sch Econ & Management, Beijing 100190, Peoples R China
[4] Cent South Univ, Sch Math & Stat, Changsha 410075, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
combinatorial optimization; graph neural networks; domain adaptation; maximum satisfiability problem; Steiner tree problem; algorithms; branch; salesman; search
DOI
10.1007/s11425-023-2247-0
Chinese Library Classification
O29 [Applied Mathematics]
Subject Classification Code
070104
Abstract
Combinatorial optimization (CO) on graphs is a classic topic that has been extensively studied across many scientific and industrial fields. Recently, solving CO problems on graphs with learning-based methods has attracted considerable attention. Advanced deep learning models, e.g., graph neural networks (GNNs), have been used to effectively assist in solving CO problems. However, existing GNN-based frameworks are mostly tailored to specific CO problems and therefore neglect transferability and generalization across different CO problems on graphs. Moreover, modeling CO problems directly on the original graphs captures only the direct correlations among objects and ignores the mathematical logic and properties of the problems. In this paper, we propose a unified pre-training and adaptation framework for CO problems on graphs built on the maximum satisfiability (Max-SAT) problem. We first use Max-SAT to bridge different CO problems on graphs, since each of them can be converted to a Max-SAT instance represented by standard formulas and clauses that carry logical information. We then design a pre-training and domain adaptation framework that extracts transferable and generalizable features from which different CO problems can benefit. In the pre-training stage, generated Max-SAT instances are used to initialize the model parameters. In the fine-tuning stage, instances from both CO and Max-SAT problems are used for adaptation, further improving transferability. Numerical experiments on several datasets show that the features extracted by our framework exhibit superior transferability and that Max-SAT boosts the ability to solve CO problems on graphs.
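The bridging idea in the abstract — reducing a graph CO problem to Max-SAT so that one model can serve many problems — can be illustrated with a minimal sketch. The Python snippet below encodes minimum vertex cover as a weighted partial Max-SAT instance via the standard textbook reduction and builds a clause-variable bipartite graph of the kind a GNN could consume. The helper names (`vertex_cover_to_maxsat`, `clause_variable_graph`) and the exact graph representation are illustrative assumptions, not the paper's implementation.

```python
import networkx as nx


def vertex_cover_to_maxsat(G):
    """Encode minimum vertex cover on graph G as weighted partial Max-SAT.

    Variable x_v is True iff vertex v is in the cover.
    Hard clauses (must hold):  (x_u OR x_v) for every edge (u, v).
    Soft clauses (weight 1):   (NOT x_v) for every vertex v, so that
    maximizing satisfied soft clauses minimizes the cover size.
    Literals are DIMACS-style signed integers over 1..n.
    """
    idx = {v: i + 1 for i, v in enumerate(G.nodes())}
    hard = [(idx[u], idx[v]) for u, v in G.edges()]
    soft = [(-idx[v],) for v in G.nodes()]
    return hard, soft


def clause_variable_graph(hard, soft):
    """Build a clause-variable bipartite graph from a Max-SAT instance.

    One node per variable and one node per clause; an edge connects a
    clause to each variable it mentions, labeled with the literal's
    polarity. This is one common Max-SAT graph representation and is
    assumed here for illustration, not taken from the paper.
    """
    B = nx.Graph()
    for c, clause in enumerate(hard + soft):
        kind = "hard" if c < len(hard) else "soft"
        B.add_node(("c", c), kind=kind)
        for lit in clause:
            B.add_node(("v", abs(lit)), kind="var")
            B.add_edge(("c", c), ("v", abs(lit)), polarity=lit > 0)
    return B


if __name__ == "__main__":
    G = nx.cycle_graph(5)  # 5-cycle: an optimal vertex cover has 3 vertices
    hard, soft = vertex_cover_to_maxsat(G)
    B = clause_variable_graph(hard, soft)
    print(len(hard), "hard clauses,", len(soft), "soft clauses")
    print(B.number_of_nodes(), "graph nodes,", B.number_of_edges(), "edges")
```

Because many graph CO problems (independent set, clique, dominating set, etc.) admit similar clause encodings, a model pre-trained on generated Max-SAT instances operates over a shared representation space; the fine-tuning stage described in the abstract then adapts that space to a specific target problem.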
Pages: 1439-1456
Number of pages: 18