Ordering-Based Causal Discovery with Reinforcement Learning

Cited by: 0
Authors
Wang, Xiaoqiang [1]
Du, Yali [2]
Zhu, Shengyu [3]
Ke, Liangjun [1]
Chen, Zhitang [3]
Hao, Jianye [3,4]
Wang, Jun [2]
Affiliations
[1] Xi'an Jiaotong Univ, Sch Automat Sci & Engn, State Key Lab Mfg Syst Engn, Xi'an, China
[2] UCL, London, England
[3] Huawei Noah's Ark Lab, Quebec City, PQ, Canada
[4] Tianjin Univ, Coll Intelligence & Comp, Tianjin, China
Funding
National Natural Science Foundation of China
Keywords: none listed
DOI: not available
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Discovering causal relations among a set of variables is a long-standing question in many empirical sciences. Recently, Reinforcement Learning (RL) has achieved promising results in causal discovery from observational data. However, searching the space of directed graphs and enforcing acyclicity by implicit penalties tend to be inefficient, restricting the existing RL-based method to small-scale problems. In this work, we propose a novel RL-based approach for causal discovery by incorporating RL into the ordering-based paradigm. Specifically, we formulate the ordering search problem as a multi-step Markov decision process, implement the ordering-generating process with an encoder-decoder architecture, and finally use RL to optimize the proposed model based on the reward mechanisms designed for each ordering. A generated ordering is then processed using variable selection to obtain the final causal graph. We analyze the consistency and computational complexity of the proposed method, and empirically show that a pretrained model can be exploited to accelerate training. Experimental results on both synthetic and real data sets show that the proposed method achieves much improved performance over the existing RL-based method.
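To make the pipeline concrete, the following is a minimal Python sketch of the two ingredients the abstract describes: scoring a candidate ordering (the quantity an RL reward could be built on) and pruning an ordering into a causal graph via variable selection. It assumes a linear model with equal noise variances and uses a least-squares score with LassoCV pruning as illustrative stand-ins; the function names are hypothetical, and this is not the authors' implementation, which generates orderings with an encoder-decoder policy trained by RL.

import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def ordering_score(X, order):
    # Regress each variable on its predecessors in the ordering and sum the
    # residual variances; under an equal-noise-variance linear model, the
    # true causal ordering minimizes this total (assumption for this sketch).
    # Negated so that higher is better, as an RL reward would be.
    total = 0.0
    for i, v in enumerate(order):
        preds = list(order[:i])
        if preds:
            fit = LinearRegression().fit(X[:, preds], X[:, v])
            resid = X[:, v] - fit.predict(X[:, preds])
        else:
            resid = X[:, v] - X[:, v].mean()
        total += resid.var()
    return -total

def prune_to_graph(X, order, tol=0.05):
    # Variable selection step: for each variable, keep only the predecessors
    # with a non-negligible Lasso coefficient (tol is a heuristic threshold).
    # Returns an adjacency matrix with adj[p, v] = 1 encoding the edge p -> v.
    d = X.shape[1]
    adj = np.zeros((d, d), dtype=int)
    for i, v in enumerate(order):
        preds = list(order[:i])
        if not preds:
            continue
        coef = LassoCV(cv=5).fit(X[:, preds], X[:, v]).coef_
        for p, c in zip(preds, coef):
            if abs(c) > tol:
                adj[p, v] = 1
    return adj

# Toy check on a known chain X0 -> X1 -> X2.
rng = np.random.default_rng(0)
x0 = rng.normal(size=1000)
x1 = 2.0 * x0 + rng.normal(size=1000)
x2 = -1.5 * x1 + rng.normal(size=1000)
X = np.column_stack([x0, x1, x2])
print(ordering_score(X, [0, 1, 2]) > ordering_score(X, [2, 1, 0]))  # True
print(prune_to_graph(X, [0, 1, 2]))  # recovers edges 0 -> 1 and 1 -> 2

Because every retained edge points from an earlier to a later variable in the ordering, the resulting graph is acyclic by construction, which is precisely what lets ordering-based search avoid the implicit acyclicity penalties the abstract identifies as a bottleneck.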
Pages: 3566-3573 (8 pages)