DQN-SCI: A Reinforcement Learning Method for Sequential Causal Inference

Cited by: 0
Authors
Tian, Enqi [1]
Lyu, Shengfei [2]
Chen, Huanhuan [1]
Liu, Lei [1,3]
Li, Bin [1]
Affiliations
[1] Univ Sci & Technol China, Hefei 230027, Peoples R China
[2] Nanyang Technol Univ, Alibaba NTU Global E Sustainabil CorpLab, Singapore 639798, Singapore
[3] Lab Big Data & Decis, Changsha 410037, Peoples R China
Keywords
Sequential Causal Inference; Feature Selection; Active Feature Acquisition; Deep Q-network
DOI
10.1109/BIGDIA63733.2024.10808647
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Causal inference is crucial in decision-making because it estimates the effects of interventions or treatments on key variables. Current causal inference methods focus on estimating causal effects from fully measured features. This assumption rarely holds in real-world scenarios, where the relevant features must often first be selected and measured before estimation can be performed. To address this challenge, in this paper we introduce a new task called sequential causal inference and propose an approach named deep Q-network for sequential causal inference (DQN-SCI). DQN-SCI adopts a 'decider-inferencer' framework, in which the decider first selects valuable features for the subsequent inferencer. DQN-SCI outperforms the compared methods on a synthetic dataset, demonstrating its effectiveness.
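The abstract only outlines the decider-inferencer split, so the following is a minimal toy sketch of the acquisition loop, not the paper's method: the `inferencer` is a fixed linear model, `decider_q` is a hand-coded usefulness score standing in for a trained deep Q-network, and the feature budget and synthetic data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, budget = 6, 3

# Synthetic setting: the individual treatment effect depends only on
# features 0 and 1; the remaining features are irrelevant.
X = rng.normal(size=(200, n_features))
tau = 2.0 * X[:, 0] - 1.0 * X[:, 1]

def inferencer(x, mask):
    """Estimate the effect from the acquired features only (hypothetical
    stand-in: a fixed linear model; the paper's inferencer is learned)."""
    coef = np.array([2.0, -1.0, 0.0, 0.0, 0.0, 0.0])
    return float(coef[mask] @ x[mask])

def decider_q(mask):
    """Hypothetical Q-values: a hand-coded usefulness score per feature.
    In DQN-SCI the decider is a deep Q-network trained from rewards."""
    usefulness = np.array([2.0, 1.0, 0.01, 0.01, 0.01, 0.01])
    return np.where(mask, -np.inf, usefulness)  # never re-acquire a feature

errors = []
for x, t in zip(X, tau):
    mask = np.zeros(n_features, dtype=bool)
    for _ in range(budget):                     # sequential acquisition loop
        action = int(np.argmax(decider_q(mask)))
        mask[action] = True                     # "measure" the chosen feature
    errors.append((inferencer(x, mask) - t) ** 2)

print(round(float(np.mean(errors)), 6))         # prints 0.0 in this toy setting
```

Because the hand-coded decider ranks the two causally relevant features highest, they are acquired first and the estimate matches the true effect exactly; a trained DQN would have to learn such a ranking from acquisition rewards.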
Pages: 777-784
Page count: 8