Counterfactual Explanation and Causal Inference In Service of Robustness in Robot Control

Cited by: 3
Authors
Smith, Simon C. [1 ]
Ramamoorthy, Subramanian [1 ]
Affiliations
[1] Univ Edinburgh, Inst Percept Act & Behav, Sch Informat, Edinburgh, Midlothian, Scotland
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK
Keywords
counterfactual conditionals; causal inference; model explainability; state envisioning; controller robustness; DESIGN
DOI
10.1109/icdl-epirob48136.2020.9278061
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We propose an architecture for training generative models of counterfactual conditionals of the form 'can we modify event A to cause B instead of C?', motivated by applications in robot control. Using an adversarial training paradigm, an image-based deep neural network model is trained to produce small, realistic modifications to an original image that cause user-defined effects. These modifications can be used in the design of image-based robust control: they indicate whether the controller can be returned to a working regime by modifications in the input space rather than by adaptation. In contrast to conventional control design, where robustness is quantified as the ability to reject noise, we explore the space of counterfactuals that might cause a given requirement to be violated, proposing an alternative model of robustness that may be more expressive in certain robotics applications. We thus propose counterfactual generation both as an approach to explaining black-box models and as a way of envisioning potential movement paths in autonomous robot control. First, we demonstrate the approach on classification tasks using the well-known MNIST and CelebFaces Attributes (CelebA) datasets. Then, addressing multi-dimensional regression, we demonstrate it on a reaching task with a physical robot and on a navigation task with a robot in a digital-twin simulation.
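As an illustration of the kind of counterfactual generation the abstract describes, the following is a minimal sketch, not the authors' implementation: it assumes PyTorch, a pretrained classifier, and a 28x28 grayscale input (e.g. MNIST), and it optimises a single perturbation directly rather than training a generative model adversarially as the paper does. The two loss terms mirror the abstract's requirements: the modified image should cause the user-defined effect (the target class), and the modification should stay small.

    # Hypothetical sketch: gradient-based counterfactual for a pretrained classifier.
    import torch
    import torch.nn as nn

    def generate_counterfactual(classifier: nn.Module,
                                x: torch.Tensor,          # original image, shape (1, 1, 28, 28)
                                target_class: int,        # user-defined effect ("B instead of C")
                                steps: int = 200,
                                lr: float = 0.05,
                                sparsity_weight: float = 0.1) -> torch.Tensor:
        """Return a modified image x + delta that the classifier assigns to target_class."""
        classifier.eval()
        delta = torch.zeros_like(x, requires_grad=True)   # the counterfactual modification
        optimizer = torch.optim.Adam([delta], lr=lr)
        target = torch.tensor([target_class])
        ce = nn.CrossEntropyLoss()

        for _ in range(steps):
            optimizer.zero_grad()
            x_cf = torch.clamp(x + delta, 0.0, 1.0)       # keep pixel values in a valid range
            logits = classifier(x_cf)
            # Term 1: push the prediction to the target class (the desired effect).
            # Term 2: penalise the size of the modification, keeping it close to x.
            loss = ce(logits, target) + sparsity_weight * delta.abs().mean()
            loss.backward()
            optimizer.step()

        return torch.clamp(x + delta, 0.0, 1.0).detach()

In a control setting, the returned image can be compared with the original to judge whether a small input-space change would bring the controller back into its working regime, which is the robustness question posed above.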
Pages: 8