Characterizing and Improving the Robustness of Predict-Then-Optimize Frameworks

Cited by: 0
Authors
Johnson-Yu, Sonja [1 ]
Finocchiaro, Jessie [1 ,2 ]
Wang, Kai [3 ]
Vorobeychik, Yevgeniy [5 ]
Sinha, Arunesh [6 ]
Taneja, Aparna [4 ]
Tambe, Milind [1 ,2 ,4 ]
Affiliations
[1] Harvard Univ, Cambridge, MA 02138 USA
[2] Ctr Res Computat & Soc, Boston, MA USA
[3] MIT, Cambridge, MA USA
[4] Google Res India, Bangalore, Karnataka, India
[5] Washington Univ, St Louis, MO USA
[6] Rutgers State Univ, Newark, NJ USA
Funding
National Science Foundation (USA);
Keywords
predict-then-optimize; adversarial label drift; decision-focused learning;
DOI
10.1007/978-3-031-50670-3_7
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Optimization tasks situated in incomplete information settings are often preceded by a prediction problem to estimate the missing information; past work shows the traditional predict-then-optimize (PTO) framework can be improved by training a predictive model with respect to the optimization task through a PTO paradigm called decision-focused learning. Little is known, however, about the performance of traditional PTO and decision-focused learning when exposed to adversarial label drift. We provide modifications of traditional PTO and decision-focused learning that attempt to improve robustness by anticipating label drift. When the predictive model is perfectly expressive, we cast these learning problems as Stackelberg games. With these games, we provide a necessary condition for when anticipating label drift can improve the performance of a PTO algorithm: if performance can be improved, then the downstream optimization objective must be asymmetric. We then bound the loss of decision quality in the presence of adversarial label drift to show there may exist a strict gap between the performance of the two algorithms. We verify our theoretical findings empirically in two asymmetric and two symmetric settings. These experimental results demonstrate that robustified decision-focused learning is generally more robust to adversarial label drift than both robust and traditional PTO.
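The contrast the abstract draws between traditional predict-then-optimize and decision-focused learning can be illustrated with a minimal toy sketch. Everything below (the item-selection task, the linear predictor, the softmax smoothing, and all names) is an illustrative assumption, not the paper's actual experimental setup: a predictor estimates per-item values, the downstream optimizer picks the highest-value item, and decision-focused learning trains the predictor through a smoothed version of that discrete choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance (illustrative only): features X predict per-item values Y;
# the downstream optimization picks one of k items to maximize true value.
n, d, k = 200, 5, 3
X = rng.normal(size=(n, d, k))                       # per-item features
w_true = rng.normal(size=d)                          # ground-truth predictor
Y = np.einsum("ndk,d->nk", X, w_true) + 0.1 * rng.normal(size=(n, k))

def decision_quality(w):
    """Mean true value of the item chosen by argmax of predictions."""
    preds = np.einsum("ndk,d->nk", X, w)
    return Y[np.arange(n), preds.argmax(axis=1)].mean()

# (1) Traditional PTO: fit the predictor by least squares, ignoring the task.
A = X.transpose(0, 2, 1).reshape(-1, d)              # one row per (instance, item)
w_pto, *_ = np.linalg.lstsq(A, Y.reshape(-1), rcond=None)

# (2) Decision-focused learning: ascend a softmax-smoothed surrogate of
# decision quality so gradients flow through the discrete argmax.
def surrogate_grad(w, tau=0.1):
    preds = np.einsum("ndk,d->nk", X, w)
    p = np.exp((preds - preds.max(axis=1, keepdims=True)) / tau)
    p /= p.sum(axis=1, keepdims=True)                # softmax choice probabilities
    coeff = p * (Y - (p * Y).sum(axis=1, keepdims=True)) / tau
    return np.einsum("nk,ndk->nd", coeff, X).mean(axis=0)

w_dfl = w_pto.copy()
for _ in range(100):
    w_dfl += 0.05 * surrogate_grad(w_dfl)

oracle = Y.max(axis=1).mean()                        # best achievable value
print(f"oracle={oracle:.3f}  PTO={decision_quality(w_pto):.3f}  "
      f"DFL={decision_quality(w_dfl):.3f}")
```

Because this toy predictor is well specified, the two paradigms end up close here; the paper's point is about what happens under adversarial label drift, where the robustified decision-focused variant can retain a strict advantage when the downstream objective is asymmetric.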
Pages: 133-152 (20 pages)