Explainable machine learning for project management control

Cited by: 8
Authors
Santos, Jose Ignacio [1]
Pereda, Maria [2,3]
Ahedo, Virginia [1]
Galan, Jose Manuel [1]
Affiliations
[1] Univ Burgos, Escuela Politecn Super, Dept Ingn Org, Ave Cantabria S-N, Burgos 09006, Spain
[2] Univ Politecn Madrid, Escuela Tecn Super Ingn Ind, Dept Ingn Org Adm empresas & Estadist, Grp Invest Ingn Org & Logist IOL, C Jose Gutierrez Abascal 2, Madrid 28006, Spain
[3] Grp Interdisciplinar Sistemas Complejos GISC, Madrid, Spain
Keywords
Project management; Stochastic project control; Earned value management; Shapley values; Explainable machine learning; SHAP; EARNED VALUE MANAGEMENT; TOLERANCE LIMITS; RISK ANALYSIS; DURATION; PERFORMANCE; COST; CLASSIFICATIONS; UNCERTAINTY; REGRESSION; EXTENSION;
DOI
10.1016/j.cie.2023.109261
CLC classification number
TP39 [Computer Applications];
Discipline classification codes
081203; 0835
Abstract
Project control is a crucial phase within project management aimed at ensuring, in an integrated manner, that the project objectives are met according to plan. Earned Value Management, along with its various refinements, is the most popular and widespread method for top-down project control. For project control under uncertainty, Monte Carlo simulation and statistical/machine learning models extend the earned value framework by allowing the analysis of deviations and of expected times and costs during project progress. Recent advances in explainable machine learning, in particular attribution methods based on Shapley values, can be used to link project control to activity properties, facilitating the interpretation of the interrelations between activity characteristics and control objectives. This work proposes a new methodology that adds an explainability layer based on SHAP (SHapley Additive exPlanations) to different machine learning models fitted to Monte Carlo simulations of the project network at tracking control points. Specifically, our method allows for both prospective and retrospective analyses, which have different utilities: forward analysis helps to identify key relationships between the different tasks and the desired outcomes, and is thus useful for making execution/replanning decisions; backward analysis serves to identify the causes of the project status during project progress. Furthermore, the method is general and model-agnostic, and provides quantifiable and easily interpretable information, hence constituting a valuable tool for project control in uncertain environments.
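The workflow outlined in the abstract (Monte Carlo simulation of the project network at a control point, a machine learning model fitted to the simulated outcomes, and SHAP attributions on that model) can be illustrated with a minimal sketch. The four-activity network, the triangular duration distributions, and the surrogate GradientBoostingRegressor below are illustrative assumptions, not the authors' implementation; only the general pattern of simulate, fit, and explain with the shap library follows the abstract.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    import shap

    rng = np.random.default_rng(42)
    n_sim = 5000

    # Hypothetical four-activity network: A and B run in parallel, then C, then D.
    durations = {
        "A": rng.triangular(2, 4, 9, n_sim),
        "B": rng.triangular(3, 5, 7, n_sim),
        "C": rng.triangular(1, 2, 6, n_sim),
        "D": rng.triangular(2, 3, 5, n_sim),
    }
    X = np.column_stack(list(durations.values()))
    # Simulated project duration: the longer of A and B, followed by C and D in series.
    y = np.maximum(durations["A"], durations["B"]) + durations["C"] + durations["D"]

    # Surrogate model of the simulated project outcome at the control point.
    model = GradientBoostingRegressor(random_state=0).fit(X, y)

    # SHAP attributes each simulated outcome to the individual activity durations.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Mean absolute SHAP value per activity: a global measure of its influence
    # on the project duration, usable prospectively at the control point.
    for name, importance in zip(durations, np.abs(shap_values).mean(axis=0)):
        print(f"{name}: {importance:.3f}")

In this sketch, high mean absolute SHAP values flag the activities whose duration variability most drives the project outcome; per-simulation SHAP values could instead be inspected retrospectively to explain why a given realization deviated from plan.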
Pages: 20