Explainable artificial intelligence for reliable water demand forecasting to increase trust in predictions

Cited by: 0
Authors
Maußner, Claudia [1]
Oberascher, Martin [2 ]
Autengruber, Arnold [3 ]
Kahl, Arno [3 ]
Sitzenfrei, Robert [2 ]
Affiliations
[1] Fraunhofer Austria Research GmbH KI4LIFE, Lakeside B13a, 9020 Klagenfurt am Wörthersee, Austria
[2] Unit of Environmental Engineering, Department of Infrastructure Engineering, University of Innsbruck, Technikerstraße 13, 6020 Innsbruck, Austria
[3] Department for Public Law, Constitutional and Administrative Theory, University of Innsbruck, Innrain 52d, 6020 Innsbruck, Austria
Keywords
Prediction models;
DOI
10.1016/j.watres.2024.122779
Abstract
The EU Artificial Intelligence Act sets a framework for the implementation of artificial intelligence (AI) in Europe. As a legal assessment reveals, AI applications in water supply systems are categorised as high-risk AI if a failure in the AI application results in a significant impact on physical infrastructure or supply reliability. The use case of AI-based water demand forecasting for automatic tank operation, for example, is categorised as high-risk AI and must fulfil specific requirements regarding model transparency (traceability, explainability) and technical robustness (accuracy, reliability). To this end, six widely established machine learning models, including both transparent and opaque models, are applied to different datasets for daily water demand forecasting, and the requirements regarding model accuracy, transparency and technical robustness are systematically evaluated for this use case. Opaque models generally achieve higher prediction accuracy than transparent models due to their ability to capture the complex relationship between parameters such as weather data and water demand. However, this also makes them vulnerable to deviations and irregularities in weather forecasts and historical water demand. In contrast, transparent models rely mainly on historical water demand data for the utilised dataset and are less influenced by weather data, making them more robust against various data irregularities. In summary, both transparent and opaque models can fulfil the requirements regarding explainability but differ in their level of transparency and robustness to input errors. The choice of model also depends on the operator's preferences and the context of the application. © 2024
Related papers
50 in total
  • [1] Can we trust explainable artificial intelligence in wind power forecasting?
    Liao, Wenlong
    Fang, Jiannong
    Ye, Lin
    Bak-Jensen, Birgitte
    Yang, Zhe
    Porte-Agel, Fernando
    APPLIED ENERGY, 2024, 376
  • [2] From Explainable to Reliable Artificial Intelligence
    Narteni, Sara
    Ferretti, Melissa
    Orani, Vanessa
    Vaccari, Ivan
    Cambiaso, Enrico
    Mongelli, Maurizio
    MACHINE LEARNING AND KNOWLEDGE EXTRACTION (CD-MAKE 2021), 2021, 12844 : 255 - 273
  • [3] THE JUDICIAL DEMAND FOR EXPLAINABLE ARTIFICIAL INTELLIGENCE
    Deeks, Ashley
    COLUMBIA LAW REVIEW, 2019, 119 (07) : 1829 - 1850
  • [4] Analysis of driving factors of water demand based on explainable artificial intelligence
    Ou, Zhigang
    He, Fan
    Zhu, Yongnan
    Lu, Peiyi
    Wang, Lichuan
    JOURNAL OF HYDROLOGY-REGIONAL STUDIES, 2023, 47
  • [5] Optimisation of water demand forecasting by artificial intelligence with short data sets
    Gonzalez Perea, Rafael
    Camacho Poyato, Emilio
    Montesinos, Pilar
    Rodriguez Diaz, Juan Antonio
    BIOSYSTEMS ENGINEERING, 2019, 177 : 59 - 66
  • [6] Artificial intelligence for water-energy nexus demand forecasting: a review
    Alhendi, Alya A.
    Al-Sumaiti, Ameena S.
    Elmay, Feruz K.
    Wescaot, James
    Kavousi-Fard, Abdollah
    Heydarian-Forushani, Ehsan
    Alhelou, Hassan Haes
    INTERNATIONAL JOURNAL OF LOW-CARBON TECHNOLOGIES, 2022, 17 : 730 - 744
  • [7] Explainable artificial intelligence as a reliable annotator of archaeal promoter regions
    Martinez, Gustavo Sganzerla
    Perez-Rueda, Ernesto
    Kumar, Aditya
    Sarkar, Sharmilee
    Silva, Scheila de Avila e
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [8] Examining Correlation Between Trust and Transparency with Explainable Artificial Intelligence
    Kartikeya, Arnav
    INTELLIGENT COMPUTING, VOL 2, 2022, 507 : 353 - 358
  • [9] Healthcare Trust Evolution with Explainable Artificial Intelligence: Bibliometric Analysis
    Dhiman, Pummy
    Bonkra, Anupam
    Kaur, Amandeep
    Gulzar, Yonis
    Hamid, Yasir
    Mir, Mohammad Shuaib
    Soomro, Arjumand Bano
    Elwasila, Osman
    INFORMATION, 2023, 14 (10)