When Can We Ignore Missing Data in Model Training?

Cited by: 0
Authors
Zhen, Cheng [1 ]
Chabada, Amandeep Singh [1 ]
Termehchy, Arash [1 ]
Affiliation
[1] Oregon State Univ, Corvallis, OR 97331 USA
Keywords
data cleaning; machine learning; irrelevant and redundant data;
DOI
10.1145/3595360.3595854
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Imputing missing data is typically expensive, so practitioners seek to avoid it when possible. To address this issue, we introduce a method that determines when data cleaning is unnecessary for machine learning (ML): if a model can minimize the loss function regardless of the actual values of the missing data, then data cleaning is not required. We offer efficient algorithms for checking this condition in multiple ML problems, and by analyzing these algorithms, we show that data cleaning is unnecessary when dealing with irrelevant or redundant data. Our preliminary experiments demonstrate that our algorithms can significantly reduce cleaning costs compared to a benchmark method, in many cases without incurring much computational overhead.
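As a rough illustration of the condition described in the abstract (not the paper's actual algorithm), the sketch below checks, for ordinary least squares, whether the minimal training loss is unaffected by how the missing entries are filled; if very different imputations all yield the same attainable optimum, cleaning is likely unnecessary for that training task. The helper names (min_loss_with_imputation, cleaning_seems_unnecessary), the candidate fill values, and the tolerance are assumptions made for this example.

```python
# Illustrative sketch only -- NOT the algorithm from the paper. It mimics the
# abstract's condition for ordinary least squares: if the minimal training loss
# is the same no matter what values the missing entries take (e.g., because the
# affected feature is irrelevant or redundant), imputation can be skipped.
import numpy as np
from numpy.linalg import lstsq

def min_loss_with_imputation(X, y, miss_mask, fill_value):
    """Fill missing entries with `fill_value` and return the minimal OLS loss."""
    Xf = X.copy()
    Xf[miss_mask] = fill_value
    w, *_ = lstsq(Xf, y, rcond=None)
    residual = Xf @ w - y
    return float(residual @ residual)

def cleaning_seems_unnecessary(X, y, miss_mask, candidate_fills=(-1e3, 0.0, 1e3), tol=1e-6):
    """Heuristic check: if the minimal loss is numerically identical across very
    different imputations, the missing values do not affect the attainable
    optimum, so cleaning is likely unnecessary for this training task."""
    losses = [min_loss_with_imputation(X, y, miss_mask, v) for v in candidate_fills]
    return max(losses) - min(losses) < tol

# Toy example: the third column is ignored by the target (irrelevant), so missing
# entries in it do not change the minimal training loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1]      # target depends only on columns 0 and 1
miss_mask = np.zeros_like(X, dtype=bool)
miss_mask[:10, 2] = True               # 10 missing entries in the irrelevant column
print(cleaning_seems_unnecessary(X, y, miss_mask))   # expected: True
```

Comparing a handful of extreme fills is only a heuristic stand-in; the paper's algorithms verify the condition efficiently rather than by sampling imputations.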
Pages: 4