Algorithm Fairness Through Data Inclusion, Participation, and Reciprocity

Cited by: 4
Author
Akintande, Olalekan J. [1 ]
Affiliation
[1] Univ Ibadan, Ibadan 20005, Nigeria
Keywords
AI fairness; Inclusion; Participation; Cross-validation
DOI
10.1007/978-3-030-73200-4_50
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning algorithms have become the basis of decision making and the modern tool of assessment in all spheres of human endeavour. Consequently, several competing arguments about the reliability of learning algorithms persist in the global AI debate, owing to concerns about algorithmic biases — such as data-inclusiveness bias, homogeneity assumptions in data structuring, and coding bias — that result from human-imposed bias and variance, among other factors. Recent evidence (e.g., computer-vision systems misclassifying people of colour, and failures in face recognition) shows that these concerns are well founded. Evidence suggests that algorithmic bias is typically introduced into a learning algorithm during the assemblage of a dataset: how the data are collected, digitized, structured, adapted, and entered into a database according to human-designed cataloguing criteria. Therefore, addressing algorithmic fairness, bias, and variance in artificial intelligence implies addressing training-set bias. We propose a framework of data inclusiveness, participation, and reciprocity.
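The abstract's central claim — that bias enters during dataset assemblage, so an under-inclusive training set yields an unfair model — can be illustrated with a minimal, hypothetical sketch (not code from the paper). The groups, feature distributions, and the simple threshold classifier below are all invented for illustration: a model fitted on a training set dominated by group "A" performs far worse on an equally legitimate group "B".

```python
# Hypothetical sketch: training-set under-inclusion producing unequal
# group error rates. All distributions and names are illustrative only.
import random

random.seed(0)

def make_samples(group, n):
    """Generate (feature, group, label) triples.

    Both groups share the same labels, but group "B"'s feature values
    are shifted — a stand-in for any systematic between-group difference
    the data-collection process failed to represent.
    """
    samples = []
    for _ in range(n):
        label = random.randint(0, 1)
        centre = 1.0 if label else -1.0
        shift = 0.0 if group == "A" else 2.0  # group B is distributed differently
        x = centre + shift + random.gauss(0, 0.5)
        samples.append((x, group, label))
    return samples

# Non-inclusive training set: 95% group A, 5% group B.
train = make_samples("A", 190) + make_samples("B", 10)

def error(th, data):
    """Fraction of samples misclassified by the rule: predict 1 iff x > th."""
    return sum((x > th) != lbl for x, _, lbl in data) / len(data)

# Fit a one-dimensional threshold classifier by minimising training error;
# the fit is dominated by the over-represented group A.
candidates = sorted(x for x, _, _ in train)
threshold = min(candidates, key=lambda th: error(th, train))

# A balanced test set reveals the disparity hidden by aggregate accuracy.
err_a = error(threshold, make_samples("A", 500))
err_b = error(threshold, make_samples("B", 500))
```

Under these assumed distributions, `err_b` is far larger than `err_a`: the learned threshold suits group A's feature range and systematically mislabels one of group B's classes — the kind of disparity the paper's inclusiveness framework is meant to prevent at data-assembly time.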
Pages: 633-637 (5 pages)