Algorithmic Transparency and Participation through the Handoff Lens: Lessons Learned from the US Census Bureau's Adoption of Differential Privacy
Cited by: 0
Authors: Abdu, Amina A. [1]; Chambers, Lauren M. [2]; Mulligan, Deirdre K. [2]; Jacobs, Abigail Z. [1]
Affiliations:
[1] Univ Michigan, Ann Arbor, MI 48109 USA
[2] Univ Calif Berkeley, Berkeley, CA 94720 USA
Source: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24)
Funding: US National Science Foundation
Keywords: critical transparency studies; participatory AI; differential privacy; census; boundary objects; accountability; governance
DOI: 10.1145/3630106.3658962
Chinese Library Classification (CLC): TP18 [Theory of Artificial Intelligence]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract:
Emerging discussions on the responsible government use of algorithmic technologies propose transparency and public participation as key mechanisms for preserving accountability and trust. But in practice, the adoption and use of any technology shifts the social, organizational, and political context in which it is embedded. Therefore, translating transparency and participation efforts into meaningful, effective accountability must take these shifts into account. We adopt two theoretical frames, Mulligan and Nissenbaum's handoff model and Star and Griesemer's boundary objects, to reveal such shifts during the U.S. Census Bureau's adoption of differential privacy (DP) in its updated disclosure avoidance system (DAS) for the 2020 census. This update preserved (and arguably strengthened) the confidentiality protections that the Bureau is mandated to uphold, and the Bureau engaged in a range of activities to facilitate public understanding of and participation in the system design process. Using publicly available documents concerning the Census Bureau's implementation of DP, this case study seeks to expand our understanding of how technical shifts implicate values, how such shifts can afford (or fail to afford) greater transparency and participation in system design, and the importance of localized expertise throughout. We present three lessons from this case study toward grounding understandings of algorithmic transparency and participation: (1) efforts toward transparency and participation in algorithmic governance must center values and policy decisions, not just technical design decisions; (2) the handoff model is a useful tool for revealing how such values may be cloaked beneath technical decisions; and (3) boundary objects alone cannot bridge distant communities without trusted experts traveling alongside to broker their adoption.
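For readers unfamiliar with differential privacy, the sketch below illustrates the core idea behind the abstract's references to DP: a statistic is released with calibrated random noise so that any single respondent's presence or absence has a strictly bounded effect on the output. This is a minimal Python illustration of the Laplace mechanism, not the Bureau's production 2020 DAS (which is publicly documented as using the TopDown Algorithm with discrete Gaussian noise); the function name, the example count, and the epsilon values here are hypothetical.

import numpy as np

def laplace_count(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    # Laplace mechanism: adding or removing one respondent changes a simple
    # count by at most `sensitivity`, so noise with scale sensitivity / epsilon
    # gives epsilon-differential privacy for this single release.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical block-level population count released at two privacy-loss budgets.
for eps in (0.1, 1.0):
    print(f"epsilon={eps}: noisy count = {laplace_count(137, eps):.1f}")

Smaller epsilon values inject more noise and give stronger privacy; as the abstract argues, how that accuracy/privacy trade-off is set, and by whom, is a values and policy decision rather than a purely technical one.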
Pages: 1150-1162
Number of pages: 13