‘If You’re Going to Trust the Machine, Then That Trust Has Got to Be Based on Something’:

Validation and the Co-Constitution of Trust in Developing Artificial Intelligence (AI) for the Early Diagnosis of Pulmonary Hypertension (PH)

Authors

Abstract

The role of Artificial Intelligence (AI) in clinical decision-making raises issues of trust. One issue concerns the conditions for trusting the AI, which tend to be based on validation. However, little attention has been given to how validation is formed, how comparisons come to be accepted, and how AI algorithms are trusted in decision-making. Drawing on interviews with collaborative researchers developing three AI technologies for the early diagnosis of pulmonary hypertension (PH), we show how validation of the AI is jointly produced, so that trust in the algorithm is built up through the negotiation of criteria and terms of comparison during interactions. These processes build up interpretability and interrogation, and co-constitute trust in the technology. As they do so, it becomes difficult to sustain a strict distinction between artificial and human/social intelligence.

Section
Research Papers

Published

2022-03-22 — Updated on 2022-12-15

How to Cite

Winter, P. and Carusi, A. (2022) “‘If You’re Going to Trust the Machine, Then That Trust Has Got to Be Based on Something’: Validation and the Co-Constitution of Trust in Developing Artificial Intelligence (AI) for the Early Diagnosis of Pulmonary Hypertension (PH)”, Science & Technology Studies, 35(4), pp. 58–77. doi: 10.23987/sts.102198.