Health-AI | Human collaboration with AI agents in national health governance: organizational circumstances under which data analysts and medical experts follow or deviate from AI.

Summary
This project is a multi-sited ethnography of a currently unfolding revolution in global health systems: big data/AI-informed national health governance. With health data considered countries' 'future oil', public and scholarly concerns about 'algorithmic ethics' are rising. Research has long shown that the datasets behind AI (re)produce social biases, discriminate and limit personal autonomy. This literature, however, has largely focused on AI design and institutional frameworks, examining the subject through legal, technocratic and philosophical lenses while overlooking the socio-cultural context in which big data and AI systems are embedded, most particularly the organizations in which human agents collaborate with AI. This is problematic, as frameworks for 'ethical AI' currently treat human oversight as crucial, assuming that humans will correct or resist AI when needed, yet the empirical evidence for this assumption is extremely thin. Very little is known about when and why people intervene in or resist AI, and existing research consists of single, mostly Western case studies, making it impossible to generalize findings. The innovative force of our research is fourfold: 1) to empirically analyze decisive moments in which data analysts follow or deviate from AI, moments that deeply impact national health policies and individual human lives; 2) to conduct research in six national settings with varying governmental frameworks and in different organizational contexts, enabling us to contrast findings and ultimately develop a theory of the contextual, organizational factors underlying ethical AI; 3) to use innovative anthropological methods of future-scenarioing, which will enrich the anthropological discipline by developing and fine-tuning future-focused research; and 4) to connect anthropological insights with the expertise of AI developers and to partner with relevant health decision-makers and policy institutions, allowing us both to analyze and to contribute to fair AI.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101077251
Start date: 01-06-2023
End date: 31-05-2028
Total budget: EUR 1 499 961,00 (public funding: EUR 1 499 961,00)
Cordis data

Status: SIGNED
Call topic: ERC-2022-STG
Update date: 31-07-2023
Structured mapping
EU-Programme-Call
  Horizon Europe
    HORIZON.1 Excellent Science
      HORIZON.1.1 European Research Council (ERC)
        HORIZON.1.1.0 Cross-cutting call topics
          ERC-2022-STG ERC STARTING GRANTS
        HORIZON.1.1.1 Frontier science
          ERC-2022-STG ERC STARTING GRANTS