REPORT-IT | Self-interpretability of human cognition: How reportable knowledge emerges in learning

Summary
Current artificial intelligence (AI) surpasses human-level performance on a vast range of tasks. However, its decision processes are opaque, a limitation known as the AI interpretability problem. Humans, on the other hand, can verbally describe their decision processes and strategies. The accuracy of these reports varies, especially in complex environments; yet people often produce reasonably accurate explanations for their decisions, enabling knowledge transfer in society. The mechanisms by which accurate verbal reports are generated, however, remain unclear. The main research objective of the REPORT-IT project is therefore to study how humans generate adequate reportable knowledge while learning through experience. Inspired by recent findings from research on metacognition (i.e., insight into one's own cognition) and cognition-emotion interaction, I will test the novel hypothesis that metacognition and learning-related affect support the emergence of reportable knowledge. In two experiments modeling complex learning environments (an implicit category learning task and a probabilistic reward learning task), I will track the development of metacognition, affect, and reportable knowledge over time. This will allow me to evaluate the temporal relationships between these components and to predict the emergence of reportable knowledge. In the final step of the project, I will study the behavior of deep neural networks (DNNs) in the same tasks and test whether DNNs can generate the temporal patterns of metacognition and affect observed in humans. REPORT-IT thus combines my expertise in implicit learning and affective science with the host institute's (University of Amsterdam) expertise in the neuroscience of consciousness and DNNs. In this way, REPORT-IT will contribute to understanding how people generate reportable knowledge while also providing new approaches for explainable AI.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101107101
Start date: 01-02-2024
End date: 31-07-2026
Total budget / Public funding: 203 464,00 EUR
Cordis data

Status

SIGNED

Call topic

HORIZON-MSCA-2022-PF-01-01

Update Date

31-07-2023
Geographical location(s)
Structured mapping
EU-Programme-Call
Horizon Europe
HORIZON.1 Excellent Science
HORIZON.1.2 Marie Skłodowska-Curie Actions (MSCA)
HORIZON.1.2.0 Cross-cutting call topics
HORIZON-MSCA-2022-PF-01
HORIZON-MSCA-2022-PF-01-01 MSCA Postdoctoral Fellowships 2022