As artificial intelligence systems become increasingly prevalent in high-stakes healthcare settings, concerns are growing about risks such as hallucinations, biased predictions, and unexpected failures. In response, two prominent researchers have proposed a novel solution: labelling AI systems with detailed usage information, mirroring the approach used for prescription drugs.
In a commentary published in Nature Computational Science, MIT's Marzyeh Ghassemi and Boston University's Elaine Nsoesie argue that such labels could help mitigate potential harms associated with AI deployment in healthcare.
The proposed labels would include critical information such as the following (a sketch of one possible schema appears after the list):
1. Time of model training and data collection
2. Place of data origin and model optimisation
3. Manner of intended use
4. Potential side effects and biases
5. Warnings and precautions
6. Approved and unapproved usage scenarios
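To make the proposal concrete, here is a minimal sketch of what such a label might look like as a structured record. The schema and the example model are hypothetical: the commentary describes the categories of information a label should carry, not a machine-readable format.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical schema for an AI "usage label"; field names are
# assumptions mapped onto the six categories listed above.
@dataclass
class ModelLabel:
    model_name: str
    trained_on: date                    # 1. time of model training
    data_collected: tuple[date, date]   # 1. data-collection window
    data_origin: str                    # 2. place of data origin
    optimised_for: str                  # 2. setting the model was optimised for
    intended_use: str                   # 3. manner of intended use
    side_effects: list[str] = field(default_factory=list)     # 4. known biases
    warnings: list[str] = field(default_factory=list)         # 5. precautions
    approved_uses: list[str] = field(default_factory=list)    # 6. on-label uses
    unapproved_uses: list[str] = field(default_factory=list)  # 6. off-label uses

# Example label for a fictional sepsis-risk model.
label = ModelLabel(
    model_name="sepsis-risk-v2",
    trained_on=date(2023, 6, 1),
    data_collected=(date(2015, 1, 1), date(2022, 12, 31)),
    data_origin="ICU records from three US academic hospitals",
    optimised_for="adult ICU inpatients",
    intended_use="decision support alongside clinician review",
    side_effects=["under-alerts for patients with sparse vitals data"],
    warnings=["not validated on paediatric populations"],
    approved_uses=["adult ICU risk stratification"],
    unapproved_uses=["triage without clinician oversight"],
)
```

One advantage of a structured record over free text is that a deployment pipeline could check a proposed use against the approved and unapproved lists automatically, much as pharmacy systems flag off-label prescribing.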
The researchers argue that such labels would not only inform users but also encourage developers to weigh these factors more carefully during development. Regarding implementation, Ghassemi suggests that initial labelling be done by the developers and deployers themselves, based on established frameworks. For healthcare applications, she proposes that agencies within the US Department of Health and Human Services could validate these claims before deployment.