As artificial intelligence systems become increasingly prevalent in critical healthcare scenarios, concerns about potential risks such as hallucinations, biased predictions, and unexpected failures are growing. In response to these challenges, two prominent researchers have proposed a novel solution: labelling AI systems with detailed usage information, mirroring the approach used for prescription drugs.

In a commentary published in Nature Computational Science, MIT's Marzyeh Ghassemi and Boston University's Elaine Nsoesie argue that such labels could help mitigate potential harms associated with AI deployment in healthcare.

Ghassemi stresses the unique position of AI in healthcare, pointing out that unlike medical devices or drugs, AI models currently lack robust systems for approval and ongoing surveillance. "Models and algorithms, whether they incorporate AI or not, skirt a lot of these approval and long-term monitoring processes, and that is something we need to be wary of," she says.

The proposed labels would include critical information such as:

1. Time of model training and data collection

2. Place of data origin and model optimisation

3. Manner of intended use

4. Potential side effects and biases

5. Warnings and precautions

6. Approved and unapproved usage scenarios
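To make the proposal more concrete, the six categories above could be captured in a simple machine-readable record that ships alongside a model. The sketch below is purely illustrative and is not drawn from the commentary or any published standard; the class name, field names, and example values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ModelUsageLabel:
    """Hypothetical 'prescription-style' label for a clinical AI model.

    Field names mirror the six categories proposed in the commentary;
    they are an assumption for illustration, not a defined schema.
    """
    training_period: str                      # time of model training and data collection
    data_origin: str                          # place of data origin and model optimisation
    intended_use: str                         # manner of intended use
    side_effects_and_biases: List[str] = field(default_factory=list)
    warnings_and_precautions: List[str] = field(default_factory=list)
    approved_uses: List[str] = field(default_factory=list)
    unapproved_uses: List[str] = field(default_factory=list)


# Example label for an imaginary billing-code assistant, echoing the kind of
# bias disclosure Ghassemi describes. All values are invented.
example_label = ModelUsageLabel(
    training_period="Trained March 2023 on notes collected 2015-2022",
    data_origin="De-identified clinical notes from two US academic hospitals",
    intended_use="Suggest draft billing codes from clinical notes for clinician review",
    side_effects_and_biases=[
        "Tends toward overbilling for specific conditions and under-recognising others",
    ],
    warnings_and_precautions=["Outputs must be reviewed by qualified billing staff"],
    approved_uses=["Decision support with human review"],
    unapproved_uses=["Fully automated claim submission"],
)

if __name__ == "__main__":
    print(example_label)
```

A structured record like this is one plausible way developers and deployers could publish, and regulators could later validate, the disclosures the researchers call for.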

Ghassemi underscores the importance of transparency, noting, "If a developer has evaluated a generative model for reading a patient's clinical notes and generating prospective billing codes, they can disclose that it has bias toward overbilling for specific conditions or underrecognising others."

The researchers argue that implementing such labels would not only inform users but also encourage more careful consideration during the development process. Ghassemi explains, "If I know that at some point I am going to have to disclose the population upon which a model was trained, I would not want to disclose that it was trained only on dialogue from male chatbot users, for instance."

Regarding implementation, Ghassemi suggests that initial labelling should be done by developers and deployers, based on established frameworks. For healthcare applications, she proposes that various agencies within the Department of Health and Human Services could be involved in validating these claims prior to deployment.

Dr. Jennifer Rexford, Chair of the Computer Science Department at Princeton University, who was not involved in the study, commented on the proposal: "This is a thoughtful approach to addressing some of the key challenges we face with AI in healthcare. Transparent labeling could go a long way in building trust and ensuring responsible deployment of these powerful tools."


