Researchers at the University of California, Merced, have uncovered a disturbing trend in human-AI interactions. In a study published in Scientific Reports, about two-thirds of participants allowed a robot to change their minds in simulated life-or-death decisions, despite being told the AI had limited capabilities and could be wrong.

Professor Colin Holbrook, a principal investigator of the study from UC Merced's Department of Cognitive and Information Sciences, warns, "As a society, with AI accelerating so quickly, we need to be concerned about the potential for overtrust." He emphasises the need for "a healthy scepticism about AI, especially in life-or-death decisions."

The study involved two experiments where participants controlled a simulated armed drone. They had to quickly identify targets as friend or foe based on briefly displayed symbols. After making their initial choice, an AI robot offered its opinion, which was, in fact, random.

Surprisingly, subjects changed their minds about two-thirds of the time when the robot disagreed with them, even when the robot's appearance was clearly non-human. Conversely, when the robot agreed with their initial choice, subjects almost always maintained their decision and felt more confident in it.

The experiment design emphasised the gravity of the decisions. Participants were shown images of civilian casualties from drone strikes, and were strongly encouraged to treat the simulation as if it were real. Follow-up interviews indicated that participants took their decisions seriously, making the observed overtrust even more concerning.

Holbrook stresses that the findings extend beyond military applications. The implications could apply to other high-stakes scenarios, such as police officers deciding whether to use lethal force, or paramedics setting treatment priorities in emergencies. To a lesser extent, the results might even be relevant to significant life decisions like buying a home.

The research also contributes to the ongoing debate about AI's growing presence in our lives. Holbrook cautions against assuming AI's competence across different domains: "We see AI doing extraordinary things and we think that because it's amazing in this domain, it will be amazing in another. We can't assume that. These are still devices with limited abilities."

This study underscores the need for careful consideration as we continue to integrate AI into critical decision-making processes. It highlights the importance of maintaining human judgement and scepticism, especially when the stakes are high.
