A groundbreaking study by engineers at Purdue University suggests that autonomous vehicles (AVs) could soon interpret passenger commands more naturally and efficiently, thanks to artificial intelligence chatbots like ChatGPT. The research, set to be presented at the 27th IEEE International Conference on Intelligent Transportation Systems on September 25, may be one of the first experiments testing the integration of large language models with real AVs.

Led by Ziran Wang, an assistant professor in Purdue's Lyles School of Civil and Construction Engineering, the study explored how AVs could understand and respond to both direct and indirect commands from passengers. Wang explains, "The power of large language models is that they can more naturally understand all kinds of things you say. I don't think any other existing system can do that."

The research team trained ChatGPT with various prompts, ranging from explicit instructions like "Please drive faster" to more nuanced requests such as "I feel a bit motion sick right now." They then integrated these models with a level four autonomous vehicle, allowing the AI to interpret passenger commands and generate instructions for the vehicle's drive-by-wire system.
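To illustrate the general idea, the sketch below shows how a chatbot API could map a passenger's utterance to structured speed and following-gap parameters that a downstream controller might forward to a drive-by-wire interface. This is a hypothetical sketch, not the Purdue team's published code; the model name, prompt wording, and JSON parameter schema are assumptions for illustration only.

```python
# Hypothetical sketch: mapping a passenger's natural-language request to
# structured driving parameters with an LLM. Not the Purdue team's actual
# implementation; the model name, prompt, and schema below are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You translate passenger requests into driving parameters for an "
    "autonomous vehicle. Reply only with JSON containing "
    "'target_speed_mps' (float) and 'following_gap_s' (float, desired "
    "time gap to the vehicle ahead)."
)

def interpret_command(utterance: str) -> dict:
    """Ask the LLM to turn an utterance into candidate driving parameters."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": utterance},
        ],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    # An indirect request like the motion-sickness example from the study.
    params = interpret_command("I feel a bit motion sick right now.")
    print(params)  # e.g. {"target_speed_mps": 8.0, "following_gap_s": 3.0}
    # In a real vehicle, a safety layer would clamp these values to validated
    # limits before anything reached the drive-by-wire system.
```

In practice, the language model would only propose parameters; the vehicle's own planning and control stack would remain responsible for keeping any command within safe operating bounds.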

Experiments were conducted primarily at a proving ground in Columbus, Indiana, with additional tests in the parking lot of Purdue's Ross-Ade Stadium. The results were promising, with participants reporting lower rates of discomfort compared to typical experiences in level four AVs without language model assistance.

Notably, the AV's responses exceeded baseline measures of safe and comfortable driving, even for commands the models had not been trained on. The system took an average of 1.6 seconds to process a command, which is considered acceptable for non-critical scenarios but leaves room for improvement in time-sensitive situations.

While the study shows potential, Wang acknowledges that challenges remain. Issues such as AI "hallucination" – where a model misinterprets what it has learned and responds incorrectly – need to be addressed before commercial implementation. Regulatory approval and extensive industry testing would also be required.

Looking ahead, Wang's team is evaluating other chatbots like Google's Gemini and Meta's Llama AI assistants, with ChatGPT currently showing the best performance. They're also exploring the possibility of AVs communicating with each other using large language models and studying the use of large vision models to enhance AV performance in extreme weather conditions.


