Researchers at MIT have developed a new algorithm that could transform how language models work together. The "Co-LLM" system, created by a team at MIT's CSAIL, allows a general-purpose AI model to collaborate with an expert large language model, resulting in more factual and efficient responses to complex queries.
The Co-LLM algorithm employs a "switch variable" that acts as a project manager, flagging the exact points at which the general-purpose model should defer to the specialised expert model. Because this decision is made at the word (or token) level, expert knowledge can be integrated into the generated response with fine-grained precision.
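To make the mechanism concrete, here is a minimal sketch of token-level deferral. Everything in it is an illustrative stand-in rather than Co-LLM's actual implementation: the real switch variable is learned during training, whereas this sketch accepts any callable that scores the deferral decision.

```python
# A minimal sketch of token-level deferral between two models.
# `base`, `expert`, and `gate` are hypothetical stand-ins; Co-LLM's
# actual switch variable is a learned component, not a user-supplied
# threshold rule like the one below.

from typing import Callable, List

Predictor = Callable[[List[str]], str]   # token prefix -> next token
Gate = Callable[[List[str]], float]      # token prefix -> P(defer to expert)

def generate_with_deferral(
    base: Predictor,
    expert: Predictor,
    gate: Gate,
    prompt: List[str],
    max_tokens: int = 32,
    threshold: float = 0.5,
) -> List[str]:
    """Decode one token at a time, deferring to the expert model
    whenever the gate's deferral score crosses the threshold."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        model = expert if gate(tokens) >= threshold else base
        next_token = model(tokens)
        if next_token == "<eos>":
            break
        tokens.append(next_token)
    return tokens
```

The key property is that the expert is consulted only at the positions the gate flags; every other token comes from the base model alone.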
The researchers demonstrated Co-LLM's effectiveness across several domains, including biomedical tasks and mathematical reasoning. In one example, when asked about the ingredients of a specific prescription drug, Co-LLM combined the general knowledge of a base model with the specialised information from a biomedical expert model such as Meditron.
The flexibility of Co-LLM sets it apart from other collaborative AI approaches. Unlike methods such as "Proxy Tuning," which require all component models to be trained similarly, Co-LLM can guide two differently trained models to work together effectively. Additionally, Co-LLM's selective activation of the expert model for particular tokens leads to more efficient response generation.
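One way to see the efficiency claim is to instrument the hypothetical sketch above and count how often each model is actually invoked. The toy predictors and the every-fourth-token gate below are contrived purely for counting purposes:

```python
# Continues the sketch above: reuses `generate_with_deferral` and wraps
# each predictor so invocations can be counted. The predictors and gate
# here are toys, not real models.

from typing import Callable, List

Predictor = Callable[[List[str]], str]

class Counted:
    """Wraps a predictor and records how many times it is called."""
    def __init__(self, fn: Predictor):
        self.fn = fn
        self.calls = 0
    def __call__(self, prefix: List[str]) -> str:
        self.calls += 1
        return self.fn(prefix)

base = Counted(lambda prefix: "the")
expert = Counted(lambda prefix: "42")
gate = lambda prefix: 1.0 if len(prefix) % 4 == 0 else 0.0  # defer on every 4th position

out = generate_with_deferral(base, expert, gate, prompt=["question:"], max_tokens=12)
print(base.calls, expert.calls)   # 9 base calls, 3 expert calls
```

Over twelve decoded tokens, the expensive expert answers only three; an approach that routed every token through the expert would have called it twelve times.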
Looking ahead, the MIT team is exploring ways to further enhance Co-LLM's capabilities. They are considering implementing a more robust deferral approach that can backtrack when the expert model provides an incorrect response, mimicking human self-correction processes. The researchers also aim to develop methods for updating the expert model with new information while only training the base model, ensuring that responses remain current and accurate.
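Since the backtracking approach is still under exploration, any code can only be speculative. The sketch below shows one shape such self-correction might take, again continuing the hypothetical setup above; the `verify` signal is entirely invented here, as the team has not specified how incorrect expert tokens would be detected.

```python
# A speculative sketch of backtracking deferral. `verify` is a
# hypothetical placeholder for whatever signal would flag an incorrect
# expert token; nothing here reflects a published Co-LLM mechanism.

from typing import Callable, List

Predictor = Callable[[List[str]], str]
Gate = Callable[[List[str]], float]
Verifier = Callable[[List[str]], bool]   # does the draft still look correct?

def generate_with_backtracking(
    base: Predictor,
    expert: Predictor,
    gate: Gate,
    verify: Verifier,
    prompt: List[str],
    max_tokens: int = 32,
    threshold: float = 0.5,
) -> List[str]:
    """Like generate_with_deferral, but rewinds an expert token that
    fails verification and retries with the base model."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        deferred = gate(tokens) >= threshold
        tokens.append((expert if deferred else base)(tokens))
        if deferred and not verify(tokens):
            tokens.pop()                  # rewind the rejected expert token...
            tokens.append(base(tokens))   # ...and fall back to the base model
        if tokens[-1] == "<eos>":
            tokens.pop()
            break
    return tokens
```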
Co-LLM could pave the way for more intelligent, efficient, and adaptable systems. The potential applications range from improving enterprise document management to enhancing the capabilities of smaller, private models working alongside more powerful LLMs.