Researchers at the Massachusetts Institute of Technology (MIT) have developed a new framework called SigLLM that uses large language models (LLMs) to detect anomalies in time-series data from complex systems, without the need for extensive training.

The study will be presented at the upcoming IEEE Conference on Data Science and Advanced Analytics.

SigLLM converts time-series data into text-based inputs that LLMs can process. The researchers developed two approaches for anomaly detection:

1. Prompter: Directly asks the LLM to locate anomalous values in the data.

2. Detector: Uses the LLM to forecast future data points and compares predictions to actual values to identify anomalies.
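The two approaches above can be sketched in a few lines. This is an illustrative assumption of how the pipeline might look, not the paper's exact implementation: the text-conversion details (shifting, rounding, comma separation) and the error threshold are hypothetical, and the forecast array stands in for real LLM output.

```python
# Minimal sketch of SigLLM-style preprocessing plus Detector-style
# anomaly flagging. Names and parameters are illustrative assumptions.

def series_to_text(values):
    """Convert a numeric time series into a comma-separated digit
    string an LLM can consume in a prompt (assumed preprocessing)."""
    # Shift so all values are non-negative, then round to integers,
    # since tokenizers handle short digit strings more reliably.
    shift = min(values)
    return ",".join(str(int(round(v - shift))) for v in values)

def detector_flags(actual, forecast, threshold=3.0):
    """Detector approach: compare forecasts to observed values and
    flag indices whose absolute error exceeds a threshold."""
    return [i for i, (a, f) in enumerate(zip(actual, forecast))
            if abs(a - f) > threshold]

signal = [10, 11, 10, 12, 40, 11, 10]    # spike at index 4
prompt = series_to_text(signal)          # text fed to the LLM
forecast = [10, 11, 11, 11, 11, 11, 10]  # stand-in for LLM output
print(prompt)                            # "0,1,0,2,30,1,0"
print(detector_flags(signal, forecast))  # [4]
```

In the Prompter approach, `prompt` would instead be wrapped in an instruction asking the model to point out anomalous values directly.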

While the LLMs did not outperform state-of-the-art deep learning models, they performed comparably to some AI approaches and showed promise for off-the-shelf deployment without fine-tuning.

"Since this is just the first iteration, we didn't expect to get there from the first go, but these results show that there's an opportunity here to leverage LLMs for complex anomaly detection tasks," said Sarah Alnegheimish, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on SigLLM.

The research team, which includes Linh Nguyen, Laure Berti-Equille, and Kalyan Veeramachaneni, sees potential for LLMs to provide plain language explanations for their predictions, enhancing interpretability for operators.

Future work will focus on improving performance, increasing processing speed, and understanding how LLMs perform anomaly detection.

This research was supported by SES S.A., Iberdrola and ScottishPower Renewables, and Hyundai Motor Company.
