OpenAI has announced the release of a new series of AI models, dubbed 'o1', designed to excel at complex reasoning tasks in fields such as science, coding, and mathematics.

The company reports that the o1 models are trained to spend more time thinking through problems before responding, mimicking human problem-solving processes. In benchmark tests, the new models reportedly performed on par with PhD students on tasks in physics, chemistry, and biology.

OpenAI highlights the models' improved performance in challenging domains. For instance, while GPT-4o correctly solved only 13% of problems on a qualifying exam for the International Mathematics Olympiad, the new reasoning model scored 83%. In coding, it reached the 89th percentile in Codeforces competitions.

The initial release, named 'o1-preview', is now available to ChatGPT Plus and Team users, with API access for qualified developers. OpenAI is also introducing 'o1-mini', a smaller, more cost-effective model specifically optimised for coding tasks.
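
For developers who do have API access, a request to the new models would presumably go through the standard chat completions endpoint. The following is a minimal sketch using the OpenAI Python SDK; the prompt text and client configuration are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch: calling the o1-preview model via the OpenAI Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; the prompt below is purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",  # or "o1-mini" for the smaller, coding-focused model
    messages=[
        # At launch, the o1 models accept a restricted parameter set
        # (user messages only, no streaming), so the request stays simple.
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
)

print(response.choices[0].message.content)
```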


