On Friday, OpenAI made o3-mini, the company’s most cost-efficient AI reasoning model so far, available in ChatGPT and the API. OpenAI previewed the new reasoning model last December, but now all ChatGPT Plus, Team, and Pro users have full access to o3-mini.
Although o3-mini is a small model, OpenAI says that it “advances the boundaries of what small models can achieve.” For instance, the model supports function calling, Structured Outputs, and developer messages, as well as streaming. Developers can also choose from three reasoning effort options (low, medium, and high) depending on the task.
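For developers, that could look something like the sketch below. It assumes the standard OpenAI Python SDK, an o3-mini model identifier, and a reasoning-effort parameter matching the low/medium/high options described above; the exact parameter names in the shipped API may differ.

```python
from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

# Illustrative call: the "developer" message role and the reasoning_effort
# setting mirror the features OpenAI describes for o3-mini.
response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",  # "low", "medium", or "high" depending on the task
    messages=[
        {"role": "developer", "content": "Answer as a terse coding assistant."},
        {"role": "user", "content": "Write a function that checks whether a string is a palindrome."},
    ],
)

print(response.choices[0].message.content)
```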
Starting today, o3-mini will replace o1-mini in the model picker for ChatGPT Plus, Team, and Pro users. The new model not only offers higher rate limits (150 messages per day instead of 50) but also lower latency. Plus, o3-mini can search the web for up-to-date answers.
If you don’t pay for a ChatGPT subscription, you can still test out o3-mini by picking ‘Reason’ in the message composer or by regenerating a response. As OpenAI notes, this is the first time the company has made a reasoning model available to free users.
“While OpenAI o1 remains our broader general knowledge reasoning model, OpenAI o3-mini provides a specialized alternative for technical domains requiring precision and speed,” OpenAI wrote in a blog post. “In ChatGPT, o3-mini uses medium reasoning effort to provide a balanced trade-off between speed and accuracy.”
The company notes that the new model matches o1’s performance in math, coding, and science while delivering faster responses. Meanwhile, answers from the new model were more accurate and clearer than those from o1-mini.
“The release of OpenAI o3-mini marks another step in OpenAI’s mission to push the boundaries of cost-effective intelligence,” OpenAI concludes. “By optimizing reasoning for STEM domains while keeping costs low, we’re making high-quality AI even more accessible.”