OpenAI debuts o3 and o4-mini AI models with advanced reasoning and image integration

OpenAI ramps up its reasoning capabilities with two new AI models that blend visual and text-based intelligence, signalling a push toward more multimodal problem-solving tools.

Business Today Desk
Apr 17, 2025 | Updated 9:28 AM IST

OpenAI has launched two new artificial intelligence models, o3 and o4-mini, that aim to advance AI’s reasoning abilities, including the capacity to interpret and manipulate images as part of their problem-solving process.

Described as the company’s “most powerful reasoning model,” o3 sits at the top of OpenAI’s current model stack. It is joined by o4-mini, a smaller, more cost-efficient model that the company says “achieves remarkable performance for its size and cost,” according to an official blog post.

Both models are designed to “think” with images — integrating visual content directly into their chain of thought. That allows them to analyse, zoom into, or rotate images to inform their reasoning. This could enhance tasks involving diagrams, sketches, or whiteboard content, a significant step forward in bridging text and visual data.
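
For developers, these models are also expected to be reachable through OpenAI’s API. The sketch below is a rough illustration only: it assumes the models accept image inputs through the standard chat completions endpoint of the OpenAI Python SDK, and the model name "o3", the example image URL, and the prompt are assumptions rather than details confirmed in the announcement.

```python
# Hypothetical sketch: asking a reasoning model to interpret a whiteboard photo.
# Assumes the "o3" model identifier is available via the API and accepts
# image_url content parts; neither is confirmed by this announcement.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3",  # assumed model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Summarise the architecture sketched on this whiteboard."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/whiteboard-photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```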

OpenAI also announced that these models will have access to the full suite of ChatGPT tools. This includes web browsing, file analysis, code generation, and image creation features typically reserved for premium tiers of its chatbot product. Starting today, users of ChatGPT Plus, Pro, and Team tiers will be able to access these tools via o3, o4-mini, and o4-mini-high. The more advanced o3-pro version is expected to gain tool access “in a few weeks.”

In tandem, OpenAI will begin retiring older models such as o1, o3-mini, and o3-mini-high from premium tiers.

The release comes just days after OpenAI unveiled GPT-4.1, the latest iteration of its flagship generative model, marking a week of major upgrades across its model portfolio.
