
OpenAI has doubled the hourly rate limits for its GPT-4o and GPT-4-mini-high models available to ChatGPT Plus subscribers, a move aimed at easing restrictions for power users. CEO Sam Altman confirmed the update on X, saying the adjustment was made in direct response to user feedback.
The change allows paying subscribers to send and receive significantly more messages per hour, improving the service for those who rely on high-frequency interactions, whether for coding, research, or content creation.
“This is part of our continued effort to improve the ChatGPT experience based on your feedback,” Altman said. He acknowledged that while the company is listening, balancing scale and technical limitations remains a complex challenge.
OpenAI underscored ongoing difficulties with infrastructure, particularly the availability of GPUs needed to run its large language models at scale. Altman said the company continues to face “hard tradeoffs” between boosting model access, maintaining performance, and developing new features.
“Demand is high and GPUs are still scarce,” he added, noting that OpenAI is working to add “tens of thousands of GPUs” to address the issue.
Alongside the rate limit change, OpenAI is preparing to sunset the original GPT-4 model in ChatGPT. Starting April 30, GPT-4o will fully replace GPT-4 as the default model for all ChatGPT users, streamlining the experience but also consolidating demand onto a single infrastructure-heavy model.
The company has framed the shift as part of its broader plan to optimise performance while navigating the intense resource demands of running multimodal AI systems.