OpenAI’s GPT-5 Aims to Enhance Integration and Minimise Model Switching

May 20, 2025

OpenAI is refining its vision for GPT-5. In a recent Reddit Q&A, Jerry Tworek, the company’s VP of Research, explained that the new iteration will build on current strengths rather than reinventing the wheel. Instead of launching a radically different system, GPT-5 is set to optimise existing tools – including the Codex code agent, Deep Research, Operator, and the memory system – to offer you a more seamless experience.

Although still in the research phase, updates to the Operator screen agent hint at a tool that could soon overcome its early reliability issues. Tworek's remarks also signal a retreat from more ambitious integration plans: previous attempts to merge the GPT series with the 'o' model family ran into difficulties, prompting the creation of distinct reasoning models such as o3 and o4-mini.

When it comes to handling the growing demand for tokens – the essential units for processing and generating language – Tworek is confident. He expects costs to stay in proportion to the value delivered as AI systems continue to scale, even without radical breakthroughs. And while these systems will boost productivity, he envisions an ongoing role for human oversight to ensure they work in society's best interests.

In discussions about competitors such as Claude and Gemini, Tworek argued that traditional benchmarks fall short. Instead, the focus should be on real-world utility. By automatically guiding users to the best model for their needs, OpenAI hopes to eliminate the confusion often seen in model selection.