Google’s AI Sets New Benchmark in Mathematical Competitions

August 13, 2025

Google DeepMind’s latest achievement at the International Mathematical Olympiad (IMO) is making waves. Its Gemini Deep Think system earned a gold medal this year by solving complex problems while reading the English-language questions directly, a significant upgrade over previous iterations that needed human experts to translate the exam into a formal language.

Last year’s system took days and relied on extra human assistance, but with its new end-to-end reasoning approach, Gemini completed the exam in about 4.5 hours, the same time limit human contestants face. Still, twenty-six high-school students solved all six problems, while the AI couldn’t crack one of the combinatorics problems, a reminder that some puzzles still call for the unique creativity of the human mind.

If you’ve ever wrestled with a tough math problem, you know that some questions demand a spark of ingenuity. Language models process text as sequences of individual tokens, which makes them excellent pattern-matchers but can leave them blind to the bigger picture that comes naturally to a human solver. In this case, Gemini’s solutions leaned on elementary mathematical principles, standing in stark contrast to many competitors who reached for more advanced techniques.
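
To make that token-level view concrete, here is a minimal sketch of how subword tokenization fragments a math statement. Everything in it (the toy VOCAB, the tokenize helper, the sample sentence) is invented for illustration; it is not Gemini’s actual tokenizer, which uses a learned vocabulary of tens of thousands of pieces.

    # Minimal sketch of greedy longest-match subword tokenization, the
    # general idea behind schemes like byte-pair encoding. The vocabulary
    # below is invented for illustration; it is not Gemini's tokenizer.
    VOCAB = {"prove", "that", "is", "a", "perfect", "square",
             "202", "25", "2", "0", " "}

    def tokenize(text: str) -> list[str]:
        """Greedily match the longest vocabulary entry at each position."""
        tokens = []
        i = 0
        while i < len(text):
            for j in range(len(text), i, -1):   # try the longest piece first
                if text[i:j] in VOCAB:
                    tokens.append(text[i:j])
                    i = j
                    break
            else:                               # no match: emit one character
                tokens.append(text[i])
                i += 1
        return tokens

    print(tokenize("prove that 2025 is a perfect square"))
    # ['prove', ' ', 'that', ' ', '202', '5', ' ', 'is', ' ', 'a', ' ',
    #  'perfect', ' ', 'square']

Note how "2025" comes out as the pieces "202" and "5": the digit-level structure a human sees at a glance is only implicit at the token level, which is part of why multistep mathematical reasoning has been hard-won for these models.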

This performance isn’t just a tally on a leaderboard—it underlines AI’s emerging ability to handle multistep proofs and challenging problem-solving scenarios. As these intelligent systems continue to learn and adapt, they’re paving the way for smarter, more responsive tools that can support our efforts in tackling real‑world problems.
