Google's Gemini 2.5 Deep Think AI Model Now Available for $250/Month
Google has made its highly advanced AI reasoning model, Gemini 2.5 Deep Think, available to the public. This model distinguishes itself by employing multiple AI agents to brainstorm answers, a technique designed to enhance accuracy and foster more creative solutions.
According to Google, Gemini 2.5 Deep Think has demonstrated superior performance on several key AI benchmarks. A research variant of the model notably achieved gold-medal standard at this year’s International Mathematical Olympiad (IMO), solving five of the six problems perfectly. While that research model required extended processing time to reach its solutions, the version now available for everyday use operates significantly faster, delivering performance equivalent to a bronze-medal IMO standard.
Access to this new model requires a Google AI Ultra subscription, priced at $250 per month. Subscribers can activate “Deep Think” by toggling the option within the prompt bar after selecting Gemini 2.5 Pro from the model dropdown menu in the Gemini application.
Deep Think was first previewed at Google’s I/O developer conference in May, and the company states that the version released today represents a “significant improvement.” It attributes the advance to valuable tester feedback and substantial gains in benchmark performance.
Google elaborates that Deep Think utilizes “parallel thinking” techniques to approach complex problems. This method mirrors human problem-solving by simultaneously considering various angles and potential solutions. The company explained in a blog post that this approach allows Gemini to “generate many ideas at once and consider them simultaneously, even revising or combining different ideas over time, before arriving at the best answer.”
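Google has not published implementation details, but the pattern it describes resembles a familiar one: sample several candidate answers in parallel, rank them with some judge, and refine or merge the strongest before responding. The minimal sketch below illustrates that general shape only; every function in it (`generate_candidate`, `score`, `combine`) is a hypothetical placeholder standing in for model calls, not Google's actual code.

```python
from concurrent.futures import ThreadPoolExecutor

def generate_candidate(problem: str, seed: int) -> str:
    """Hypothetical stand-in for one model call drafting an answer."""
    return f"candidate #{seed}: a possible solution to {problem!r}"

def score(candidate: str) -> float:
    """Hypothetical stand-in for a verifier/critic rating a draft."""
    return (hash(candidate) % 1000) / 1000.0  # toy deterministic score

def combine(best: str, runner_up: str) -> str:
    """Placeholder for the 'revising or combining ideas' step."""
    return best  # a real system might merge reasoning from both drafts

def parallel_think(problem: str, n: int = 8) -> str:
    # 1. Generate many ideas at once (concurrent model calls).
    with ThreadPoolExecutor() as pool:
        candidates = list(
            pool.map(lambda s: generate_candidate(problem, s), range(n))
        )
    # 2. Consider them simultaneously: rank every draft with the critic.
    ranked = sorted(candidates, key=score, reverse=True)
    # 3. Revise/combine the strongest ideas before settling on an answer.
    return combine(ranked[0], ranked[1])

if __name__ == "__main__":
    print(parallel_think("a hard geometry problem"))
```

The trade-off this pattern makes is consistent with the article's description of the IMO research variant: drafting and judging many candidates multiplies the compute spent per question in exchange for better final answers, which is why the slower research model outperforms the faster consumer version.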
Furthermore, Google has developed new reinforcement learning techniques to encourage the model to explore more extensive reasoning paths. This process aims to evolve Deep Think into a more robust and intuitive problem-solver over time. Google claims these capabilities make the model particularly useful for demanding applications such as coding, web development, and scientific research.
In competitive benchmarking, Gemini 2.5 Deep Think reportedly outperformed rival models on “Humanity’s Last Exam” (HLE), a 2,500-question benchmark of expert-level problems spanning subjects from mathematics and science to the humanities. The model scored 34.8% on the test, surpassing OpenAI o3’s 20.3% and Grok 4’s 25.4%.
Google also announced plans to share the gold-medal-achieving version of Gemini 2.5 Deep Think with a select group of mathematicians and academics. This initiative aims to explore how the advanced model might aid their research, with feedback from this group intended to inform and refine future iterations of the model.