Deep Cogito v2: Open-Source AI Hones Reasoning, Boosts Efficiency

Deep Cogito has announced the release of Cogito v2, a new family of AI models, released under an open-source license and engineered to enhance their own reasoning capabilities. The lineup features four hybrid reasoning models: two mid-sized versions with 70 billion and 109 billion parameters, and two larger models at 405 billion and 671 billion parameters.

The largest among them, a 671-billion-parameter Mixture-of-Experts (MoE) model, is already being recognized as one of the most powerful open-source AIs currently available. Deep Cogito claims that this flagship model competes effectively with the latest offerings from DeepSeek and is narrowing the performance gap with advanced proprietary systems such as OpenAI's o3 and Anthropic's Claude Opus 4.

The most significant advancement in Cogito v2, however, is not its size or raw power but a fundamental shift in how the AI learns. Rather than simply extending its “thinking” time during inference to find an answer, Cogito v2 is designed to internalize its own reasoning processes.

This internalized reasoning is achieved through a technique called Iterated Distillation and Amplification (IDA). IDA works by distilling the discoveries made during a search process back into the model’s core parameters. The objective is to cultivate a stronger “intuition,” enabling the model to anticipate the outcome of its own reasoning without needing to execute the entire search sequence.
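Deep Cogito has not released its training code, so the sketch below is only a minimal Python illustration of the amplify-then-distill loop that IDA describes; the `ReasoningModel` interface, method names, and sampling budget are hypothetical stand-ins, not the team's actual implementation.

```python
from typing import Protocol, Sequence

class ReasoningModel(Protocol):
    """Hypothetical interface standing in for a real reasoning LLM."""
    def generate(self, prompt: str, max_reasoning_tokens: int) -> str: ...
    def score(self, prompt: str, completion: str) -> float: ...
    def finetune(self, pairs: Sequence[tuple[str, str]]) -> None: ...

def ida_round(model: ReasoningModel, prompts: Sequence[str], samples: int = 8) -> None:
    """One amplify-then-distill iteration."""
    pairs: list[tuple[str, str]] = []
    for prompt in prompts:
        # Amplification: spend extra inference-time compute by sampling
        # several long reasoning chains for the same prompt.
        candidates = [model.generate(prompt, max_reasoning_tokens=4096)
                      for _ in range(samples)]
        # Keep the best completion the search turned up.
        best = max(candidates, key=lambda c: model.score(prompt, c))
        pairs.append((prompt, best))
    # Distillation: fold the search's discoveries back into the weights, so
    # the next round reaches similar answers with less explicit search.
    model.finetune(pairs)
```

Repeated over many rounds, each iteration shifts work from explicit search at inference time into the model's parameters, which is the stronger “intuition” the team describes.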

This refined “gut feeling” for the correct approach allows the open-source AI models to generate reasoning chains that are reportedly 60% shorter than those of competitors like DeepSeek R1, significantly improving efficiency.

This efficiency also extends to development costs. Deep Cogito states that the combined total expenditure for developing all its models, from initial experiments through to final training, was less than $3.5 million. While a substantial sum, this figure is notably modest compared to the vast investments typically made by many leading AI research laboratories.

The 671-billion-parameter flagship model received particular attention during its training. Its development focused not only on improving the accuracy of its final answers but also on refining the thinking process itself. This approach encourages the model to pursue a more direct path to a solution, discouraging “meandering” or inefficient reasoning. Performance data indicates the effectiveness of this method, with Deep Cogito’s open-source AI matching or surpassing the latest DeepSeek versions on key benchmarks, while also performing closely to proprietary alternatives.
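The article does not spell out the training objective, but one common way to discourage meandering is to reward correct answers while penalizing overlong reasoning chains. The toy reward function below is a sketch under that assumption; the token budget and weighting are illustrative, not Deep Cogito's published method.

```python
def reasoning_reward(correct: bool, reasoning_tokens: int,
                     token_budget: int = 1024, length_weight: float = 0.3) -> float:
    """Toy training signal: correctness minus a penalty for overlong reasoning."""
    accuracy_term = 1.0 if correct else 0.0
    # Penalize only the tokens spent beyond the target budget, so concise
    # but sufficient chains are not punished.
    overshoot = max(0, reasoning_tokens - token_budget) / token_budget
    return accuracy_term - length_weight * overshoot

# A correct answer reached directly outscores one reached by meandering:
assert reasoning_reward(True, 800) > reasoning_reward(True, 4096)
```

Under a signal like this, the highest-reward behavior is the direct path to a correct solution, which matches the shorter reasoning chains the team reports.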

One of the most surprising outcomes of this development is the models’ emergent ability to reason about images, a skill they were never explicitly trained for. The Deep Cogito team provided an example where their open-source AI model compared two images, one of a duck and one of a lion. It demonstrated a deep reasoning process regarding their habitats, colors, and composition, purely through transfer learning. Deep Cogito believes this unexpected property could offer a powerful method to bootstrap training data for future multimodal reasoning systems.

Looking ahead, the Deep Cogito team plans to continue building on the gains from iterative self-improvement in their ongoing pursuit of superintelligence. They have reiterated their commitment that all AI models they create will remain open-source.