Meta AI's TRIBE Predicts Brain Responses to Videos Without Scans


The landscape of artificial intelligence continues its rapid transformation, with recent breakthroughs spanning from understanding the human mind to designing life-saving pharmaceuticals. These advancements underscore AI’s growing sophistication and its profound implications for science, technology, and society.

Among the most intriguing developments is Meta’s introduction of TRIBE, a one-billion-parameter AI model capable of predicting how human brains respond to cinematic content. Developed by Meta’s Fundamental AI Research (FAIR) team, TRIBE analyzes the video, audio, and text of a movie to anticipate which brain regions will activate in a viewer, without requiring any direct brain scans. The system excelled in the Algonauts 2025 brain modeling competition, accurately predicting over half of the measured brain activity patterns across 1,000 distinct regions after being trained on recordings from subjects who watched 80 hours of diverse media. TRIBE proved particularly adept in areas where sight, sound, and language converge, outperforming single-modality models by roughly 30 percent, and it was especially accurate in frontal brain regions associated with attention, decision-making, and emotional responses. While this technology promises unprecedented insight into how the brain processes media, it also raises questions about the potential for content engineered to maximize engagement at a neural level, potentially intensifying phenomena like “doomscrolling.”
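To make the setup concrete, here is a minimal sketch of the kind of trimodal predictor the description implies: pre-extracted video, audio, and text features are fused and regressed onto per-region brain responses. The architecture, dimensions, and class names below are illustrative assumptions, not Meta’s actual implementation.

```python
# Hypothetical sketch of a trimodal brain-response predictor in the spirit of TRIBE.
# All names, feature sizes, and the fusion design are assumptions for illustration.
import torch
import torch.nn as nn

class TrimodalBrainPredictor(nn.Module):
    def __init__(self, d_video=768, d_audio=512, d_text=768, d_model=256, n_regions=1000):
        super().__init__()
        # Project each modality's pre-extracted features into a shared space.
        self.proj_video = nn.Linear(d_video, d_model)
        self.proj_audio = nn.Linear(d_audio, d_model)
        self.proj_text = nn.Linear(d_text, d_model)
        # A small transformer fuses the three modality tokens.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        # Regress one predicted activation value per brain region (parcel).
        self.readout = nn.Linear(d_model, n_regions)

    def forward(self, video_feat, audio_feat, text_feat):
        # Each input: (batch, feature_dim) for one time window of the movie.
        tokens = torch.stack([
            self.proj_video(video_feat),
            self.proj_audio(audio_feat),
            self.proj_text(text_feat),
        ], dim=1)                                  # (batch, 3 modality tokens, d_model)
        fused = self.fusion(tokens).mean(dim=1)    # pool across the modality tokens
        return self.readout(fused)                 # (batch, n_regions) predicted responses

# Toy usage with random tensors standing in for real video/audio/text embeddings.
model = TrimodalBrainPredictor()
v, a, t = torch.randn(4, 768), torch.randn(4, 512), torch.randn(4, 768)
pred = model(v, a, t)
print(pred.shape)  # torch.Size([4, 1000])
```

In a real pipeline of this kind, the inputs would be time-aligned embeddings from pretrained video, audio, and language encoders, with measured fMRI responses as the regression targets; the random tensors here only stand in for those.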

Concurrently, OpenAI has showcased remarkable progress in AI reasoning capabilities. Its general-purpose reasoning model achieved a gold-level score at the 2025 International Olympiad in Informatics (IOI), a prestigious pre-college programming competition. Competing against top student programmers worldwide under identical time and submission constraints, the model placed 6th overall and ranked first among all AI entrants. What makes this achievement particularly notable is that the model was not specifically fine-tuned for competitive programming and relied only on basic tools. Its performance also represents a substantial leap from just a year prior, when a similar model placed around the 49th percentile; this year’s entry reached the 98th percentile. The same model family has also claimed gold at the International Math Olympiad and posted top results at the AtCoder World Tour Finals, underscoring its versatility across complex problem-solving domains. Such rapid advancements suggest that the era of human dominance in competitive intellectual tasks may be nearing its end, paving the way for future AI models capable of pioneering discoveries in science, mathematics, and physics.

In the realm of medicine, researchers at the Korea Advanced Institute of Science & Technology (KAIST) have unveiled BInD, a novel diffusion model poised to revolutionize drug discovery. Unlike conventional methods that involve iterative rounds of design and testing, BInD can design optimal cancer drug candidates from scratch in a single step, without relying on known binders or prior binding data for the target. This innovative AI not only crafts the drug molecule but also simultaneously determines how it will attach to the diseased protein. Crucially, BInD designs drugs that precisely target only cancer-causing protein mutations while leaving healthy versions unaffected, highlighting its potential for truly personalized medicine. Furthermore, the model can optimize for multiple criteria at once, ensuring that designed drugs are safe, stable, and manufacturable, a significant improvement over older AI systems limited to single-criterion optimization. By learning from its successes and employing a “recycling technique,” BInD iteratively refines its strategies, accelerating the development of more effective treatments. As the first AI-designed drugs begin to enter the market, these breakthroughs hint at a coming wave of humanity-altering medical advances driven by advanced AI models.
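As a rough illustration of the multi-objective idea, the toy sketch below scores candidate designs on several criteria at once (binding to the mutant target, selectivity over the healthy protein, and ease of synthesis) and reuses the best candidates to seed the next round, loosely echoing the “recycling” of successful designs. The objective functions, weights, and candidate fields are invented for illustration and are not BInD’s actual scoring or diffusion machinery.

```python
# Toy multi-criteria selection loop. All scores and fields are made-up stand-ins,
# not BInD's real objectives; it only illustrates joint optimization of several goals.
import random

def binding_affinity(c):   # higher = binds the mutant target more tightly
    return c["mutant_affinity"]

def selectivity(c):        # higher = larger gap between mutant and healthy protein
    return c["mutant_affinity"] - c["wildtype_affinity"]

def synthesizability(c):   # higher = easier to make and more stable
    return c["synth_score"]

OBJECTIVES = [(binding_affinity, 0.5), (selectivity, 0.3), (synthesizability, 0.2)]

def composite_score(c):
    # Weighted sum over all objectives: every criterion is optimized together,
    # rather than tuning one property at a time.
    return sum(weight * fn(c) for fn, weight in OBJECTIVES)

def random_candidate():
    return {
        "mutant_affinity": random.uniform(0.0, 10.0),
        "wildtype_affinity": random.uniform(0.0, 10.0),
        "synth_score": random.uniform(0.0, 1.0),
    }

def mutate(c):
    # Perturb a successful candidate slightly, reusing what worked ("recycling").
    return {k: v + random.uniform(-0.5, 0.5) for k, v in c.items()}

pool = [random_candidate() for _ in range(100)]
for _ in range(3):
    best = sorted(pool, key=composite_score, reverse=True)[:10]
    # Seed the next round with the winners plus perturbed copies of them.
    pool = best + [mutate(random.choice(best)) for _ in range(90)]

for c in sorted(pool, key=composite_score, reverse=True)[:5]:
    print(round(composite_score(c), 2), {k: round(v, 2) for k, v in c.items()})
```

BInD itself operates as a diffusion model over 3D molecular structures and their protein interactions; a weighted-sum selection loop like this only conveys why optimizing all criteria jointly differs from optimizing a single property.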

Beyond these major strides, other significant AI developments include the release of GLM-4.5V, a new open-source visual reasoning model from the Chinese AI lab Z.ai, which posts top performance across numerous benchmarks. In the video generation space, Pika Labs introduced a new model for its social app, capable of generating HD-quality videos with lip-sync and audio in mere seconds. Alibaba’s Qwen3 models have been upgraded with ultra-long context capabilities, now processing up to 1 million tokens, while Anthropic’s Claude has gained memory features, allowing it to reference previous conversations for improved continuity. These collective advancements underscore the relentless pace of innovation, pushing the boundaries of what AI can achieve across diverse sectors.