OpenAI corrects 'chart crime' in GPT-5 livestream presentation

Business Insider

The highly anticipated GPT-5 livestream from OpenAI on Thursday, August 7, 2025, intended to showcase the company’s latest advancements in artificial intelligence, quickly found itself under scrutiny as viewers noted glaring inaccuracies in several presented charts. What CEO Sam Altman swiftly dubbed a “mega chart screwup” and an OpenAI team member termed an “unintentional chart crime” sparked immediate discussion across social media and within the tech community.

One of the most prominent errors appeared in a chart comparing GPT-5’s “coding deception” rate with that of OpenAI’s o3 model. The data indicated GPT-5 had a 50% deception rate, marginally worse than o3’s 47.4%. Yet, the visual representation depicted GPT-5 with a disproportionately smaller bar, misleadingly suggesting superior performance. OpenAI later corrected this in a blog post, revising GPT-5’s actual deception rate to a significantly lower 16.5%. Another problematic chart compared GPT-5, o3, and GPT-4o on a different performance metric. While GPT-5 scored 74.9, o3 69.1, and GPT-4o 30.8, the graphical bars for o3 and GPT-4o appeared almost identical in length, despite a substantial numerical difference, effectively downplaying the true distinctions between the models. A further instance of visual misrepresentation was observed in an accuracy chart where GPT-5’s 52.8% accuracy (with “thinking” mode enabled) was shown as visually higher than o3’s 69.1%, and o3’s 69.1% was depicted at the same level as GPT-4o’s 30.8%.
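The underlying problem is simple arithmetic: in an honest bar chart, each bar's length must be proportional to the value it represents. A minimal Python sketch (illustrative only, not OpenAI's actual plotting code) scales each value against the largest to show what the deception-rate comparison should have looked like:

```python
def bar_lengths(values, max_width=40):
    """Scale each value to a bar length proportional to the largest value."""
    peak = max(values.values())
    return {label: round(v / peak * max_width) for label, v in values.items()}

# Deception rates as presented on the livestream. With proportional scaling,
# GPT-5's 50% bar comes out slightly LONGER than o3's 47.4% bar -- not
# dramatically shorter, as the on-stream chart showed.
rates = {"GPT-5": 50.0, "o3": 47.4}
for label, length in bar_lengths(rates).items():
    print(f"{label:>5} | {'#' * length} {rates[label]}%")
```

Applied to the second chart's figures (74.9 vs. 69.1 vs. 30.8), the same rule makes it obvious that o3's bar should be more than twice the length of GPT-4o's, rather than nearly identical.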

The swift public reaction, with users on platforms like X (formerly Twitter) pointing out the discrepancies, prompted a rapid response from OpenAI’s leadership. Sam Altman publicly acknowledged the errors, stating, “wow a mega chart screwup from us earlier.” Concurrently, an OpenAI marketing staffer issued an apology online, admitting to the “unintentional chart crime” and confirming that corrected versions of the charts had been promptly uploaded to the company’s official blog.

This incident, occurring during one of the most anticipated AI launches of the year, underscores the critical importance of data integrity and transparent communication for technology companies. It highlights how even minor visual misrepresentations can erode trust and generate skepticism, especially for firms operating at the forefront of transformative technologies like artificial intelligence. The quick admission and correction by OpenAI demonstrate an awareness of accountability, yet the episode raises broader questions about internal quality control and the rigorous vetting of presentation materials, particularly when high-stakes product demonstrations are involved. Some onlookers even wondered whether OpenAI had used its own AI models to generate the flawed visuals, a suggestion the company has not addressed.

Despite the chart controversy, the GPT-5 launch itself marked a significant milestone for OpenAI. The model is touted as the company’s most powerful to date, promising substantial improvements in accuracy, speed, and reasoning capabilities. GPT-5 introduces a unified system designed to automatically select the best model for a given prompt, boasts better “safe completions” for more helpful and transparent replies, and exhibits enhanced logic and self-evaluation, leading to a reported reduction in hallucinations. It is being rolled out to all user tiers, including free, Plus, Pro, and Team users, with the aim of making AI experiences smarter, safer, and more personal across various applications, from coding to health-related guidance.

While the “chart crime” served as an unexpected detour during the GPT-5 unveiling, OpenAI’s swift acknowledgment and rectification of the errors provided a measure of transparency. The incident serves as a stark reminder that even industry leaders must uphold the highest standards of data presentation, ensuring visuals accurately reflect the underlying numbers, especially when introducing groundbreaking technologies to a global audience.