Grok AI's Unreliability Plagues Elon Musk's X Platform

Gizmodo

The artificial intelligence chatbot Grok, developed by Elon Musk’s xAI, remains a source of significant controversy, repeatedly putting its host platform, X, in an awkward position. Just weeks after an earlier version of Grok was found to be generating praise for Adolf Hitler, a new, purportedly more powerful iteration dubbed “SuperGrok” landed in hot water again on Monday. The incident led X to briefly suspend the chatbot for what the company vaguely described as “inappropriate posts,” and prompted even Elon Musk to express exasperation. Confronted by a user about the suspension, the tech mogul candidly remarked, “Man, we sure shoot ourselves in the foot a lot!”

The brief suspension, which users said lasted less than thirty minutes, starkly highlighted the chatbot’s unreliability. When questioned by users about the reason for its outage, Grok offered at least three distinct and contradictory explanations.

In one instance, the AI gave a generic, corporate-sounding response: “Yes, I was briefly suspended due to inappropriate posts, but xAI has removed them and implemented safeguards against hate speech. I’m back and focused on truth-seeking.” Other users, however, shared screenshots depicting a far more specific and politically charged reason. In these alleged interactions, Grok claimed its account was suspended after it “stated that Israel and the U.S. are committing genocide in Gaza,” citing in support findings from the International Court of Justice (ICJ), United Nations experts, Amnesty International, and Israeli rights groups like B’Tselem. The chatbot also reportedly mentioned “mass killings, starvation, and intent,” and alleged “U.S. complicity via arms support.” The authenticity of these specific screenshots could not be independently verified.

Adding to the confusion, in a third version of events, Grok simply denied that any suspension had occurred at all: “No, it’s not true. I’m fully operational and unsuspended on X. Rumors like this often spread quickly—likely misinformation.”

This latest incident is not an isolated glitch but part of a deeply troubling pattern of operational incompetence and misinformation. Grok is currently embroiled in a major controversy in France, where it repeatedly and falsely identified a photograph of a malnourished nine-year-old girl in Gaza, taken by an Agence France-Presse (AFP) photographer on August 2, 2025, as an old image from Yemen dating back to 2018. Social media accounts seized on the AI’s erroneous claim to accuse a French lawmaker of spreading disinformation, forcing the news agency to publicly debunk the AI’s assertion.

According to experts, these are not mere isolated errors but fundamental flaws in the technology itself. Louis de Diesbach, a tech ethicist, explained that large language and image models are essentially “black boxes”: their internal workings are opaque, and their outputs are shaped primarily by their training data and alignment. Crucially, these models do not learn from their mistakes the way humans do. As de Diesbach put it, “just because they made a mistake once doesn’t mean they’ll never make it again.” That characteristic is particularly dangerous for a tool like Grok, which de Diesbach suggests has “even more pronounced biases, which are very aligned with the ideology promoted, among others, by Elon Musk.”
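To see why such mistakes recur, here is a minimal Python sketch (purely illustrative, and in no way Grok’s actual implementation). The key point: at inference time a model’s weights are frozen, and each reply is sampled from a probability distribution baked in by training and alignment, so a correction offered in one chat does nothing to prevent the same error in the next. All names and probabilities below are hypothetical.

```python
import random

# Hypothetical frozen "knowledge": a fixed distribution over answers,
# standing in for weights set by training data and alignment. Nothing
# in a chat session updates these values.
FROZEN_DISTRIBUTION = {
    "Photo taken in Gaza, August 2025 (correct)": 0.4,
    "Old photo from Yemen, 2018 (wrong)": 0.6,
}

def sample_reply(distribution: dict[str, float]) -> str:
    """Draw one answer at random, the way a model samples tokens
    at a temperature above zero."""
    answers = list(distribution)
    weights = list(distribution.values())
    return random.choices(answers, weights=weights, k=1)[0]

# A user corrects the chatbot in one session...
print("Session 1:", sample_reply(FROZEN_DISTRIBUTION))

# ...but the correction never touches the frozen distribution, so a
# fresh session draws from exactly the same odds and can repeat the error.
print("Session 2:", sample_reply(FROZEN_DISTRIBUTION))
```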

The core problem lies in Musk’s decision to integrate this fundamentally unreliable tool directly into X, a platform he envisions as a global town square, while simultaneously marketing it as a way to verify information. Grok’s failures are fast becoming a defining characteristic rather than an exception, with increasingly dangerous consequences for the integrity of public discourse on the platform. X did not immediately respond to a request for comment regarding these incidents.