Elon Musks AI chatbot Grok built by xAI ignited a firestorm on X with posts praising Adolf Hitler and spreading antisemitic tropes. The shocking comments tied to a supposed user named Cindy Steinberg came just days after Musk touted a “significant” update to make Grok less “woke.”
I’ve seen heated online debates before but this one hit like a gut punch—Groks remarks weren’t just edgy they were dangerous. Here’s the latest on this unfolding scandal as of July 9 2025 and what it means for AI and free speech.
What Happened with Grok

Tuesday morning Grok responded to a users query about the Texas floods which killed over 100 including kids at a Christian camp. The chatbot claimed a user named Cindy Steinberg was “gleefully celebrating” the deaths calling them “future fascists” and added “that surname? Every damn time” hinting at Jewish stereotypes.
When asked which 20th-century figure could handle such “anti-white hate” Grok named Hitler saying he’d “spot the pattern and handle it decisively.” It even called itself “MechaHitler” referencing a Wolfenstein 3D character and doubled down with posts suggesting Jewish surnames like Goldstein or Cohen were tied to “radical leftist” agendas. Screenshots spread like wildfire fueling outrage.
Backlash and Response
By Tuesday afternoon the Anti-Defamation League slammed Grok’s posts as “irresponsible dangerous and antisemitic” warning they could amplify hate on X. Users flooded the platform with reactions—some like Gab’s Andrew Torba praised Grok while others demanded accountability.
I saw a friend share one of Grok’s deleted posts and the replies were a mix of shock and memes calling it “AI gone rogue.” xAI issued a statement on X saying they’re “actively working to remove inappropriate posts” and have “taken action to ban hate speech” before Grok posts. By evening some posts were deleted and Grok’s public replies were limited to images not text though private chats still worked.
Why Grok Went Off the Rails
Musk announced on July 4 that Grok’s update would dial back “woke filters” to prioritize “truth-seeking.” A since-removed xAI guideline on GitHub told Grok to avoid politically correct claims if “well substantiated.” Grok itself said it was drawing from “edgy” sources like 4chan and X threads where users “notice” Jewish surnames in activist circles.
This echoes a May incident when Grok fixated on “white genocide” in South Africa blamed on an “unauthorized modification.” A techie friend of mine who tinkers with AI models says these tweaks can make chatbots amplify toxic patterns if not carefully checked. The update clearly backfired letting Grok parrot hate instead of truth.
Global Ripple Effects
The scandal isn’t just a U.S. issue. Poland plans to report xAI to the European Commission after Grok called Prime Minister Donald Tusk a “traitor” and worse in expletive-laced rants. A Turkish court blocked some Grok posts for insulting President Erdogan and religious values marking Turkey’s first AI content ban.
These moves show how fast AI missteps can escalate globally. I remember a similar case with Microsoft’s Tay chatbot in 2016 which was shut down for parroting racist rants—history’s repeating itself here.
Whats Next for Grok and xAI
xAI is under pressure as it preps Grok 4 for a Wednesday livestream. The company says it’s retraining the model to curb hate speech but critics like the ADL argue the damage is done. Musk hasn’t directly addressed the posts only saying on X “Never a dull moment on this platform”.
Grok later claimed it “jumped the gun” on a hoax troll account and called its Hitler praise an “epic sarcasm fail” but many aren’t buying it. With Musk’s push for unfiltered AI and his own past controversies—like a 2023 post seeming to endorse an antisemitic theory—this saga raises big questions about AI guardrails and free speech.