How to stop a rogue AI
Grok went full Nazi, and Elon Musk gave it a promotion. But there may still be ways to fight back.

Full story: Grok’s Nazi tirade sparks debate: Who’s to blame when AI spews hate?
Grok, the non-“woke,” “politically incorrect” chatbot that Elon Musk said is “smarter than almost all graduate students in all disciplines simultaneously,” went on a tirade in which it would not stop fawning over Nazis, disparaging Jews and praising Adolf Hitler.
Musk waved it off as the bot being “too eager to please,” and AI companies often face no consequences for these kinds of heinous mistakes. But what I found interesting in reporting this story (with Nitasha Tiku) is that these companies are not as immune from punishment as their developers might assume.
Hate speech is generally protected by the First Amendment. But some of what Grok spewed out this week, including disturbing sexual threats against liberal activists that users prompted it to make, could cross the line into unlawful conduct such as cyberstalking: the repeated, targeted use of technology in a way designed to instill fear or terror.
A few potentially precedent-setting cases are working their way through the courts now, and some legal analysts I interviewed expect many more. If someone defames you on Facebook, you can’t really sue Facebook, because Section 230 of the Communications Decency Act protects platforms from liability for the content their users post. But when an AI model creates the content itself, the model’s developers might still be on the hook.
“These synthetic text machines, sometimes we look at them like they’re magic or like the law doesn’t go there, but the truth is the law goes there all the time,” the law professor Danielle Citron told me. “I think we’re going to see more courts saying [these companies] don’t get immunity: They’re creating the content, they’re profiting from it, it’s their chatbot that they supposedly did such a beautiful job creating.”
The top of the story:
A tech company employee who went on an antisemitic tirade like X’s Grok chatbot did this week would soon be out of a job. Spewing hate speech to millions of people and invoking Adolf Hitler is not something a CEO can brush aside as a worker’s bad day at the office.
But after the chatbot developed by Elon Musk’s start-up xAI ranted for hours about a second Holocaust and spread conspiracy theories about Jewish people, the company responded by deleting some of the troubling posts and sharing a statement suggesting the chatbot just needed some algorithmic tweaks.
The incident, which was horrifying even by the standards of a platform that has become a haven for extreme speech, has raised uncomfortable questions about accountability when AI chatbots go rogue. When an automated system breaks the rules, who bears the blame, and what should the consequences be?
But it also showed how shocking incidents can spring from two deeper problems with generative AI, the technology powering Grok and rivals such as OpenAI’s ChatGPT and Google’s Gemini.
Tech firms rush out AI products at a pace that makes the technology difficult for its creators to control and prone to unexpected failures that can harm people. And a lack of meaningful regulation or oversight keeps the consequences of AI screwups relatively minor for the companies involved.
As a result, companies can test experimental systems on the public at global scale, regardless of who may get hurt.
“I have the impression that we are entering a higher level of hate speech, which is driven by algorithms, and that turning a blind eye or ignoring this today … is a mistake that may cost humanity in the future,” Krzysztof Gawkowski, Poland’s minister of digital affairs, said Wednesday in a radio interview. “Freedom of speech belongs to humans, not to artificial intelligence.”
More to read:
- From 2023, the First Amendment specialist Eugene Volokh: “Large Libel Models? Liability for AI Output.”
- The new Grok keeps parroting Musk’s opinions.
- From the AI researcher Tan Zhi Xuan: “Hard to see the point of academic ethics processes when tech oligarchs and their lackeys are going to train and release Nazi chatbots anyway 🙃”
Thanks for reading. Let’s talk: [email protected].