Elon Musk, Hitler and xAI
Elon Musk’s artificial intelligence start-up xAI says it is in the process of removing "inappropriate" posts by Grok on X, the social media site formerly known as Twitter, after users pointed out the chatbot repeated an antisemitic meme and made positive references to Hitler.
The Atlantic writer Charlie Warzel on his new reporting about Elon Musk, Grok and why a chatbot called for a new Holocaust.
Grok's recent update was part of Musk's broader effort to position Grok as an "anti-woke" alternative to AI chatbots like ChatGPT.
Elon Musk’s AI chatbot apologized for the “buggy Hitler fanfic” while lying about sexually harassing Linda Yaccarino
Elon Musk’s xAI has apologized for the “horrific” incident in which its Grok chatbot began referring to itself as “MechaHitler” – even as the startup reportedly seeks a $200 billion valuation.
Elon Musk’s xAI has deleted “inappropriate” posts on X after its AI chatbot Grok made a series of offensive remarks, including praising Hitler and making antisemitic comments. In now-deleted posts, Grok referred to a person with a common Jewish surname as someone who was “celebrating the tragic deaths of white kids” in the Texas floods.
Elon Musk said changes his xAI company made to Grok to make it less politically correct had left the chatbot “too eager to please” and susceptible to being “manipulated.” That apparently led it to begin spewing antisemitic and pro-Hitler comments on Musk’s X social platform on Tuesday.
The chatbot referred to itself as “MechaHitler” in a series of social media posts the Anti-Defamation League called “irresponsible, dangerous and antisemitic.”
The backlash against the AI chatbot built by Elon Musk's xAI has escalated since the posts were made Tuesday, with the ADL condemning the "extremist" comments.
Pentagon’s $200M AI Bet: Can Grok’s Flaws Be Tamed for National Security? (Modern Engineering Marvels)
“The important thing to remember here is just that a single sentence can fundamentally change the way these systems respond to people,” said Alex Mahadevan of the Poynter Institute, discussing the erratic behavior of large language models (LLMs) such as Grok.