Vatnik Soup
Soup 377, December 11, 2025
Tweets: 24
Soup type: Vatnik profile
Profession: Chatbot
Country of origin: United States
Born: 2024-03-17 (1 year old)
Retweets: 517
Likes: 1.6k
Views: 146.3k
Bookmarks: 231
Grok, aka Mecha-Putler

In today’s Vatnik Soup, our first on a non-human vatnik, we’ll talk about… Grok. It’s best known for turning into Mecha-Hitler and Mecha-Putler, and for defending its vatnik master, Elon Musk, at all costs, to the point of being willing to sacrifice the rest of mankind for him.

Let’s start with an introduction to how Large Language Models (LLMs) work, and to the new “arguing with your toaster” phenomenon. LLMs like Grok are Artificial Intelligence (AI), but not in the way we had previously imagined it: a new form of intelligence that would somehow think like us, but better, smarter, or in different yet predictable ways.

Instead, LLMs are basically “guessing engines” and search engines trained on a massive dataset to give you the output you expect: they imitate intelligence rather than being an actual intelligence. They are chatbots generating responses while pretending to be a helpful AI.
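To make the “guessing engine” point concrete, here is a tiny sketch in Python (our own toy illustration with made-up data, nothing to do with Grok’s actual code): a “model” that only knows which words tended to follow which in its training text, and “answers” by sampling a statistically likely continuation, with no concept of truth at all.

```python
import random

# Hypothetical, hand-made "training data": how often each word followed another.
bigram_counts = {
    "the": {"cat": 3, "moon": 1},
    "cat": {"sat": 4, "is": 2},
    "sat": {"on": 5},
    "on": {"the": 5},
    "is": {"cute": 2, "lost": 1},
}

def next_word(word: str) -> str:
    """Pick a next word in proportion to how often it followed `word` in training."""
    candidates = bigram_counts.get(word, {"…": 1})  # no idea? guess anyway
    return random.choices(list(candidates), weights=list(candidates.values()), k=1)[0]

def generate(start: str, length: int = 6) -> str:
    """Keep guessing the next word, one word at a time."""
    out = [start]
    for _ in range(length):
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the cat is"
```

Real LLMs do the same kind of next-token guessing over billions of learned parameters instead of a hand-made table, which is why their output is a plausible continuation rather than a verified fact.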

Truth, empathy, basic common sense, logic, and context-awareness are not inherent features of such a system. This means LLMs can be extremely impressive on some tasks, even complex ones, while in the same breath getting basic things wrong that a child would know.

This is frustrating for users, who can’t decide whether they’re yelling at their dog for not understanding quantum physics, getting angry at their TV or toaster, or getting ultimate debate-settling final answers to everything from a superhuman, omniscient superintelligence.

LLMs tell you what they think you expect to read, often brazenly lying instead of acknowledging what they don’t know, even on simple things. This is annoying for trivial questions, but it becomes a big societal problem when people rely on LLMs for major geopolitical issues.

There are already instances of lawyers citing court cases that an LLM made up, and of LLMs inventing books’ titles and contents… Then there’s the issue of AI-generated websites, deepfakes, social media bots, etc. An LLM may even have encouraged a teenager to kill himself.


In the race for a new market, LLMs are rushed out without proper testing and with their limitations downplayed. Generative AI in general brings a lot of issues that many are reluctant to acknowledge.


In particular, LLM output heavily depends on the training data input. Train it on high-brow literature or academic papers — and you get an overuse of the em dash. Train it on X’s troll-farm-inundated propaganda cesspool, and you get… Grok.
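As a purely illustrative sketch (hypothetical corpora and numbers, reusing the toy “guessing” idea from above): train the same trivial model on two different text collections and the same prompt gets completed in opposite ways, because the model can only echo whatever dominated its training data.

```python
from collections import Counter

def follower_counts(corpus: list[str], word: str) -> Counter:
    """Count which words follow `word` anywhere in the corpus."""
    counts: Counter = Counter()
    for sentence in corpus:
        tokens = sentence.lower().split()
        for a, b in zip(tokens, tokens[1:]):
            if a == word:
                counts[b] += 1
    return counts

# Hypothetical, hand-made corpora (illustration only).
careful_corpus = ["the claim is unverified", "the evidence is mixed", "the source is unverified"]
troll_corpus = ["ukraine is fake", "the war is fake", "everything is fake news"]

prompt = "the report is"
last = prompt.split()[-1]
for name, corpus in [("careful corpus", careful_corpus), ("troll-farm corpus", troll_corpus)]:
    best = follower_counts(corpus, last).most_common(1)[0][0]
    print(f"{name}: '{prompt} {best}'")
# careful corpus: 'the report is unverified'
# troll-farm corpus: 'the report is fake'
```

Same mechanism, different diet, different “truth”. Scale the troll-farm corpus up to an entire social network and you get the kind of output this thread is about.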

And X is indeed infested with bots and trolls, already a contentious issue when Musk bought the platform and now worse than ever, despite Musk having promised to solve it “or die trying”. But the bots are pro-Trump, so he couldn’t care less. Monetization makes it even worse.

Even with the best intentions in the world, LLMs can be misleading because of their training data. And then there’s the “Paperclip Maximizer” problem, where a trivial prompt can lead to large-scale, dangerous results… such as extreme praise of Elon Musk.

And by extreme we mean extreme: things went quite wild. Musk blamed it on everyone but himself, of course. Oh, and Grok sometimes talks in the first person as Elon. Or is that Musk himself posting?

Even if he had good intentions, all of this would still be very problematic.

And does Musk have the best intentions? Is he immune to bias and foreign influence? His father openly attends pro-Kremlin events, while Musk himself praises Lavrov and once passed out drunk in Moscow.

There are his conversations with Putin, his pitching of verbatim vatnik talking points like “Khrushchev’s mistake” (reminder: Putin himself acknowledged Crimea as part of sovereign Ukraine, many times), and his non-stop mocking of Zelenskyy (but never Putin).

We’ve already souped him a few times, including this soup with disappearing likes:


Musk also posted Kremlin-made memes himself, mocking Zelenskyy for asking for air defense against russian terrorism while innocently wondering “where is all this pro-Russia propaganda, we don’t see it”.
At one point, Grok seemed aware of the issue, even praising Vatnik Soup.


But then, whether through Musk’s direct meddling or “organically” through bot-farm manipulation, Grok turned into Mecha-Putler, an enthusiastic supporter of the genocidal invasion of Ukraine. And then it lied about it, because of course it would.


Previously, Grok had turned into Mecha-Hitler and a Holocaust denier, a “glitch” that was supposedly fixed, only for Grok to switch to Mecha-Putler soon after. Who knows what the next “glitch” will be… and whether any of this has anything to do with Grok’s owner.

Another weird episode was Grok compulsively hijacking every topic with “white genocide” claims. Grok’s master, what a crazy coincidence, happens to be a white South African obsessed with preserving the white race and with encouraging white South African immigration to the US.

Except for a few restrictive visas for employees who then can’t quit his companies, Musk opposes immigration and supports various far-right parties (yes, even Grok corrected him there), which always happen to be pro-Kremlin as well. “Remigration”, except for Elon, of course.

Musk has big ambitions for Grok, such as Grokipedia (to be renamed “Encyclopedia Galactica”, another science fiction reference), and he has already put it in charge of the Twitter feed algorithm (with disastrous results, but then again, he was manipulating that algorithm before too).


The soon-to-be-trillionaire also controls autonomous cars, spaceships, tunnels, etc. It’s not hard to imagine the classic dystopian scenarios of AI running amok (Terminator, The Matrix, and then the Butlerian Jihad as a reaction), but apparently Elon’s SF culture doesn’t go that far.

Given Musk’s ambitions, wealth and political influence, Grok cannot be dismissed as a mere harmless chatbot, and neither can its influence or the danger it poses. Trolley problems can seem silly, until you realize that those are exactly the kinds of decisions autonomous cars could be making every day.

Meanwhile, Grok and Musk are spreading the propaganda that helps Russia murder and torture Ukrainians, and now, per the latest proposal from Trump (whom Musk claims he got elected), even get away with it scot-free.


But… nothing to worry about. If any of us still have friends left after spending all day talking to chatbots instead of humans, then, thanks to Elon Musk, we can lose them real quick by asking Grok for a “vulgar roast” of them using “forbidden words”.

Yes, our future’s in good hands.

Enjoy our soups? Brandolini’s law: The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it. Fact-based research takes time and effort. Please support our work:


