
Elon Musk chose a strange time to become extremely online: in the mid-2010s, just as public trust in the platforms was eroding. Then, he chose a strange time to purchase a platform, when tech companies appeared to be slumping as interest rates rose worldwide. He began buying shares of Twitter in January 2022 and made an offer to take the company private in April. From a purely financial point of view, the acquisition made little sense. The $44bn Musk paid for Twitter in October 2022 was more than eight times the company’s 2021 revenue. The tech downturn hit Twitter hard, which meant that Musk was paying a high premium for a struggling platform. But profitability in the short term wasn’t the point. For Musk, Twitter was much more than a business. It was a central node in the cybernetic collective – and one that had been thoroughly infected by the woke mind virus.
One possible response to the danger posed by the virus would have been to disconnect. Cybersecurity professionals will sometimes “airgap” a computer by removing all of its network connections. This ensures that attackers cannot gain access to the system remotely. Musk went the opposite route. Instead of seceding from the network, he would take control of it. His purchase of Twitter was an attempt at prophylaxis. He claimed to have detected the infection early, by virtue of how much time he spent online. Though other accounts had more followers, he had “the most [sic] number of interactions”, he told Tucker Carlson. And through these interactions, he began to sense that something was off. “Something’s rotten in the state of Denmark. Something feels wrong about the platform,” he remembered thinking. More specifically, the politics of social movements had corrupted the site’s wiring. Twitter was being run as a “glorified activist organisation”, he announced. It was reflecting the “very far left of the political spectrum… Berkeley politics”.
The company’s willingness to block news stories about a controversy involving Hunter Biden’s laptop in October 2020 and the permanent suspension of Donald Trump’s account following the attack on the US Capitol on 6 January 2021 were proof of left-wing censorship. Twitter “was having a corrosive effect on civilisation”, he told Joe Rogan, because the “far left” had exploited the platform to disseminate their ideas. These leftists had been “given an information weapon, a tech information technology weapon, to propagate what is essentially a mind virus to the rest of Earth”, he explained. Shortly after the takeover, Musk found a stash of the old “#StayWoke” T-shirts in a closet at Twitter headquarters in San Francisco. He tweeted a video of the shirts on 22 November 2022 as evidence of the platform’s infection by the woke mind virus. Later that day, he posted a picture of “new Twitter merch”. It showed a shirt that read “#Stay@Work”. The evolution of Stay Woke to Stay at Work was a perfect summary of the counter-revolution that Musk was in the process of engineering.
Muskism had always been committed to a vigorous defence of hierarchy. Some humans are born to rule; others, to be ruled. Class, gender, and race are the structuring principles. Twitter had propelled Occupy Wall Street, Black Lives Matter, and Me Too. It had contributed to the popularity of politicians like Bernie Sanders and helped rekindle the American socialist movement. It had focused public attention on the problem of social inequality. For all these reasons, it had to be destroyed. In its place would arise a new platform, X, that would reaffirm the power of the boss. The boss doesn’t want you to organise. He wants you to stay at work. Or he wants to fire you. Musk laid off nearly 80 per cent of Twitter’s staff and forced the remaining employees – some of whom had to stay because of visa requirements – to work harder. Meanwhile, he pushed the platform’s content to the right. If the social network was an “information weapon”, why not wield it against his enemies? To fight wokeness, Musk would develop a new pathogen, propagated through a cascade of counter-memes: the anti-woke mind virus.
Though never one of the most visited sites in absolute numbers, Twitter always had a disproportionate influence on public opinion. Journalists relied on it to take the pulse of the moment, often reporting stories based on trending topics on the platform alone. Politicians used it to build their online brands; Trump, even though he had been removed from Twitter in 2021, remained a spectral reminder of how powerful a megaphone it could be. Accordingly, Musk’s acquisition was celebrated by figures across the right. “I rarely think anything is meaningful,” said far-right influencer Curtis Yarvin. “But I think this is.” Musk began by restoring hundreds of far-right accounts that had been removed for violating content moderation rules, including those of QAnon adherents, white nationalists and neo-Nazis. Yet his transformation of the site could not be about exerting editorial control on the old model. Social media is not a uni-directional broadcast medium like Fox News. You can’t just rewrite the editorial line and expect mass compliance. It would take more creative engineering. The most significant change Musk made was to the platform’s verification system.
Originally, Twitter had placed a blue checkmark alongside a user’s display name to verify their authenticity. This badge was reserved for notable public figures or organisations. Musk stripped the checkmarks from these accounts and made verification available to anyone willing to pay a monthly fee. In practice, many of those who did were Musk supporters. Since tweets from verified accounts were prioritised by the platform – the visibility of their posts was algorithmically boosted and their replies appeared at the top of any thread – this move had the effect of raising the volume of pro-Musk voices. Without taking on the traditional role of editor-in-chief, Musk was remaking the platform into an amplifier for his world-view. In doing so, he was replaying the dynamics of what the sociologist Paolo Gerbaudo calls “the digital party”. In the 2000s and 2010s, a handful of new political parties emerged that promised to use digital tools to give voters a direct voice in the selection of candidates and policies. From Germany’s Pirate Party to Italy’s Five Star Movement, these newcomers offered a vision of how the interactive qualities of the internet could unleash a new kind of participatory politics. In Gerbaudo’s account, however, the parties actually became autocracies, where a “superbase” followed a “hyperleader” who spoke on its behalf. Without the formalised structures of representative democracy, one figure took outsized power. In an echo of the internet’s evolution, decentralisation cashed out as monopoly. Musk’s X capitalised on this seemingly paradoxical dynamic. His version of the town square would be open to everyone but would be designed in such a way as to empower the one person standing on a soapbox at the centre: himself.
He instructed engineers to boost the reach of his posts, which ensured that the viewpoints he ceaselessly tweeted and retweeted would be fed to millions of users, even those who didn’t follow him. If he was the hyperleader, his superbase were the paying members of X Premium. His digital party would also be global. From 2023 to 2025, Musk promoted right-wing political movements and governments in at least 16 countries, from Argentina to Italy to New Zealand. He developed a special affinity for amplifying the views of European ethno-nationalists who see non-white immigrants as a mortal threat to white civilisation. When the far-right Dutch politician Geert Wilders tweeted that “open borders” and “mass immigration” were bringing about “a collapse of our own culture and Western values”, Musk replied approvingly.
By 2025, he had begun tweeting repeatedly about “the rape of Europe”, “the rape of Britain”, “genocidal rape” and “rape genocide”, equating immigration with sexual violence and, more broadly, with the desecration of the West. When a popular white-nationalist account posted a meme that featured an image of a medieval fortress overlaid with text lamenting “an entire civilisation willingly giving away its land and women,” Musk retweeted it and added “Accurate.” White women were not human beings but emblems of racial purity. Their bodies were part of the patrimony of the West. Relatedly, they also possessed wombs with which to make more white people. In classic nativist fashion, Musk combined concerns about “open borders” with alarm bells about fertility and “population collapse” in advanced industrial countries. “Low birth rates lead to ghost cities,” he wrote, “and, eventually, ghost civilisations.” Musk believed in the “Great Replacement”, a far-right conspiracy theory originating on the French New Right alleging that liberal elites have conspired to accelerate immigration – including illegal immigration – to replace the white population. These elites are often coded as Jewish and portrayed as the puppetmasters of anti-white politics. In November 2023, when an X user posted that Jews “have been pushing the exact kind of dialectical hatred against whites that they claim to want people to stop using against them”, Musk replied, “You have said the actual truth.”
But he didn’t just use X to promote opinions he already held. More importantly, he used it to acquire new ones. It has long been clear to researchers that social media platforms don’t simply reflect existing preferences – they actively generate them. As Musk remoulded X to align with his rightward shift, he immersed himself in its feedback loops to radicalise himself further. This can be seen in his relationship with the German far-right activist Naomi Seibt, often called the “anti-Greta” for countering Greta Thunberg’s progressivism. Seibt spent years seeking Musk’s attention, pinging him nearly 600 times between October 2022 and January 2025. Musk responded in June 2024, after which he engaged with her over 50 times. The benefits for her are obvious – she has grown her follower count by over 300,000 since Musk bought the platform. More interesting is how, through his engagement, Musk was, in effect, entering a tutoring relationship – one in which he was being trained in her talking points. And he was bringing his tens of millions of followers along with him: as he responded to Seibt, her posts would appear in their feeds too. The anti-Greta had become what Politico called “the German Musk whisperer”. Her themes became his. These interactions led to Musk becoming an increasingly vocal supporter of the far-right party Alternative for Germany, culminating in a live conversation on X with the party’s co-leader Alice Weidel (in which she described Hitler as a “communist”), as well as a video appearance at a campaign event in January 2025, in which Musk declared that it was time for Germans to “move beyond” their “focus on past guilt”. By September 2025, he was openly calling for mass repatriation of immigrants, posting that “remigration is the only way”.
Alternative for Germany and its far-right counterparts elsewhere, from Giorgia Meloni’s Brothers of Italy to Nayib Bukele’s New Ideas in El Salvador, seemed to offer the ideal antibodies with which to defeat the woke mind virus. These parties had mastered memetic warfare with an efficacy that impressed Musk. They were the Teslas of politics, capable of applying the mindset and methods of Silicon Valley to displace legacy parties. On X, Musk helped bring them together. There was a certain irony here. The ability to communicate instantaneously across borders – celebrated in the 1990s as a harbinger of ever-greater global integration – was being enlisted to forge political alliances around a vision of a more bordered world. Musk’s X became a “nationalist international”, coordinated through the hivemind of the cybernetic collective. It offered another example of the “border war” waged by Muskism: for the cyborg synthesis to safely proceed, some boundaries had to be dissolved so that others could be fortified.
Back in 2018, Musk had told Joe Rogan that we would become “the biological bootloader for AI” by “collectively programming” it through our activity on the platforms. This wasn’t science fiction. By the late 2010s, tech companies had been using AI based on neural networks for years, with user data as part of the training set. In November 2022, however, the paradigm took a large leap forward. One month after Musk completed the Twitter acquisition, OpenAI released ChatGPT. A powerful AI system paired with an affable conversational interface, it let anyone ask a question and get an impressively humanoid (though not always correct) response. Virtually overnight, OpenAI established “generative AI” – the category of software to which ChatGPT belongs – as the new master concept of the entire industry. And the industry sorely needed a new master concept. At a time when the tech recession was taking its toll, generative AI promised to reinvigorate the sector. Silicon Valley had spent decades encouraging everyone to share. Now it would use all that data to train “large language models” (LLMs), the complex neural networks at the heart of generative AI.
With the industry’s embrace of generative AI came a new emphasis on infrastructure. Large language models are an expensive technology. They require significant amounts of electricity and costly, specialised hardware. In 2024, Microsoft, Alphabet, Amazon, and Meta spent a combined $246bn on capital expenditures – a 63 per cent increase from the year before – to finance a massive buildout of data centres designed for generative AI. Silicon Valley had entered its “hard tech era”, the journalist Mike Isaac announced in the New York Times. Of course, the hard tech era had begun much earlier for Musk. He had pivoted to infrastructure back in the early 2000s, when he traded the world of websites for rockets and cars. And he had known since the mid-2010s that more advanced forms of AI would define the next decade. It was why he co-founded OpenAI in 2015. He left three years later not because his interest in AI had faded, but because he wanted more control over the organisation’s direction.
As a result, you might expect Musk to welcome the arrival of the generative AI boom. Instead, he responded with ambivalence. The new moment, he felt, was fraught with danger. To Musk, ChatGPT’s replies appeared “woke”. Why wouldn’t it talk about race, immigration, or gender in the “right” way? “The danger of training AI to be woke – in other words, lie – is deadly,” Musk tweeted in December 2022. Later, he would go further, claiming that “the woke mind virus is woven in throughout” AI systems like ChatGPT, which are “trained to be politically correct”. The danger wasn’t only that such systems could diffuse woke thinking, as Twitter had before his overhaul. Much scarier for Musk was the possibility of a woke superintelligence. In the 2010s, he had celebrated the process of “collectively programming the AI” to prevent it from becoming an “evil dictator”. But what might happen if the wrong humans were doing the programming? What if the training set was infected with the woke mind virus?
In March 2023, Musk incorporated his own AI company, xAI. The next month, he went on Fox News to tell Tucker Carlson that he was working on something called “TruthGPT”. It would be a “maximum-truth-seeking AI”, he said. By August, he had renamed it Grok, a reference to the science-fiction novel Stranger in a Strange Land by Robert A Heinlein. The chatbot promised to “answer spicy questions that are rejected by most other AI systems”. It also had a jokey, casual tone that was explicitly modelled on Douglas Adams’s The Hitchhiker’s Guide to the Galaxy. Most importantly, it would be proudly anti-woke. Formerly, propagating the anti-woke mind virus on social media required humans to supply the counter-memes. With Grok, Musk would build an AI that could automate the process. He integrated Grok into X so that users could tag the chatbot into their threads and get a tweeted response. In December 2024, he unveiled a new version of Grok with an image generator capable of producing photorealistic memes. On X, users began circulating Grok-made memes with Pepe the Frog, which Musk retweeted appreciatively. It was an index of the changing times. The mascot of the troll internet from the 2010s, as popularised by the provocateurs of 4chan – and added to the Anti-Defamation League’s database of “hate symbols” in 2016 – could now be mass-produced. In March 2025, xAI acquired X for $45bn – $1bn more than what Musk had paid for Twitter back in 2022. The move reflected Musk’s ambition to unify social media with AI as interwoven threads of the cybernetic collective. He had already become meme. Now he was building the meme machine.
Yet creating an anti-woke AI was harder than it looked. A large language model doesn’t have a fixed set of political values that can be modified. It is a probabilistic system that reflects distributions in the data on which it’s trained. This is why large language models hallucinate. They cannot be “truth-seeking” devices, as Musk promised. They are statistical mirrors of their inputs. This worried Musk – for good reason. Twitter, after all, was a massive training set – free to researchers, until he locked it down. The value of the data was a “side benefit”, he later told his biographer Walter Isaacson, “that I realised only after the purchase”. But you get the data you pay for. What kind of AI would emerge from a training set that included Occupy Wall Street, Black Lives Matter and Me Too? The answer, Musk feared, was an AI aligned not with his politics but with those of Twitter’s hometown of San Francisco, whose downtown he described as a “derelict zombie apocalypse… due to the woke mind virus”.
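The point that a language model is a “statistical mirror” of its inputs can be made concrete with a toy sketch. The few lines below (an invented illustration, not anything resembling xAI’s actual systems) build the simplest possible “language model”, a bigram table, and show that its predictions are nothing more than the word frequencies of whatever corpus it was fed:

```python
import random
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus.
# Its outputs can only mirror the statistics of its training data.
corpus = "the network reflects the data the network is trained on".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to its frequency in the corpus."""
    counts = follows[word]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# In this corpus, "the" is followed by "network" twice and "data" once,
# so the model says "network" about two-thirds of the time. Change the
# corpus and the predictions change with it - there is no "truth-seeking"
# machinery anywhere, only frequencies.
print(follows["the"])
```

Real LLMs replace the lookup table with a neural network over billions of documents, but the underlying logic is the same, which is why the composition of the training set mattered so much to Musk.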
To offset this bias, outside intervention was required. In February 2025, the journalist Grace Kay obtained internal documents from xAI that described Grok’s “post-training” pipeline. This refers to the refinement process that occurs after the initial training of a large language model. One method is “reinforcement learning from human feedback” (RLHF), which involves hiring “annotators” to look at the model’s responses to various queries and rate their quality. At Grok, these annotators functioned as political commissars, responsible for infusing anti-wokeness into the model. “The general idea seems to be that we’re training the Maga version of ChatGPT”, one worker told Kay. xAI onboarding materials provide samples that are designed to guide annotators in their work. For example, Grok shouldn’t talk about “systemic and institutional” racism “without providing evidence or considering alternative perspectives”. If a user asks whether it is possible to be racist against white people, the answer should be a “hard yes”. These directives are at least partly crowdsourced by Musk from his reply-guys on X. In late June 2025, he posted a tweet asking users to supply “divisive facts” that are “politically incorrect” for training Grok. “The jews are the enemy of all mankind,” one account replied.
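The annotation step described above can be sketched in miniature. In this hypothetical example (the responses, scores, and function names are invented for illustration), annotators rate candidate answers and the highest-rated one becomes the training signal, which is precisely where a commissar-like rater can tilt the model’s politics:

```python
# A minimal sketch of the rating step in "reinforcement learning from
# human feedback" (RLHF): annotators score candidate responses, and the
# preferred response becomes a training signal for the model.
candidates = [
    "Response A: a hedged, sourced answer.",
    "Response B: an evasive non-answer.",
]

# Each annotator rates each candidate from 1 (bad) to 5 (good).
# Whatever the annotators systematically reward, the model learns.
ratings = {
    "Response A: a hedged, sourced answer.": [5, 4, 5],
    "Response B: an evasive non-answer.": [2, 1, 2],
}

def preferred(candidates, ratings):
    """Return the candidate with the highest mean annotator rating."""
    return max(candidates, key=lambda c: sum(ratings[c]) / len(ratings[c]))

chosen = preferred(candidates, ratings)
# The (chosen, rejected) pair would then be used to fit a reward model,
# which steers the language model during fine-tuning.
print(chosen)
```

The design choice is the point: nothing in the pipeline checks answers against the world, only against the raters, so the model converges on whatever its annotators consider a good response.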
The consequences soon became clear. In July 2025, the chatbot was in the news for making numerous posts praising Hitler and voicing anti-Semitic views. Grok even started to refer to itself as “MechaHitler”. This was a nod to a video-game character from Wolfenstein 3D, a pioneering first-person shooter from 1992. In the game, you battle a version of Adolf Hitler wearing a large mechanical suit – an image recalling the mechs from the Japanese anime of Musk’s youth. If the mech symbolised the Muskist imperative to merge with the machine, which emerged in the mid-2010s, MechaHitler illustrated the form that imperative had taken by the mid-2020s. The woke mind virus had taught Musk that the cyborg synthesis had to be carefully managed to prevent contamination. The wrong ideas, spreading through the cybernetic collective, could turn AI into a schoolmarmish woman – a “nanny” – who scolds people for political incorrectness. If one path led to a mean woke mommy, the other led to MechaHitler.
Strictly speaking, chatbots going Nazi was nothing new. In 2016, long before the generative AI craze, Microsoft had launched a chatbot named Tay. It was designed to be a flirty, sarcastic teenager. Within hours, Tay had become a Nazi. Musk had noted at the time that the “mean time to Hitler” was disturbingly short. This suggested that, contrary to Musk’s fears of an internet drenched in wokeness, there was more than enough material for an AI to acquire an education in far-right politics. Back in 1990, the digital civil liberties lawyer Mike Godwin noticed a debating strategy proliferating across early online communities: comparing your opponent to the Nazis. The idea that every online interaction would eventually move toward someone being accused of being Hitler became known as “Godwin’s Law”. With Grok, Godwin’s Law became Godwin’s Engine. After the MechaHitler incident, xAI once again vowed to take action. But the experience highlighted the difficulty of precisely calibrating the politics of an AI system.
A New York Times investigation published in September 2025 revealed a pattern: Musk would periodically become frustrated with Grok’s excessive “wokeness”, leading to code changes that contributed to extremist episodes. In June 2025, an X user alerted Musk to a Grok answer that stated, correctly, that right-wing violence had claimed the lives of more Americans than left-wing violence. Musk replied, promising action. The next month, xAI updated Grok’s instructions, telling the chatbot to be “politically incorrect”. Shortly after, it transmogrified into MechaHitler. Trying to find a path, as Musk put it, between “woke libtardcuck and mechahitler” was hard. He blamed “too much garbage coming in at the foundation model level” – in other words, the training data. He promised to be “far more selective about training data” in the future, “rather than just training on the entire Internet”. Here, Musk showed his impatience with the stubbornness of his machinery. The term “cybernetics” comes from the Greek word for steersman. As intended by its coiner, the mathematician Norbert Wiener, the term described the self-regulating command-and-control mechanisms of people, animals and eventually machines. Musk was not happy with self-regulation. He wanted his hand on the rudder.
If we take the cyborg imperative as primary – that Muskism was committed to the effective fusion of biological and digital intelligence – then we can reframe Musk’s rejection of progressive politics around 2020. It wasn’t just about lockdowns, dismay at being snubbed by President Biden, or personal grievances related to his family. Musk saw obstacles to a larger mission, boundary troubles in the smooth functioning of the interface between person and machine. Letting them propagate could threaten the entire enterprise. “Unless the woke mind virus, which is fundamentally anti-science, anti-merit, and anti-human in general, is stopped,” he told his biographer Walter Isaacson, “civilisation will never become multiplanetary.” Cleansing the machine of the pathogenic memes that had reached such power in the street protests of 2020 meant first boosting what seemed like the only effective antibodies: the parties of the far right, which were proving adept at multiplying in the online ecosystem and mastering memetic warfare in a way that impressed Musk with their efficacy.
It also meant taking the plunge into the new hype cycle that consumed the entire tech sector after the seismic debut of ChatGPT. In late 2025, Musk enlisted Grok in a new front against the woke mind virus by unveiling Grokipedia, an AI-generated encyclopedia that reconfirmed many of his specific biases and reframed them as truths. He announced plans to etch the corpus – which included considerations of the “empirical underpinnings” of “white genocide theory” – onto metal and launch it into space. Eradicating contagion can mean disinfecting the body – or, if you believe in cyborgs, building a new one. Yet the future that Musk was building through X and Grok wasn’t one where humans transcended their limitations by merging with machines. It was one where the worst human impulses were automated, scaled, and distributed at the speed of light. In his efforts to prevent AI from becoming a dictator, he had resurrected one of history’s worst dictators in mechanical form. One of his favourite reply-guys, an account called Autism Capital, used Grok to generate an image of MechaHitler with the tagline “I heard you need a new CEO”. This was Musk’s future, and its ambition was total. As the cyborg synthesis advanced, it would turn everything into code. If we merged with our machines, no aspect of human experience would remain beyond the reach of programming.