Is Banning X Really Censorship - or Long-Delayed Accountability?
Why governments should no longer be willing to ignore a platform architected for harm.
Something decidedly odd was happening on Bluesky last night. No, it wasn’t a new influx of patrons celebrating their departure from X, as usually happens when Musk’s platform does something particularly horrific.
It was different - slightly more low-key, but potentially far more telling. It was first picked up (by my reckoning, at least) by Alex Ip from Xylom, who noticed that the Bluesky team was working late into the night on masses of verifications - very specifically, verifications of Canadian political parties. This led him to speculate that a coordinated move may be underway between the governments of Canada, Australia and the UK to ban X.
A move like this would be, and I’m not underselling this, monumental. From its founding in 2006 (20 years ago?!), X was the go-to platform for breaking news across the world. It has been instrumental in the communication strategies of nearly all governments globally, and played a critical role in the mobilisation of revolutions, most notably during the 2011 Arab Spring.
X, however, is no longer that platform. Now more than ever, with X’s AI chatbot, Grok, happily allowing the creation of non-consensual sexual imagery of women and children, I think it’s time for our government to depart the platform - or, even better, ban it completely.
Which brings me to the meat of this piece: the possibility of a ban. The idea first started growing legs late last week when The Telegraph reported that a ban was under consideration. This came off the back of comments made by Liz Kendall, the Secretary of State for Science, Innovation and Technology, who commented that:
“Sexually manipulating images of women and children is despicable and abhorrent... I, and more importantly the public, would expect to see Ofcom update on next steps in days not weeks.”
She also noted that the Online Safety Act:
“…includes the power to block services from being accessed in the UK, if they refuse to comply with UK law” and that “if Ofcom decide to use those powers they will have our full support”.
At the time of writing, there have been no official announcements about a ban. However, as Alex Ip noted, where there is smoke, there is most certainly fire - and this potential fire could become a full-on conflagration, especially in certain parts of society that have become increasingly dependent on X.
I would personally be very much in favour of banning X - there is no good reason for it to still have a presence in the UK.
The lack of moderation, an algorithm that amplifies far-right conspiracies and this most recent scandal clearly demonstrate that the once go-to platform has become a shell of its former self. I am likely not alone in this view. YouGov found in 2025 that only 12% of Britons have a favourable view of the platform, with 63% unfavourable. I can’t imagine that has improved over the past year, given the multiple scandals (“Mecha-Hitler”, EU fines and now unregulated AI image creation) swirling around not only the platform, but its billionaire owner.
Many of X’s proponents will argue this week that this is nothing more than an authoritarian crackdown on free speech by the UK government. These will be, if I’m honest, uncomfortable arguments to make, because who really wants to defend a platform as ignominious as X has become? I discussed a few of these arguments on the very first episode of Bear and Monk Debunks.
The primary defence will be that there is undue focus by governments around the world on X, and that crime is prevalent on all the other major platforms. This argument was rolled out by The Telegraph in a piece written by Jake Wallis Simons this weekend, who noted that:
“Grok is not the only AI system able to carry out such intrusions. Clearly, this is part of a much larger problem involving the helter-skelter pursuit of super-intelligence with scant regard for the human consequences... TikTok is widely exploited by human traffickers, while paedophiles are known to target children using Snapchat, Instagram and Facebook. None of those are facing a ban.”
What he’s saying is not untrue - there have been major issues across all major social media platforms. But he’s leaving out three things that make the situation X finds itself in stand out:
Scale, design and response.
Yes, crime happens on all platforms. However, there is a fundamental difference between a crime happening on a platform and a platform being architecturally optimised for harm. What we’re seeing with X isn’t a failure to prevent abuse - it’s the systematic removal of the infrastructure specifically designed to stop it.
Independent researcher Genevieve Oh found that Grok was producing between 6,700 and 7,000 sexualised images per hour during peak periods in early January. By comparison, the top five dedicated deepfake pornography websites combined produced only 79 such images per hour. That’s 84 times more abuse content than platforms specifically designed for this purpose. The scale is just staggering.
In terms of design, when xAI launched Grok’s image generation tool, they made a conscious choice to include “spicy mode” - a feature explicitly designed to allow NSFW content with minimal restrictions. This wasn’t an oversight - it was marketed as a selling point, positioning Grok as an “unfiltered” alternative to what Musk characterised as overly censorious AI systems.
Competitors like OpenAI, Google and Anthropic all implement strict filters against non-consensual intimate imagery.
Grok deliberately didn’t, which was a design choice, not a technical limitation.
The most damning aspect of all this, though, is the response - or rather, the spectacular lack of one.
When journalists initially contacted xAI for comment, they received an automated reply.
When Elon Musk himself was confronted with AI-generated images of women and children in sexualised scenarios, he responded with laugh-cry emojis.
When the Internet Watch Foundation confirmed finding criminal images of children aged 11-13 that appeared to have been created using Grok, xAI’s solution was to put the abuse tool behind a paywall - in effect monetising harm rather than preventing it.
When other social media companies discover child sexual abuse material, they report it and take immediate action.
They do not send snarky auto-replies.
They do not laugh.
They do not charge people for access to the tools creating the abuse.
What’s absent from Wallis Simons’ assessment is that X didn’t merely fail to stop this abuse. Under Musk’s ownership, the platform systematically dismantled the infrastructure that could have, and should have, prevented it.
Musk fired 80% of the engineers working on trust and safety.
Musk disbanded Twitter’s Trust and Safety Council.
Musk reduced full-time content moderators from 107 to 51.
Musk was warned that Grok’s image generation function was essentially a nudification tool waiting to be weaponised.
Musk then proceeded to ignore those warnings.
So when we’re asked why X is facing potential bans whilst other platforms aren’t, the answer is simple - because X made deliberate choices to remove safeguards, to ignore warnings and to respond to criminal content with contempt. The very foundation upon which X is now built is architectural negligence at best, and complicity at worst.
Which leads me to the ultimate question: should the UK government ban X?
Five years ago, I would have called that question unthinkable - banning X would have been akin to shutting down the telephone network.
But, crucially, X is no longer that platform - and it hasn’t been for some time.
What we have now is a service that has systematically dismantled its safety infrastructure, deliberately designed AI tools to bypass industry-standard protections, and responded to the creation of child sexual abuse material with mockery and monetisation. This isn’t about one scandal, however horrific, but a pattern of choices that demonstrate fundamental contempt for user safety and the rule of law.
The inevitable cries of “censorship” and “free speech” ring exceedingly hollow, because a ban on X isn’t about silencing dissent or controlling the flow of information.
It’s about holding a platform accountable for enabling criminal content at industrial scale. Free speech and freedom of expression have never included the right to create non-consensual sexual imagery of women and children - this is explicitly illegal in the United Kingdom, and there is no legal or moral framework under which what Grok has been doing is defensible.
Two countries have already acted - Indonesia and Malaysia implemented bans on Grok over the weekend, determining that X’s responses were insufficient and that the platform’s design posed inherent risks to women and children.
France has opened a criminal investigation.
India issued a 72-hour ultimatum.
Ofcom has made clear it’s considering all options, including a ban, with the full support of the UK government.
Would a ban cause disruption? Without a doubt.
Government departments would need to find alternative communication channels, journalists would lose a tool they’ve relied on for nearly two decades and public figures who’ve built audiences on the platform would need to migrate elsewhere. None of these are trivial concerns.
But the alternative - allowing X to continue operating with impunity - sends an equally powerful message: that platforms can architect harm, ignore warnings, fire their safety teams, and face no meaningful consequences as long as they’re sufficiently embedded in our digital infrastructure. This is a precedent that no reasonable government should be comfortable setting.
This is a watershed moment for tech accountability, and what happens next will determine whether we live in a world where platforms answer to the law, or where the law bends around platforms too large to fail.
I, for one, know which world I’d rather live in.
Bearly Politics is 100% reader-supported, and its purpose is to analyse, dissect and try to make sense of a news cycle that can sometimes feel overwhelming. If this piece helped you understand things a bit better, or helped you articulate something you’ve been wrestling with, a free or paid subscription genuinely helps me keep this work going.
If you would like to support Bearly Politics without subscribing, you can also do so by donating a coffee.
And if neither of these work, a share is just as valuable.

