Why is the Internet so Very F&*%ed?
How we built an unregulated internet - and ended up with a digital Cthulhu
Note: This piece is voiced over rather than written, in an attempt to practise speaking into a still-intimidating mic and thinking out loud. Listening is completely optional - the post sits below as normal and is based on an edited version of the transcription.

For the past year or so, whenever I’ve been on the internet - scrolling through social media, reading news, just existing online - a question keeps crossing my mind:
“Why is the internet so very fucked?”
I’m not talking about one specific instance, or even the last couple of weeks in the X saga, but rather a pervasive feeling of degradation. A feeling that now seems permanent and ambient and just utterly exhausting. X is just the current clearest example and probably the worst of it, but overall things just feel very, very grubby.
To understand how we got here, we have to look at history again. Specifically, we have to go all the way back to the 90s.
It was a time of the Spice Girls and Friends and Tony Blair, and by then the internet had already been going for a little while. Its precursor, ARPANET, sent its first message between UCLA and the Stanford Research Institute back in 1969, and the first website was created at CERN in 1991. But the 90s was when the first serious thought emerged that maybe we should regulate things.
The problem was that regulation at that stage had to walk quite a careful line. If you regulate too early and too hard, you risk killing the medium. That’s a pragmatic concern, and one I completely understand. The effort was mostly led in the United States and done through Section 230 of the Communications Decency Act, all the way back in 1996. It was at that point that the decision was made that providers of interactive computer services would not be treated as the publisher or speaker of user content.
This is quite the opposite of what we expect with editorial control in traditional media, where legal responsibility is conferred on the publisher or the broadcaster. That divergence happened in the 1990s, and for its time, I would say it was perfectly appropriate.
Those laws assumed that platforms didn’t shape what people saw - that they would only really be hosting content, with no intervention to amplify it.
I think it’s fair to say that assumption is now completely false.
Over time, and especially once social media became a real thing, there was a huge shift from chronological feeds to algorithmic ranking.
Platforms were actively curating, boosting, suppressing, and recommending content, and engagement optimisation became the core business model of social media and, more broadly, of media on the internet. The problem is that when the internet changed, the law didn’t. The internet evolved, but the regulations stayed the same.
Hosting became curation, which became behavioural prediction, which in turn became monetisation.
Algorithmic recommendation systems now determine what becomes visible to billions of people. We already know, from internal research from Meta and from Twitter/X that has leaked over the years, that engagement was highest when content leaned towards outrage, extremity, or sexualisation.
In this context, the platforms still maintained the legal posture of neutrality, but what that has created is a situation in which abuse has become not a bug, but rather an emergent feature of engagement economics. When you know that outrage drives clicks, that sexualised content drives attention, and that humiliation spreads faster than nuance, as a business, you are going to be focused on what brings in the most money. That becomes hugely problematic.
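To make the mechanics a little more concrete, here is a deliberately simplified sketch - in Python, with invented field names and made-up weights, not anything taken from a real platform - of the difference between a chronological feed and an engagement-ranked one. The point is not the specific numbers; it is that any score built out of clicks, replies, and reshares will quietly promote whatever provokes the strongest reaction, without anyone ever having to write “boost outrage” into the code.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    clicks: int = 0
    replies: int = 0
    reshares: int = 0

def chronological_feed(posts: list[Post]) -> list[Post]:
    # The old model: newest first, no judgement about the content itself.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def engagement_score(post: Post) -> float:
    # Hypothetical weights: interactions that tend to signal conflict
    # (replies, reshares) count for more than passive clicks, because
    # they predict further engagement. Real platform weights are secret.
    return 1.0 * post.clicks + 3.0 * post.replies + 5.0 * post.reshares

def ranked_feed(posts: list[Post]) -> list[Post]:
    # The new model: whatever provokes the most interaction rises to
    # the top, regardless of whether that interaction is delight or fury.
    return sorted(posts, key=engagement_score, reverse=True)
```

Nothing in that sketch mentions outrage, yet the ranked feed will reliably surface it, because outrage is what people reply to and reshare. That is what I mean by abuse being an emergent feature rather than a bug.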
It’s also where we now see X as the perfect case study, because the situation with X is what happens when amplification, monetisation, and weak accountability collide.
A lot of how X got into the situation that it’s in really comes down to the massive reduction in moderation, verified accounts being given algorithmic priority, and monetisation tools being applied to engagement-heavy content regardless of harm. Another big shift at X is that any public discussion of moderation, or any attempt at regulation, is now framed as free speech censorship.
Now, governments are currently trying to catch up, but they really are very late and they are very constrained. For the most part, the regulation now targets systems and not individual posts.
When we think of the UK Online Safety Act, it takes a duty-of-care approach, focusing on foreseeable risks and systemic harms.
But even if the Online Safety Act in the UK was perfect, there is a jurisdiction problem. Because even if it was 100% foolproof and could cover every single thing, X doesn’t sit in the UK. It functions in the UK, but X sits in the United States.
The most important decisions that could regulate X sit with US lawmakers. Now, I don’t know if you’ve noticed, but the United States has gone slightly apocalyptic over the past year or so, and Congress isn’t really functioning particularly well.
We’ve got laws in the UK, but do they even count? Are they going to have an effect?
We’ve already seen that when Musk was sued by the EU for 120 million euros, he just shrugged it off.
Ultimately, if there has to be regulation, it would have to come from the United States. And we’re just so far from that. In addition to that, platforms also hold an incredible amount of power. When Musk first bought X, my thought was, you idiot, you’re going to run this into the ground, you’re going to lose a bunch of money. I personally missed why he was doing it.
He bought X with the sole purpose of gaining narrative control. He bought it for that influence. He bought it to shape the stories, to really drive the discussion forward. The money that he loses there is almost irrelevant.
What makes that even more dangerous is that X can recast any regulatory or censure attempt as censorship. We’re seeing that happen already: within X, the narrative around possible intervention by Ofcom is being painted as pure censorship. The whole conversation about social media regulation is being held on social media.
What has happened is that we created a Lovecraftian monster, which we’re now looking to moderate and to control and to rein in, and it’s looking back at us and saying, no, go away, fuck off. But there are things that can be done and there are other pressure points that can be prodded at.
Because users are already changing behaviour. Not necessarily morally, but usually out of exhaustion. Obviously, as you all know, I left X as a publishing platform last year, and I completely deleted my accounts in the last week.
I’m not alone. Many people are leaving these hostile environments for places that are safer, that are more moderated. X itself showed a drop of about 11 million users in the EU between 2023 and 2024.
In the US, millions of accounts were deactivated after the 2024 election.
And if you look at the other side, Bluesky, which is moderated, which does feel more secure, which does feel less abusive, has gone from nine million users in 2024 to 42 million at the start of 2026. We’re getting to the point where safety becomes a feature and not a cost.
We recently watched Swiped on Disney+, about the move from Tinder to Bumble, and how there was a conscious shift towards safety and moderation norms - how, in Bumble’s case, safety and moderation became what was attractive.
Now, I’m not saying that we should choose self-regulation instead of law, but what I am saying is that we should consider aligned pressure. Because while regulation shapes incentives, user behaviour reinforces them.
Platforms respond much faster to engagement loss than they do to regulation. And if you take that approach of using market and legal pressure, you could actually get to the point where you can see behavioural change.
Coming back to the question that I opened with about why the internet is so fucked, I think it comes back to the fact that we regulated it in an appropriate way when it was small, but that regulation didn’t update as the internet matured. We didn’t update the rules when it was clear the internet was becoming exceptionally powerful.
What we are now in a position of is governing a borderless system with slow institutions.
I have to add here that I don’t believe that the internet is ungovernable, but it is overdue adult supervision. Legislation does matter and user behaviour matters, but platform design also matters. I don’t think that rehabilitation is going to come from one law or one platform, but it is going to come from us thinking in a different way about the internet.
My hope is that we do so sooner rather than later, and that the lessons we’ve learned from the last couple of weeks start to really cut through.
Bearly Politics is a 100% reader-supported publication. Its purpose is to explore power, critically analyse the pervasive issues of the day and, for the most part, try to make sense of an increasingly senseless world. If you’re in a position to do so, a paid subscription is one of the ways this work is supported; it is never expected, but always appreciated.
If you are not in a position to contribute financially - for example if you are claiming Universal Credit or are a pensioner - and would still like access to the Bearly Politics archive, you are very welcome to email me at iratusursusmajor@gmail.com and I will happily comp you a subscription.


I’m of an age when I have fears for the future of mankind. We all have an element of addiction in our makeup, be it alcohol, smoking, football, food or being online. I’m just as guilty of it as millions of others. However, I foresee a future in which every decision about humanity’s continued existence will be decided by the malign actions of those controlling social media. That is not a future I want, and I’m relieved that I’m nearing 80 years of age.
The key to understanding the failure of regulation to keep up is that, when the Web burst forth in popularity in the mid-90s, the concept of platforms didn't exist. If you wanted a presence on the Internet, you had to build a web site. You could get an account with a hosting company and register a domain name, but to make anything work you had to build your web site before your URL would display anything. And in those early days, tools to build web sites were primitive to non-existent, so it was hard work - you basically had to hand-craft HTML. It was a job for nerds (being a software engineer, I was one, and I'd been using the Internet since before the Web had been dreamt up!).
In my understanding, the law was there to protect hosting companies from being held responsible for anything their customers put on their own web sites - pretty reasonable, since those companies couldn't be expected to monitor every page on every customer's site.
Prior to the Web there had been platforms of a sort; they offered dial-up services, with CompuServe and AOL being the principal ones. But their use was pretty much restricted to those with a techy bent, so they were all very wholesome.
I'm pretty sure that Facebook was the first real platform, where you could just create an account and post stuff, without having to jump through hoops or needing any technical knowledge. It was certainly the first to be really widely adopted. I never liked it and have never used it (I created an account in the early days, but then soon deleted it), because I could see that it was really about harvesting users' information, and I didn't want to play.
In hindsight, the regulations should have distinguished platforms from hosting, but didn't. Platforms should be considered to be publishers, and have responsibility for content their customers post. They can't claim it's too much to monitor everything that's posted, because their algorithms already do that. Unfortunately, given the power of the platform owners, I don't think there's a cat in hell's chance of that happening.