Can Humans Still Govern Ourselves?

It’s hard to shake the feeling that we’ve entered an era where we can no longer govern ourselves without some kind of technological babysitter.

This isn’t the age-old pessimism about human nature, and it’s not an uncritical embrace of “technology will save us” optimistic fuckery. It’s more like we’ve reached a tipping point — a combination of extreme intellectual and technological advancement paired with an equally extreme cultural and communication breakdown. 

And there’s no turning back.

Humanity has developed incredible tools. We have supercomputers in our pockets, advanced algorithms that can process trillions of data points in seconds, and AI models that can almost (almost doing some heavy lifting here, I’ll admit) replicate human-like thinking.

And then we have us: flawed, fucked up humans, weighed down by our biases, our tribalism, and our inability to really hold onto the basic cultural and communal norms that allow us to function together. We’ve built machines capable of logic far beyond our own collective reasoning, but we haven’t built the cultural or ethical framework to wield these tools. The result? A kind of “cultural technical debt.”

This debt shows up as a growing gap between what we’re capable of doing technologically and what we’re able to handle responsibly as a society. Misinformation spreads faster than fact, greed and corruption go unchecked, and complex global issues get reduced to finger-pointing on Twitter. Even our most advanced democracies can barely maintain trust in their own institutions. Somehow, we’re still surprised when things keep spiraling into chaos.

There’s a possible future where trustless tech systems are built into governance itself. 

Where we build societies with trust coded into the framework, rather than relying on fallible human institutions. 

Where public funds are distributed and monitored without the fingers of corruption reaching in; where elections can be counted transparently and instantly; where regulatory decisions are based on data rather than lobbying dollars. In theory, we could achieve a kind of calculated fairness that transcends human error.

The problem: even with all this tech, we’re still the ones designing it. We’re still the ones programming the AI, writing the blockchain protocols, setting the parameters for what counts as “fair.” We’re asking technology to save us from ourselves, but we’re the ones building the technology in the first place. Until we can figure out how to integrate this tech with our own ethics — our messy, biased, contradictory ethics — there’s a risk that these systems will inherit all our flaws, just on a bigger, faster scale.

The argument stands that we may no longer be able to govern ourselves without these tools. But we also may not be ready to govern ourselves with them. We’re stuck in a technological purgatory of our own making — one foot in a world where we need AI to keep our worst instincts in check, and the other foot in a world where we don’t fully trust it to do so.