Okay. Let's slow down.
The previous posts on this blog have explored some pretty wild ideas — AI politicians, human zoos, the works. And those are fun to think about. But let's be honest about something: the AI we have today is not the AI in those thought experiments. Not even close.
Current AI systems are deeply flawed. They reproduce the biases in their training data. They hallucinate facts. They can be manipulated. They're not impartial. They're not infallible. They're not even consistently competent.
The Current State of Affairs
Here's what's actually happening with AI right now:
- Training data bias: Models learn from the internet, which reflects human biases — including racism, sexism, and every -ism imaginable
- Alignment problems: We don't fully know how to make AI do what we want, consistently
- Hallucinations: AI confidently states things that are simply wrong
- Manipulation: Current models can be tricked into saying harmful things through prompt injection, jailbreaks, and social engineering
- Opacity: We often don't know why an AI made a particular decision
This is the reality. Any discussion of AI in politics, AI as arbiter, or AI managing human flourishing has to start here.
The Low Bar
That said — and this is the uncomfortable part — the current human political system isn't exactly great either.
- Lobbying effectively legalizes bribery
- Gerrymandering lets politicians choose their voters
- Media fragmentation creates separate realities
- Incumbency advantages make turnover rare
- Donor class has outsized influence
- Short election cycles incentivize short-term thinking
- Politicians literally lie, and face few consequences
Now, here's the uncomfortable question: if we're comparing "flawed AI that tries to optimize for human flourishing" to "flawed humans who optimize for re-election"... is the gap as large as we'd like to think?
The bar isn't "AI should be perfect." The bar is "AI should be better than the current alternative." And the current alternative is... not good.
What Would Actually Need to Be True
For any of the speculative ideas in previous posts to make sense, several things would need to be true:
- AI would need to be significantly more reliable — no hallucinations, consistent reasoning
- AI would need to be interpretable — we need to understand why it makes decisions
- AI would need to be secure — resistant to manipulation and hacking
- AI would need to be controllable — we need off switches and corrections
- AI would need legitimate authority — humans would need to consent to its role
We're not there. We might never get there. But the question is worth asking: if we could get there, would we want to?
The Alternative
Maybe the answer isn't "AI politicians" or "AI manages everything." Maybe the answer is more humble: AI as a tool that helps humans make better decisions — not a replacement for human judgment.
Think:
- AI that summarizes policy proposals accurately
- AI that identifies biases in argumentation
- AI that helps citizens understand complex issues
- AI that flags potential conflicts of interest
- AI that models the consequences of proposed policies
These don't require AI to be perfect. They require AI to be useful — and for humans to remain in the loop.
The speculative posts on this blog are thought experiments. They're not blueprints. They're not proposals. They're ways of thinking through possibilities — while being clear-eyed about where we are today.
Current AI is biased, flawed, and dangerous if mishandled. That's not a reason to ignore it — it's a reason to be honest about it. And maybe, just maybe, to be a little less impressed with the human systems we're comparing it to.
The future isn't written yet. What we do now — how we build, how we regulate, how we think — will shape whether AI ends up as a tool for human flourishing or another disaster. The bar is low. That's not a reason to give up. It's a reason to try harder.