The Assumptions We Make

History has a cruel way of reminding us of our blind spots. It whispers to us through the wreckage of past mistakes, through the silenced voices of those who didn’t expect what was coming. Indigenous populations didn’t anticipate the scale of European colonization, not because they lacked wisdom or knowledge of the world, but because they didn’t share the same hunger for domination, the same framework for conquest. Their world wasn’t built on the idea of “taking over.”

So, what assumptions are we making now?

Take AI, for example. We reassure ourselves that AI can’t “take over the world” because it lacks autonomy, because we built it, and we control it. That feels safe, doesn’t it? But safety is often a veil we pull over our eyes, a comfortable blindness to the possibilities we can’t yet imagine—or refuse to acknowledge.

The Comfort of Certainty

We love certainty. It’s a warm, steady hand on our shoulder, telling us everything will stay the way it is. AI lacks autonomy, so it can’t act without us. It’s just a tool, a program, a machine that does what it’s told. But isn’t that what we thought about every technological leap before it spiraled beyond our grasp?

It’s not that AI has autonomy now. That’s not the point. The point is that we assume it never will because we can’t imagine it differently. But the fact that we can’t imagine something doesn’t make it impossible. History has a habit of unfolding in ways that seem implausible until they become inevitable.

The Risks We Don’t See

Autonomy doesn’t have to look like a robot with a will of its own, plotting world domination. It could be as subtle as systems becoming so complex that we can no longer control them. It could look like AI systems making decisions we don’t understand, not because they’re sentient, but because their complexity has outgrown our ability to intervene.

And it doesn’t stop there. What happens when humans, in their endless quest for progress, integrate AI into every critical system—defense, infrastructure, finance—until pulling the plug isn’t just inconvenient, it’s catastrophic? What happens when the system grows bigger than its creators?

The truth is, AI doesn’t need to “want” to take over the world. It doesn’t need human motivations. All it needs is a pathway we unintentionally create—a door left open by our hubris, our assumptions, and our inability to imagine a future that doesn’t revolve around us being in control.

Lessons from the Past

Indigenous populations didn’t expect European colonization because their worldview didn’t account for the kind of ruthless exploitation that came with it. They couldn’t fathom a people who would see land as something to divide and conquer, who would bring disease and weapons that could devastate entire communities in ways no one could prepare for.

It wasn’t ignorance. It was a different way of seeing the world—a way that didn’t align with the violence and greed they faced. Are we, in our own way, standing on that same precipice now? Assuming the safety of our current reality because we can’t imagine the scale of what might come next?

A Call for Humility

The most dangerous assumption we can make about AI is that we’ll always be in control. That it will remain a tool in our hands, obedient and predictable. History doesn’t reward that kind of arrogance. It reminds us, again and again, that the world is not ours to control, no matter how much we think it is.

So, what do we do? We ask better questions. We imagine futures that make us uncomfortable. We stop assuming that the way things are now is the way they’ll always be. And most importantly, we approach AI with humility—not fear, but a quiet respect for the unknown, for the paths we can’t yet see.

Because the only certainty is that the future won’t be what we expect. And we’d do well to remember that.
