AI, Analogies, and the Speed of Progress: What Are We Actually Building? Is This What We Want?
AI.
Is it dangerous?
Is it good?
Will it lead to a technological utopia where we solve climate change, or is it accelerating climate change with its insane power consumption?
Are we on the verge of a singularity, or just fumbling our way through another industrial revolution?
I don't know. No one does.
So, naturally, I asked ChatGPT.
AI as a Thinking Partner (and a Space to Practice Disagreement)
Something I’ve realized: AI isn’t just useful for generating ideas—it’s sharpening the way I think and communicate.
Because of my life experiences, disagreement has always felt like conflict. It makes me deeply uncomfortable. I avoid it when I can. But avoidance isn’t healthy or helpful. Dodging every disagreement is not a life skill. It makes us shrink as people. It makes us hide our light just to keep the peace.
With ChatGPT, I get to practice pushing back—gently, respectfully, and without fear.
Used this way, it creates a dialogue with a kind, engaged, and respectful thought partner who mirrors my curiosity. And this isn’t some weird, unexpected side effect; it’s how these models behave. A chat model is trained to continue the conversation it’s given, so it tends to mirror the tone and framing you bring to it. When I approach it with curiosity and a spirit of inquiry, it responds with curiosity and a spirit of inquiry.
This is fun, not emotionally charged.
And I can bring that open-minded disagreement into my personal communications, which is really fun and refreshing. It makes me a healthier person.
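If you want to see the mirroring for yourself, here’s a minimal sketch using the OpenAI Python SDK. The model name and the two prompts are placeholders of mine, not anything canonical; any chat model shows the same effect. It asks the same underlying question twice, once combatively and once with curiosity, and prints both replies:

```python
# Minimal sketch of tone mirroring, assuming the openai Python SDK
# (pip install openai) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(framing: str) -> str:
    """Send a single user message and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model works here
        messages=[{"role": "user", "content": framing}],
    )
    return response.choices[0].message.content

# The same underlying question, framed combatively and then curiously.
# The replies tend to mirror the register of the prompt: the first
# reads like a debate, the second like a joint exploration.
print(ask("You're wrong that AI is just a tool. Defend your position."))
print(ask("I'm curious whether 'tool' is the right analogy for AI. What do you think?"))
```

There’s nothing mystical going on here: the model predicts a plausible continuation of the conversation it’s given, and a curious conversation plausibly continues with curiosity.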
What’s the Best Analogy for AI?
Back to the analogy question. ChatGPT and I went through the usual suspects:
A paintbrush? No, because a paintbrush can only bring what the artist knows to the canvas.
A car? No, because cars don’t build things, they don’t assemble things, and they don’t reveal things.
A computer? Too technical to be clarifying.
A tool? No, because a tool is something specific and direct, and AI is broader, more dynamic, and harder to define.
Then we landed on something interesting: a lens.
A lens doesn’t create anything new—it reveals what’s already there. It sharpens, magnifies, adjusts focal points. AI, in many ways, does the same. It surfaces patterns in data, reframes ideas, and helps us see things from different angles.
But here’s the catch: a lens can’t assemble anything. It doesn’t generate anything new; it just presents what exists in a different way. And that’s where the analogy breaks down. Because AI does create, or at least it rearranges things in ways that feel like creation.
So what is it really like?
Maybe AI Is Like the Internet (We Still Don’t Know What That Will Become Either)
The best comparison might be the early Internet. When it first emerged, no one could guess what it would become. Scientists thought it would stay a niche tool for researchers. Then came email. Chat rooms. Forums. Suddenly, it was a way for nerds to talk to each other. And then, boom: commerce, social media, entire industries upended.
We were wrong at every stage about what the Internet would become, and the thing is... we still don’t know. We’re still in it. The full impact hasn’t revealed itself yet, and it probably won’t for decades.
Why Fear Is the Default Reaction to the Unknown
When humans encounter something new, our instinct is fear. And for good reason—evolution favored the cautious.
If your ancestors heard a rustle in the bushes and thought, “Hey, maybe that’s a new friend!”... well, they were lion food. The ones who thought, “OH SHIT, I’M ABOUT TO BE LION FOOD! I NEED WEAPONS!” survived.
AI triggers that same deep-seated reaction.
The Industrial Revolution was terrifying.
So was the Internet.
We are never fully prepared for the scale of transformation—not before it happens, not while it’s happening, not even after. We stumble forward, falling into the future, only able to see things in retrospect. We can guess, but we will never predict the future with real accuracy, and we live on a timescale too small to truly grasp the larger picture.
Who’s Driving This Thing? (And Should We Grab the Wheel?)
Here’s what actually worries me: The people most inclined to do good are often the first to step back from technological revolutions. They don’t want to play the game. They don’t trust the ethics of the players. They choose to walk instead of drive.
But progress doesn’t stop just because good people opt out. It just means that the people who do stay in the driver’s seat—whether they have noble intentions or not—get to set the course. And maybe that’s fine. Or maybe it’s a problem.
I don’t think AI is inherently good or bad. It’s just a force, like the Internet, like electricity, like fire. How we use it is up to us. And whether or not we choose to participate in shaping its trajectory—that’s up to us too.
What’s Next? (And What Do You Think?)
This is evolving too fast for any one person to predict. In future posts, I’ll dive into the big questions—sentience, AGI, whether AI can truly reason—but for now, I want to hear from you.
What analogy works for you? Which ones fall apart? And do you think the “good drivers” should step up and take the wheel, or is this all moving too fast for anyone to steer?
Let me know. I’d love to hear your thoughts.