Are We There Yet?
When a single person can operate like an army, the real question isn’t capability - it’s judgment.
By Steven Muskal, Ph.D. // stevenmuskal.com
Every long car ride eventually produces it - that impatient voice from the back seat: “Are we there yet?” It’s a little naive, completely human, and deceptively revealing. It assumes a clear destination exists, that arrival is recognizable, and that something fundamental changes the moment you get there. I’ve been asking myself that same question about artificial intelligence. The honest answer is inconvenient: it depends entirely on what you mean by “there” - and most of the people asking aren’t sure they know.
The Bionic Moment
Lately, I’ve been thinking about AI as a kind of superpower - not in some abstract, futurist sense, but in a concrete, daily, occasionally startling way. It keeps reminding me of The Six Million Dollar Man. Colonel Steve Austin was a NASA astronaut and test pilot, but what made him extraordinary after his accident wasn’t who he was before it. It was what he became when augmented. The bionic arm gave him strength. The replaced legs gave him speed. The optical implant gave him perception far beyond normal human range.
AI feels like that.
It extends cognition the way those bionics extended physical capability. It allows a single individual to operate with the output of a team - sometimes a department, occasionally what feels like an entire organization working in parallel. In my own work, I’ve never been more generative. Ideas surface faster. Execution compresses. The distance between concept and reality has shortened dramatically. AI isn’t just a tool. It is leverage, and leverage - as anyone who has seriously deployed it knows - changes everything about what’s possible.
Judgment Is the Real Superpower
But the analogy doesn’t stop with Steve Austin. It extends to Jaime Sommers, the professional tennis player who became The Bionic Woman, first as a recurring character in the original series and then in her own spin-off. She was already a high-performing athlete with mastery over her body and her craft. Her transition, however, was not seamless. The challenge wasn’t simply controlling enhanced physical capabilities - it was integrating them with judgment. Knowing when to act. Knowing when not to. Knowing when full force is exactly wrong.
That’s precisely where the real tension with AI sits.
Humans have judgment shaped by experience, context, and consequences. AI has approximations of judgment - patterns, probabilities, and surrogates trained on data. These approximations are genuinely useful. But approximations are not the real thing, and in high-stakes situations, that gap matters enormously. I’ve argued before, and will again: we’ve effectively reached a form of Artificial General Intelligence - not because machines have independently achieved human-like reasoning, but because individuals paired with AI can function as though they have. A single person with deep domain expertise and a working relationship with modern AI tools can now operate like a battalion. Maybe an army. But only if the human remains in control. That qualifier is doing more work than it might appear.
The Refactoring of Work
Every major technology wave reshapes labor. This one is no different in kind - only in speed and asymmetry.
We’re entering a period of wholesale refactoring and redeployment of human capability. AI isn’t merely automating tasks - it’s amplifying individuals. And that amplification disproportionately benefits those who already possess experience, context, and domain knowledge. Which leads to a dynamic that’s routinely underappreciated: the aging population.
Baby Boomers are moving into retirement in large numbers, with the leading edge of Gen X close behind. This has long been framed as an economic pressure point - fewer workers supporting more dependents, rising healthcare costs, a shrinking productive base. But AI changes that equation in a subtle and important way: it extends the productive lifespan of experienced individuals. People who might otherwise step back can continue contributing at a high level, pairing decades of accumulated judgment with newly amplified capability. In theory, this is a genuine economic buffer - a mechanism for retaining hard-won wisdom while increasing output. Onward and upward, as it usually goes with technology.
There’s a catch, of course.
The Experience Gap
The people entering the workforce today are remarkably capable. Many are more fluent with AI tools than their older counterparts - they move fast, adapt readily, and are comfortable in a paradigm that still disorients many established professionals. What they lack, by definition, is experience. And experience isn’t simply accumulated knowledge. It’s judgment: pattern recognition, knowing when something looks right but isn’t, understanding second-order effects, recognizing edge cases because you’ve actually lived through them.
AI can accelerate execution. It does not confer wisdom.
This creates a potential structural imbalance - a system top-heavy with experience and judgment on one end, and high-speed, high-capability but lower-context operators on the other. If that gap grows, it becomes not just an economic problem but a cultural one. Because judgment is the layer that determines whether capability is used well.
Who Embraces AI, and Who Resists It
There’s another pattern worth examining directly.
The most enthusiastic adopters of AI tend to be those who can extract the most value from it: higher earners, builders, operators, people who already have leverage and are actively seeking more of it. Conversely, resistance tends to cluster among those who experience AI as a threat to existing livelihoods rather than an amplifier of existing capabilities. This isn’t surprising. But it has real consequences. If AI amplifies existing advantages, then unequal adoption doesn’t just widen income gaps - it concentrates influence. And when influence concentrates, sentiment follows a familiar trajectory: skepticism hardens into resistance, and resistance has a way of generating its own momentum.
That’s where things get genuinely interesting.
The Tipping Point Problem
Societal change rarely requires a majority. It requires a threshold.
Political scientist Erica Chenoweth, in her landmark research on civil resistance campaigns conducted with Maria J. Stephan - published in their 2011 book Why Civil Resistance Works - found that every nonviolent campaign that achieved active participation from at least 3.5% of the population succeeded in its core goals. In a different domain, computational social scientists at Rensselaer Polytechnic Institute demonstrated in 2011 that when roughly 10% of a population holds a conviction with genuine commitment and acts on it consistently, that belief cascades through the broader network and reshapes majority opinion.
The exact percentages matter less than the underlying principle: you don’t need everyone. You need enough people who care deeply and act consistently.
We see this dynamic constantly in business. A committed minority can control the table. Influence doesn’t distribute evenly; it concentrates. Apply that logic to AI adoption, and the picture becomes sharp. A relatively small group of highly capable people who fully embrace AI and dramatically amplify their individual output can reshape industries, norms, and expectations before the broader system has registered what’s happening. At the same time, a different committed minority organizing around resistance to AI can cascade that sentiment through the culture just as effectively. Same mechanism. Wildly different outcomes.
Pressure in the System
The real question isn’t whether a tipping point arrives. It’s what conditions exist when it does.
When institutions are trusted and opportunity feels genuinely accessible, these inflection points tend to produce adaptation - messy, contested, but ultimately constructive. When inequality is high and systems feel captured by narrow interests, the same dynamics produce instability. Scott Galloway has argued this case in multiple forms: extreme inequality and perceived concentration of power function as accelerants. They increase the likelihood that change manifests not as progress, but as backlash.
The difference between evolution and disruption isn’t the technology. It’s the pressure in the system when adoption reaches critical mass.
So, Are We There Yet?
If “there” means a stable, fully integrated AI-driven society: no. If “there” means a point where individuals can already operate with unprecedented leverage: unambiguously yes. If “there” means we’ve worked out how to balance capability with judgment, experience with speed, and power with responsibility: not even close.
We’re somewhere in between - past the starting point, moving faster than most people have processed, the destination still genuinely uncertain. The scenery has changed. The speed has increased. The road ahead isn’t evenly paved, and the ride is not equally comfortable for everyone on it.
The real question isn’t whether we’ve arrived. It’s whether we’re paying attention to how fast we’re going, who’s driving, and who might be getting left behind.
Because on this particular journey, asking “Are we there yet?” matters far less than asking “Are we doing this right?”
As for a music video mix: the last couple of weeks have been super busy - off the charts! I’ll double up in the next post….


