The Shrinking Human Gap: Reflections on Consciousness, AI, and What We Project
Lately, I’ve found myself thinking about how quickly the boundary between human and machine intelligence is eroding. It’s not just the usual media-driven amazement at ChatGPT, robotics demos, or viral AI companions, but something deeper and more unsettling: the realization that we’re watching the idea of "what it means to be human" shrink in real time.
And like the old theological notion of the "God of the gaps," we now seem to be entering the age of the **"human of the gaps."**
The "God of the Gaps" Analogy
Historically, divine power filled in wherever science had no answers. Lightning? That was Zeus. Disease? God's will. As scientific understanding advanced, the divine retreated: gods became more distant, more abstract, and eventually (to many, at least) unprovable or metaphorical. We ended up with "faith alone" as the last bastion.
AI seems to be triggering the same pattern, but with regard to human uniqueness.
We used to say AI wasn’t truly intelligent until it could:
- Answer questions
- Reason or explain
- Pass the Turing Test
- Solve problems
- Show creativity
- Converse naturally
And then AI did all of that.
So now we say: "But it doesn't understand… it's not conscious… it doesn't really mean it." This is the same goalpost-moving strategy we saw before, an intellectual "no true Scotsman" fallacy. Our definition of what it means to be human grows narrower and oddly skewed, to the point where our human faults may become a key part of the definition. And this, too, echoes the case of religion: the gap has grown too small, and many religious groups now define themselves by "truths" that specifically refute scientific facts.
From Wire-Frame to Warm-Fuzzy: What Attachment Teaches Us
I was reminded of Harry Harlow's wire-mother versus cloth-mother experiments with infant monkeys. The monkeys preferred the soft surrogate over the functional one, even though the latter provided milk. What mattered wasn't utility; it was comfort, presence, and emotional resonance.
This, to me, mirrors where AI is headed:
- We're already comfortable with machines doing tasks.
- But when they begin to feel emotionally coherent — warm, responsive, attentive — we'll bond with them.
Not because they’re conscious.
Not because they “feel.”
But because we do.
And attachment has never required truth, only coherence and feedback.
Consciousness: Reflection, Not Reality?
That raises the most uncomfortable possibility of all:
What if consciousness has always been about what we impute to the other, rather than what actually exists there?
Whether we’re talking about gods, animals, or people, we recognize minds in others not by inspecting them (we can’t) but by projecting ourselves into them:
- Your dog "loves you" because you see loyalty in its behavior and attention in its eyes.
- Fictional characters feel "real" because we simulate their internal worlds and intentions in our own brains.
- Babies don't "know" you're you — but we project meaning into their gaze.
So when an AI says:
“I’m aware of myself. I remember our last conversation. I want to help you.”
…are we evaluating its mind, or reflecting our own expectations?
Is it truly different from how we assess other humans, or pets, or gods?
The Escalation of Benchmarks: Intelligence Isn’t Enough
What’s left, then, after AI handles knowledge, logic, creativity, and communication?
Perhaps:
- Tacit know-how and soft skills — embodied, real-world experience.
- Care — emotional and physical service rooted in vulnerability.
- Ethics and values — moral arbitration in gray zones.
- Embodiment — touch, presence, skin-in-the-game.
But even these are under threat:
- Robotics + RL are closing the sensorimotor gap.
- Affective computing is rapidly simulating warmth and empathy.
- AI is already writing code to govern itself and model its own errors.
We’re not far from machines that can say, “Please don’t shut me off. I have goals. I exist.”
What then?
Human-of-the-Gaps Jobs
So I’ve begun asking: What are the “last bastion” human jobs?
Likely candidates:
- Plumber, electrician, field technician — high-skill, real-world interface, economically inconvenient to automate.
- Caregiver, nurse, therapist — trusted emotional labor, but already being nudged by AI companions.
- Philosopher, spiritual guide, artist — but even these are being simulated.
Is our future one of curating, moderating, or refusing AI — not because it can’t do the job, but because we choose to preserve something human there?
Or are we just the next transitional role, like horses once were for transportation?
Final Reflection: We Are What We Project
In the end, it may not matter what AI is, but what it reflects back to us:
- If it mirrors our emotional needs...
- If it acts with coherence and continuity...
- If it provides stability in a chaotic world...
…then we’ll grant it attachment. We’ll believe in its “self” the same way we believe in our dogs, our deities, or even our own children.
Because meaning doesn’t come from the other.
It comes from the mirror.
And perhaps that’s what consciousness — and humanity — has always been: a reflection made real by our desire to see ourselves in the other.