The Anthropomorphic Bias In Robotics and Thinking Machines
"The greatest enemy of knowledge is not ignorance; it is the illusion of knowledge." — Stephen Hawking
On a fundamental level, we don't really understand the biomechanics of how we move; we don't even fully understand how we think. We lack a clear definition of what 'thinking' actually is that doesn't fall into circular logic by using the word to define itself. Yet we try to build machines assuming that we already know what these concepts entail. I believe the overwhelming majority of the motivation behind robotics comes from people who find the concept compelling and 'cool', rather than from those genuinely attempting to tackle the fundamental questions first.
More Algorithms ≠ Generality
We cannot brute-force our way to general-purpose robotics simply by layering more software on top of flawed assumptions. Humans can walk because evolution is unpredictable and full of biological "bugs": there was no programmed target for humans to walk; it simply emerged from stochastic survival optimisation. I say 'bugs', but they aren't really bugs; they are better described as fallback mechanisms for uncertainty. A system perfectly optimised for one set of conditions breaks under uncertain ones. More code and additional algorithms will not solve the problem if they are built upon faulty assumptions. We are fundamentally still trying to program something that evolved organically through countless unpredictable factors.
We Don't Even Understand Ourselves, Yet We Want to Build Ourselves
The biggest problem in robotics is that we fundamentally lack an understanding of human biomechanics. We only know and experience the outputs, which feel intuitive to us. As a result, when we attempt to build robots, we end up reverse-engineering processes we don't fully comprehend. This leads us straight into Moravec's paradox: the tasks that seem easy to humans are the hardest for robots to replicate, while the tasks that seem difficult to us are often the easiest. Robots excel at well-defined algorithmic problems (solving a Rubik's cube, calculating trajectories) but struggle with embodied intelligence (picking up a cup without spilling it). The human body's adaptive, feedback-driven mechanisms are inherently non-linear and chaotic, which is precisely what lets humans generalise extremely complex tasks intuitively and with minimal effort; most robotic algorithms, by contrast, are linear and deterministic.
The "Cool Factor" vs Fundamental Science
Robotics and AI attract people who love the idea of robots (humanoids, AGI, self-driving cars) but often lack the patience for the messy, unglamorous work of understanding how intelligence and movement actually emerge in nature. Compare this with fields like neuroscience or biomechanics, where researchers spend decades studying how a single muscle group coordinates with the nervous system, or how the brain represents spatial awareness. Robotics often skips this crucial step, assuming we can "hack" our way to intelligence without truly reverse-engineering it. I'm not implying that we necessarily need to slow down; rather, operating from first principles and tackling fundamental questions may inherently require it.

You don't see venture capitalists (VCs) throwing billions of dollars at these 'slow' labs, because VCs are driven by financial returns rather than by existential questions about reality and what is possible. They'll fund the 200th "Tesla Bot" clone that promises domestic humanoids within five years, rather than a lab studying how proprioception emerges in biological systems. In a way, this contributes to the slowness of genuine progress; there is a fundamental lack of interest in the process itself, with interest focused solely on the output.

People want to skip the challenging work and build something immediately: perhaps a consumer humanoid robot they can charge £20,000 for and use to attract another billion pounds of investment, just to create something that can pour tea into a mug. It's quite ridiculous. As a result, you end up with 200 similar robotics companies, all falling into the same trap. The reality, I believe, is that most people simply aren't deeply obsessed with existential questions and the uncertainty that accompanies them. Besides, figuring things out won't be 'sexy', yet we want something appealing, and VCs want something attractive enough to put money in their pockets.

The true pioneers of robotics won't be AI engineers building algorithms on top of anthropomorphised principles. They will be people like Denis Noble, with his work on biological relativity, and Karl Friston, with the Free Energy Principle; people who are obsessed with fundamental questions. I speculate that these concepts will be central components of anything close to the ideal image of robots that we eventually classify as 'sexy'.
Approximation Isn't Enough
One may argue that approximation is sufficient; I believe it is not, at least not for the generalised, ideal robots we envision. Evolution and reality operate with exact precision: not in the rigid, deterministic sense of a machine, but in the deeply integrated, context-sensitive way that arises from billions of years of relentless optimisation within physical constraints. There is no "approximation" in how a cheetah's musculoskeletal system aligns with its nervous system for sprinting, or in how a human hand's compliance and sensory feedback enable dexterity without conscious computation. Every component is tightly coupled with the others, and any misalignment leads to dysfunction or failure. Theoretically, we could throw billions of pounds at the problem and see what sticks, but the romanticised vision of general-purpose robotics we imagine remains distant. This is primarily because:
1) There are fundamental questions that remain unanswered.
2) We continue to rely on folk psychological definitions and human exceptionalist biases that shape our approach, often leading us astray.
The Real Nature of 'Thinking Machines'
Actual thinking robots will not emerge from companies like OpenAI, Tesla, or Google; they are more likely to come from private, niche, interdisciplinary labs. While these companies claim to be obsessed with questions about intelligence, their financial incentives are fundamentally misaligned with the slow work of addressing foundational questions. Their focus is steered more towards marketability and what seems 'sexy' to consumers. There is nothing inherently wrong with this: what we have achieved collectively as humans is nothing short of extraordinary, and these kinds of conversations are often sparked by those very achievements. However, we must not get carried away with our successes and our romanticised, ideal reality; that is precisely what I criticise.
The Bottom Line
We are not building true thinking machines—we are constructing elaborate marionettes, animated by algorithms but devoid of understanding. The field's obsession with humanoid forms and marketable demos has become a distraction from the deeper, messier work of unravelling intelligence itself.
If robotics is to escape its current dead end, it must abandon the arrogance of reverse-engineering what we do not yet understand and instead embrace the humility of true foundational inquiry. This means:
Dethroning anthropomorphism—stop building humanoids because they "look" intelligent, and start studying the minimal principles that make intelligence possible.
Defunding the hype cycle—divert resources from sexy, short-term demos to the unglamorous, open-ended science of embodiment and cognition.
Learning from evolution's playbook—stop optimising for rigid perfection and start engineering for adaptive resilience.
The first breakthrough likely won't come from a viral robot demo. It will emerge quietly, from a lab performing experiments no one thought to fund because they don't sound sexy. Real thinking machines won't be built by copying humans. They'll emerge when we finally stop pretending we already understand ourselves. Of course, I could be totally wrong, but I think it will be interesting to see how things play out nonetheless.