
Welcome to the uncanny valley

November 24, 2025

[Image: A “human-like” robot]

You must have felt it at least once: a slightly uneasy, uncomfortable feeling when looking at the newest prototype of a “human-like” robot on the news, or at a digital AI assistant designed to mimic the facial characteristics and expressions of your average person. Something just doesn’t feel right, and it goes beyond simply noticing that what you’re seeing is a piece of technology. It’s unsettling.

The uncanny valley.

The Oxford Dictionary defines it as a “phenomenon whereby a computer-generated figure or humanoid robot bearing a near-identical resemblance to a human being arouses a sense of unease or revulsion in the person viewing it.”

Some researchers believe this unease is an evolutionary adaptation meant to keep us safe, pointing to four specific hypotheses, as covered in Psychology Today:

The Threat Avoidance Hypothesis states that evolutionary pressures, forged by the risk of disease and death, shape our unease with humanoid objects. This can be linked to “pathogen avoidance,” which suggests that imperfections in human-like entities trigger associations with disease, along with the fear that such entities could transmit it.

Evolutionary Aesthetics touches on how physical attractiveness shapes our perceptions of safety. The more good-looking these humanoid objects are, the less eerie we perceive them to be; this is believed to be driven by natural selection. However, the key words here are less eerie. We remain uncomfortable when seeing them.

The Mind Perception Hypothesis proposes that human-like objects seem so realistic that we expect them to feel and sense like us. However, knowing they are robots is in direct conflict with this, leading to feelings of being, well, creeped out.

The Violation of Expectation Hypothesis, much like the Mind Perception Hypothesis, suggests we expect humanoids to move and speak as naturally as we do. Yet the second we see mechanical movements and hear synthetic voices, the mismatch between expectation and reality triggers avoidance behaviors, negative emotions and, sometimes, even fear.

These last two hypotheses are reflected in brain-scan studies on the subject: volunteers were observed to react almost normally when they first saw such humanoids. Yet once the humanoid began to misalign with what we expect of another person like ourselves, an immediate dip in activity appeared in the prefrontal cortex; the “valley.” (The term “uncanny valley” itself was coined by roboticist Masahiro Mori in 1970.)

I know what you’re thinking (as I’m a human). This is all very interesting, but what does it have to do with the economy? Well, a lot, actually.

“The practice of designing AI to intentionally mimic human traits has been referred to as ‘pseudoanthropy,’ the impersonation of humans,” says Blair Radbourne, Senior Vice President, Enterprise Technology & Cybersecurity, OMERS.

“Ethical guardrails are required here to prevent the computer systems from behaving as if they are living, thinking peers to humans. This starts to raise many ethical questions that we don’t have enough public debate around: Does being human mean to act like an undergrad at a university? Or does it mean to act like a sociopath? Should we continue to design systems that strive for the imitation and likeness of humans?”

Crossing the valley

With companies across the world deploying business plans with humanoid robots at the forefront, the economic implications of crossing this valley are enormous. If one company is selling you an assistant that makes you so uncomfortable watching it do your dishes that you have to leave the room, it’s dead in the water against a company whose product alleviates this discomfort.

But if we’re evolutionarily wired to feel this unease, is there any hope?

Well, according to Dr. Fabian Grabenhorst, a Sir Henry Dale Fellow and Lecturer in the Department of Physiology, Development and Neuroscience at the University of Cambridge, there is. Quoted in an article in The World Financial Review, he says: “We know that valuation signals in these brain regions can be changed through social experience. So, if you experience that an artificial agent makes the right choices for you - such as choosing the best gift - then your… prefrontal cortex might respond more favourably to this new social partner.”

Just as with people, it turns out trust has to be earned. But how can companies earn it? Blair is a bit more cautious about this particular view.

“There’s a quote from Robert Burton in The Anatomy of Melancholy that comes to mind when I consider that question, which is: ‘Shall I say thou art a man, that hast all the symptoms of a beast? How shall I know thee to be a man? By thy shape? That affrights me more, when I see a beast in likeness of a man.’”

Who knew a view on human-looking robots could elicit such a poetic response?

Blair prefers to focus on the utilitarian value of such technological progress, stating: “On the robotics side I'm more a fan of utility value; self-driving cars have utility value and have zero need to act like humans. Trust is built on their safety record and reliability, no different from the cars people own today.”

An easier adjustment for our brains, for sure.

Another way

One alternative is to go in the other direction completely: don’t make these AI interfaces human at all. In 2022, an MIT robotics team ran an experiment with robots that looked like dinosaurs and other animals. The results were surprising: when a robot behaves more like an animal, humans have negative emotional reactions to seeing it kicked or pushed. Sometimes this negative response was even stronger than when a real human received the same treatment!

With this much financial gain on the line, you can bet companies will find unique and innovative ways to grow people’s comfort with the products they hope to sell. Which approach(es) will win out in the end? That remains to be seen.

For now, perhaps it’s reassuring to know that when you come across the next “not-quite-human” human and your brain sends you into flight mode, it’s just using millions of years of evolution to try to protect you.



The Relatable Economist is an ongoing written series focused on how the economy, geopolitics, markets and more are impacting our day-to-day lives, discussing topics that matter to you, even if just to share with your friends at your next get-together or in the stands at your child’s or grandchild’s soccer game. Have a topic you want to learn more about? Write to us at therelatableeconomist@omers.com.