Why Are Robots Built To Look Like Humans?
It’s not because that’s the most efficient way to design them for high-tech work

Tech writer and author Mike Elgan is not a fan of humanoid robots in the workplace. In fact, he doesn’t understand why they are even there.
It’s not because they need to fit into the same spaces as humans. The reality in most industries is the other way around. Humans need to fit into the same spaces as machines. And with a little adjustment, we do.
So he thinks that the considerable effort that goes into making robots human-like must serve some other goal:
Efficiency? No.
Figure 02 is impressive, but if it didn’t need to look like a human, it might be more efficient for stacking boxes or labeling parts. And a lot cheaper, too.
Here are non-humanoid robots assembling a Tesla:
How would the robots be more efficient if they looked more like people?
Elgan suggests that the real reason for humanoid robots is psychological:
Specifically, a study conducted by scientists at the University of Genova and the Italian Institute of Technology found that while non-humanoid robots are perceived as objects, humanoid robots are often perceived as “human-like” or “social agents” — not objects.
When people make eye contact with other people, the act elicits a psychophysiological connection or bonding response. Research by scientists at Tampere University in Finland found that eye contact with robots elicits the same response in people.
Yet another study conducted at IRCCS Centro Neurolesi Bonino Pulejo in Messina, Italy, found that robots programmed for “emotional intelligence” can evoke empathy in people, “especially when they exhibit anthropomorphic traits.”
Mike Elgan, “Humanoid robots are a bad idea,” Computerworld, August 20, 2024
Humanoid robots, he says, are designed to trick us into thinking they are like humans. They trigger many natural human tendencies — eye contact, for example — and then we fill in the blanks from our experience with other people.
That’s the famous Eliza effect, named for ELIZA, a mid-1960s chat program created by professor Joseph Weizenbaum (1923–2008) at the MIT Artificial Intelligence Laboratory, which fooled many users into thinking it actually understood them.
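ELIZA’s trick was modest: keyword spotting plus canned templates that reflect the user’s own words back. The sketch below is illustrative only (the rules and wording are invented for this example, not taken from Weizenbaum’s actual DOCTOR script), but it shows how little machinery it takes to produce replies that feel attentive:

```python
import re

# Illustrative sketch of ELIZA-style pattern matching -- not
# Weizenbaum's program, just the general technique: spot a keyword
# pattern, then echo the user's words inside a canned template.

# Swap first-person words for second-person ones ("my" -> "your").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# (pattern, response template) rules, tried in order.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(line: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # content-free fallback when nothing matches

print(respond("I feel ignored by my boss"))
# -> Why do you feel ignored by your boss?
print(respond("The weather is nice"))
# -> Please go on.
```

The program understands nothing. The apparent empathy is supplied entirely by the user, which is precisely the reflex Elgan says humanoid hardware exploits.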
For getting work done, that effect is not necessary, of course. So Elgan asks again:
Why isn’t there a movement to make sure robots do not elicit false emotions and beliefs? What’s the harm in preserving our intuition that a robot is just a machine, just a tool? Why try to route around that intuition with machines that trick our minds, coopting or hijacking our human empathy?
Elgan, “Bad idea”
Who’s driving the car?
Here’s a thought: Let’s look at two things together:
1) The number and importance of the people promoting the idea that AI will soon surpass human intelligence:
In 1960, Herbert Simon, who went on to win both the Nobel Prize for economics and the Turing Award for computer science, wrote in his book The New Science of Management Decision that “machines will be capable, within 20 years, of doing any work that a man can do.” … Still, on average, the different approaches give different answers. Epoch’s model estimates a 50% chance that transformative AI arrives by 2033, the median expert estimates a 50% probability of AGI before 2048, and the superforecasters are much further out at 2070.
Will Henshall, “When Might AI Outsmart Us? It Depends Who You Ask,” Time, January 19, 2024
2) What actually happens when AI is left to itself:
Overall, maybe it is better for boosters if people honestly believe that AI is smarter than it looks.
You may wish to read: Are we close to peak AI hype? Outrageous statements are proliferating. “A lot of this hype comes from the top. CEOs are excited by AI because they have been told that it will enable them to eliminate people, an obvious benefit to the bottom line. Because CEOs like this type of optimism, they have a tendency to promote people with optimistic messages, thus fueling a competition to make outrageous statements.” (Jeffrey Funk)