Boeing’s Sidelined Fuselage Robots: What Went Wrong?
It’s not what we learn, it’s what we forget
It was bound to happen. As more and more robots take over for humans, sooner or later one of them (and, in this case, an entire team of robots) would get fired.
Way back in 2015, with much hype, hope, and fanfare, Boeing launched a series of production improvements for their new 777 (a model owned by United Airlines is seen below, landing at LAX). The triple-7 was Boeing’s latest, filled with fancy tech and fuel-saving benefits. It needed a manufacturing process to match: No more humans, let the robots do it faster, safer, and cheaper.
Boeing only introduced these robots for the forward and aft fuselage sections: Even in the new system, mechanics tacked the fuselage panels together. The robots then drilled the holes and fastened the panels in place. Or so they were supposed to.
Unfortunately, the robots were a pain to use and error-prone. By 2016, mechanics were calling the robotics a “nightmare” to work with. But Boeing hung on until just this week, when they announced that the manufacturing of 777 fore and aft fuselages would return to proven, largely manual, techniques. While Boeing uses other autonomous robotic systems in the 777 assembly process, the details of the fuselages proved too much for the robots to handle:
The automation has never delivered its promise of reduced hand labor and Boeing has had to maintain a substantial workforce of mechanics to finish the work of the robots. Because of the errors in the automation, that often took longer than if they had done it all by hand from the start.
– Dominic Gates, “Boeing abandons its failed fuselage robots on the 777X, handing the job back to machinists” at Seattle Times
When Boeing decision-makers introduced the robots, they believed the machines would mean less stress, lower costs, and fewer employees. None of these hopes was realized. So why did Boeing believe they would be? The company was probably blinded, as so many of us are, by techno-hype and thus predisposed to dismiss mere (evolved) humans.
Our tech-enamored culture often believes impossible things: One of them is that error-prone humans can create sophisticated technology able to surpass its makers.
Like most misguided beliefs, this one contains some truth: Humans can create machines that perform better than they do at specific tasks. The smallest tractor digs faster than the fastest human with a shovel. A slow-moving car goes faster and farther than any human on foot. Machines, including robots, can, and often do, outperform us.
The mistake Boeing made, and many others make, is to assume that technology will always outperform a human. We are so accustomed to the wonders of what humans can do, a single body that can complete gymnastic routines that would leave robots in a pile of parts while its mind derives complex mathematics to describe the Universe, that we fail to appreciate our own faculties. And the drumbeat of claims about evolution, for example that we, including our minds and physical abilities, are accidental by-products of a reproductive fitness contest, denigrates us further.
For example, at a conference, Elon Musk called humans a “boot loader” for AI, warning that we must merge with machines or be overrun by them. A boot loader is a comparatively small program that runs automatically when a computer first starts. Its main role is to load the real software, such as Windows, that the user actually intends to use.
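Musk’s analogy can be made concrete with a toy sketch. This is not real firmware, and the names (`boot_loader`, `operating_system`) are purely illustrative: the point is simply that the first-stage program exists only to locate and hand control to something larger than itself.

```python
# Illustrative sketch of a boot loader's role (hypothetical names,
# not real firmware code): a tiny first-stage program whose only job
# is to find the "real" software and hand control over to it.

def operating_system():
    """Stand-in for the large program the user actually wants."""
    return "OS running"

def boot_loader(storage):
    """Minimal first-stage loader: locate the system image, then start it."""
    kernel = storage.get("kernel")   # locate the real software
    if kernel is None:
        raise RuntimeError("no bootable system found")
    return kernel()                  # hand over control and step aside

print(boot_loader({"kernel": operating_system}))  # → OS running
```

In Musk’s framing, humanity is the small `boot_loader` and AI is the operating system it exists to start, which is precisely the self-deprecating view of human abilities this article pushes back against.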
As long as we fail to recognize the staggering design of the human body and mind, and overrate the capabilities of our shiny new tech toys, we will make the same kinds of mistakes Boeing is now backing away from.
By all means, let’s build machines that enhance our abilities. But let’s not forget the really amazing thing is not the tool, but the tool builder.
Boeing workers, please don’t kick the robot on its way out. The jetliner manufacturer’s decision to give the robots’ job back to machinists underlines the hard realities of automation. For example, it doesn’t always work. Robot error turned out to be a bigger problem than human error.