Mind Matters Natural and Artificial Intelligence News and Analysis

AI’s Illusion of Rapid Progress

It always seems to be on the verge of perfection

The media loves to report on everything Elon Musk says, particularly when it is one of his very optimistic forecasts. Two weeks ago he said: “If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it’s probably next year, within two years.”

In 2019, he predicted there would be a million robo-taxis by 2020 and in 2016, he said about Mars, “If things go according to plan, we should be able to launch people probably in 2024 with arrival in 2025.”

On the other hand, the media places less emphasis on negative news such as announcements that Amazon would abandon its cashier-less technology called “Just Walk Out,” because it wasn’t working properly. Introduced three years ago, the tech purportedly enabled shoppers to pick up meat, dairy, fruit and vegetables and “walk straight out without queueing, as if by magic. That magic, which Amazon dubbed ‘Just Walk Out’ technology, was said to be autonomously powered by AI.”

Unfortunately, it wasn’t. Instead, the checkout-free magic was happening in part due to a “network of cameras that were overseen by over 1,000 people in India who would verify what people took off the shelves.” Their tasks included “manually reviewing transactions and labeling images from videos.”

The Tech Bros and Their Illusions

Why is this announcement more important than Musk’s prediction? Because so many of the predictions by tech bros such as Elon Musk are based on the illusion that there are many AI systems that are working properly, when they are still only 95% there, with the remaining 5% dependent on workers in the background. The obvious example is self-driving vehicles, which are always a few years away, even as many vehicles are controlled by remote workers.   

But self-driving vehicles and cashier-less technology are just the tip of the iceberg. A Gizmodo article listed about 10 examples of AI technology that seemed like they were working, but just weren’t.

“A company named Presto Voice sold its drive-thru automation services, purportedly powered by AI, to Carl’s Jr, Chili’s, and Del Taco,” but in reality, offsite workers in the Philippines were needed to help with over 70% of Presto’s orders.

“Facebook released a virtual assistant named M in 2015” that purportedly enabled AI to “book your movie tickets, tell you the weather, or even order you food from a local restaurant.” But it was mostly human operators who were doing the work.

There was an impressive Gemini demo in December of 2023 that “showed how Gemini’s AI could allegedly decipher between video, image, and audio inputs in real-time.”  That video turned out to be sped up and edited so humans could feed Gemini long text and image prompts to produce any of its answers. Today’s Gemini “can barely even respond to controversial questions, let alone do the backflips it performed in that demo.”

Amazon has for years offered a crowdsourcing service called Mechanical Turk. One company that relied on it was Expensify, which in 2017 offered an app where you could take a picture of a receipt and the app “would automatically verify that it was an expense compliant with your employer’s rules, and file it in the appropriate location.” In reality, Expensify “used a team of secure technicians to file the expense on your behalf,” who were often Amazon Mechanical Turk workers.

Twitter offered a virtual assistant in 2016 that had access to your calendar and could correspond with you over email. In reality, “humans, posing as AI, responded to emails, scheduled meetings on calendars, and even ordered food for people.”

Invading Privacy

Google claims that AI is scanning your Gmail inbox for information to personalize ads, but in reality, humans are doing the work, and are seeing your private information.

In the last three cases, “real humans were viewing private information such as credit card numbers, full names, addresses, food orders, and more.”

Then there are the hallucinations that keep cropping up in the output from large language models. Many experts claim that “the lowest hallucination rates among tracked AI models are around 3 to 5%,” and that they aren’t fixable because they stem from the LLMs “doing exactly what they were developed and trained to do: respond, however they can, to user prompts.”

Every time you hear one of the tech bros talking about the future, keep in mind that they think large language models and self-driving vehicles already work almost perfectly. They have already filed away those cases as successfully done and they are thinking about what’s next.

For instance, Garry Tan, the president and CEO of startup accelerator Y Combinator, claimed that Amazon’s cashier-less technology was:

ruined by a professional managerial class that decided to use fake AI. Honestly it makes me sad to see a Big Tech firm ruined by a professional managerial class that decided to use fake AI, deliver a terrible product, and poison an entire market (autonomous checkout) when an earnest Computer Vision-driven approach could have reached profitable.

The president of Y Combinator should have known that humans were needed to make Amazon’s technology work, along with many other AI systems. Y Combinator is one of America’s most respected venture capital firms: it has funded around 4,000 startups, and Sam Altman, currently CEO of OpenAI, was its president between 2014 and 2019. For Garry Tan to claim that Amazon could have succeeded if it had used real tech, after many other companies have failed doing the same thing, suggests he is either misinformed or lying.

So the next time you hear that AGI is imminent or jobs will soon be gone, remember that most of these optimistic predictions assume that Amazon’s cashier-less technology, self-driving vehicles, and many other systems already work, when they are only 95 percent there, and the last five percent is the hardest.

In reality, those systems won’t be done for years, because the last few percentage points of work usually take as long as the first 95%. So what the media should be asking the tech bros is how long it will take before those systems go from 95% autonomous to 99.99% or higher. Similarly, companies should be asking their consultants when the 95% will become 99.99%, because the rapid progress is an illusion.

Too many people are extrapolating from systems that are purportedly automated, even though they aren’t yet working properly. Anyone making such extrapolations should try to understand when those systems will become fully automated, not just when they will begin to be used. Understanding what’s going on in the background is important for understanding what the future will look like in the foreground.

Jeffrey Funk

Fellow, Walter Bradley Center for Natural and Artificial Intelligence
Jeff Funk is a retired professor and a Fellow of Discovery Institute’s Walter Bradley Center for Natural and Artificial Intelligence. His book, Competing in the Age of Bubbles, is forthcoming from Harriman House.
