
Hype Distracts AI Engineers from Real Work

Who is going to solve AI's actual problems?

Hype over generative AI exploded in 2023, as did the share prices of big tech companies such as Microsoft, Google, and Amazon. Along with the hype, AI investment shifted from healthcare and autonomous vehicles to generative AI in 2023, a trend that has continued into 2024. In fact, total private AI investment fell to $96 billion in 2023, down from $103.4 billion in 2022 and $130 billion in 2021, even as investment in generative AI surged eightfold from 2022 to reach $25.2 billion in 2023.

On the surface, generative AI looks incredible. Writing emails, marketing reports, and even academic papers is alluring for many, particularly for those who are frequently stopped by writer’s block. Generative AI’s ability to create pictures and videos is even more attractive. Suddenly, advertising videos, educational videos, and even entire movies seem possible.

All we have to do is make some improvements, the proponents claim, and we will be there. Reduce the frequency of hallucinations, racial slurs, and misplaced arms, and a productivity revolution will be unleashed.

Not So Fast

Unfortunately, we didn’t see that emphasis in the products Google and OpenAI announced in mid-May. OpenAI introduced a voice assistant and a new model, GPT-4o, that can respond to audio prompts, works across text, vision, and voice, and can read facial expressions.

Google also emphasized new features rather than greater accuracy, even showing hallucinations in its demo. Gemini Live combines video captured with a phone and voice recognition to deliver contextual responses to questions, and Gemini was also integrated with Gmail so that long email chains can be summarized.

Some noticed the problem with emphasizing new features, “raising concerns about the speed at which untested new services are coming to market, and academics and ethicists are distressed about the technology’s tendency to propagate bias,” a problem related to hallucinations.

One reason there wasn’t an emphasis on reducing hallucinations is that the problem is hard. Some argue that hallucinations are “baked into AI chatbots”: Large language models (LLMs) are

doing exactly what they were developed and trained to do: respond, however they can, to user prompts. The real problem, according to some AI researchers, lies in our collective ideas about what these models are and how we’ve decided to use them.

Others argue that the big tech firms have basically given up. For instance, in Google’s IO keynote, it placed disclaimers such as “check responses for accuracy” in every new AI tool it showed off. ChatGPT operates similarly. In tiny font just below the prompt window, it says, “ChatGPT can make mistakes. Check important info.” “That’s not a disclaimer you want to see from tools that are supposed to change our whole lives in the very near future!”

Comments from OpenAI CEO Sam Altman are also illustrative:

Anyone who had an issue with AI’s accuracy was naive. If you just do the naive thing and say, ‘never say anything that you’re not 100 percent sure about,’ you can get them all to do that. But it won’t have the magic that people like so much.

As many have argued, it’s all about wooing investors with magic.

Shiny Objects Over Safety

A CNBC article also argues that companies are more concerned with investors than with users, and less concerned with reducing hallucinations. According to the article, “artificial intelligence engineers at top tech companies” said:

The pressure to roll out AI tools at breakneck speed has come to define their jobs. They say that much of their work is assigned to appease investors rather than to solve problems for end users, and that they are often chasing OpenAI. Some said they were switched over to AI teams to help support fast-paced rollouts without having adequate time to train or learn about AI, even if they are new to the technology.

A Microsoft engineer says that when it comes to ethics and safeguards, the company “has cut corners in favor of speed, leading to rushed rollouts without sufficient concerns about what could follow.” The engineer also said “there’s a recognition that because all of the large tech companies have access to most of the same data, there’s no real moat in AI,” comments consistent with what executive Jan Leike said when he departed OpenAI: the company put “shiny products over safety.”

An “Amazon engineer, who lost his weekend to a project that was ultimately scuttled, said higher-ups seemed to be doing things just to ‘tick a checkbox,’ and that speed, rather than quality, was the priority while trying to recreate products coming out of Microsoft or OpenAI.”

Another Microsoft AI engineer “recalled instances when a software engineer on his team would come up with an algorithm to solve a particular problem that didn’t involve generative AI. That solution would be pushed aside in favor of one that used a large language model, even if it were less efficient, more expensive and slower.”

An independent engineer “described much of the work in generative AI today as an ‘extreme amount of vaporware and hype.’ Every two weeks, the engineer said, there’s some sort of big pivot, but ultimately there’s the sense that everyone is building the same thing.” He said he often has to put together demos of AI products for the company’s board of directors on three-week timelines, even though the products are “a big pile of nonsense.” There’s a constant effort to appease investors and fight for money, he said. He gave one example of building a web app to show investors even though it wasn’t related to the team’s actual work. After the presentation, “We never touched it again.”

What do these comments tell us about generative AI? That perhaps it isn’t as good as proponents have been claiming. If it were so great, companies would be lapping up these products and using them to increase their productivity, but according to the Wall Street Journal, they aren’t. And suppliers of AI software wouldn’t need to be so concerned about what investors think, because the success of their customers would speak for itself.

Who Will Solve the Real Problem?

One big problem remains: if suppliers of AI are spending their time on demos for investors, who will solve the real problems, such as hallucinations? The simple answer is that not enough engineers are working on them, and thus solutions will take much longer than the tech sector claims. The longer and more cynical answer is that the companies don’t have a solution, know they don’t have one, and are trying to distract everyone. So far, they are succeeding.

This story also tells us about investors, and how superficial they are, which is something that we kind of already knew. Nobel Laureate Robert Shiller described these investors in his 2000 book Irrational Exuberance. During a boom, price increases lead to more price increases as each increase seems to provide more evidence that the market will continue to rise. The media, with help from the financial sector, offers seemingly logical reasons for the price increases, thus creating a narrative that encourages still more increases.

The narrative that Shiller is referring to is the one companies are trying to massage with their demos, many of which may be faked. We already knew that some were faked, but the quotes from engineers in the CNBC article suggest that many may be.

A new narrative will likely emerge soon, and it will likely be exaggerated in the opposite direction. That is the way bubbles work: investors love you on the way up and hate you on the way down.


Jeffrey Funk

Fellow, Walter Bradley Center for Natural and Artificial Intelligence
Jeff Funk is a retired professor and a Fellow of Discovery Institute’s Walter Bradley Center for Natural and Artificial Intelligence. His book, Competing in the Age of Bubbles, is forthcoming from Harriman House.
