Just a few months ago, professors and big consulting firms were saying that generative AI would increase the rate of productivity growth and raise unemployment, frightening some into calling for a pause in AI development. For instance, “Tegmark’s Future of Life Institute spearheaded an open letter calling for a pause in training the most powerful AI systems until researchers adopt a shared set of safety standards. Its 33,000 signatories include Elon Musk, Apple co-founder Steve Wozniak and Yoshua Bengio, a computer scientist who won the Turing Prize for his pioneering work in AI.”
But now a different message is appearing. A recent Wall Street Journal article, “AI Startup Buzz Is Facing a Reality Check,” points to declining usage of popular services and slow adoption by big companies. For instance:
- “Monthly online visitors to ChatGPT decreased by 10% in both June and July after months of growth.”
- “Midjourney, an AI tool that can create images from user descriptions, saw monthly visits decline for three consecutive months ended in July.”
- “Synthesia, a text-to-video creator that raised $90 million in June, saw flat or declining website user growth for the past six months.”
- “Jasper, a generative AI writing tool for writers and businesses, saw declining user growth for four consecutive months ended in July.”
Behind the usage declines and the startups’ slow revenue growth is a lot of corporate resistance, according to the same Wall Street Journal article: “A lot of executives have characterized their foray into language models as experimental, exploratory, and they’re sort of trying to get their arms around how big a bread box is this.”
The New York Times is also reporting on the slowness of big corporations to embrace AI: “JPMorgan has blocked access to ChatGPT from its computers and told its 300,000 workers not to put any bank information into the chatbot or other generative A.I. tools.” The bank’s global chief information officer says,
“There are too many risks of leaking confidential data, questions about how the data is used and about the accuracy of the A.I.-generated answers.” Instead, “the bank has created a walled-off, private network to allow a few hundred data scientists and engineers to experiment with the technology. They are exploring uses like automating and improving tech support and software development.” (“The A.I. Revolution Is Coming. But Not as Fast as Some People Think,” The New York Times)
Surveys also show that companies are slow to implement AI. A recent survey of 2,500 global IT decision-makers found that while 86% of businesses plan to prioritize the technology in the next five years, only 14% plan to introduce it within six months. Similarly, a global survey of 2,000 IT decision-makers found that “three in four organizations are considering or implementing bans on ChatGPT and other generative AI apps on their employees’ work devices.”
The most successful cases so far are likely in Hollywood’s creation of TV dramas and movies, because the output is images and words, unlike the physical output of most industries. Not only are some commercials and short videos being made with generative AI, but generative AI can also help manage processes such as storyboarding, set design, and visual effects, along with the overall work of fitting together scenes that are recorded separately and then joined with AI-generated filler. Furthermore, many scenes are borrowed from previous television episodes or from movie sequels and prequels through computer graphics and digital recordings. For instance, dozens of videos on YouTube demonstrate how animated scenes borrow from those made decades ago; the same is true for movie series such as Fast and Furious and the Marvel “cinematic universe.” AI will make this reuse more effective while adding some novelty.
How Much Is AI Actually Helping?
But outside Hollywood, how is AI helping companies improve the core work of their industries, such as productivity and quality in factories, construction sites, farms, mines, and software development projects, or doctors treating patients in hospitals? A 2023 book from MIT Press, Working with AI, analyzes more than 30 cases, and only one of them, the inspection of components for hard drives, deals with the activities just mentioned. Even normally optimistic voices such as IEEE Spectrum and the Wall Street Journal have recently published articles with cautionary titles: “Coding with ChatGPT, GitHub Copilot, and other AI tools is both irresistible and dangerous” and “AI Is Writing Code Now. For Companies, That Is Good and Bad,” respectively.
In healthcare, AI is helping some doctors read images, after years of studies that found AI could not match doctors’ low rates of false positives and false negatives. This is a good sign. But most of the successful cases, and most of the efforts, revolve around support activities such as scheduling, patient communications, and providing clinical notes to patients, and tests suggest the latter two do not work very well.
Another healthcare application with limited success involves Epic Systems, the biggest supplier of healthcare software in America. It has struggled to improve its artificial intelligence tool for predicting sepsis, a deadly complication of infection that affects 1.7 million people and kills 270,000 each year in the U.S. Even before the media presented an unflattering story about Epic’s software in mid-2021, users were complaining that the tool was prone to missing cases and flooding clinicians with false alarms, sometimes at a ratio of 30 alarms to one real case.
The new version is better, but it requires training by local hospitals and organizational changes, according to one hospital that trained and implemented the Epic system. The formula for effectively embedding the sepsis alarms in the hospital ended up being “20% solving the math problem, and 80% relationship building,” with many changes to the way nurses worked. One physician and research scientist said: “People are quite wary of whether or not these tools will fundamentally improve our patient care without the risk of becoming clinically burdensome.” More recent analyses have concluded that the tool’s “poor timeliness limits its application for clinical decision support.”
Andrew Ng, co-founder of Google Brain and former Chief Scientist at Baidu, has been saying for years that organizations must train AI systems on their own data. “Those of us in machine learning are really good at doing well on a test set,” he says, “but unfortunately deploying a system takes more than doing well on a test set.” One result is that costs often turn out to be higher, much higher, than originally assumed: “A good rule of thumb is that you should estimate that for every $1 you spend developing an algorithm, you must spend $100 to deploy and support it.”
Other AI experts emphasize systems thinking and processes, strategies that have supported productivity improvements for at least 100 years, giving us Henry Ford’s assembly line, just-in-time manufacturing, and process re-engineering, for instance. But one expert argues there is too much emphasis today on individual tasks, such as handling mail and writing reports:
“Today’s companies buy apps to complete tasks or make groups of tasks easier. But the tasks are not isolated; assembled together they make up processes, which make up businesses. We’re all buying apps for tasks, but is anyone thinking about how they affect the whole? To understand how the tasks fit into the big picture, we need to apply systems thinking.” (“AI won’t make us more productive if we don’t apply systems thinking,” fastcompany.com)
These recent observations suggest that AI will be much harder to implement than previous information technologies. The ease of accessing software, and more recently apps, through the cloud will likely not carry over to AI, including generative AI. Organizations will be forced to train their own systems and reorganize their processes, activities that will likely take years, if not decades. As the New York Times puts it, “The A.I. revolution is coming, but not as fast as some people think.”