AI Still Can’t Think
It can only create the appearance of thought

What does it mean to “trust” technology?
Each day, I trust my car to start, to avoid explosion, and to get me from point A to B. I typically trust Google Maps to get me to unfamiliar places. Even more fundamentally, I trust this chair I’m sitting on, this keyboard I’m typing on, and the air I’m breathing in.
AI, as a tool, can have a lot of benefits and uses, but as we’ve witnessed time and time again, it often makes mistakes. Large language models generate false citations and studies. ChatGPT hallucinates data. Lacking context and personal consciousness, it has no real cognition. On the visual end, AI images appear overly sanitized, contrived, and, well, artificial. And the images are everywhere. Search for the famous composer Beethoven, and the first image to appear is an AI-generated knockoff of an iconic original painting. AI images have flooded social media, with fake, AI-generated stories to accompany them. Inundated with what cultural critic Ted Gioia calls “AI slop,” it can be hard to parse out what’s “real” and what amounts to smoke and mirrors. The internet is a malleable, deceiving environment to begin with. Throw in AI, and things get blurry and confusing fast.
AI can, on the bright side, serve as a helpful research tool, aggregate information, and summarize complex texts. Its ability to summarize complexity, though, hasn’t gone uncriticized. Last week, I wrote about journalist Ezra Klein’s comments in a recent podcast episode about AI and why summarizing a book or an article isn’t the same as reading the material for yourself. To learn requires grappling with complicated texts. A quick AI summary of the great book Les Misérables, akin to a SparkNotes write-up, is a paltry substitute for the actual novel, which is over 1,000 pages long and includes epic historical riffs that some might say do nothing to advance the plot.
It Still Can’t Think
AI is still fallible in many ways because, while it may simulate human-like thought and rationality, it doesn’t “think.” Apple recently put out a report critiquing the notion that AI is close to anything like real intelligence. It’s good at the appearance of reasoning but is still incapable of rational thought. It can imitate certain patterns well, but at a certain point of complexity, it will collapse. This will come as a blow to those who think the technology is well on its way toward artificial general intelligence (AGI), the point at which AI will supposedly be able to match or surpass humans in every cognitive realm.
Big Tech companies like Google, Meta, and Microsoft are all aboard the AI hype train. Microsoft, for instance, is implementing AI across its applications, including Word, where its Copilot AI will offer to rewrite your work within the writing app. Apple came out with Apple Intelligence and, as of today, has unveiled the services it will offer on Apple products. While Apple is courting AI’s possibilities, however, its report on the technology’s limits runs in stark opposition to the hopes of many AI optimists.
If we offload too much to this new technology, we may find ourselves shortchanged in the long run. For instance, if a man relates to an AI girlfriend more than to an actual person, he will end up emotionally empty. If a writer takes too many shortcuts with AI tools like ChatGPT and Grammarly, she may lose a big chunk of her intellectual and creative toolkit. Peco and Ruth Gaskovski write in their essay “Welcome to the Analog Renaissance: The Future Is Trust,”
AI won’t go away, but we can adopt boundaries around its use. One boundary is generative cognition: the act of producing creative or original ideas. We can choose to avoid using AI to write or edit the sentences in our stories or essays, or to draw our pictures, or make our music or movies — or else, if people insist on doing so, they can clearly indicate where and how AI was used, rather than obscuring or hiding the fact.
Otherwise, we are forced to adopt a suspicious mindset where we are never quite sure if we are engaging with the creative work of an actual person versus a machine (or a person coaching a machine).
If I were to use AI and fail to disclose it to the reader, something would be lost in the process. The words would no longer reflect a writer seeking to speak to you, a reader. Instead, a machine posturing as a human writer would be regurgitating word patterns detected throughout the vast recesses of the internet. So, we may trust AI with certain tasks, but overly trusting it, and losing the human element of truly “generative cognition,” could lead to an overall loss of real communication, of real kinship and connection.