Mind Matters Natural and Artificial Intelligence News and Analysis

TagLarge Language Models (LLMs)


Moving On to Breaking Google’s Bard

These AI systems lack the uniquely human capacity of self-transcendence

I’ve finally started playing around with Google Bard. With self-referential sentences, it seems even more at sea than ChatGPT. Here is an exchange from May 18: Me: Consider the following six sentences: This is the first sentence. Ignore this sentence. Why are these sentences being written down? The second sentence has exactly three words. The fifth sentence has at least twenty words. This, along with the preceding five sentences, have fewer than twenty times twenty words. Which of these last six sentences has a truth value and which are true? Bard: The last six sentences have the following truth values: The second sentence is false because it has 5 words, not 3. The fifth sentence is true because it has Read More ›
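Claims like Bard's can be checked mechanically. A few lines of Python (a rough sketch that counts words by whitespace splitting) show where the model goes wrong:

```python
# Count the words in each of the six sentences from the Bard exchange.
# Splitting on whitespace is a rough but adequate proxy for "word" here.
sentences = [
    "This is the first sentence.",
    "Ignore this sentence.",
    "Why are these sentences being written down?",
    "The second sentence has exactly three words.",
    "The fifth sentence has at least twenty words.",
    "This, along with the preceding five sentences, have fewer than twenty times twenty words.",
]

for i, s in enumerate(sentences, start=1):
    print(i, len(s.split()))

# The second sentence has three words, contrary to Bard's claim
# that it has five.
```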


In the Wake of AI Books, What Does Authorship Mean?

While of course it's easy to use AI to generate text, the quality and storytelling are lacking.

We all know there are a lot of books out there. Peruse your local used bookstore and you may find hundreds of old romance paperbacks, and somehow most of them managed (at least at one point) to make the New York Times bestseller list. It’s estimated that 500,000 to 1 million books are published each year, and that’s excluding self-published material. The publishing market has become saturated, with the average book selling fewer than 200 copies. From the advent of the printing press centuries ago to e-books and online publishing, humans alone have managed to generate a mountain of words. But suppose one person could “generate” not just a few books in a lifetime, but hundreds every year? According to Read More ›


AI Still Struggles to Take Out the Trash

How good is AI at content moderation?

How good is AI at content moderation? Also, why haven’t tech companies improved at filtering bad content? A new article at MIT Technology Review goes into some of the details of AI, content moderation, and the struggle tech companies have with “bad actors.” In particular, Large Language Models (LLMs) like ChatGPT still struggle with capturing the nuance and context of language; therefore it seems unlikely that AI will totally replace human content moderators. Tate Ryan-Mosley writes, Large language models still struggle with context, which means they probably won’t be able to interpret the nuance of posts and images as well as human moderators. Scalability and specificity across different cultures also raise questions.  -Tate Ryan-Mosley, Catching bad content in the age Read More ›


The Death of Peer Review?

Science is built on useful research and thoroughly vetted peer review

Two years ago, I wrote about how peer review has become an example of Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure.” Once scientific accomplishments came to be gauged by the publication of peer-reviewed research papers, peer review ceased to be a good measure of scientific accomplishments. The situation has not improved. One consequence of the pressure to publish is the temptation researchers have to p-hack or HARK. P-hacking occurs when a researcher tortures the data in order to support a desired conclusion. For example, a researcher might look at subsets of the data, discard inconvenient data, or try different model specifications until the desired results are obtained and deemed statistically significant—and therefore Read More ›
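The p-hacking described above is easy to simulate. This hypothetical sketch (illustrative only, using a crude z-style significance rule rather than any particular study's method) shows how running many tests on pure noise almost always turns up a "significant" result somewhere:

```python
import random

random.seed(0)

def fake_study(n=30):
    """Compare two groups drawn from the SAME distribution.

    Any 'significant' difference is therefore spurious. Returns True
    when the difference in means clears a rough 5% threshold.
    """
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    # With unit variance, the mean difference has standard error
    # sqrt(2/n); beyond ~2 standard errors counts as "significant".
    threshold = 2 * (2 / n) ** 0.5
    return abs(mean_a - mean_b) > threshold

# A researcher who quietly runs 100 such tests will usually find
# a handful of "discoveries" in what is nothing but noise.
hits = sum(fake_study() for _ in range(100))
print(f"'Significant' findings from pure noise: {hits} out of 100")
```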


A World Without Work? Here We Go Again

Large language models still can't replace critical thinking

On March 22, nearly 2,000 people signed an open letter drafted by the Future of Life Institute (FLI) calling for a pause of at least 6 months in the development of large language models (LLMs): Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? FLI is a nonprofit organization concerned with the existential risks posed by artificial intelligence. Its president is Max Tegmark, an MIT professor who is no stranger to hype. Read More ›


AI and Human Text: Indistinct?

A mathematical proof claims AI-generated and human-written text are ultimately indistinguishable, but it rests on a questionable assumption

What is a poor teacher to do? With AI everywhere, how can he reliably detect when his students are having ChatGPT write their papers for them? To address this concern, a number of AI text detector tools have emerged. But do they work? A recent paper claims that AI-generated text is ultimately indistinguishable from human-generated text. The authors illustrate their claim with a couple of experiments that fool AI text detectors through simple variations to AI-generated text. Then they go on to mathematically prove their big claim: that it is ultimately impossible to tell AI text and human text apart. However, the authors make a crucial assumption. Faulty Premises The proof assumes that AI-generated text will become closer and closer to Read More ›


The Irony in Musk’s AI Distrust

As a leader in AI, why is Musk advocating a pause on its development?

Elon Musk joined a petition to “pause” AI research, citing concern over its potential harms and overreach. This is interesting, since Musk originally funded OpenAI, which is now at the forefront of cutting-edge AI systems like ChatGPT. In addition, Musk’s ventures with self-driving cars and his confidence in neural technology all cast him as a leader in the AI revolution. So why is he calling for a hiatus? According to a recent Slate article, the warnings against Large Language Models (LLMs) are a distraction from the more dangerous AI inventions like the self-driving car. Musk uses sci-fi alarmism to exacerbate the fear of a machine takeover, while his own experiments in automation have also proved to be dangerous for human Read More ›


Elon Musk to AI Labs: Press Pause

The petition reflects growing concern over the proper role of AI in human society and its potential for overreach

Over 1,000 leaders and experts in technology and science, including Elon Musk, are now urging artificial intelligence labs to pause their research and distribution of new AI technologies. They believe moving forward so swiftly on AI research could bring about unintended consequences in the future, and that we don’t understand AI well enough yet to be casting so much trust in it. According to The New York Times, The open letter called for a pause in the development of A.I. systems more powerful than GPT-4, the chatbot introduced this month by the research lab OpenAI, which Mr. Musk co-founded. The pause would provide time to introduce “shared safety protocols” for A.I. systems, the letter said. “If such a pause cannot be Read More ›


AI vs. Human Intentionality

If ChatGPT were trained over and over on its own output, it would eventually turn to gibberish

We can do a simple experiment that demonstrates the difference between AI and human intentionality. ChatGPT and the like are a sophisticated form of a mathematical model known as a Markov chain. A Markov chain is based on the Markov assumption that the future is entirely a product of the recent past. In other words, if we know the recent past, then nothing else we learn about the more distant past will improve our ability to predict the future. In ChatGPT terms, this means ChatGPT is based on the assumption that everything we need to know to predict future words is contained within a limited window of previously seen words. ChatGPT’s window was 3,000 words, and I believe the newest version has Read More ›
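The Markov assumption described above can be illustrated with a toy word-level Markov chain. This is a minimal sketch, nothing like ChatGPT's scale: here the "window" is a single preceding word, so the next word depends only on the current one.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain, start, length=10, seed=42):
    """Generate text by repeatedly sampling a successor of the last word.

    This is the Markov assumption in miniature: only the most recent
    word matters, everything earlier is forgotten.
    """
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:  # dead end: this word was never followed by anything
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = ("the future is a product of the recent past and "
          "the recent past is all the model can see")
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Every bigram in the generated text was seen in the training corpus; the model can recombine but never transcend what it was fed, which is the point of the experiment the excerpt describes.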


Let’s Take the “I” Out of AI

Large language models, though impressive, are not the solution. They may well be the catalyst for calamity.

When OpenAI’s text generator, ChatGPT, was released to the public this past November, the initial reaction was widespread astonishment. Marc Andreessen described it as, “Pure, absolute, indescribable magic.” Bill Gates said that the creation of ChatGPT was as important as the creation of the internet. Nvidia CEO Jensen Huang said that, “ChatGPT is one of the greatest things ever created in the computing industry.” Conversations with ChatGPT are, indeed, very much like conversations with a super-intelligent human. For many, it seems that the 70-year search for a computer program that could rival or surpass human intelligence has finally paid off. Perhaps we are close to the long-anticipated singularity where computers improve rapidly and autonomously, leaving humans far behind, Read More ›


The Metaverse was a Bust. Will AI Save the Day?

Microsoft is counting on it, investing billions into AI research and development

Just a couple of years ago, the metaverse was taking the tech world captive with grandiose promises of revolutionizing the internet and representing the future of human interaction. Microsoft was among the moguls who embraced the metaverse project with open arms, only to face the harsh fact that the technology was underdeveloped, investors were skeptical of its viability, and a massive swath of the American public seemed simply uninterested in the product. But, it was new technology. It was exciting. It was supposed to be the future. Now, Microsoft is hailing AI as the destiny of the internet, again with the sort of optimism that directed their love affair with virtual reality. The company has jumped the gun and sought Read More ›


Does New A.I. Live Up to the Hype?

Experts are finding ChatGPT and other LLMs unimpressive, but investors aren't getting the memo

Original article was featured at Salon on February 21, 2023. On November 30, 2022, OpenAI announced the public release of ChatGPT, a chatbot built on a large language model (LLM) that can engage in astonishingly human-like conversations and answer an incredible variety of questions. Three weeks later, Google’s management — wary that they had been publicly eclipsed by a competitor in the artificial intelligence technology space — issued a “Code Red” to staff. Google’s core business is its search engine, which currently accounts for 84% of the global search market. Their search engine is so dominant that searching the internet is generically called “googling.” When a user poses a search request, Google’s search engine returns dozens of helpful links along with targeted advertisements based on its knowledge of the Read More ›


ChatGPT Listed as “Co-Author” For Several Scientific Papers

Large language models can’t be authors of text because they can’t have responsibility, critics say

ChatGPT was listed as a contributing author for at least four scientific articles, according to a report from Nature. The news arrives amid a flurry of debate over the place of AI in journalism and artistic and academic disciplines, and now the issue has spread to the scientific community. People are pushing back against the idea of ChatGPT “authoring” text, claiming that because AI cannot take responsibility for what it produces, only humans should be listed as authors. The article notes, The editors-in-chief of Nature and Science told Nature’s news team that ChatGPT doesn’t meet the standard for authorship. “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors using Read More ›


Large Language Models Can Entertain but Are They Useful?

Humans who value correct responses will need to fact-check everything LLMs generate

In 1987 economics Nobel Laureate Robert Solow said that the computer age was everywhere—except in productivity data. A similar thing could be said about AI today: It dominates tech news but does not seem to have boosted productivity a whit. In fact, productivity growth has been declining since Solow’s observation. Productivity increased by an average of 2.7% a year from 1948 to 1986, but by less than 2% a year from 1987 to 2022. Labor productivity is the amount of goods and services we produce in a given amount of time—output per hour. More productive workers can build more cars, construct more houses, and educate more children. More productive workers can also enjoy more free time. If workers can do in four Read More ›
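The gap between 2.7% and 2% annual growth compounds into a large difference over a working lifetime. A quick back-of-the-envelope calculation (the 35-year span is illustrative, not a figure from the excerpt):

```python
# Output per hour after 35 years of compounding, indexed to 1.0 at the start.
fast = 1.027 ** 35  # roughly the 1948-1986 pace
slow = 1.02 ** 35   # roughly the post-1987 pace

print(f"2.7%/yr for 35 years: x{fast:.2f}")  # output roughly 2.5x
print(f"2.0%/yr for 35 years: x{slow:.2f}")  # output roughly 2.0x
```

At the earlier pace, output per hour more than 2.5 times over; at the later pace it merely doubles, which is why seemingly small growth-rate differences matter so much.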


Chatbots: Still Dumb After All These Years

Intelligence is more than statistically appropriate responses

This story, by Pomona College business and investment prof Gary Smith was #6 in 2022 at Mind Matters News in terms of reader numbers. As we approach the New Year, we are rerunning the top ten Mind Matters News stories of 2022, based on reader interest. At any rate: “Chatbots: Still dumb after all these years.” (January 3, 2022) In 1970, Marvin Minsky, recipient of the Turing Award (“the Nobel Prize of Computing”), predicted that within “three to eight years we will have a machine with the general intelligence of an average human being.”  Fifty-two years later, we’re still waiting. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of Read More ›


Large Language Models Are An Unfortunate Detour in AI

Gary Smith: Even though LLMs have no way of assessing the truth or falsity of the text they generate, the responses sound convincing

For decades, computer scientists have struggled to construct systems possessing artificial general intelligence (AGI) that rivals the human brain—including the ability to use analogies, take into account context, and understand cause-and-effect. Marvin Minsky (1927–2016) was hardly alone in his overly optimistic 1970 prediction that, “In from three to eight years we will have a machine with the general intelligence of an average human being.” AGI turned out to be immensely more difficult than imagined and researchers turned their attention to bite-size projects that were doable (and profitable). Recently, large language models (LLMs) — most notably OpenAI’s GPT-3 — have fueled a resurgence of hope that AGI is almost here. GPT-3 was trained by breaking 450 gigabytes of text data into Read More ›


Turing Tests Are Terribly Misleading

Black box algorithms are now being trusted to approve loans, price insurance, screen job applicants, trade stocks, determine prison sentences, and much more. Is that wise?

In 1950 Alan Turing proposed that the question, “Can machines think?,” be replaced by a test of how well a computer plays the “imitation game.” A man and woman go into separate rooms and respond with typewritten answers to questions that are intended to identify the players, each of whom is trying to persuade the interrogators that they are the other person. Turing proposed that a computer take the part of one of the players and the experiment be deemed a success if the interrogators are no more likely to make a correct identification. There are other versions of the game, some of which were suggested by Turing. The standard Turing test today involves a human and a computer and Read More ›


Chatbots: Still Dumb After All These Years

Intelligence is more than statistically appropriate responses

In 1970, Marvin Minsky, recipient of the Turing Award (“the Nobel Prize of Computing”), predicted that within “three to eight years we will have a machine with the general intelligence of an average human being.”  Fifty-two years later, we’re still waiting. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world. As Richard Feynman famously explained, there is a fundamental difference between labeling things and understanding them: [My father] taught me “See that bird? It’s a brown-throated thrush, but in Germany it’s called a halsenflugel, and in Chinese they call it a chung ling and even Read More ›