Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: OpenAI

humans vs
Human vs Robots concept. Technological revolution. Unemployment in the digital world. Symbol of future cooperation, technology advance, innovation. Businessman flips wood cubes human to robot symbols.

GPT-3 Versus the Writers at Mind Matters

How does the AI fare when it is asked to write on topics covered in Mind Matters articles?

In order to give a real-world comparison of GPT-3's output to human writing, I thought it would be a fun exercise to see how OpenAI's GPT-3 compares to Mind Matters on a variety of topics that we cover. Here, we are using OpenAI's direct API, not ChatGPT, as there is considerable evidence that ChatGPT responses involve a human in the loop. Therefore, we are going to focus on the outputs from the API directly. I used several criteria for article selection in order to level the playing field as much as possible. For instance, I only chose articles that did not depend on recent events. This way, GPT-3 is not disadvantaged by not having up-to-date material. However, I also Read More ›

drone-sunrise-in-princeton-new-jersey-stockpack-adobe-stock
Drone Sunrise in Princeton New Jersey

Princeton Student Develops AI Detector App

Software engineers are finding creative ways to regulate and detect ChatGPT

A 22-year-old student from Princeton, Edward Tian, has designed an app to discern whether text is human- or AI-generated. The tool, GPTZero, is already garnering interest from potential investors and will come as a relief to teachers and others who are worried about the advanced abilities of ChatGPT, OpenAI’s new text generator. According to a piece from Fast Company, Tian says his tool measures randomness in sentences (“perplexity”) plus overall randomness (“burstiness”) to calculate the probability that the text was written by ChatGPT. “Since tweeting about GPTZero on January 2, Tian says he’s already been approached by VCs wanting to invest and will be developing updated versions soon.” Megan Morrone, Was this written by a robot? These tools help detect AI-generated text (fastcompany.com) Read More ›
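GPTZero's exact implementation is not public, but the "perplexity" Tian describes is a standard language-modeling quantity: the exponential of the average negative log-probability a model assigns to each token. Low perplexity means the model found the text predictable (more machine-like); high perplexity means it was surprised (more human-like). A minimal sketch, with made-up token probabilities standing in for real model output:

```python
import math

def perplexity(token_probs):
    """Perplexity of a sequence, given the probability a language
    model assigned to each token. ("Burstiness" would then be the
    variation of this score across sentences.)"""
    avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log)

# Illustrative numbers only, not output from any real model:
predictable = [0.9, 0.8, 0.85, 0.9]   # model found each token likely
surprising  = [0.2, 0.05, 0.3, 0.1]   # model was often surprised

print(perplexity(predictable))  # low score
print(perplexity(surprising))   # high score
```

A detector would compare such scores against thresholds learned from known human and machine text.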

young-african-male-programmer-writing-program-code-sitting-at-the-workplace-with-three-monitors-in-the-office-image-focused-on-the-screen-stockpack-adobe-stock
Young african male programmer writing program code sitting at the workplace with three monitors in the office. Image focused on the screen

How are Developers Using OpenAI’s Tools in their Software?

There are several interesting uses of the new AI tools, but time will tell which ones take off

OpenAI has released two major tools for developers to make use of GPT-3 and DALL-E. GPT-3 is the radical new text-generation tool, which generates large or small amounts of text from simple prompts. It can also classify text into categories. GPT-3’s text-generation system forms the core of OpenAI’s new chatbot, ChatGPT. DALL-E is an image-generation tool, which creates images from text prompts. Together, these two tools provide today’s state of the art in AI-based content generation. So how are developers making use of these new features? Today we are looking at several ways that these tools have been put to use. Basic Content Generation The core of GPT-3 is generating content from prompts. Whether for making blog posts, writing summaries, or Read More ›
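As a hedged illustration of what "making use of GPT-3" looked like in practice: developers sent a JSON payload to OpenAI's completions endpoint. The sketch below only builds the payload; the model name and prompt are illustrative, and a real call would also need an HTTP client and an `Authorization` header carrying an API key.

```python
import json

# Sketch of the request body for OpenAI's GPT-3-era completions
# endpoint (POST https://api.openai.com/v1/completions). Values here
# are illustrative assumptions, not a tested integration.
payload = {
    "model": "text-davinci-003",   # a GPT-3-era completion model
    "prompt": "Write a two-sentence summary of the Turing test.",
    "max_tokens": 64,              # cap on generated length
    "temperature": 0.7,            # higher = more varied output
}

print(json.dumps(payload, indent=2))
```

The same pattern, pointed at a different endpoint and given a text prompt such as "a drone photo of Princeton at sunrise", drove DALL-E image generation.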

digital-chatbot-robot-application-conversation-assistant-ai-artificial-intelligence-concept-stockpack-adobe-stock
Digital chatbot, robot application, conversation assistant, AI Artificial Intelligence concept.

Note to Parents: Grooming and Wokeness Are Embedded in Chatbots

With or without tuning, all AI chatbots are biased one way or another. AI without bias is like water without wet

First impressions of a person can be wrong. Further interactions can reveal disturbing personality warts. Contrary to initial impressions, we might find out they lie, they are disturbingly woke, they can’t do simple math, their politics are on the extreme left, and they have no sense of humor or common sense. I have just described OpenAI’s GPT-3 chatbot, ChatGPT. Initially, users are gobsmacked by its performance. Its flashy prose responses to simple queries look amazing. But become roommates with the chatbot for a few hours and its shortcomings become evident. It can’t get its facts straight, can’t do simple math problems, hates Donald Trump, and is being groomed to be “woke.” Its performance warts are so numerous that Bradley Center Senior Fellow Gary N. Smith hoists a Read More ›

Customer service and support live chat with chatbot and automati

Chatbots: Still Dumb After All These Years

Intelligence is more than statistically appropriate responses

This story, by Pomona College business and investment prof Gary Smith, was #6 in 2022 at Mind Matters News in terms of reader numbers. As we approach the New Year, we are rerunning the top ten Mind Matters News stories of 2022, based on reader interest. At any rate: “Chatbots: Still dumb after all these years” (January 3, 2022). In 1970, Marvin Minsky, recipient of the Turing Award (“the Nobel Prize of Computing”), predicted that within “three to eight years we will have a machine with the general intelligence of an average human being.” Fifty-two years later, we’re still waiting. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of Read More ›

reaching toward chatbot
Chat bot Robot Online Chatting Communication Business Internet Technology Concept

OpenAI Launches Impressive New Chatbot: ChatGPT

The sophisticated AI tool could revolutionize the internet, and could come at a big cost

Artificial intelligence is making great strides in 2022. A few months ago, the company OpenAI introduced DALL-E, a text-to-image generator, which they made open to the public. Some have raised concerns over the future role of artists and copyright issues considering AI art generators. Does AI pose a threat to human creators? Well, that question just got weightier and more multifaceted. OpenAI just released ChatGPT, what writer Jacob Carpenter calls, “the most advanced, user-friendly chatbot to enter the public domain.” ChatGPT can “write lines of code, pen a college-level essay, author responses in the voice of a pirate, and write a piano piece in Mozart’s style.” Carpenter goes on to point out that some are wondering if the chatbot threatens Read More ›

chatbot-social-bot-mit-quellcode-und-keyboard-stockpack-adobe-stock
Chatbot / Social Bot mit Quellcode und Keyboard

Why GPT-3 Can’t Understand Anything

Without long-term memory, human conversation becomes impossible

There is a mathematical reason why machine learning systems like GPT-3 are incapable of understanding. The reason comes down to the fact that machine learning has no memory; it is just probabilistic associations. The problem is that when prediction is based purely on probabilities, the likelihood of making a misprediction compounds exponentially. If there is only a 10% chance of going off topic in any one exchange, then after just seven exchanges there is a greater than 50% chance that the machine learning model has gone off topic. A long-term memory is needed in order to maintain long-term coherence. GPT-3 is essentially a sophisticated Markov process. What is important about the Markov process is that the next step in the process is only dependent on Read More ›
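The compounding-error arithmetic above can be checked directly. Using the article's illustrative 10% per-exchange figure, and assuming each exchange drifts independently, the probability of having drifted off topic first crosses 50% at the seventh exchange:

```python
def p_off_topic(p_per_exchange: float, n_exchanges: int) -> float:
    """Chance the model has gone off topic at least once in n exchanges,
    assuming an independent drift probability at each exchange."""
    return 1.0 - (1.0 - p_per_exchange) ** n_exchanges

# Staying on topic for n exchanges has probability 0.9**n, which drops
# below 50% at n = 7 (0.9**7 is about 0.478).
for n in range(1, 9):
    print(n, round(p_off_topic(0.10, n), 3))
```

The independence assumption is the sketch's simplification, but it captures the point: without memory, small per-step error rates compound into near-certain drift.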

Chatbot / Social Bot mit Quellcode im Hintergrund

AI Companies Are Massively Faking the Loophole in the Turing Test

I propose the Turing Test be further strengthened by presuming a chatbot is human until proven otherwise

Computer pioneer Alan Turing was posed the question: how do we know if an AI has human-like intelligence? He offered his famous Turing test: if human judges cannot differentiate the AI from a human, then it has human-like intelligence. His test has spawned a number of competitions in which participants try to fool judges into thinking that a chatbot is really a human. One of the best-known chatbots was Eugene Goostman, which fooled the judges into thinking it was a 13-year-old boy — mostly by indirection and other distraction techniques to avoid the sort of in-depth questioning that shows that a chatbot lacks understanding. However, there is a loophole in this test. Can you spot the loophole? What better Read More ›

positive-girl-resting-on-the-couch-with-robot-stockpack-adobe-stock
Positive girl resting on the couch with robot

Soylent AI is…people!

OpenAI advertises itself as AI-powered, but at the end of the day, the system is human-powered

In the sci-fi movie, “Soylent Green,” the big reveal is that a food called soylent green is actually made from human beings, the catchphrase being “soylent green is people.” Likewise, as I discovered from a recent exchange with OpenAI’s GPT-3, “soylent AI is people.” GPT-3 is the product of AI company OpenAI. The company made headlines in 2019 with the claim that their AI model was too dangerous to publicly release. OpenAI is not a mere research company. While their publicly stated goal is fairly modest – “Aligning AI systems with human intent” – their CEO Sam Altman has bigger plans. He left his very successful role as president of Y Combinator, one of Silicon Valley’s most successful venture capital Read More ›

Man showing tricks with cards

The AI Illusion – State-of-the-Art Chatbots Aren’t What They Seem

GPT-3 is very much like a performance by a good magician

Artificial intelligence is an oxymoron. Despite all the incredible things computers can do, they are still not intelligent in any meaningful sense of the word. Decades ago, AI researchers largely abandoned their quest to build computers that mimic our wondrously flexible human intelligence and instead created algorithms that were useful (i.e., profitable). Despite this understandable detour, some AI enthusiasts market their creations as genuinely intelligent. For example, a few months ago, Blaise Aguera y Arcas, the head of Google’s AI group in Seattle, argued that “statistics do amount to understanding.” As evidence, he cites a few exchanges with Google’s LaMDA chatbot. The examples were impressively coherent but they are still what Gary Marcus and Ernest Davis characterize as “a fluent spouter of bullshit” because computer algorithms Read More ›

artificial-neuron-in-concept-of-artificial-intelligence-wall-shaped-binary-codes-make-transmission-lines-of-pulses-information-in-an-analogy-to-a-microchip-stockpack-adobe-stock
Artificial neuron in concept of artificial intelligence. Wall-shaped binary codes make transmission lines of pulses, information in an analogy to a microchip.

Can AI Really Be “Slightly Conscious”? Can Anyone?

It’s rare to see popular media actually call out nonsense about artificial intelligence. Here is what it looks like when it happens

On February 9, Ilya Sutskever, co-founder of fake text generator OpenAI, made a claim that was frothy even for Twitter: “it may be that today’s large neural networks are slightly conscious” (@ilyasut, February 9, 2022). Well, “slightly conscious” is like being “slightly pregnant” or “slightly dead.” While Sutskever didn’t name any specific developments, he was likely referring to huge natural language processing systems like OpenAI’s enormous GPT-3, which can translate, answer questions, fill in missing words, and generate fake news. No thought process is involved. The system approximates vast masses of actual instances of language use. The more stereotyped the language use is, the easier it is Read More ›

Customer service and support live chat with chatbot and automati

Chatbots: Still Dumb After All These Years

Intelligence is more than statistically appropriate responses

In 1970, Marvin Minsky, recipient of the Turing Award (“the Nobel Prize of Computing”), predicted that within “three to eight years we will have a machine with the general intelligence of an average human being.”  Fifty-two years later, we’re still waiting. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world. As Richard Feynman famously explained, there is a fundamental difference between labeling things and understanding them: [My father] taught me “See that bird? It’s a brown-throated thrush, but in Germany it’s called a halsenflugel, and in Chinese they call it a chung ling and even Read More ›

cyber-security-concept-stockpack-adobe-stock
Cyber security concept.

The 2020 AI Dirty Dozen Hyped Stories: Countdown by Bradley Center Brain Trust Members

Is AI fundamentally flawed? Can Elon Musk merge man with machines? Will there ever be self-driving cars? Join us as we revisit the top 12 most over-hyped stories in artificial intelligence from 2020 with Robert J. Marks, Jonathan Bartlett, and Eric Holloway. Show Notes 00:01:10 | Introducing Jonathan Bartlett 00:01:37 | Introducing Dr. Eric Holloway 00:02:00 | #12: “The way Read More ›

Panoramic image, Man hand holding piggy bank on wood table. Save money and financial investment

The Word “AI” Has Become a Marketing Ploy

Think twice before investing in a business that uses the word "AI" without further explanation

Justin Wang received a bachelor’s degree from Murdoch University in 2012 with a grade of 83.7% and a master’s degree in Information Technology Management from the University of Sydney in 2016 with a grade of 82.5%. In January 2017, he founded a Singapore-based company with the mysteriously cool name Scry in order to “manage information technology to achieve business goals, as well as – and perhaps more importantly – how it can be wielded to disrupt existing value networks.” What’s behind the mystery and jargon? It turns out that Scry is a “social forecasting platform.” Users join for free and can enter their personal estimates of the probabilities that certain events will happen, with Scry calculating the average probability. For example, one question is, Read More ›

sisters-playing-with-wagon-cart-on-the-road-outdoors-stockpack-adobe-stock.jpg
Sisters playing with wagon cart on the road outdoors

Artificial Unintelligence

The failure of computer programs to recognize a rudimentary drawing of a wagon reveals the vast differences between artificial and human intelligence

In 1979, when he was just 34 years old, Douglas Hofstadter won a National Book Award and Pulitzer Prize for his book, Gödel, Escher, Bach: An Eternal Golden Braid, which explored how our brains work and how computers might someday mimic human thought. He has spent his life trying to solve this incredibly difficult puzzle. How do humans learn from experience? How do we understand the world we live in? Where do emotions come from? How do we make decisions? Can we write inflexible computer code that will mimic the mysteriously flexible human mind?  Hofstadter has concluded that analogy is “the fuel and fire of thinking.” When humans see, hear, or read something, we can focus on the most salient features, its “skeletal essence.” Read More ›

black-car-interior-stockpack-unsplash.jpg
black car interior

AI Dirty Dozen 2020 Part III

There are many forces that shape the hyped AI stories we read. Media is everywhere and competition is fierce. Articles with provocative headlines and content are clickbait for the browsing consumer. We’re going to count down the AI Dirty Dozen: the top twelve AI hyped stories for 2020. Join Dr. Robert J. Marks as he discusses some of these stories with Read More ›

student-term-paper-showing-a-grade-stockpack-adobe-stock.jpg
student term paper showing 'a' grade

Can a Computer Write Your Paper for You Someday Soon?

GPT-3 recently came up with a paragraph that—a pop psychologist agreed—sounded just like him

This summer the OpenAI lab, backed by $1 billion in funding from Microsoft, Google, and Facebook, released an updated version of GPT-3, a text generator that produces convincing sentences by analyzing, among other online sources, Wikipedia, countless blog posts, and thousands of digital books. According to a recent story by Cade Metz in the New York Times, one GPT-3 programmer decided to target pop psychologist Scott Barry Kaufman. Could GPT-3 really come up with a paragraph that sounded just like him? Kaufman himself (pictured) was really impressed with this one, on the subject of becoming more creative: I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more Read More ›

robot-concept-or-robot-hand-chatbot-pressing-computer-keyboard-enter-stockpack-adobe-stock.jpg
Robot concept or robot hand chatbot pressing computer keyboard enter

Can a Machine Really Write for the New Yorker?

If AI wins at chess and Go, why not? Then someone decided to test that…

Tech philosopher and futurist George Gilder (pictured) has a new book out, Gaming AI. Short and sweet, it explains how artificial intelligence (AI) will—and won’t—revolutionize the economy and human life. Get your free digital copy here. And now, below is a short piece he wrote, unpacking one of the book’s themes—the claim that AI can do anything that humans can do. Find out why he says no: Ilya Sutskever (pictured) may be the smartest man in the world you have never heard of. No sweat, I hadn’t heard of him either. Still under 40, he’s part of the all-male Google mindfest around “Google Brain.” His IQ honed at Open University of Israel and mentored by Artificial Intelligence (AI) pioneer Geoffrey Read More ›

hand-of-god-stockpack-adobe-stock.jpg
hand of god

Built to Save Us from Evil AI, OpenAI Now Dupes Us

When combined with several metric tons of data, its new GPT-3 sometimes looks like it is “thinking.” No, not really

OpenAI started life in 2015 as a non-profit organization whose mission was to safeguard humanity from malevolent artificial intelligence (AI). The founders’ goal was to ensure that when superhuman AI arrived, its inborn purpose was to serve humanity rather than subjugate it. In 2019, OpenAI transitioned to a for-profit company based in San Francisco and secured a one billion dollar investment from Microsoft. Things seem to have moved on from there. There is a real question whether superhuman AI is even possible, as we have pointed out repeatedly here at Mind Matters News. While some AI tasks seem impressive, oftentimes when you look under the hood, what you find is a very expensive party trick or a staged demo. Read More ›