The title question has been around for quite some time. In this discussion, I would like to take an ontological look at it. What is the essential nature of being a person? To fully replace humans, what must AI machines become capable of? If we want to consider the possibility of making humans obsolete, we need to know: what is the essence of humanity? What is the ontological nature of a person? What characteristics define being a person? Even before we can address the essential nature of a person, we must identify the essential nature of the universe in which that person exists. What is the universe? How many dimensions does it have? Can the universe, or in it Read More ›
I’ve been reviewing philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence. See my earlier posts, here, here, and here. “Deep learning” is as misnamed a computational technique as exists. The term actually refers to multi-layered neural networks, and, true enough, those multiple layers can do a lot of significant computational work. But the phrase “deep learning” suggests that the machine is doing something profound and beyond the capacity of humans. That’s far from the case. The Wikipedia article on deep learning is instructive in this regard. Consider the following image used there to illustrate deep learning: Note the rendition of the elephant at the top and compare it with the image of the elephant as we experience it at the bottom. The image at the bottom is rich, Read More ›
In 1979, when he was just 34 years old, Douglas Hofstadter won a National Book Award and Pulitzer Prize for his book, Gödel, Escher, Bach: An Eternal Golden Braid, which explored how our brains work and how computers might someday mimic human thought. He has spent his life trying to solve this incredibly difficult puzzle. How do humans learn from experience? How do we understand the world we live in? Where do emotions come from? How do we make decisions? Can we write inflexible computer code that will mimic the mysteriously flexible human mind? Hofstadter has concluded that analogy is “the fuel and fire of thinking.” When humans see, hear, or read something, we can focus on the most salient features, its “skeletal essence.” Read More ›
Computer scientist Michael I. Jordan, a leading AI researcher, says today’s artificial intelligence systems aren’t actually intelligent and people should stop talking about them as if they were: They are showing human-level competence in low-level pattern recognition skills, but at the cognitive level they are merely imitating human intelligence, not engaging deeply and creatively, says Michael I. Jordan, a leading researcher in AI and machine learning. Jordan is a professor in the department of electrical engineering and computer science, and the department of statistics, at the University of California, Berkeley. Kathy Pretz, “Stop Calling Everything AI, Machine-Learning Pioneer Says” at IEEE Spectrum (March 31, 2021) Their principal role, he says, is to “augment human intelligence, via painstaking analysis of large Read More ›
These days, Cancel culture can descend suddenly on anyone who doesn’t think the way a Twitter mob likes about one or another issue. For example: ➤ Celebrity atheist scientist Richard Dawkins was Canceled from speaking at Trinity College in Ireland because he has said critical things about Islam and about some claims of sexual assault. Note: Dawkins says critical things about all religions but Cancel mobs focus narrowly. ➤ The enforcement is irrational. Antiracist author Ibram X. Kendi can make negative statements about transgender culture comparatively safely but J. K. Rowling, in a similar circumstance, became the target of a vicious “deplatform” campaign, against which she ably defended herself. However, people who cannot write like Rowling have not nearly been Read More ›
Robert J. Marks, director of the Walter Bradley Center for Natural & Artificial Intelligence, likes to explain AI by saying “AI is anything computers do that is kind of amazing.” (“Human Exceptionalism,” Reasons to Believe, August 8, 2020). Using this definition, AI is a general term that includes a collection of computer science technologies. AI is fluid. Dr. Elaine Rich (pictured), noted computer scientist and an author of Artificial Intelligence, offers a more specific definition: “AI is the study of how to make computers do things which, at the moment, people do better.” (Accessed February 17, 2021) Relying on this definition, John Hsia observes: “By definition, once a computer can do what people used to do better, it’s no longer Read More ›
A new free AI tool now forewarns African farmers about impending locust attacks: “Farmers and pastoralists receive free SMS alerts 2-3 months in advance of when locusts are highly likely to attack farms and livestock forage in their areas, allowing for early intervention.” The Kuzi early warning tool is one of a number of new tools that can predict reasonably expected futures. This sort of forecasting is possible if there is a large body of oracle ergodic data to train machine intelligence. “Oracle ergodic” simply means that data from the past can be used to predict data in the future. That’s not self-evident. Flipping a coin, for example, is not oracle ergodic in the sense that a history of past flips Read More ›
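The distinction can be illustrated with a toy simulation (a hypothetical sketch for illustration, not the Kuzi tool or its data): for independent coin flips, knowing the last outcome gives no edge in predicting the next one, whereas for a persistent ("sticky") process the past really is informative about the future.

```python
import random

def predictive_accuracy(series):
    """Fraction of steps where the previous value correctly 'predicts' the next.

    A crude test of whether history helps forecasting: for data where the
    past carries information about the future, this sits well above chance;
    for fair coin flips it hovers near 0.5.
    """
    hits = sum(1 for prev, nxt in zip(series, series[1:]) if prev == nxt)
    return hits / (len(series) - 1)

random.seed(0)

# Fair coin flips: each flip is independent, so history has no predictive value.
flips = [random.randint(0, 1) for _ in range(10_000)]

# A "sticky" process: 90% of the time the next value repeats the last one,
# so past observations genuinely inform predictions.
sticky = [0]
for _ in range(9_999):
    sticky.append(sticky[-1] if random.random() < 0.9 else 1 - sticky[-1])

print(round(predictive_accuracy(flips), 2))   # near 0.5 (chance)
print(round(predictive_accuracy(sticky), 2))  # near 0.9
```

The coin-flip series stays at chance no matter how much history the predictor sees; only the persistent process rewards learning from the past.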
Until a few years ago, Brian Wansink (pictured in 2007) was a Professor of Marketing at Cornell and the Director of the Cornell Food and Brand Lab. He authored (or co-authored) more than 200 peer-reviewed papers and wrote two popular books, Mindless Eating and Slim by Design, which have been translated into more than 25 languages. In one of his most famous studies, 54 volunteers were served tomato soup. Half were served from normal bowls and half from “bottomless bowls” which had hidden tubes that imperceptibly refilled the bowls. Those with the bottomless bowls ate, on average, 73 percent more soup but they did not report feeling any fuller than the people who ate from normal bowls. Eating is evidently Read More ›
Well, outsourcing everything to technology is the thing these days and the Japanese government, faced with a steeply declining birthrate, is giving AI matchmaking a try: Around half of the nation’s 47 prefectures offer matchmaking services and some of them have already introduced AI systems, according to the Cabinet Office. The human-run matchmaking services often use standardized forms to list people’s interests and hobbies, and AI systems can perform more advanced analysis of this data. “We are especially planning to offer subsidies to local governments operating or starting up matchmaking projects that use AI,” the official said. AFP-JIJI, “We have a match! Japan taps AI to boost birth rate slump” at Japan Times (December 7, 2020) Declining birthrate? Japan Times Read More ›
Neuroengineer Gordon Cheng compares technology that can help paraplegics to walk again to learning to drive a car: The idea behind this is that the coupling between the brain and the machine should work in a way where the brain thinks of the machine as an extension of the body. Let’s take driving as an example. While driving a car, you don’t think about your moves, do you? But we still don’t know how this really works. My theory is that the brain somehow adapts to the car as if it is a part of the body. With this general idea in mind, it would be great to have an exoskeleton that would be embraced by the brain in the Read More ›
This summer the OpenAI lab, backed by $1 billion in funding from Microsoft, released an updated version of GPT-3, a text generator that produces convincing sentences by analyzing, among other online sources, Wikipedia, countless blog posts, and thousands of digital books. According to a recent story by Cade Metz in the New York Times, one GPT-3 programmer decided to target pop psychologist Scott Barry Kaufman. Could GPT-3 really come up with a paragraph that sounded just like him? Kaufman himself (pictured) was really impressed with this one, on the subject of becoming more creative: I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more Read More ›
Recently, we were told that artificial intelligence is now smart enough to know when it can’t be trusted: How might The Terminator have played out if Skynet had decided it probably wasn’t responsible enough to hold the keys to the entire US nuclear arsenal? As it turns out, scientists may just have saved us from such a future AI-led apocalypse, by creating neural networks that know when they’re untrustworthy. David Nield, “Artificial Intelligence Is Now Smart Enough to Know When It Can’t Be Trusted” at ScienceAlert (November 25, 2020) That’s a big claim. Intelligent humans often can’t know when they are untrustworthy. These deep learning neural networks are designed to mimic the human brain by weighing up a multitude of Read More ›
Big computers conquered chess quite easily. But then there was the Chinese game of go (pictured), estimated to be 4000 years old, which offers more “degrees of freedom” (possible moves, strategy, and rules) than chess (roughly 2 × 10^170 legal positions). As futurist George Gilder tells us, in Gaming AI, it was a rite of passage for aspiring intellects in Asia: “Go began as a rigorous rite of passage for Chinese gentlemen and diplomats, testing their intellectual skills and strategic prowess. Later, crossing the Sea of Japan, Go enthralled the Shogunate, which brought it into the Japanese Imperial Court and made it a national cult.” (p. 9) Then AlphaGo, from Google’s DeepMind, appeared on the scene in 2016: As the Chinese American titan Kai-Fu Lee Read More ›
Economist Gary Smith and statistician Jay Cordes have a new book out, The Phantom Pattern Problem: The Mirage of Big Data, on why we should not trust Big Data over common sense. In their view, it’s a dangerous mix: Humans naturally assume that all patterns are significant. But AI cannot grasp the meaning of any pattern, significant or not. Thus, from massive number crunches, we may “learn” (if that’s the right word) that: ➤ Stock prices can be predicted from Google searches for the word debt. ➤ Stock prices can be predicted from the number of Twitter tweets that use “calm” words. ➤ An unborn baby’s sex can be predicted by the amount of breakfast cereal the mother eats. ➤ Bitcoin prices can be Read More ›
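The phantom-pattern effect is easy to reproduce: comb through enough unrelated random series and some will correlate strongly with any target by chance alone. Here is a minimal sketch (all the data below is made-up noise, not real stock prices or search counts):

```python
import random

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(1)

# A short "stock price" series and 1,000 completely unrelated random predictors.
prices = [random.gauss(0, 1) for _ in range(20)]
predictors = [[random.gauss(0, 1) for _ in range(20)] for _ in range(1000)]

# With this many candidates, the best chance correlation is typically large,
# even though every predictor is pure noise.
best = max(abs(corr(p, prices)) for p in predictors)
print(round(best, 2))
```

The "best" predictor found this way looks impressive in-sample but has no predictive value whatsoever out of sample, which is exactly the trap Smith and Cordes describe.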
Many researchers hope that AI will lead to a “golden age” of discovery for lost languages, hard-to-decipher writings, and badly damaged Biblical scrolls. Algorithms can chug through vast numbers of possible interpretations, presenting the scholar with probabilities to choose from. But even powerful algorithms have their work cut out for them. For example, of the hundreds of thousands of clay (cuneiform) tablets that survive from an ancient part of the Near East called Mesopotamia, many are damaged. We may know the language but we don’t know what’s missing from the text and what difference the missing part makes to what is being said. Experts try to fill in the missing parts but guessing at all the possibilities is Read More ›
The list is a selection from “Bingecast: Robert J. Marks on the Limitations of Artificial Intelligence,” a discussion between Larry L. Linenschmidt of the Hill Country Institute and Walter Bradley Center director Robert J. Marks. The focus is on why we mistakenly attribute understanding and creativity to computers. The interview was originally published by the Hill Country Institute and is reproduced with thanks. https://episodes.castos.com/mindmatters/Mind-Matters-097-Robert-Marks.mp3 Here is a partial transcript, listing six limits of AI as we know it: (The Show Notes, Additional Resources, and a link to the full transcript are below.) 1. Computers can do a great deal but, by their nature, they are limited to algorithms. Larry L. Linenschmidt: When I read the term “classical computer,” how does a computer function? Let’s build on Read More ›
The basic problem is that accepting on faith what we can’t ever hope to understand is not a traditional stance of science. Thus it’s a good question whether science could survive such a transition and still be recognizable to scientists. But does turning things over to incomprehensible algorithms, as Krakauer proposes, really work anyway? Current results from a variety of areas give pause for thought.
Experts list various problems, including the fact that AI is vulnerable to failure due to unforeseen problems, including problems with data (too sparse, too noisy, too many outliers, etc.). It also doesn’t learn as well from experience as humans do.
As we struggle with the COVID-19 crisis, many are beginning to ask hard questions about how our system works, its strengths, weaknesses, and vulnerabilities. One vulnerability might be too heavy a reliance on a single source for data modeling and predictions. Considering all the uses to which AI may be put in health care, getting our modeling guidance exclusively from the Institute for Health Metrics and Evaluation is reckless.