By Tom Gilson
Richard Stevens’ May 11 Stream article, “AI Legal Theories,” suggests we consider making Artificial Intelligence companies legally responsible for the harms they cause. We already do that with consumer products, so in principle it should be possible to do the same with AI. Enforcement would be by civil law: injured parties would presumably be given standing to sue the source of the harm without having to prove negligence. That gets us somewhere, but not far enough. It settles the question of who is legally responsible. But responsible for what? Specifically, what will we call harm? Who will decide? Based on what standard of wisdom? Stevens gives this example of harm, citing an earlier Stream article by Robert J. Marks: “The Snapchat ChatGPT-powered …”
The Google-backed AI company DeepMind made headlines in March 2016 when its AlphaGo game AI engine defeated Lee Sedol, one of the top Go players in the world. DeepMind followed up this achievement with the AlphaZero engine in 2017, which soundly beat AlphaGo at Go as well as one of the world’s best chess engines at chess. The interesting difference between AlphaGo and AlphaZero is that AlphaGo learns from databases of top human games, while AlphaZero learns only by playing against itself. Using the same AI engine to dominate two different games while discarding reliance on human games suggests that DeepMind has found an algorithm that is intrinsically superior
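The self-play idea can be illustrated with a toy sketch. This is not DeepMind’s algorithm (AlphaZero combines deep networks with Monte Carlo tree search); it is a deliberately trivial stand-in, using the game of Nim (take 1 or 2 stones; whoever takes the last stone wins), where the program’s only training data is the record of its own games. All names here are hypothetical.

```python
import random

def play_game(policy, n=7, explore=0.1):
    """One game of Nim (take 1 or 2 stones; taking the last stone wins),
    with the policy playing against itself plus a little random exploration."""
    history = {0: [], 1: []}          # moves made by each player
    player, stones = 0, n
    while True:
        if random.random() < explore or stones not in policy:
            move = random.choice([1, 2])   # explore, or no entry yet
        else:
            move = policy[stones]          # exploit the learned move
        move = min(move, stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            return history, player    # this player took the last stone
        player = 1 - player

def self_play_train(rounds=2000, n=7):
    """AlphaZero-style sketch: no human games -- the training data comes
    only from self-play, and the winner's moves are written back."""
    policy = {}
    for _ in range(rounds):
        history, winner = play_game(policy, n)
        for state, move in history[winner]:
            policy[state] = move
    return policy

policy = self_play_train()
print(policy.get(2))   # with 2 stones left, the trained policy takes both and wins
```

The point of the sketch is the data source, not the learning rule: nothing in `self_play_train` ever consults a human game, yet winning moves accumulate purely from the program playing itself.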
For decades, researchers were transfixed by the idea of humanizing great apes by raising them among humans and teaching them language. Emerging from the ruins and recriminations of that project’s collapse, philosophy prof Don Ross has a new idea: Let’s start with elephants instead.
The networks did “a poor job of identifying such items as a butterfly, an airplane and a banana,” according to the researchers. The explanation they propose is that “Humans see the entire object, while the artificial intelligence networks identify fragments of the object.”
We’ve all seen this sort of argument before in many other guises. It is commonly called “reductionism.” The reductionist claims that, because an object can be construed as made up of parts, the object is just the parts. It is like saying that because an article like this one is constructed from letters of the alphabet, the article is only rows of letters.
None of the plants’ extensive “social life” requires reason, emotion, value systems, mind, consciousness, or a sense of self. It requires only that the plant, like an animal, seek to continue its highly organized existence. But plants’ ability to process information for that purpose gives pause for thought.
What is it that we want machines to be and do under our guidance that these—often seemingly strange—life forms are and do spontaneously? The life forms do those things to stay alive. Does it matter then that machines are not alive?
If so, it might not happen in quite the way we are told to fear. U.S. kids who spend more than two hours a day looking at screens “perform worse on memory, language and thinking tests than kids who spend less time in front of a device.”
An AI-generated film is not an altogether new idea. Rule-based expert systems were used to write short plays over half a century ago, in the early 1960s. Then, as now, don’t expect creativity. That is not what AI does.
Human life is full of these challenges. Some knowledge simply cannot be conveyed—or understood or accepted—in a propositional form. For example, a nurse counselor may see clearly that her elderly post-operative patient would thrive better in a retirement home. But she cannot just tell him so. Read More ›