In “Define information before you talk about it,” neurosurgeon Michael Egnor interviewed engineering professor Robert J. Marks on the way information, not matter, shapes our world (October 28, 2021). In the first portion, Egnor and Marks discussed questions like: Why do two identical snowflakes seem more meaningful than one snowflake? Now they turn to the relationship between information and creativity. Is creativity a function of more information? Or is there more to it? This portion begins at 10:46 min. A partial transcript and notes, Show Notes, and Additional Resources follow. Michael Egnor: How does biological information differ from information in nonliving things? Robert J. Marks: I don’t know if it does… I do believe after recent study that the mind…
This week, we listen to Robert J. Marks speaking at the launch of the Walter Bradley Center for Natural and Artificial Intelligence in Dallas, Texas. Robert J. Marks is the Director of the Bradley Center and Distinguished Professor of Electrical and Computer Engineering at Baylor University. In a panel discussion at the 2019 launch of the Bradley Center, Dr. Marks…
So, in #3, AI knows when it shouldn’t be trusted, even though no one else does? But now, what about #2, Sam Altman’s leap of faith that an AI will think like people? Earlier this year, founder-investor Sam Altman left his high-profile role as the president of Y Combinator to become the CEO of OpenAI, an AI research outfit that was founded by some of the most prominent people in the tech industry in late 2015. The idea: to ensure that artificial intelligence is “developed in a way that is safe and is beneficial to humanity,” as one of those founders, Elon Musk, said back then to the New York Times. The move is intriguing for many reasons, including that…
At Science earlier this year, we were told that “Researchers have created software that borrows concepts from Darwinian evolution, including ‘survival of the fittest,’ to build AI programs that improve generation after generation without human input.” Critics say it’s not that easy. Computer scientist Roman Yampolskiy discusses the problem in an open access paper, starting with a joke. On April 1, 2016, Dr. Yampolskiy posted the following to his social media accounts: “Google just announced major layoffs of programmers. Future software development and updates will be done mostly via recursive self-improvement by evolving deep neural networks.” The joke got a number of “likes” but also, interestingly, a few requests from journalists for interviews on this “developing story.” To non-experts…
Kathleen Walch, Principal Analyst at Cognilytica, asks, “Is AGI really around the corner, or are we chasing an elusive goal that we may never realize?” It was an oddly blunt question from someone in her industry. But then she was right to expect Ben Goertzel, CEO and Founder of the SingularityNET Foundation, to reassure her that all is well when she interviewed him at OpenCogCon. Ben Goertzel, a leading expert in the pursuit of Artificial General Intelligence (AGI)—computers that can think like humans—thinks that we are now at a “turning point” where AGI will see rapid advances. Over the next few years, he believes, the balance of activity in the AI research area is about to shift from highly…
If the brain is immensely complex, it may elude complete understanding in detail. Deep learning may survey it, but that won’t convey understanding to us. We may need to look at more comprehensive ways of knowing.