
How Humans Can Thrive in a World of Increasing Automation

Remarks on the purpose and goals of the Walter Bradley Center at its launch

At the official launch of the Walter Bradley Center for Natural and Artificial Intelligence, July 11, 2018, design theorist William Dembski offered three key thoughts on the center’s purpose and goals—and how its work may be evaluated. Dr. Dembski was unable to attend*, so his remarks were read by the Center’s director, Robert J. Marks:

Good evening. Thank you for attending this launch of the Walter Bradley Center for Natural and Artificial Intelligence. In my talk tonight, I’m going to address three points: (1) the importance of its work, (2) its likely impact, and (3) why it is appropriately named after Walter Bradley.

First, however, I want to thank friends and colleagues of Seattle’s Discovery Institute for their vision in forming this center and providing it with a secure home. Thanks go especially to Bruce Chapman and Steven Buri for making the center a full-fledged program of Discovery; to John West for working through the many crucial details; to Robert J. Marks, a towering presence in the field of computational intelligence, for his willingness to lead the center; and finally to Walter Bradley for giving us not only his name but also his example and inspiration, about which I will have more to say in a moment.

The Walter Bradley Center will do more than demonstrate a qualitative difference between human and machine intelligence. It will show how humans can thrive in a world of increasing automation. Such a vision ought to be praiseworthy and non-controversial. But we live in an age when much of the mainstream academy, inspired by scientific materialism, views humanity as unexceptional and even obsolete, easily replaced by our own products of artificial intelligence. So if we succeed, we must prepare for controversy.

Our vision requires a home whose residents take a principled stance for humanity over and above machines, and who won’t be cowed by a materialist science culture where all “right-thinking” people are expected to believe that everything can be reduced to artificial intelligence (computational reductionism). Happily, Discovery Institute provides such a home.

The Walter Bradley Center does not exist merely to argue that we are not machines.

Yes, we must respond to the vociferous advocates of strong AI and the Singularity (where robots either cannot be distinguished from humans or are superior to them). But that is only to ensure that silence is not interpreted as complicity or tacit assent. If arguing, however persuasively, with AI visionaries like Ray Kurzweil or Nick Bostrom that machines will never supersede humans is the best we can do, then this center will fall short of its promise.

We aim to show society a positive way forward in adapting to machines, putting them in the service of humanity rather than thwarting our higher aspirations. Twelfth-century theologian Hugh of Saint Victor argued that the aim of technology is to improve life and thereby aid in humanity’s restoration and ultimate salvation. We can use his insights as we manage artificial intelligence today.

Unfortunately, rather than use AI to enhance our humanity, computational reductionists use it as a club to beat our humanity, suggesting that we are well on our way to being replaced by machines. Such predictions are sheer hype. Machines have come nowhere near attaining human intelligence, and show zero prospects of ever doing so. I want to linger on this dim view of their grand pretensions because it flies in the face of the propaganda about an AI takeover that constantly bombards us.

Zero evidence supports the view that machines will attain and ultimately exceed human intelligence. And absent such evidence, there is zero reason to worry that they will. So how do we see that clearly, despite the hype? We see it by understanding the nature of true intelligence, as exhibited in a fully robust human intelligence so that we do not confuse it with artificial intelligence.

What has artificial intelligence accomplished to date? AI has, no doubt, an impressive string of accomplishments: chess-playing programs, Go-playing programs, and Jeopardy-playing programs just scratch the surface. Consider Google’s search business, Facebook’s tracking and filtering technology, and the robotics industry. Automated cars seem just around the corner. In every case, however, one finds a specifically adapted algorithmic solution applied to a well-defined and narrowly conceived problem.

The engineers and programmers who produce these AI systems are to be commended for their insight and creativity. They are building a library of AI applications. But all such applications, even when considered collectively and extrapolated in the light of an ever-increasing army of programmers equipped with ever more powerful computers, get us no closer to computers that achieve, much less exceed, human intelligence.

For a full-fledged AI takeover (think Skynet or HAL 9000) to become a reality, AI needs more than a library of algorithms that solve specific problems. An AI takeover needs a higher-order master algorithm with a general-purpose problem-solving capability, able to harness the first-order problem-solving capabilities of the specific algorithms in this library and adapt them to the widely varying contingent circumstances of life.

Building such a master algorithm is a task on which AI’s practitioners have made zero headway. The library of search algorithms is a kludge — it simply brings together all existing AI algorithms, each narrowly focused on solving specific problems. What’s needed is not a kludge but a coordination of all these algorithms, appropriately matching algorithm to problem across a vast array of situations. A master algorithm that achieves such coordination is the holy grail of AI. But there’s no reason to think it exists. Certainly, work on AI to date provides no evidence for it. AI, even at its current outer reaches (automated vehicles?), still focuses on narrow, well-defined problems.
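
To make the coordination problem concrete, here is a minimal, purely illustrative sketch in Python. The solver names, the match step, and the master function below are hypothetical constructs invented for this example, not any actual system: the narrow solvers in the library are easy to enumerate, but the general problem-to-solver matching that a master algorithm would require is precisely the part no one knows how to write.

```python
from typing import Callable, Dict

# A hypothetical library of narrow AI solvers, each adapted to one
# well-defined problem (names and behaviors invented for illustration).
LIBRARY: Dict[str, Callable[[str], str]] = {
    "chess":   lambda position: f"best chess move for {position}",
    "go":      lambda position: f"best Go move for {position}",
    "routing": lambda request:  f"driving route for {request}",
}

def match(problem: str) -> str:
    """The 'master algorithm' step: decide which narrow solver fits an
    arbitrary, open-ended real-world situation. No general method for
    doing this is known -- which is the master algorithm problem."""
    raise NotImplementedError("no general problem-to-solver matching exists")

def master(problem: str) -> str:
    solver = match(problem)            # the unsolved, general-purpose part
    return LIBRARY[solver](problem)    # the solved, narrow parts
```

Each entry in the library works on its own narrow task; it is the matching step, not the individual solvers, that stands in for what Descartes (quoted below) calls a “universal instrument.”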

Absence of evidence for such a master algorithm might prompt defenders of strong AI to dig in their heels: give us more time, effort, and computational power, they say, and we’ll find such a master algorithm! But why should we take their protestations seriously? We simply have no precedent or idea of what such a master algorithm would look like. Essentially, to resolve AI’s master algorithm problem, supporters of strong AI must come up with a radically new approach to programming, perhaps building machines by analogy with humans in some form of machine embryological development. Such possibilities remain pure speculation for now.

The computational literature on No Free Lunch theorems and Conservation of Information (see the work of David Wolpert and Bill Macready on the former as well as that of Robert J. Marks and myself on the latter) implies that all problem-solving algorithms, including such a master algorithm, must be adapted to specific problems. Yet a master algorithm must also be perfectly general, transforming AI into a universal problem solver. The No Free Lunch theorems and Conservation of Information demonstrate that such universal problem solvers do not exist.
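
The flavor of the No Free Lunch result can be seen in a small illustration (the tiny domain, the two probing orders, and the performance measure below are arbitrary choices made for this sketch, not the formal theorem): averaged over every possible objective function on the domain, two different fixed search strategies produce exactly the same best-so-far performance.

```python
from itertools import product

# Tiny search space: four candidate points, each with objective value 0 or 1.
DOMAIN = [0, 1, 2, 3]

def run(order, f, budget):
    """Best objective value seen after each of `budget` evaluations,
    probing points in the fixed order `order`."""
    best, trace = float("-inf"), []
    for x in order[:budget]:
        best = max(best, f[x])
        trace.append(best)
    return trace

def average_trace(order, budget=4):
    """Average the best-so-far curve over ALL possible objective functions."""
    traces = [run(order, dict(zip(DOMAIN, values)), budget)
              for values in product([0, 1], repeat=len(DOMAIN))]
    return [sum(t[i] for t in traces) / len(traces) for i in range(budget)]

# Two different deterministic search strategies (probing orders).
strategy_a = [0, 1, 2, 3]   # left-to-right scan
strategy_b = [3, 1, 0, 2]   # some other fixed order

print(average_trace(strategy_a))   # [0.5, 0.75, 0.875, 0.9375]
print(average_trace(strategy_b))   # identical curve: averaged over all
                                   # functions, neither strategy wins
```

No strategy outperforms another once performance is averaged over all possible problems; an algorithm gains an advantage only by being adapted to a specific class of problems.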

Yet what algorithms can’t do, humans can. True intelligence, as exhibited by humans, is a general faculty for taking wide-ranging, diverse abilities for solving specific problems and matching them to the actual and multifarious problems that arise in practice. Such a distinction between true intelligence and machine intelligence is nothing new. Descartes and Leibniz understood it in the seventeenth century. Descartes put it this way in his Discourse on Method:

While intelligence is a universal instrument that can serve for all contingencies, [machines] have need of some special adaptation for every particular action. From this, it follows that it is impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our intelligence causes us to act.

Just to be clear, I’m no fan of Descartes (my own philosophical sensibilities have always been Platonic), and I regard much of his philosophy as misguided (for example, his undue emphasis on philosophical doubt and the havoc it created for metaphysics). Even so, I do think this observation hits the nail on the head. Indeed, it is perhaps the best and most concise statement of what may be called AI’s master algorithm problem, namely, the computer science community’s total lack of insight into, and progress toward, constructing a master algorithm (which Descartes calls a “universal instrument”) that can harness the algorithms AI is able to produce and match them to the problem situations to which those algorithms apply.

Good luck with that! I’m not saying it’s impossible. I am saying that there’s no evidence of any progress to date. Until then, there’s no reason to hold our breath or feel threatened by AI. I am more concerned that we may embrace the illusion that we are machines and thereby denigrate our humanity. In other words, the worry is not that we’ll raise machines to our level, but rather that we’ll lower our humanity to the level of machines.

For instance, go to a McDonald’s these days, and you’ll find that orders are taken not by humans but by responsive automated displays. Rather than view this change from human to machine order-takers as a triumph of AI, we should see it as a case of human intelligence being minimally taxed by order-taking and thus replaced, in a given instance, by a machine. Instead of lamenting that machines are encroaching on our work, we should rather seek more challenging work and the skills and education to handle it.

The Walter Bradley Center needs to be a place where the best philosophical and scientific arguments against the reduction of human to machine intelligence are aired. I’ve sketched one such argument here, the obstacles to producing a master algorithm of the sort that strong AI seems to demand. But other arguments against this reduction can be made as well, whether from metaphysics, consciousness, language, quantum mechanics, Chinese rooms, Gödel’s theorem, etc. I don’t regard all these arguments as being of equal merit, but they are worth exploring, articulating, and bringing into conversation with the supporters of a materialist reduction of mind.

Yet as I’ve emphasized, the Walter Bradley Center needs to do more than simply argue for a qualitative difference between human and machine intelligence. Indeed, if we are right and such a qualitative difference exists, then people already understand at some deep level of the soul that they are more than machines. The bigger challenge for the Walter Bradley Center, then, is to help us live effectively with machines. In that regard, we are not Luddites. Failures of AI give us no reason to celebrate. We want to encourage AI’s full development, albeit within the bounds of a life-affirming ethics. AI, like any technology, can be abused (as when it is put in the service of the porn industry).

At the same time as we want to encourage AI’s full development, we also want to encourage humanity’s full development. Ours is not a cyborg vision, in which humanity and technology meld into an indistinguishable whole. Rather, we want to maintain our full humanity in the face of technological progress. Machines must be and forever remain our servants. Interestingly, the impulse to make machines our masters comes less from strong AI than from totalitarians who see in a machine takeover a means of social control (a control exercised not by machines but through machines, as in the surveillance state).

For that reason, the Walter Bradley Center for Natural and Artificial Intelligence exists to clarify the limits of machine intelligence, to understand intelligence as it exists in nature (preeminently among humans), and above all to chart fruitful paths for humans to thrive in a world of automation brought on by AI. It’s really this latter aspect of the center that will define its success and impact. It’s one thing to exchange arguments and critiques with the defenders of strong AI. But the real challenge for this center is to help build an educational and social infrastructure conducive to productive human-machine interaction. The point is not simply to talk and critique; it is to do and build.

Accordingly, the Walter Bradley Center will need to emphasize the following themes:

Digital wellness: How can we maintain our peace of mind as machines become more and more a part of our lives? This issue is already of grave concern as social media compete with face-to-face human interactions.

Education: How should we be educated, what topics do we need to study, what skills do we need to attain, and what are the most effective modalities for delivering education so that we can stay ahead of machines, living lives engaged in meaningful full-time work despite the continuing rise of automation?

Appropriate technologies: How do we ensure that people have the technologies they need (technologies increasingly affected by AI) that will allow them to thrive individually and in community rather than as cogs in an impersonal, mechanized organizational system?

Entrepreneurship: How do we harness technologies to build wealth-creating enterprises (businesses) so that people, especially in the developing world, can escape poverty and become self-sufficient?

The impact of the Walter Bradley Center will depend on the degree to which it can effectively advance these themes. Each of these themes, taken individually, is significant, but jointly they define the unique focus of the center and how it can help make the world a better place. I’m optimistic that the center’s impact will be substantial and even groundbreaking. We certainly have a great team in place. But we also have the example of Walter Bradley himself, which brings me to the last topic of this talk.

Why we are honored to be associated with Walter Bradley

 It’s hard for me to speak of Walter Bradley in less than hagiographic terms. He has been an inspiration to me personally over the years, though I continue to fall short of his example in so many areas. But beyond that, in every area where this center named in his honor promises to make a difference, he has made a signal contribution.

On the question of natural and artificial intelligence, he has for decades argued in print and in person that the universe as a whole, and life in particular, give evidence of intelligence that is not reducible to the motions and modifications of matter. He spearheaded the most important book on the origin of life in the 1980s, The Mystery of Life’s Origin, laying out the information-theoretic barriers to life arising from random movements in chemistry. His work as a materials scientist focusing on polymer chemistry gave him further insights into the distinction between the natural and the artificial.

As a long-time engineering professor, both at Texas A&M and then at Baylor, he understands the importance of education to the full growth and flowering of the human person. But he was never merely a professor imparting knowledge of his field to his students. He was always concerned about the well-being of his students, taking a personal interest in them, and inviting them to conversations about the larger issues of life. While at A&M, he even offered a non-credit mini-course to students on how they could study and learn more effectively by improving their reading skills, memory, etc. When living in Texas, I continually ran into people whose lives had been transformed because they knew Walter.

Even so, Walter’s reach goes beyond research, teaching, and looking out for the people in his backyard. While at Baylor, he helped organize a trip to Africa with students and fellow faculty from the engineering school. There, over the course of two weeks, they built a bridge that saved residents a daily twenty-mile trek around the river. This got Walter thinking not just about the benefits of technology in general, but about how appropriate technologies might be used to put people in the developing world into business for themselves. And that, in turn, led him to set up coconut farmers in business, using coconut husks to replace synthetic fibers, for example in automotive interiors.

Thus, in Walter Bradley, we find someone who has reflected deeply on the intersection of natural and artificial intelligence, who has done seminal research in this area, who has been an educator, who has ever been concerned about the well-being of students and fellow faculty (digital wellness is part of that), who has advanced appropriate technologies, and who has harnessed these technologies to help people in the developing world make a living.

Beyond all that, Walter has been fearless and uncompromising in standing against the materialist currents of our age, and thus we have a worthy namesake for the center. And so, I commend to you the Walter Bradley Center for Natural and Artificial Intelligence!

* Note: William Dembski was not able to attend the opening gala because of the death of his mother, Ursula Dembski (December 21, 1931 — July 15, 2018): “Thankfully, as Christians, we live in hope of a new Heaven and Earth in which the tears and sorrow of this life are wiped away. I write this not as a trite repetition of Christian dogma but in firm conviction. My Mom was in a lot of pain and discomfort in her last months. She is now in a better place with her Lord and also with my Dad.”

