Transhuman brains are the melding of hyper-advanced electronics and super-artificial intelligence (AI) with neurobiological tissue. The goal is not only to repair injury and mitigate disease, but also to enhance brain capacity and boost mental function. What is the big vision, the end goal — how far can transhuman brains go? What does it mean for individual consciousness and personal identity? Is virtual immortality possible? What are the ethics, the morality, of transhuman brains? What are the dangers?
Here’s a partial transcript and comments:
Max Tegmark: I think it’s pretty clear that artificial intelligence and the quest for ever more intelligent non-biological minds is going to ultimately become the best thing ever to happen to humanity or the worst thing ever to happen to humanity. And that’s why it’s just so motivating to think about these questions now and think about what concretely we can do to make it good because everything I love about civilization — you know, from the ability to build gigantic water pumps to everything else is the product of human intelligence, right? (1:28)
So if we can amplify our intelligence with either various kinds of upgrades to our brains or pure artificial intelligence outside our bodies, then we can use that extra intelligence boost to tackle all of the toughest challenges that we are stumped by in today’s society. And you can create a future where, really, life can flourish like never before. You know, not just for the next election cycle but for billions of years. And not just on this planet but throughout much of this amazing cosmos. (2:03)
Comment: It would only be either the best or worst thing that could happen to humanity if greatly increasing average human intelligence or creating an artificial intelligence that thinks like a human being were possible. But there are reasons for believing that neither is possible. Computers think in algorithms, and creativity, to take one example, is not a matter of algorithms. Also, why should we believe that either humans or machines will last in their present form — or any form — for billions of years, either individually or as a group?
Tegmark goes on to say, quite sensibly, that he favors building superintelligent machines over tinkering with human brains:
Max Tegmark: … But I also do think it’s going to turn out in the long run to be easier to build human level intelligence completely from scratch than it’s going to be to understand exactly how our brain works. You know, just like it turned out to be much easier to build an airplane than to build a mechanical bird. (3:00)
But then he gets into evolution:
Max Tegmark: You know, basically, because when Darwinian evolution develops something like flight or intelligence, it’s constrained to be incredibly energy efficient, which you don’t care about if you’re an engineer. And it’s constrained only to use the most common atoms in the periodic table that are abundant in nature, which again, you don’t care about as an engineer. And it’s most importantly constrained by only making things that can self-assemble, self-repair, which our laptops cannot. Right, as an engineer, instead you prefer simplicity, which evolution doesn’t care about. (4:10)
Comment: Note that Tegmark personalizes the processes of evolution (e.g., “doesn’t care about”), as if they were a human being thinking under difficult circumstances. Perhaps that is why he so easily credits the idea that machines too can have human intelligence.
In any event, he is mistaken about evolution preferring simplicity. As Michael Behe demonstrated in Darwin Devolves, the processes of evolution are far more likely to degrade complex cellular machinery than to build it up, simply because that’s… simpler. Devolution, as it is called, may be a less publicized process, but it is very common.
Meanwhile, even people who have no philosophical issues with evolution are beginning to express doubt about “self-assemble,” “self-repair,” and all that, as Tegmark so airily sets them forth. Tufts biologist Michael Levin, for example, asks: Where is anatomy coded in living systems? (Hint: It isn’t a blueprint.)
He says something very significant:
Michael Levin: … in general, you know, biologists don’t love to think about goal directed processes. The idea is, there’s supposed to be emergence and kind of emergent complexity. But this idea that things are working towards a goal the way that any navigational system fundamentally does is really not something that is very comfortable, certainly, for molecular biology. So how would something like this work? How could we have a navigating system that can have goals in anatomical space? (15:04)
In short, Levin makes clear that, according to theory, evolution is an impersonal series of random outcomes in a material world. It has no goals, just processes. But that is not what the evidence seems to show, hence his discomfort with the theory.
When science is not comfortable with the evidence, either something is wrong with the science or something is wrong with the evidence. One solution is to ignore — or deprecate — the evidence. Another is to do what Tegmark is doing: Talk as if evolution were equivalent to a thinking engineer, only with different goals, while still accepting it as an impersonal series of random outcomes. That avoids confrontation with Levin’s dilemma.
It’s not clear what evolution has to do with transhumanism; humans would, in theory, design the transhuman systems. But then perhaps Tegmark believes, as some thinkers do, that human consciousness is an accidental by-product of evolution and a figment of our minds. In that case, one could say that evolution is randomly producing what we take to be our consciously intended work. But Kuhn and Tegmark don’t open that can of worms.
Later, Tegmark takes up the question of the rights of AI:
Max Tegmark: If, as you said, one day we have artificial minds which can also be conscious and maybe experience suffering or positive emotions… (6:56)
Robert Lawrence Kuhn: You don’t know that’s possible. (7:00)
Max Tegmark: We don’t know whether that’s possible or not, but I think it’s carbon chauvinism to just take as an axiom to say that you can only have subjective experience if you’re made of carbon atoms or meat or whatever, and I think we have to… I mean, Churchill said the only thing we learn from history is that we don’t learn from history. But I hope we can learn from history on this one, because we have made exactly that mistake with slavery, saying they don’t feel pain or whatever, or women don’t have souls, or animals don’t feel pain. (7:32)
Comment: The obvious problem here is that slaves, women, and animals exist, and there is no reason to doubt that slaves and companion animals feel pain. If there is a doctrine of the soul according to which women do not have souls, it cannot be very widespread. Now, whether invertebrate animals feel pain is principally a question of how animal consciousness works. Research is increasingly showing that they do, but, given the communication constraints, that was never self-evident.
By contrast, we have no reason to believe that we can ever build machines that suffer as if they were life forms. Even if we could, why would we? Pain would be easy enough to leave out.
What strikes me about Tegmark’s approach to the questions is its fundamental lack of seriousness. The real fate of AI is surely not to become humanoids with emotions but to become the totalitarian’s favorite weapon against freedom, privacy, and personal life for humans, as it is in China.
You may also wish to read: Is it technically genocide if we kill a planetful of robots? Orville Season 3, Episode 9, features an EMP-like device that could wipe out the robotic Kaylon. While this story might have addressed the question of whether robots could ever be sentient beings, the writers chose to lecture us about prejudice instead. (Gary Varner)