At Scientific American: Starve artificial intelligence!

Silicon Valley authors seek to limit AI's power. Jonathan Bartlett doesn't think it really has the power they are worried about.
William Davidow, a Silicon Valley pioneer at Intel, and Michael S. Malone, one of the Valley's best-known journalists, have come up with a scheme to restrain the power of AI in bad hands: Starve it of data. They are the authors of The Autonomous Revolution: Reclaiming the Future We've Sold to Machines (February 2020), and they explain their concerns at Scientific American:
Artificial intelligence is still in its infancy. But it may well prove to be the most powerful technology ever invented. It has the potential to improve health, supercharge intellects, multiply productivity, save the environment and enhance both freedom and democracy. But as that intelligence continues to climb, the danger from using AI in an irresponsible way also brings the potential for AI to become a social and cultural H-bomb. It’s a technology that can deprive us of our liberty, power autocracies and genocides, program our behavior, turn us into human machines and, ultimately, turn us into slaves. Therefore, we must be very careful about the ascendance of AI; we don’t dare make a mistake. And our best defense may be to put AI on an extreme diet.

William Davidow and Michael S. Malone, “Don’t Regulate Artificial Intelligence: Starve It” at Scientific American
They don’t think current efforts are severe enough:
Europeans now have the “right to explanation,” which requires a humanly readable justification for all decisions rendered by AI systems. Certainly, that transparency is desirable, but it is not clear how much good it will do. After all, AI systems are in constant flux. So, any actions taken based on the discovery of an injustice will be like shaping water. AI will just adopt a different shape.

William Davidow and Michael S. Malone, “Don’t Regulate Artificial Intelligence: Starve It” at Scientific American
Thus, concerned that we are “getting closer to creating machines capable of artificial general intelligence,” they propose starving rather than feeding the beast:
We think a better approach is to make AI less powerful. That is, not to control artificial intelligence, but to put it on an extreme diet. And what does AI consume? Our personal information…
How do we choke down the flow of this personal information? One obvious way is to give individuals ownership of their private data. Today, each of us is surrounded by a penumbra of data that we continuously generate. And that body of data is a free target for anyone who wishes to capture and monetize it. Why not, rather than letting that information flow directly into the servers of the world, instead store it in the equivalent of a safe deposit box at an information fiduciary like Equifax? Once it is safely there, the consumer could then decide who gets access to that data.

William Davidow and Michael S. Malone, “Don’t Regulate Artificial Intelligence: Starve It” at Scientific American
They go on to sketch out a proposed system. Jonathan Bartlett, Director of the Blyth Institute and a Walter Bradley Fellow, offers some thoughts in response:
On the whole, Davidow and Malone have pointed out real problems in our current usage of AI, especially referring to “algorithmic prisons.” However, they get the underlying cause wrong. It is not because AI is too powerful. If it were sufficiently powerful, then algorithmic prisons might be beneficial. The problem is that we treat it as if it is powerful. AI is the modern idol. We worship at its feet, but it actually does very little for us. AI is useful as a servant, but ridiculous as a master.
In the end, the real answer is for people to actually understand AI and its limitations. We need to, sociologically, stop pretending that algorithms will solve all our problems. Yes, they are cool; no, they aren't foolproof. And human planning usually accounts for higher-order effects that machines literally can't see.
The other problem that Davidow and Malone point out is a real one — ownership of private data. However, this is a much stickier issue than they let on. Knowing which data belongs to whom and for what purposes is actually a huge issue. If you interact with my business, whose data is that? If the business isn’t allowed to sell it, wouldn’t that conversely also mean that the person wouldn’t be allowed to post negative reviews?
In short, rather than fearing AI as if it were a god, we need to recognize that it is actually much more limited than we give it credit for, and to recognize that there is currently no solution to data privacy issues that is fair to all parties.
No current AI has anything like personal intelligence or goals to destroy or enslave people. That's all coming from the users. But, as Bartlett points out, Davidow and Malone are right to focus attention on such misuses; for example, they discuss the high-tech surveillance and social credit system in China and corporate data grabs in North America.
But it’s not AI that needs to be put on a power diet; it’s some of the users.
Further reading:

The danger AI poses for civilization: Why must Google be my helicopter mom? Technology centralizes by its very nature. To see what that means, consider a non-digital tool, say, a shovel. The shovel doesn't keep track of your shoveling, read your biometrics, and store a file on you-as-shoveler somewhere. It's a thing, an artifact. The new digital technology is itself the heart of the surveillance problem; no Matrix could be built with artifacts. (Analysis)

Superintelligent AI is still a myth. Neither the old classical approaches nor the new data scientific angle can make any headway on good ol' common sense. (Analysis)