Will AI Liberate or Enslave Developing Countries?
Perhaps that depends on who gets there first with the technology
From electrical engineer Karl D. Stephan, who blogs at Engineering Ethics, on China’s offer to share with Zimbabwe its technology for constantly monitoring citizens:
Anything that assists the Chinese government in spying on its citizens, learning about their private as well as public actions, and controlling their behavior so that they conform to a model pleasing to the government is going to get a lot of support. And AI fits this bill perfectly, which is one reason why China is not only pouring billions into AI R&D, but exporting it to other countries that want to spy on their people too.
[Ryan] Khurana points out that Zimbabwe, an African country well-known for its human-rights abuses, has received advanced Chinese AI technology from a startup company in exchange for letting the firm have access to the country’s facial-recognition database. So China is helping the government of Zimbabwe to keep tabs on its citizens as well. Maybe Zimbabwe will come up with something like China’s recently announced Social Credit system, which is a sort of personal reliability rating that eventually every person in China will receive. Think credit rating, only one based on the government’s electronic dossier of your behavior: what stores you visit, what friends you have, what meetings you go to, what TV shows you watch, and whether you go to church. Karl D. Stephan, “Exporting enslavement: China’s illiberal artificial intelligence” at MercatorNet
It’s part of a long-term strategy:
Arrangements such as this are common under China’s Artificial Intelligence (AI) strategy, whereby Chinese private and state-controlled companies take advantage of the weak legal systems and low privacy standards of developing nations as part of the country’s effort to become a world leader in artificial intelligence by 2030. Ryan Khurana, “The Rise of Illiberal Artificial Intelligence” at National Review
Meanwhile, the Western world struggles with corporate invasions of privacy like Facebook’s Cambridge Analytica scandal: “Cambridge Analytica always denied using Facebook-based psychographic targeting during the Trump campaign, but the scandal over its data harvesting forced the company to close.” (The Guardian)
One wonders how many politicians even understand the problem, never mind where they stand on it.
See also: AI can mean ultimate Big Surveillance: That’s what we should really worry about. And the personalities behind these surveillance efforts are not advanced artificial entities but the usual suspects, armed with the usual intentions.
George Gilder: Life after Google will be okay. People will take ownership of their own data, cutting out the giant “middle man”