Ethics for an Information Society
Because machines can't learn to solve their own ethical problems
AI can benefit us by automating needed jobs that would otherwise go undone (because people lack the time, money, or energy for so many repetitive tasks). But we will always need to keep a firm hand on the decisions about what to automate and how. Otherwise, according to Mariarosaria Taddeo, deputy director of the Digital Ethics Lab at Oxford University, we risk this type of outcome:
Another issue is the potential for AI to unfairly discriminate. One example of this, says Taddeo, was COMPAS, a risk-assessment tool developed by a privately held company and used by the Wisconsin Department of Corrections. According to Taddeo, the system was used to decide whether to grant people parole and ended up discriminating against African-American and Hispanic men. When a team of journalists studied 10,000 criminal defendants in Broward County, Florida, it turned out the system predicted that black defendants pose a higher risk of recidivism than they actually do in the real world, while predicting the opposite for white defendants.
Abigail Beall, “It’s time to address artificial intelligence’s ethical problems” at Wired
Essentially, AI (machine learning) was probably faster and cheaper, but the whole point of the system was supposed to be justice, which, whatever the explanation, proved too difficult to calculate…
See also: Can machine learning lead to mass manipulation? Expect a perfect storm of malice, experts warn. In 2017, a group of 26 AI researchers met at Oxford and produced a report that offers a number of examples of malicious technologies of the near future.