
20 Ways AI Enables Criminals

If you cannot believe your eyes and ears, then how can you protect yourself and your family from crime?

As reported recently and relayed in this publication, a mom in Arizona described how criminals called to say they were holding her daughter for ransom, using artificial intelligence (AI) to mimic her daughter's voice perfectly, down to the word choices and sobs. Only because the mom found her daughter safe at home could she know the call was a scam.

Meanwhile, despite efforts to limit ChatGPT’s excursions into the dark side of human perversity, the wildly famous bot can be persuaded to discuss details of sordid sexuality. In one experiment with Snapchat’s MyAI chatbot, an adult pretending to be a 13-year-old girl asked for advice about having sex for the first time – in a conversation in which “she” said she was in a relationship with a 31-year-old man. The bot supplied advice and encouragement.

Business and finance can be scammed as well, of course. The FBI reportedly received a number of complaints in 2022 about people using “stolen information and deepfaked video and voice to apply to remote tech jobs.” In 2019, fraudsters used AI to mimic a company CEO’s voice, capturing even his slight German accent, to fool an executive into sending $243,000 to a fake account – the money disappeared.

While the click-bait media stories both entertain us and raise our concerns, the AI-driven cybercrime monster threatens peaceful, law-abiding societies like nothing that has come before.

Experts Predict and Rank AI Threats for Potential Crime Increases

Offsetting the buzz of ChatGPT excitement must be a sober recognition that sophisticated AI presents deep and complex threats to free and prosperous societies. These AI-enhanced threats were summarized in “AI-Enabled Future Crime,” published in the British journal Crime Science in 2020. Top thinkers in security and crime science, computer science, public policy, national defense, and policing assembled in 2019 at University College London to map out the predictable AI criminal threats. Using a multi-dimensional rubric, the experts developed a list of potential AI-enhanced crimes and rated their threat severity to society against four criteria:

Harm: The type and extent of damage to individuals, groups, or society generally (terror being a form of harm).

Criminal Profit: The nature of the criminal intention, and the amount of criminals’ financial benefit or the damage caused to victims.

Achievability: How feasible the crime would be, considering the number of avenues of attack along with the availability and expense of the technology.

Defeatability: The types and availability of methods to prevent, detect, or render the cybercrime unprofitable, considering whether those methods are highly technical, simple or complex, or require human behavior to change.

Using the rubric, the experts named 20 cybercrime threats that AI could amplify, grouped into three threat severity levels (high, medium, low), as follows.

High Threat Severity Level

1. Audio/video impersonation (for fraud, extortion, reputation damage, or security breaching)

2. Driverless vehicles as weapons

3. Tailored phishing (exploiting a trust relationship using very personal data)

4. Disrupting AI-controlled systems (food logistics, public utilities, traffic control)

5. Large-scale blackmail (exploiting the availability of harmful facts across so many sources)

6. AI-authored fake news (either based upon actual events or entirely fabricated)

Medium Threat Severity Level

7. Military robots

8. “Snake oil” (fraudulent services sold as genuine uses of AI or machine learning)

9. Data poisoning (manipulating the data available to AI and machine learning systems)

10. Learning-based cyber attacks (AI-enhanced target-specific and massive attacks)

11. Autonomous attack drones (individually or as swarms independent of direct human control)

12. Online eviction (denial of an individual’s access to computer-controlled or online resources, e.g., banks, credit cards, public utility services)

13. Tricking face recognition (either to avoid detection or to defeat security systems)

14. Market bombing (manipulating financial or stock markets via massive or high-frequency trading that sends signals triggering other AI-enhanced systems to act)

Low Threat Severity Level

15. Bias exploitation (false or distorted online rankings, or placement of negative or propagandized evaluations and commentary)

16. Burglar bots (small autonomous robots delivered into premises through small access points to retrieve keys or to open doors for human burglars)

17. Evading AI detection (techniques to thwart detection of evidence of unlawful activity)

18. AI-authored fake reviews (mimicking already-extant human false advertising campaigns)

19. AI-assisted stalking (machine learning systems to track individuals via social media or cell phones, predicting their actions and locations)

20. Forgery (AI-generated art or music attributed to known human authors)

Avoiding or Mitigating AI-Enhanced Crimes

Addressing the 20 threats in any detail goes beyond this article’s scope, but one fundamental takeaway for everyday citizens stands out: Do not trust unverified AI systems to be accurate, truthful, trustworthy, or reliable. Many of the threats depend upon people assuming AI systems are “good guys” or have already been designed to avoid wrongdoing. To the contrary, as discussed here previously, AI systems do what their programmers direct and enable them to do.

The University College London group observed that changing potential victims’ behavior is a fundamental way to thwart crime. Thus, for example, people must consciously:

  • not believe pop-ups in search engines or on websites
  • not assume a factoid posted on social media is true and then act upon it
  • not accept that a statement “quoted” from another source is accurate

Whether it is about stock trades, personal scandals, photos of things or events, assertions about history, health-related information, or product recommendations, being consistently skeptical about data on the Internet must become the norm until full transparency is achieved. We may hear but not fully believe a claimed fact until it is verified.

This very skepticism has led to two-factor authentication (2FA) in many vital online systems. When you log into your bank account, for example, you enter your ID and password, and the bank’s system then texts a code to your phone that you must also enter on the computer. Through 2FA, we verify that the computer system is trustworthy: it is the system we expect, and it confirms we are the right people for the account. Deploying 2FA-style methods for other Internet or AI system interactions could help reduce exposure to scams and deception.
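To make that second step concrete, here is a minimal Python sketch of the one-time-code portion of 2FA. The send_sms delivery step is hypothetical, and a real system would also add code expiry, rate limiting, and secure server-side storage; this only illustrates the idea of verifying something delivered over a separate channel.

import hmac
import secrets

def issue_code() -> str:
    # Generate a random six-digit code to deliver out of band (e.g., by text message).
    return f"{secrets.randbelow(1_000_000):06d}"

def verify_code(expected: str, submitted: str) -> bool:
    # Compare the codes in constant time so an attacker cannot learn digits from timing.
    return hmac.compare_digest(expected, submitted)

# After the password checks out, issue a code, deliver it to the user's phone,
# and accept the login only if the user types the same code back.
code = issue_code()
# send_sms(user_phone, code)   # hypothetical out-of-band delivery step
print(verify_code(code, "123456"))  # False unless the submitted code matches

The value of the second factor is that the code travels over a channel the attacker presumably does not control, so even a stolen password is not enough on its own.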

Adopting a 2FA mindset for anything important coming in via the Internet makes sense. “Don’t trust until you verify” gives individuals, businesses, and their computer systems a fighting chance against AI-enhanced criminality.


Richard Stevens

Fellow, Walter Bradley Center on Natural and Artificial Intelligence
Richard W. Stevens is a lawyer, author, and a Fellow of Discovery Institute's Walter Bradley Center on Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. He holds a J.D. with high honors from the University of San Diego Law School and a computer science degree from UC San Diego. Richard has practiced civil and administrative law litigation in California and Washington D.C., taught legal research and writing at George Washington University and George Mason University law schools, and now specializes in writing dispositive motion and appellate briefs. He has authored or co-authored four books, and has written numerous articles and spoken on subjects including legal writing, economics, the Bill of Rights and Christian apologetics. His fifth book, Investigation Defense, is forthcoming.
