Five AI Predictions to Watch in 2020
We'll check on these a year from now
As 2019 ended, many people dug out their crystal balls. Here are five predictions for 2020 that we will check a year from now. They are taken from “Artificial intelligence predictions for 2020: 16 experts have their say” by Ellen Daniel at Verdict UK. Should be fun:
One: “Deepfakes will become a serious threat to corporations” – Jesper Frederiksen, VP and GM EMEA, Okta. Deepfakes are highly realistic AI misrepresentations of people and events that could be used to manipulate elections—or perhaps change brand loyalties. Deepfakes can be detected. But will staid corporations adapt or fall victim?
There is, however, a legitimate question of how much difference deepfakes can make, for several reasons: Perhaps any technology, however sophisticated, leaves some evidence of its existence—evidence that other technologies can find. More generally, as we noted earlier, the deepfaker’s target audience may simply want to believe the doctored images. For the same reasons, it may reject authentic accounts of the misdeeds of its favorites.
Two: “Ramp-up in autonomous vehicles.” Specifically, “removal of the on-board safety driver” – Danny Shapiro, Senior Director of Automotive, NVIDIA. Shapiro fudges a bit by talking about the whole decade, but let’s see if, by January 2021, the on-board safety driver has indeed vanished. Meanwhile, a more persuasive sign that the automated vehicle industry is maturing would be a diminished cowboy element. Yes, Elon, your name did come up in this discussion…
Our own #1 AI hype of 2019 here at Mind Matters News was Elon Musk’s phantom fleet of robotaxis for 2020. Such fun for investors with money to burn. Meanwhile, we’ll keep an eye on how well automated cars (Level 5) begin to address real transportation needs in 2020.
Three: “Voice technologies will infiltrate the office.” Specifically, “Voice assistants have established themselves as commonplace in our personal lives. But 2020 will see an increasing number of businesses turning to them to improve and personalise the customer experience.” – Alberto Pan, Chief Technical Officer, Denodo
Will chatbots prove less frustrating than the muzak punctuated by fits of corporate self-promotion that we all endure in the hapless helpline queue? Can they replace the “real human being” we so much need to talk to at that point? No harm trying, perhaps, but the history of chatbots is not encouraging. Consider, for example, Microsoft’s unfortunate chatbot progeny: Politically correct Zo was apparently an even more tiresome brat than politically incorrect Tay.
Four: “The ethics of AI” Specifically, “2020 will be the year research & investment in ethics and bias in AI significantly increases.” – Ashvin Kamaraju, CTO for Cloud Protection and Licensing activity at Thales. Kamaraju is referring to the problem of racist and sexist attitudes getting baked into the data that the AI works with.
One problem is, as Brendon Dixon has pointed out, “the delegation to a machine of what should be a human decision.” He adds, “Like a magnifying glass, a machine can help us see that which we might miss but it cannot—because there is no ghost in the machine—decide what to do.” Let’s see what happens.
Five: “Manipulating AI” Specifically, “2020 is going to be a tipping point, when algorithmic decision making AI will become more mainstream. This brings both opportunities and challenges, particularly around the explainability of AI. We currently have many blackbox models where we don’t know how it’s coming to decisions. Bad guys can leverage this and manipulate these decisions.” – Kevin Bocek, Vice President, Security Strategy & Threat Intelligence at Venafi
If algorithms make decisions no one understands, then when things go wrong, we have a brand new problem: Diligent research might not uncover the cause. “Bad guys” are inevitable, but in this case they might make a bad situation unresolvable. Think of the millions of dollars locked away forever in Bitcoin because only a dead man knew the code.
The good news is, we are not stuck with failed AI. We can do without it and make decisions later about better AI. After all, Amazon dumped the sexist recruiting program. Boeing scrapped the underperforming fuselage assembly robots. And one Kaiser Permanente hospital no longer tells patients they are dying via a video-linked TV mounted on a robotic cart. Hey, things could be worse. At least no one thinks that the last item was an improvement in patient care just because it was high tech.
Let’s see how the predictions fare this time next year.
See also: 2019 AI Hype Countdown #2: Big Data is our crystal ball! The biggest problem is that human behavior is not as predictable as the models imply. Many models are ridiculously simplistic, making the results worse than worthless. They become a way of solidifying biases.