
Sometimes the ‘Bots Turn Out To Be Humans

That “lifelike” effect was easier to come by than some might think

Woebot inventor Alison Darcy

For various reasons, companies sometimes pretend to be using AI or machine learning (“pseudo-AI”) when they are actually relying on human employees. One reason is that they have promised potential investors more high tech than they can deliver. Sometimes, as we learned recently from The Guardian, it gets a bit sticky:

In the case of the San Jose-based company Edison Software, artificial intelligence engineers went through the personal email messages of hundreds of users – with their identities redacted – to improve a “smart replies” feature. The company did not mention that humans would view users’ emails in its privacy policy.

We can certainly hope that identities were redacted…

In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them.

Olivia Solon, “The rise of ‘pseudo-AI’: how tech firms quietly use humans to do bots’ work” at The Guardian

Maybe someone knows where you shop, what you buy, and where it was delivered… Not a big deal except that you didn’t know that anyone was looking.

A Canadian technology writer offers some additional reasons for pseudo-AI from industry pros:

“By doing things manually and less efficiently at first, companies can ensure that they are investing in technology which solves a valuable problem for customers.”

Another often overlooked reason why tech companies would turn to humans in the process of developing automated systems is that implementing AI solutions is really difficult.

“We’ve seen great progress in lab-like settings, but it is hard to actually implement AI in the real world,” says [Robert] Seamans.

Ramona Pringle, “Artificial intelligence is on the rise — but not without help from humans” at CBC News

Key to the strategy, of course, are the diligent humans who keep costs under control by earning the minimum wage.

Alison Darcy, the inventor of a therapy bot (the Woebot, pictured right), calls the trend the “Wizard of Oz technique”, after the pseudo-wizard.

“There’s already major fear around AI and it’s not really helping the conversation when there’s a lack of transparency,” she added, and it is evident that the technology could be much more dangerous if it were to fall into the wrong hands.

“Why are humans pretending to be bots? The rise of pseudo-AI in tech” at M360

It’s not clear whose hands would be the right ones.

Hat tip: Eric Holloway

See also: Screen Writers’ Jobs Are Not Threatened by AI. Unless the public starts preferring mishmash to creativity (Robert J. Marks)

and

AI That Can Read Minds? Deconstructing AI Hype. The source for the claims seems to be a 2018 journal paper, “Real-time classification of auditory sentences using evoked cortical activity in humans.” The carefully described results are indeed significant, but what the Daily Mail article didn’t tell you sheds a rather different light on the AI mind reader. (Robert J. Marks)

