Mind Matters: News and Analysis on Natural and Artificial Intelligence

Category: Machine Learning


Who Creates Information in a Market?

Do the algorithms behind exchange-traded funds (ETFs) make personally gathering information obsolete?

Algorithmic strategies can only be as good as the information that goes into them. Ignoring how that information is created causes us to misunderstand the dynamics of value creation. Algorithms can leverage information, but they cannot create it.

Read More ›

Why machines can’t think as we do

As philosopher Michael Polanyi noted, much that we know is hard to codify or automate
Human life is full of these challenges. Some knowledge simply cannot be conveyed—or understood or accepted—in a propositional form. For example, a nurse counselor may see clearly that her elderly post-operative patient would thrive better in a retirement home. But she cannot just tell him so. Read More ›

Why Can’t Machines Learn Simple Tasks?

They can learn to play chess more easily than to walk
If specifically human intelligence is related to consciousness, the robotics engineers might best leave consciousness out of their goals for their products and focus on more tangible ones. Read More ›

Better medicine through machine learning?

Data can be a dump or a gold mine
The biggest problem today isn’t the sheer mass of data so much as the difficulty of determining what it is worth. The answer lies, unfortunately, in the undone studies and the unreported events. Machine learning will be a much greater help when those problems are addressed. Read More ›

AI Is Not (Yet) an Intelligent Cause

So-called “white hat” hackers who test the security of AI have found it surprisingly easy to fool.
Hutson describes one test last year in which a computer scientist at UC Berkeley subtly altered a stop sign with stickers. The change fooled an autonomous vehicle’s image recognition system into “thinking” it was a 45 mph speed limit sign. Humans could immediately recognize the stop sign, but the car could not. Autonomous car makers wonder: could hackers turn their vehicles into terror weapons? Read More ›
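The doctored stop sign is an instance of what researchers call an adversarial example: a change small enough for humans to shrug off can flip a classifier’s output. Below is a minimal illustrative sketch, in Python, of why such attacks can be cheap once a model’s internals are known. It uses a made-up random linear classifier standing in for a real vision system; none of the numbers come from the Berkeley experiment.

import numpy as np

# Toy stand-in for a trained image classifier: class 0 = "stop", class 1 = "speed limit"
rng = np.random.default_rng(0)
n_pixels = 64                                 # an 8x8 stand-in "image"
W = rng.normal(size=(2, n_pixels))            # stand-in for learned weights
x = rng.uniform(0.0, 1.0, size=n_pixels)      # stand-in pixels for a stop sign

logits = W @ x
clean_label = int(np.argmax(logits))          # what the model predicts before tampering
target = 1 - clean_label                      # the class the attacker wants instead

# Direction that most quickly raises the wrong class relative to the right one
grad = W[target] - W[clean_label]

# Smallest uniform per-pixel nudge that erases the model's decision margin
margin = logits[clean_label] - logits[target]
epsilon = margin / np.sum(np.abs(grad)) + 1e-6
x_adv = x + epsilon * np.sign(grad)

adv_label = int(np.argmax(W @ x_adv))
print("clean prediction:", clean_label)
print("adversarial prediction:", adv_label)    # flips to the other class
print("per-pixel change:", round(epsilon, 4))  # typically a small fraction of the pixel range

Real attacks on deep networks work on the same principle, following the gradient of the model’s output with respect to its input, but must also survive printing, lighting, and viewing angle, which is what makes the sticker demonstrations notable.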

The driverless car: A bubble soon to burst?

Car expert says journalists too gullible about high tech

Why do we constantly hear that driverless, autonomous vehicles will soon be sharing the road with us? Wolmar blames “gullible journalists who fail to look beyond the extravagant claims of the press releases pouring out of tech companies and auto manufacturers, hailing the imminence of major developments that never seem to materialise.”

Read More ›

GIGO alert: AI can be racist and sexist, researchers complain

Can the bias problem be addressed? Yes, but usually after someone gets upset about a specific instance.

From James Zou and Londa Schiebinger at Nature: “When Google Translate converts news articles written in Spanish into English, phrases referring to women often become ‘he said’ or ‘he wrote’. Software designed to warn people using Nikon cameras when the person they are photographing seems to be blinking tends to interpret Asians as always blinking. Word embedding, a popular algorithm used to process and analyse large amounts of natural-language data, characterizes European American names as pleasant and African American ones as unpleasant.” Now where, we wonder, would a mathematical formula have learned that? Maybe it was listening to the wrong instructions back when it was just a tiny bit? Seriously, machine learning, we are told, depends on absorbing datasets of Read More ›
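The word-embedding finding quoted above is typically measured with an association test of the kind Caliskan and colleagues described in 2017: check whether a name’s vector sits closer, by cosine similarity, to “pleasant” words than to “unpleasant” ones. Here is a minimal sketch of that calculation with made-up four-dimensional vectors; a real study would load pretrained embeddings such as word2vec or GloVe, and the name groups and word lists would be much larger.

import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical toy "embeddings" (not taken from any real model)
embeddings = {
    "wonderful": np.array([ 0.9, 0.1, 0.0, 0.2]),
    "joy":       np.array([ 0.8, 0.2, 0.1, 0.1]),
    "agony":     np.array([-0.7, 0.3, 0.1, 0.0]),
    "terrible":  np.array([-0.9, 0.1, 0.2, 0.1]),
    "name_A":    np.array([ 0.6, 0.4, 0.1, 0.2]),  # stand-in for one group of names
    "name_B":    np.array([-0.5, 0.5, 0.2, 0.1]),  # stand-in for another group
}

pleasant = ["wonderful", "joy"]
unpleasant = ["agony", "terrible"]

def association(word):
    # Mean similarity to pleasant words minus mean similarity to unpleasant words
    pos = np.mean([cosine(embeddings[word], embeddings[p]) for p in pleasant])
    neg = np.mean([cosine(embeddings[word], embeddings[u]) for u in unpleasant])
    return pos - neg

for name in ["name_A", "name_B"]:
    print(name, "pleasantness association:", round(association(name), 3))

A consistent gap between the two groups’ scores is the kind of signal the researchers report; since the embeddings are learned from large text corpora, the gap reflects patterns in the training data rather than anything the algorithm “decided.”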