I understand the panic many business leaders experience as they try to stay solvent while customers evaporate. Panic, however, is a poor teacher: AI-based automation will not only not solve all their problems, it may very well add to them. AI is not a magic box into which we can stuff them and make them disappear.
Big social media companies have long wanted to replace human content moderators with AI. COVID-19 quarantines have only intensified that discussion. But AI is far, far from ready to successfully moderate content in an age where virtual monopolies make single points of failure a frequent risk.
Building an AI entails moving parts of our intelligence into a machine. We can do that with rules, (simplified) virtual worlds, statistical learning… We’ll likely create other means as well. But as long as “no one is home”—that is, the machines lack minds—gaps will remain, and those gaps, without human oversight, can put us at risk.
Not only was Jean-François Bonnefon’s paper rejected by conventional anti-plagiarism software, but the rejection didn’t make any sense. Bonnefon, research director at the Toulouse School of Economics, was informed of “a high level of textual overlap with previous literature” (plagiarism) when he was citing scientists’ affiliations, standard descriptions, and papers cited by others—information he was obliged to cite accurately, according to a standard format. “It would have taken two [minutes] for a human to realise the bot was acting up,” he wrote on Twitter. “But there is obviously no human in the loop here. We’re letting bots make autonomous decisions to reject scientific papers.”
Media often co-operate with researchers’ inflated claims about machine learning’s powers of discovery. An ingenious “creative” approach to accuracy enables the misrepresentation, says data analyst Eric Siegel.
AI can certainly help scientists. But to understand why AI can’t do science on its own, we should take a look at NP-hard problems in computer science. The “hard” is in the name for a reason…
Irish playwright John Waters warns of a time when we might have to grant moral discretion to computer algorithms, just as Christians now grant to the all-knowing but often inscrutable decrees of God. Not likely.
One new issue that the cloud creates is that programmers are more often required to be “full stack” developers, Jonathan Bartlett explains. “Unfortunately, most programmers coming out of college have little to no system administration experience. That’s why this book is based on the ‘full stack’ concept, showing how system administration and programming relate to each other.”
We want our calculation to capture the intuition that if we have high accuracy and a small model, then we have high confidence of generalizing. Intuitively, then, we add the model size to the accuracy term and subtract this sum from the entropy of having absolutely no information about the problem.
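That intuition can be sketched in a few lines. This is a toy illustration, not the author’s exact formula: the function name and arguments are hypothetical, all quantities are assumed to be measured in bits, and “accuracy” is read here as the bits needed to encode the model’s errors (so higher accuracy means a smaller number).

```python
def generalization_confidence(accuracy_bits, model_size_bits, baseline_entropy_bits):
    """Toy confidence score: how far the cost of the model plus its errors
    falls below the entropy of knowing nothing about the problem.

    All arguments are in bits; the names are illustrative assumptions.
    A small model (low model_size_bits) with high accuracy (low
    accuracy_bits) leaves a large remainder, i.e. high confidence.
    """
    return baseline_entropy_bits - (model_size_bits + accuracy_bits)

# Example: a 50-bit model whose errors cost 10 bits to encode, against a
# 1000-bit no-information baseline, scores 940.
score = generalization_confidence(10, 50, 1000)
```

A bloated model or one with many errors eats into the baseline entropy, driving the score toward zero and signaling little confidence of generalization.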
One technique to avoid data snooping is based on the intersection of information theory and probability: an object’s probability is related to its information content. The greater an object’s information content, the lower its probability. We measure a model’s information content as the difference, in bits, between the log-probability that the data occurred by chance and the number of bits required to store the model. The negative exponential of that difference is the probability that the model’s fit occurred by chance. If the data cannot be compressed, then these two values are equal: the model has zero information, and we cannot know whether the data was generated by chance or not.
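The relationship above can be written out as a short sketch. The function and argument names are assumptions for illustration; both inputs are in bits, following the description that the model’s information is the gap between the data’s chance description length and the model’s size.

```python
def model_chance_probability(data_chance_bits, model_size_bits):
    """Probability that the model's fit to the data arose by chance.

    data_chance_bits: -log2 of the probability that the data occurred
        by chance (i.e. its chance description length in bits).
    model_size_bits: bits required to store the model.
    Both names are illustrative assumptions.
    """
    # The model's information content is the difference of the two.
    info_bits = data_chance_bits - model_size_bits
    # The negative exponential of that difference (base 2, since we
    # measure in bits) is the chance probability.
    return 2.0 ** (-info_bits)

# Incompressible data: the model is as large as the data's chance
# description, so it carries zero information and chance cannot be
# ruled out (probability 1).
p_incompressible = model_chance_probability(100, 100)

# A 60-bit model accounting for 100 bits of data carries 40 bits of
# information, making a chance fit astronomically unlikely (2**-40).
p_compressed = model_chance_probability(100, 60)
```

The more the model compresses the data, the larger its information content and the smaller the probability that its fit is a data-snooping artifact.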