Multiverse Physicist Max Tegmark Seeks AI That Checks News Bias

Naive people who truthfully claim to be acting only “for good” in trying to address bias in the news via AI are kidding themselves
Max Tegmark is probably better known as a multiverse cosmologist than as an AI specialist. The MIT physics professor told New Scientist in 1998 that “All possible universes exist, even triangular ones.” He also informed Scientific American in 2003 that “Not just a staple of science fiction, other universes are a direct implication of cosmological observations”:
Is there a copy of you reading this article? A person who is not you but who lives on a planet called Earth, with misty mountains, fertile fields and sprawling cities, in a solar system with eight other planets? The life of this person has been identical to yours in every respect. But perhaps he or she now decides to put down this article without finishing it, while you read on.

Max Tegmark, “Parallel Universes” at Scientific American
He posits four levels of multiverse, the fourth being “all that can be described by mathematical structures” (his own contribution to the discussion). There is no observable evidence for any universe other than our own.
But Tegmark has set all that aside to work on AI. In 2014, he co-founded the Future of Life Institute, which aims to “understand and mitigate existential risks to humanity, particularly those associated with the rise of AI.” He sees AI as the next stage of human evolution, as in his 2017 book, Life 3.0: Being Human in the Age of Artificial Intelligence.
Earlier, we noted that he has argued that “AI can be the best thing ever for humanity” (New Scientist) and that “All of today’s greatest problems can be solved with better technology, ultimately. And AI is a crucial part of that.” (VentureBeat)
Recently, he was interviewed by New Scientist about his thoughts on AI and the current focus of his work, which is using machine learning “for good” to identify news bias, a task he feels that humans flub:
There are these projects aiming to improve the quality of the news by having humans go in and fact-check and flag problems. But if you look more closely, you will see that some fact-checking sites find 95 per cent of errors in media outlets on the left side of the political spectrum, and other ones will only find errors in the media outlets on the right. It’s unclear exactly what criteria they use.
We decided to build something entirely automated. It’s a work in progress, but we use machine learning to classify news articles on all sorts of different metrics: by the topic that they are about, whether they are left or right, pro- or anti-establishment, in-depth or quite breezy, more inflammatory or quite nuanced. The tool is a bit like Google News, but with a bunch of sliders underneath, so you can adjust for what you want to read.

Stephanie Singleton, “Max Tegmark interview: ‘AI can be the best thing ever for humanity’” at New Scientist
He hopes that his sliders to adjust what you want to read will not contribute to the problem of echo chambers (people hearing only from accustomed and preferred sources). As he told Singleton, “My hope is that a user won’t just set their preferences once and for all, but exhibit some curiosity.”
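The slider idea he describes can be sketched in a few lines. Everything below is an invented illustration of the concept, not the actual tool: the metric names, scores, and tolerance rule are hypothetical, and the real classifier scores would come from machine learning models rather than hand-entered numbers.

```python
# Hypothetical sketch of a slider-based news filter. Each article carries
# classifier scores on several axes, normalized to the range -1.0 .. 1.0
# (e.g. left_right: -1 = far left, +1 = far right; depth: -1 = breezy,
# +1 = in-depth). Scores here are made up for illustration.
articles = [
    {"title": "Budget bill passes",    "left_right": -0.6, "depth":  0.8},
    {"title": "Celebrity feud erupts", "left_right":  0.1, "depth": -0.7},
    {"title": "Trade talks stall",     "left_right":  0.5, "depth":  0.6},
]

def filter_by_sliders(articles, sliders, tolerance=0.5):
    """Keep articles whose score on every slider axis falls within
    `tolerance` of the user's slider setting."""
    return [
        a for a in articles
        if all(abs(a[axis] - setting) <= tolerance
               for axis, setting in sliders.items())
    ]

# A reader who sets the sliders toward centrist, in-depth coverage:
picks = filter_by_sliders(articles, {"left_right": 0.0, "depth": 0.9})
```

The echo-chamber worry is visible even in this toy version: whatever the user dials in, the filter simply narrows the feed to match it, so escaping a bubble depends entirely on the user moving the sliders.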
From a news writer’s perspective, it’s hard to see what use his program would be. The advent of concepts like “fake but accurate,” “truthiness,” and “post-truth” in this decade signals intractable differences of opinion about what truth even is. An algorithm is not going to solve that.
Analysis sites like Snopes tend to get captured by one side in political controversies; indeed, last year, Snopes declared a ridiculous war on the Christian satire site, The Babylon Bee, apparently taking the satirical sketches for news items. Similarly, the process of assigning “Pinocchios” when questioning statements by public figures is easily corrupted. Political statements are full of ambiguities and nuances. Only those on the other side of an issue typically perceive a statement to be a “lie.”
The risk with well-intentioned ideas such as Tegmark’s is that power brokers may push to treat an AI-based fact/bias checker as infallible when it simply reflects the shared philosophical biases of its programmers. That’s why, for example, the US 1st Amendment reads “Congress shall make no law… abridging the freedom of speech, or of the press… ” The key word is “abridging.” Media shouldn’t need protection or support or monitoring; they should be free to publish what they see as news, though others may consider it “false” or “biased,” and take their chances. In short, naive people who truthfully claim to be acting only “for good” in trying to address bias in the news via AI are kidding themselves, if not others.
Tegmark’s broader agenda of “machine learning for good” is worth a look for the philosophy that underlies it:
Interviewer Stephanie Singleton asked some tough questions (a rarity in this subject area). For example,
Is building this sort of advanced “general” AI realistic, given that we don’t even understand how human intelligence works?
You could just as well ask, how could we possibly figure out how to build a flying machine before understanding how birds fly? Darwinian evolution gave us both flying birds and thinking animals, but it was very constrained: to only build solutions that could self-assemble, that could self-repair, that only used a handful of chemical elements, that were super-energy-efficient. When you remove all these biological constraints, you can often find much simpler solutions to the same problems.
I know some people think there’s something magical about intelligence, making it possible for it to exist only in human bodies. I don’t think so. I am a blob of electrons and quarks processing information in certain complex ways, and the key to intelligence is just the nature of that information processing. I would go so far as to predict that the way we are finally going to understand exactly how the human brain works is by building something simpler that is comparably smart.

Stephanie Singleton, “Max Tegmark interview: ‘AI can be the best thing ever for humanity’” at New Scientist
He thinks computers can be programmed to have emotions:
My own guess is that consciousness is simply the way information feels when it’s being processed in certain complex ways. I think scientists owe it to the world to figure out what those complex ways are.

Stephanie Singleton, “Max Tegmark interview: ‘AI can be the best thing ever for humanity’” at New Scientist
He also told Singleton that he thinks of consciousness as “the last bastion that has still refused to be captured by physics” and that it “can be fully understood in terms of information processing done by particles moving around.”
Above all, he is confident: He tells her, “The key to having a good future is to be able to formulate a vision that people around the world can really get on board with.” The fact that China is pushing AI as the fast route to totalitarianism does not seem to register. The naivete of many otherwise brilliant AI professionals is sometimes astounding.
Consciousness is two hard problems, not one. Psychology prof Gregg Henriques argues that consciousness “plays by a different set of rules than the language game of science”
China: Sophisticated surveillance decides who gets sent to Uyghur camps. The leak of documents from police in Karakax County in Xinjiang reveals the details of everyday life that can send a Uyghur to the camps.
Is Big Bang Theory’s Sheldon right re the multiverse? Robert J. Marks comments: Sheldon Cooper insists that in no universe would he dance with Penny. “Some claim, there is an infinite number of universes in the multiverse. That is ludicrous because there are no infinities in the physical world. Even if there were, Cantor’s theory of the infinite shows that, if there were an infinite number of contingencies, not all contingency combinations could be accounted for by an infinite number of universes.”
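Marks’s Cantor point can be stated a little more precisely (a sketch in standard set-theoretic notation, not part of his original comment). Cantor’s theorem says that for any set $S$, its power set is strictly larger:

```latex
|\mathcal{P}(S)| = 2^{|S|} > |S|
```

So if there were a countably infinite set $S$ of contingencies, the combinations of contingencies would form the power set $\mathcal{P}(S)$, which is uncountable. A countably infinite collection of universes could therefore realize at most countably many combinations, leaving uncountably many unaccounted for.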