Mind Matters Natural and Artificial Intelligence News and Analysis
Robot with Artificial Intelligence observing human skull in Evolved Cybernetic organism world. 3d rendered image

#2 Computers Can Be As Smart As Humans If We Crowdfund Them!

Eric Holloway: Y Combinator's Sam Altman is taking a crazy movement to its logical conclusion

So, in #3, we heard that AI knows when it shouldn't be trusted, even though no one else does. But now, what about #2? Sam Altman's leap of faith: that an AI will think like people:

Earlier this year, founder-investor Sam Altman left his high-profile role as the president of Y Combinator to become the CEO of OpenAI, an AI research outfit that was founded by some of the most prominent people in the tech industry in late 2015. The idea: to ensure that artificial intelligence is “developed in a way that is safe and is beneficial to humanity,” as one of those founders, Elon Musk, said back then to the New York Times.

The move is intriguing for many reasons, including that artificial general intelligence — or the ability for machines to be as smart as humans — does not yet exist, with even AI’s top researchers far from clear about when it might. Under the leadership of Altman, OpenAI, which was originally a non-profit, has also restructured as a for-profit company with some caveats, saying it will “need to invest billions of dollars in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers.”

Whether OpenAI is able to attract so much funding is an open question, but our guess is that it will, if for no reason other than Altman himself — a force of nature who easily charmed a crowd during an extended stage interview with this editor Thursday night, in a talk that covered everything from YC’s evolution to Altman’s current work at OpenAI.

Connie Loizos, “Sam Altman’s Leap of Faith” at TechCrunch (May 19, 2019)

Okay. It is one thing to be regarded as a force of nature by colleagues and another thing to actually be, just for example, the polar vortex …

Our nerds here at the Walter Bradley Center have been discussing the top twelve AI hypes of the year. Our director Robert J. Marks, Eric Holloway, and Jonathan Bartlett talk about overhyped AI ideas (from a year in which we saw major advances, along with inevitable hype). From the AI Dirty Dozen 2020 Part III, here's #2: As we saw, Sam Altman, president of Y Combinator, quit his job to help make human-like intelligence happen. Will it?

Our story begins at 16:02. A partial transcript is below. Show Notes and Additional Resources follow, along with a link to the complete transcript.

Robert J. Marks: Eric, what is going on here with Sam Altman? Who is he? And what’s his leap of faith, which is totally incorrect, I believe.

Eric Holloway: I would actually say Sam Altman is totally correct. He’s actually taking the AI trend to its logical conclusion because if AI is truly as great as it should be, we can actually reproduce human intelligence, and then it can feed into itself and then take off forever. Then the crazy claims he’s making here are actually correct. I would say it’s not Sam Altman that’s crazy; he’s just the logical conclusion of a crazy movement.

And he says stuff like, "I'm only going to focus on creating AI because once you get AI, it's going to embed absolutely everything else." He calls it the light cone of the future. And then he makes these funny venture capitalist sells, like instead of saying, "Hey, we're only going to give you a certain percentage of the profit," he says, "Well, once you get a hundred times return on what you invest in us, then we're going to have to give the rest to charity." He's obviously trying to undersell his over-promising. Pretty hilarious.

Robert J. Marks: This guy is no slouch. He is president of OpenAI or something like that?

Eric Holloway: He has a fantastic history as a great venture capitalist. He came up with a company called Loopt when he was just in his early twenties. They sold for millions. And then he took control of Y Combinator, which is one of the most successful venture capital firms in Silicon Valley, which has a pretty nice lean startup approach, or at least it used to. And then he took that approach and made it even better. He has a great background. And so that's why I say he's not crazy; it's the movement that he's heading up. It itself is crazy. And he's just taking it to its logical conclusion.

Robert J. Marks (pictured): A friend of the Bradley Center, Roman Yampolskiy, on April Fool’s, put out a tweet on social media. And he said, “This is incredible. Google fires all of their programmers because they have developed a super AI that will write all of the programs of the future.” And if you just think about that, it’s just really ridiculous. Yet, he got a lot of thumbs up and he was even contacted by people in the media that said, “We want to talk more about this.” And he said, “Look, it was a joke. It was simply a joke.”

Eric Holloway: Yeah. And so let's look at this from their perspective. If I told you, "Hey, I have this neat little black box, and you can plug anything you want into it. And this little black box will power it forever. It just creates energy out of nothing," no one would take me seriously. But what Sam Altman is claiming is exactly equivalent to that, only in information theory instead of energy. And actually, if he were right about information theory, then you could probably turn that into a source of infinite energy, too. So, it's essentially the perpetual motion machine for computer science.

Robert J. Marks: Yeah, that’s very interesting. And, of course, this idea of AI writing better AI that writes better AI assumes that AI is creative. … According to the Lovelace test, artificial intelligence has yet to be creative.

Eric Holloway: Yeah. And, in fact, the things we were just talking about, like the GAN generating games and GPT generating text (GPT is actually part of Sam Altman's company), all of these AI advances, even though they're pretty remarkable in themselves, illustrate exactly this. The only thing they're doing is regurgitating their training data, just with finer-grained interpolation between the data points. But it's all just reproducing what somebody else wrote. There is zero creativity in these AIs that have come out.
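Holloway's "interpolation between data points" claim can be illustrated with a toy model. This is a deliberately simplified sketch, not how GPT or GANs actually work: a "generator" that only mixes stored training examples can emit endless novel-looking samples, yet every output stays inside the range already spanned by its training data.

```python
import numpy as np

# Toy "generative model": each sample is a convex combination of two
# memorized training points. It can blend examples at ever finer grain,
# but it can never produce a value outside the extremes of its data.
rng = np.random.default_rng(0)
train = rng.uniform(-1.0, 1.0, size=(50, 3))  # 50 "memorized" examples

def generate(rng, train):
    """Sample by linearly interpolating between two random training points."""
    i, j = rng.integers(0, len(train), size=2)
    t = rng.uniform()
    return t * train[i] + (1 - t) * train[j]

samples = np.array([generate(rng, train) for _ in range(1000)])

# Every generated value lies within the training data's bounds.
assert samples.min() >= train.min() and samples.max() <= train.max()
```

Under this (admittedly crude) picture, no amount of sampling produces a point outside the convex hull of the training set, which is the sense in which the output "reproduces what somebody else wrote."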

Robert J. Marks: Wow, it's really an embrace of materialism and determinism, isn't it?

Eric Holloway: Yeah. Yeah. And the ironic thing is that the more they buy into materialism, the less they actually create.

Robert J. Marks: Right. And I think that our stance is well-grounded in computer science. And why people don't recognize this, I don't know. There are lots of people who believe AI will never be creative. This includes the recent Nobel Laureate, Roger Penrose, in his book, The Emperor's New Mind. And Satya Nadella, who is the CEO of Microsoft, said basically the same thing. He said, "In the future, we're going to do a lot of things with artificial intelligence, but creativity is always going to belong to the programmer." There are lots of people who understand the limitations of AI. Yet there is still this, I don't know, theology out there that we're going to reach this idea of a singularity. No, it isn't going to happen. It isn't going to happen.

Note: Here are six permanent limitations of artificial intelligence.


Well, here’s the rest of the countdown to date. Read them and think:

3 AI, we are now told, knows when it shouldn't be trusted! Gödel's Second Incompleteness Theorem says that, for any system that can reliably tell you that things are true or false, it cannot tell you that it itself is reliable. Holloway: If you just have an AI that says "Never trust me," it's always going to be right.

4 Elon Musk: This time autopilot is going to WORK! Jonathan Bartlett: I have to say, part of me loves Elon Musk and part of me can't stand the guy. Eric Holloway: Tesla is supposedly worth more than Apple now. Who said you can't make money with science fiction?

5 AI hype: AI could go psychotic due to lack of sleep. Well, that's what we hear from Scientific American, if we believe all we read. It seems to be an effort to make AI seem more human than it really is.

6 in our Top 12 AI hypes: A conversation bot is cool, if you really lower your standards. A system that supposedly generates conversation, but have you noticed what it says? Bartlett: You could also ask, "Who was President in 1600?" and it would give you an answer, not recognizing that the United States didn't exist in 1600.

7 AI Can Create Great New Video Games All by Itself! In our 2020 "Dirty Dozen" AI myths: It's actually just remixing previous games. Eric Holloway describes it as like a bad dream of Pac-Man. Well, see if it is fun.

8 in our AI Hype Countdown: AI is better than doctors! Sick of paying for health care insurance? Guess what? AI is better! Or maybe, wait… Only 2 of the 81 studies favoring AI used randomized trials. Non-randomized trials mean that researchers might choose data that make their algorithm work.

9 Erica the Robot stars in a film. But really, does she? This is just going to be a fancier Muppets movie, Eric Holloway predicts, with a bit more electronics. Often, making the robot sound like a real person is just an underpaid engineer in the back, running the algorithm a couple of times on new data sets. Also: Jonathan Bartlett wrote in to comment, "Erica, robot film star, is pretty typical modern-day puppeteering: fun, for sure, but not a big breakthrough."

10 Big AI claims fail to work outside lab. A recent article in Scientific American makes clear that grand claims are often not followed up with great achievements.

This problem in artificial intelligence research goes back to the 1950s and is based on refusal to grapple with built-in fundamental limits.

11 A lot of AI is as transparent as your fridge. A great deal of high tech today is owned by corporations.

Lack of transparency means that people trained in computer science are often not in a position to evaluate what the technology is and isn’t doing.

12 AI is going to solve all our problems soon! While the AI industry is making real progress, so, inevitably, is hype. For example, machines that work in the lab often flunk real settings.

Show Notes

Additional Resources

Podcast Transcript Download


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.