#7 AI Can Create Great New Video Games All by Itself!

In our 2020 “Dirty Dozen” AI myths: it’s actually just remixing previous games
Our Walter Bradley Center director Robert J. Marks has been interviewing fellow computer nerds (our Brain Trust) Jonathan Bartlett and Eric Holloway about 12 overhyped AI concepts of the year. From AI Dirty Dozen 2020 Part II. Now here’s #7. Computers can create their own video games, no imagination involved! Or maybe… wait … don’t invest just yet …
“Computers can create their own video games” starts at 05:07. Here’s a partial transcript. (Show Notes and Additional Resources follow, along with a link to the complete transcript.)
Robert J. Marks: Okay. Number seven. AI can implement video games just by watching. This was from an article called “Learning to Simulate Dynamic Environments with GameGAN.”
Eric Holloway (pictured): Yeah, but you won’t really be selling these video games to make millions of dollars. It’s able to learn some kind of feedback matrix based on looking at the game screen and the players’ input. And so you get something that looks a little bit like PAC-MAN or a little bit like that game Doom. But it doesn’t stay coherent for very long. Walls will appear and disappear, and ghosts will pop up and disappear. So it’s not super coherent, but because you already know what’s going on with PAC-MAN, you can squint your eyes and say, “Yeah, that’s a PAC-MAN game.”
This video gives some sense of PAC-MAN, now 40 years old:
Robert J. Marks: Oh, so in other words, they train some artificial intelligence with a number of games and this artificial intelligence creates a game. Is that the idea?
Eric Holloway: Right. Yeah. And it’s not creating a new game. It’s basically just reproducing what it already learned.
So, they train on a whole bunch of screens of PAC-MAN and player input, and it just learns how to map the input to different screen frames and finds the gradient between those. So, what they can do with that is they can randomize it and come up with random variants of PAC-MAN, but still it remains PAC-MAN in general, just a much weirder kind of PAC-MAN …
This is what I see with pretty much all the convolutional neural network type results, like the GPT result, which I think we’ll talk about a little bit later. If it’s generating text, and you look at just a few words or a few sentences, or maybe at the paragraph level, and you squint your eyes, you can get something that makes sense out of that. But once you step out and take in the bigger picture, it falls apart, because the neural network is really good at learning these very closely related relationships, but it doesn’t really have a concept of the overall structure of anything.

And that’s what you see in these video games, too. In PAC-MAN, you move around, and within four or five squares you see pretty much the same maze. But once you leave an area and come back to it, the model starts misremembering what it came up with before. It’s like a bad dream of PAC-MAN.
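Holloway’s point — a model that only learns local frame-to-frame relationships has no concept of the overall maze — can be illustrated with a toy sketch. This is not the actual GameGAN architecture; the frame names, actions, and lookup-table “model” below are all hypothetical, chosen only to show why a purely local predictor misremembers areas it revisits:

```python
import random

# Toy, purely local next-frame predictor (hypothetical; NOT GameGAN itself).
# It learns (current frame, action) -> next frame from gameplay data,
# with no memory of where in the maze the player actually is.

# Two different maze areas that look identical locally: the same observed
# "corridor" frame is followed by different frames depending on the true
# (unobserved) global position.
training_data = [
    ("corridor", "right", "junction_A"),  # area 1
    ("corridor", "right", "dead_end_B"),  # area 2: same look, different truth
    ("junction_A", "up", "corridor"),
]

# "Training": collect every successor ever seen for each (frame, action) pair.
model = {}
for frame, action, next_frame in training_data:
    model.setdefault((frame, action), []).append(next_frame)

def predict(frame, action, rng):
    """Sample a plausible next frame. Because the model keeps no global
    state, ambiguous local observations yield inconsistent predictions."""
    return rng.choice(model[(frame, action)])

rng = random.Random(0)
# Revisit the same-looking corridor repeatedly: the predicted layout can
# flip between runs -- walls and junctions "appear and disappear," just
# like the incoherence Holloway describes in the generated PAC-MAN.
rollout = [predict("corridor", "right", rng) for _ in range(6)]
print(rollout)
```

Locally each predicted transition is something the model really saw in training; it is only across revisits that the contradictions show, which is why squinting at a few frames looks fine while the larger maze falls apart.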
This is early PAC-MAN (huge noise warning). The rest owes a great deal to subsequent human imagination:
Anyway, here’s the rest of the countdown to date. Read it and whistle. No, seriously, have fun over coffee:
#8 in our AI Hype Countdown: AI is better than doctors! Sick of paying for health care insurance? Guess what? AI is better! Or maybe, wait… Only 2 of the 81 studies favoring AI used randomized trials. Non-randomized trials mean that researchers might choose data that make their algorithm work.
#9: Erica the Robot stars in a film. But really, does she? This is just going to be a fancier Muppets movie, Eric Holloway predicts, with a bit more electronics. Often, making the robot sound like a real person is just an underpaid engineer in the back, running the algorithm a couple of times on new data sets. Also: Jonathan Bartlett wrote in to comment, “Erica, robot film star, is pretty typical modern-day puppeteering — fun, for sure, but not a big breakthrough.”
#10: Big AI claims fail to work outside the lab. A recent article in Scientific American makes clear that grand claims are often not followed up with great achievements. This problem in artificial intelligence research goes back to the 1950s and is based on refusal to grapple with built-in fundamental limits.
#11: A lot of AI is as transparent as your fridge. A great deal of high tech today is owned by corporations. Lack of transparency means that people trained in computer science are often not in a position to evaluate what the technology is and isn’t doing.
#12: AI is going to solve all our problems soon! While the AI industry is making real progress, so, inevitably, is hype. For example, machines that work in the lab often flunk real settings.
Show Notes

- 00:30 | Introducing Jonathan Bartlett
- 00:38 | Introducing Dr. Eric Holloway
- 01:25 | #8: “Is AI really better than physicians at diagnosis?” (Mind Matters News)
- 05:07 | #7: “Learning to Simulate Dynamic Environments with GameGAN” (GameGAN)
- 09:03 | #6: “GPT-3 Is ‘Mindblowing’ If You Don’t Question It Too Closely” (Mind Matters News), “Built to Save Us from Evil AI, OpenAI Now Dupes Us” (Mind Matters News), “There’s a subreddit populated entirely by AI personifications of other subreddits” (The Verge), and “Bot posing as human fooled people on Reddit for an entire week” (The Independent)
- 16:03 | #5: “Lack of Sleep Could Be a Problem for AIs” (Scientific American)
Additional Resources

- Jonathan Bartlett at Discovery.org
- Eric Holloway at Discovery.org
- #8: “Is AI really better than physicians at diagnosis?” (Mind Matters News)
- #7: “Learning to Simulate Dynamic Environments with GameGAN” (GameGAN)
- #6: “GPT-3 Is ‘Mindblowing’ If You Don’t Question It Too Closely” (Mind Matters News), “Built to Save Us from Evil AI, OpenAI Now Dupes Us” (Mind Matters News), “There’s a subreddit populated entirely by AI personifications of other subreddits” (The Verge), and “Bot posing as human fooled people on Reddit for an entire week” (The Independent)
- #5: “Lack of Sleep Could Be a Problem for AIs” (Scientific American)