Have a Software Design Idea? Kaggle Could Help It Happen for Free
Okay, not exactly. You have to do the work. But maybe you don’t have to invent the software.
In a recent Mind Matters podcast, “Artificial General Intelligence: the Modern Homunculus,” Walter Bradley Center director Robert J. Marks, a computer engineering prof, spoke with Justin Bui from his own research group at Baylor University in Texas about what’s happening — and isn’t happening — in artificial intelligence today. The big story turned out to be all the free software you can use to advance your own projects. This time out, Dr. Bui focuses on what open source (free) Kaggle software can do for you, including competitions.
Call it science non-fiction, if you like…
This portion begins at 12:58 min. A partial transcript and notes, Show Notes, and Additional Resources follow.
Justin Bui: Kaggle is owned by Google; I believe they were acquired somewhat recently. It’s an open source platform that provides computational resources to data scientists and machine learning engineers. But of course, anyone has access to it; if you have an email address, you can get access to it.
It’s a website that allows people to post competitions so there are a lot of design competitions of various types. There’s image classification, stock price prediction, housing price prediction…
One example [of a competition] is that the NFL has a helmet detection competition going on right now, in cooperation with Amazon Web Services.
Robert J. Marks: Okay, wait. NFL helmet detection?
Justin Bui: They’re trying to develop a system that can detect and track helmet locations for players… to detect illegal hits — like targeting, for example — by tracking helmets and detecting helmet-to-helmet collision. So part of it’s player safety.
They’re looking at ways to automate this. If you think about a human in the system, a referee has to watch how much of the field? Well, really, all of it. So they miss some things from time to time. And when you think about it from a player safety perspective, you want to be minimizing or, ideally, completely eliminating some of those rough shots. If you developed an AI system to do that, you could shift that burden…
Robert J. Marks: I can also see this being used by people such as neuroscientists to study the impact of these collisions on brain development. We had a guest a while back, Yuri Danilov, a neuroscientist who did just fascinating work… and he refused to let his kids play football, until his oldest son finally did get on a team. And I said, “Well, what happened? I thought you forbade it.” He said, “I was outvoted,” so his kid literally played football.
But I could see tracking this in real time would be really interesting because you could measure, for example, the acceleration of the helmet, you could do the… Let me get a little nerdy here: I think in beginning physics, everybody talks about distance, velocity, acceleration. And then I learned, when I was working for Boeing, that each one of those is related by a higher derivative in calculus.
So you start with the distance, you get the velocity, you get the acceleration. And then, what is the derivative of acceleration? It’s something called jerk. And if your acceleration changes really quickly, you have a jerk associated with you. I could see [this] being used with AI in order to monitor jerk, which I think neuroscientists would find very interesting in terms of tracking potential brain damage.
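Marks’s chain of derivatives is easy to sketch numerically. In the minimal Python example below, the position samples and the time step are invented for illustration (a cubic position trace, so the jerk comes out constant); each derivative is estimated by a repeated finite difference:

```python
import numpy as np

# Hypothetical position samples of a tracked helmet, one per time
# unit (dt = 1 for simplicity). The trace follows x = t**3, so the
# third derivative (jerk) should come out constant at 6.
dt = 1.0
x = np.array([0.0, 1.0, 8.0, 27.0, 64.0])

v = np.diff(x) / dt   # velocity: first derivative of position
a = np.diff(v) / dt   # acceleration: second derivative
j = np.diff(a) / dt   # jerk: third derivative

# v is 1, 7, 19, 37; a is 6, 12, 18; j is constant at 6.
```

On real helmet-tracking data, a sudden spike in `j` between consecutive frames is exactly the abrupt change in acceleration Marks describes.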
Who is involved in this, is it universities, is it companies, is it both?
Justin Bui: One thing about platforms like Kaggle is, it really is anybody. Anybody who wants to participate can join, so from my observation, it’s a lot of individuals. You can actually join teams and coordinate across the world, really, if you’d like; there are several teams that are multinational. I think the larger thing to take away from that is that it’s crowdsourcing the development, so to speak. So a company can fork over what sounds like a pretty significant amount of money — but in the grand scheme of things, from a company perspective, it’s relatively small — and get basically unrestricted access to the IP that’s developed, for cheap.
Robert J. Marks: Wow, that is really interesting. These are companies which are, if you will, outsourcing their R and D to competitions and probably getting results a lot cheaper than hiring a bunch of experts and trying to tackle the problem locally.
Justin Bui: Yeah, exactly.
Robert J. Marks: You mentioned to me that, in monitoring these things on Kaggle, you saw not an advancement of AGI [artificial general intelligence] but, in a way, a reversal of the AGI. Could you repeat what you told me about that?
Justin Bui: Yeah, sure. I think to summarize it, what we’re seeing is, like you said, it’s a 180. You’re really seeing almost this hyperspecificity in a lot of the applications. If you go through and you observe a lot of the competitions that have closed where many of the competitors have shared their code, you see a lot of evidence of transfer learning so, of course, there’s some network reuse and stuff.
Robert J. Marks: Wait, just elaborate, just a second on transfer learning. Here’s the way I understand transfer learning. Suppose that you had a neural network that was trained on dogs, that you trained this neural network to detect dogs, and you would have to spend a heck of a lot of time figuring out this neural network and training this neural network to recognize dogs. Now, you want to come along and you want to classify cats. Well, it turns out that classifying cats is similar to classifying dogs, so why would you have to go back and start again from scratch? Why couldn’t you use part of that dog neural network to train the cat neural network? And the art of doing that is referred to, I believe, as transfer learning. Is that fair?
Justin Bui: Yeah, that’s a great example: “Hey, why reinvent the wheel when I have a system that gives me 85% of a wheel?” So yeah, you’re spot on.
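The dog-to-cat idea can be sketched in a few lines. In this toy Python example, everything is invented for illustration — the data is synthetic and the “pretrained” network is just a random layer standing in for one learned on the first task. The transfer-learning move is the same as in the real thing, though: freeze the existing feature layer and train only a fresh output layer for the new task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this layer was learned on the "dog" task. We freeze it and
# reuse it as a feature extractor. (All shapes and weights here are
# toy inventions, not a real pretrained network.)
W_frozen = rng.normal(size=(8, 4))

def features(X):
    # Frozen ReLU features: never updated during the new training run.
    return np.maximum(0.0, X @ W_frozen)

# Tiny synthetic dataset standing in for the new "cat" task.
X = rng.normal(size=(64, 8))
y = (X[:, 0] > 0).astype(float)

# Transfer learning: train ONLY a new output layer on top of the
# frozen features, instead of relearning everything from scratch.
F = features(X)
w_out = np.zeros(4)
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(F @ w_out)))   # sigmoid output
    w_out -= 0.1 * F.T @ (p - y) / len(y)    # logistic-loss gradient step

acc = np.mean(((1.0 / (1.0 + np.exp(-(F @ w_out)))) > 0.5) == y)
```

Because only the small output layer is optimized, training is far cheaper than fitting the whole network — which is exactly the “85% of a wheel” Bui describes.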
Robert J. Marks: Okay, good. Despite all of these challenges with AGI and your observation that it’s going the other way, maybe we’re waiting for a new theoretical breakthrough, which I don’t think will ever be achieved. But nevertheless, there are people who believe that we are making steps towards AGI… The term I used for this was the so-called keyboard engineer. These are people who, when they’re looking for a solution, don’t sit down and look at the theory; rather, they go directly to the keyboard. You had some interesting comments on that. Could you elaborate on that?
Justin Bui: Yeah, sure. It’s one of those things that some of my colleagues and I have jokingly referred to as Stack Overflow engineers.
Robert J. Marks: Stack Overflow, that’s a website, right?
Justin Bui: It’s a forum where people can post errors or issues that they’re having with their code, and it’s a community-sourced solution house, if you will. But it’s pretty funny because some of the colleagues I’ve had throughout the years have joked about, “Okay. Hey, we just got this problem, let me go check Stack Overflow really quick. Chances are somebody’s done it before. I’ll just reuse it.” And so I think that feeds into some of the AGI belief as well. “Oh well, OpenAI has produced X, Y, Z neural networks,” and, “Oh hey, Google and Google’s Brain team have published on A, B, C works, and if we can start merging these together, the system will just become super intelligent.” And so I think in some regards, it’s fed a lot by what people are observing from the major companies.
It was funny because when I think of AGI, I think of HAL 9000 or Skynet or, for those of you more into more recent movies, Ultron — these systems that seemingly have limitless resources and infinite knowledge and, obviously, evil intentions. I think that’s one of the things that helps capture people’s attention and their creativity as well.
But, I think, at the end of the day, Bob, people just go straight to the keyboard. They don’t sit down thinking about how to approach a problem. How do we solve it from the theory perspective and then start deploying it? It’s really more, “Well, okay. I need to go make a classifier that tells me the difference between kumquats and giraffes,” and they just sit down and start coding.
Skynet (in the movies, not at open source websites).
Robert J. Marks: And so they import these things and download the software and use the software as a black box, without looking at the deeper theory of how it is created and the computer science of where it came from and the possibilities of doing AGI in the future.
They don’t address some of the things we talk about on Mind Matters News: they don’t address the Lovelace test for creativity, which has never been demonstrated in artificial intelligence. They don’t talk about even simple counterarguments, like Searle’s Chinese Room…
Robert J. Marks: Okay, any final comments?
Justin Bui: In some regards, AI and machine learning have become catchphrases throughout the world. I used to joke that AI is very similar to the word “synergies” in the business world. Synergies, everybody wants synergies. The new thing is everybody wants machine learning. They don’t necessarily understand what it is, it’s… Typically, it’s, “I was just handed an assignment, I’ve got two weeks to do it. I’m going to go to my keyboard and start writing some code.”
Anyway, Kaggle is real and Ultron isn’t (except, of course, in the movies).
Here are Parts 1 and 2 of Episode 159, featuring Robert J. Marks and Justin Bui: If not Hal or Skynet, what’s really happening in AI today? Justin Bui talks with Robert J. Marks about the remarkable AI software resources that are free to download and use. Free AI software means that much more innovation now depends on who gets to the finish line first. Marks and Bui think that will spark creative competition.
Have a software design idea? Kaggle could help it happen for free. Okay, not exactly. You have to do the work. But maybe you don’t have to invent the software. Computer engineer Justin Bui discourages “keyboard engineering” (trying to do it all yourself). Chances are, many solutions already exist at open source venues.
In Episode 160, Sam Haug joined Dr. Marks and Dr. Bui for a look at what happens when AI fails. Sometimes the results are amusing. Sometimes not. They look at five instances, from famous but trivial right up to one that nearly ended the world as we know it. As AI grows more complex, risks grow too.
In Episode 161, Part 1, Marks, Haug, and Bui discuss the Iron Law of Complexity: Complexity adds but its problems multiply. That’s why more complexity doesn’t mean more things will go right; without planning, it means the exact opposite. They also discuss how programmers can use domain expertise to reduce the number of errors and false starts.
In Part 2 of Episode 161, they look at the Pareto tradeoff and the knowns and unknowns:
Navigating the knowns and the unknowns, computer engineers must choose between levels of cost and risk against a background with some uncertainty. Constraints underlie any engineering design — even the human body.
- 00:44 | The Homunculus
- 03:21 | Introducing Justin Bui
- 04:10 | AI Software
- 06:04 | Fast AI
- 12:58 | Deepfake Technology
- 20:03 | Transfer Learning
- 23:25 | Rapture of the Nerds
- 28:59 | Little Faith in AGI
- Karl Shuker, Grow Your Own Homunculus
- Dr. Pretorius and some of his creations from The Bride of Frankenstein.