Mind Matters Natural and Artificial Intelligence News and Analysis

If Not Hal or Skynet, What’s Really Happening in AI Today?

Justin Bui talks with Robert J. Marks about the remarkable AI software resources that are free to download and use

In a recent Mind Matters podcast, “Artificial General Intelligence: the Modern Homunculus,” Walter Bradley Center director Robert J. Marks, a computer engineering professor, spoke with Justin Bui of his own research group at Baylor University in Texas about what is — and isn’t — really happening in artificial intelligence today. Some of the more far-fetched claims remind Dr. Marks of the homunculus, the “little man” of alchemy.

So what are the AI engineers really doing and how do they do it? Call it science non-fiction, if you like…

This portion begins at 00:44. A partial transcript, Show Notes, and Additional Resources follow.

Robert J. Marks: Isaac Newton was the genius who founded classical physics. He also invented calculus, among other achievements. Newton was a student of the Bible, specifically Bible prophecy, and he wrote extensively on his research.

Newton also dabbled in alchemy. Now, most think of alchemy as the quest to turn lead into gold but there’s a lot more to alchemy. Some in alchemy pursued the creation of a so-called homunculus, a little person created in a test tube. If you watched the 1935 classic monster movie, The Bride of Frankenstein, you see a scene where the mad scientist, Dr. Pretorius, shows off his homunculi to Henry Frankenstein:

No one, to date, has created the alchemist’s dream of a homunculus. And if you exclude, maybe, cloning, I don’t think they ever will. The search for the homunculus today has been replaced by a search for artificial general intelligence, or AGI.

What does AGI do? AGI seeks to duplicate and exceed what you and I do. If artificial general intelligence is achieved, some say we will become pets of computers. The point where AI becomes superior to humans is called the Singularity by Google’s Ray Kurzweil. If this happens, watch out. AI will write better software that writes better software that writes better software on an endless staircase of ever-increasing intelligence.


There are smart people who believe this will happen. But AGI is not happening, and there’s growing evidence it never will. AI can be written to mimic many human traits, but there are some human characteristics that will never be duplicated by AI. We cover this a lot on Mind Matters News.

Properly defined, these properties that will never be achieved include creativity, sentience, and understanding. In fact, AI seems to be moving in the opposite direction. More and more human expertise is being folded into the AI software. The added intelligence in AI is not due to AI but to human creativity and ingenuity infused into the software by the programmer.

Note: One person who more or less agrees with Marks’s assessment is iconic Silicon Valley venture capitalist Peter Thiel, who spoke at COSM 2021. He told the gathering that the whole transhumanist movement is slowing down. But, he added, what is happening should sober us up a lot: There’s no road to computers that think like people that wouldn’t take us through 24/7 computer surveillance first. Is that really what we want?

Robert J. Marks: To talk about these things, our guest today is Dr. Justin Bui. Justin is a freshly minted PhD from my research group at Baylor University. He specializes in, among other things, artificial intelligence and deep learning. Before we go into some trends in artificial intelligence, what I’d like to do is describe the playing field that you have been watching.

The remarkable AI software resources that are free to download and use

Robert J. Marks: So first, let’s talk about the software, Justin. AI software is widely available, it’s free, and it’s powerful. It’s available to anyone on the net. Could you go through some of the AI software and what it does?

Justin Bui: Sure. It’s an interesting playing space. It seems like every day, there’s a new tool that comes out that makes everybody’s lives just a little bit easier. The big ones, of course, are PyTorch and TensorFlow, driven by Facebook and Google respectively. They make up probably 75% to 80% of a lot of the machine learning systems out there, if not more. They’re very easy to use.
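
To give a sense of how little code these frameworks require, here is a minimal PyTorch classifier. This is purely an illustrative sketch (the layer sizes and `SmallClassifier` name are invented, not from the episode):

```python
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    """A toy image classifier: one conv layer, pooling, and a linear head."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),  # 1-channel input, 8 filters
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                    # pool each feature map to 1x1
            nn.Flatten(),                               # (batch, 8, 1, 1) -> (batch, 8)
            nn.Linear(8, num_classes),                  # class scores
        )

    def forward(self, x):
        return self.net(x)

model = SmallClassifier()
batch = torch.randn(4, 1, 28, 28)   # four fake 28x28 grayscale images
logits = model(batch)
print(logits.shape)  # torch.Size([4, 10])
```

A few lines define a working network; the frameworks handle gradients, GPU placement, and optimization behind the scenes.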

And going hand in hand with that is the use of free web resources. A lot of systems out there provide free computational resources, basically virtual machines that anybody can sign up for and use. They can design, deploy, evaluate any machine learning model that they like. And it’s actually quite interesting to see how prevalent some of these systems have become.

Robert J. Marks: One of the fascinating things is the free available computation. AI models like deep neural networks, for example, can take a long time to train. And so you’re crunching the computer again and again and again… And yet, fast software resources, available on the web, allow you to do this in the cloud. And that, to me, is just amazing, that people are making this available for free. There’s also something called fast.ai. What is fast.ai?

Justin Bui: fast.ai is a wrapper for PyTorch, with a lot of pre-built models. It’s meant for rapid proof-of-concept testing, if you will. It takes advantage of a lot of transfer learning techniques. Really, anybody can pick up a Jupyter Notebook or a little bit of Python code and follow along on one of their tutorials and effectively deploy a classification model or regression model. It’s really meant to help speed up the initial proof of concept for a lot of these model development processes.

Robert J. Marks: It’s an interface in a way, is that right?

Justin Bui: Yeah. I think a good way to classify it would be like a high-level wrapper. It lets you take advantage of some of the work that’s already been done and that ultimately cuts down on somebody’s development cycle.

Robert J. Marks: So, “PyTorch,” the Py is for Python. Python is a computer language that’s available for free. Everybody can use it, right?

Justin Bui: Correct. Torch itself is actually built on Lua, which is a scripting-type language. And so PyTorch is the Python high-level wrapper for the Lua interface, that is, Torch.

Robert J. Marks: Okay. What sort of stuff can you do with all this free software? Maybe, specifically, some of the stuff we see in the news today?

Justin Bui: All of these tools have high-level code wrappers for doing custom layer developments. So of course, you’ve got convolutional layers which, for those that are familiar, go into convolutional neural networks. You have transformer layers which are gaining popularity and…

Robert J. Marks: Can I interrupt you just for a second? For the general audience, what purpose does a wrapper serve?

Justin Bui: Good question. A wrapper is … a chunk of code that ultimately makes deploying something more complex very easy. You can think of it as like a “super function” in a way.

Robert J. Marks: I see. So you might have software to build the pyramids. You click yes and the pyramids are built. Something very big happens.

Justin Bui: Exactly. It’d be something like “build a pyramid,” and all the hard stuff is done underneath the hood, so to speak.
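
Bui’s “build a pyramid” quip can be made literal. Here is a toy wrapper in Python (purely illustrative): the caller makes one simple request, and the row-by-row work is hidden underneath the hood:

```python
def build_pyramid(height: int) -> str:
    """Hypothetical 'super function': one call hides the multi-step work
    of assembling the pyramid row by row."""
    rows = []
    for level in range(1, height + 1):
        spaces = " " * (height - level)       # indent narrows toward the top
        stars = "*" * (2 * level - 1)         # each row widens by two
        rows.append(spaces + stars)
    return "\n".join(rows)

print(build_pyramid(3))
#   *
#  ***
# *****
```

Deep learning wrappers work the same way: a single `fit()` or `fine_tune()` call may hide data loading, gradient computation, and optimizer bookkeeping.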

Robert J. Marks: Okay. So you were talking about some of the stuff that you can do with all of this free available software and all of this free available computational space…

Justin Bui: Of course, like I mentioned previously, you have convolutional layers; you have recurrent layers, which are things like LSTMs that add a little bit of memory, so to speak, to the neural network; transformers; and a whole bunch of combinations in between. It really lets you get creative with the architecture. You can combine different techniques into this…

You can consider it an amalgamation of different neuron types with different inputs and outputs. And you can create this hydra-looking system, if you will, where it can take various inputs and create various outputs. And it’s really great because it lends itself to this creative model development through its flexibility.
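
A sketch of such a “hydra” in PyTorch: two input branches merge into a shared representation, which then feeds two separate output heads. The branch types, sizes, and the `Hydra` name are invented for illustration:

```python
import torch
import torch.nn as nn

class Hydra(nn.Module):
    """Toy multi-input, multi-output network: an image branch and a
    tabular-data branch merge, then split into two task heads."""
    def __init__(self):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 32), nn.ReLU()
        )
        self.tabular_branch = nn.Sequential(nn.Linear(5, 32), nn.ReLU())
        self.class_head = nn.Linear(64, 10)   # e.g. a classification output
        self.value_head = nn.Linear(64, 1)    # e.g. a regression output

    def forward(self, image, tabular):
        merged = torch.cat(
            [self.image_branch(image), self.tabular_branch(tabular)], dim=1
        )
        return self.class_head(merged), self.value_head(merged)

model = Hydra()
logits, value = model(torch.randn(4, 1, 28, 28), torch.randn(4, 5))
print(logits.shape, value.shape)  # torch.Size([4, 10]) torch.Size([4, 1])
```

The same pattern scales up: swap the toy branches for convolutional, recurrent, or transformer stacks and the framework still handles the plumbing.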

Both of these tools allow that to happen and you see this competition back and forth. It’s been interesting to follow along, as these tools develop. When I started doing a lot of my research, TensorFlow 2.0 was still relatively new. I believe it was still in beta actually. And most recently, I believe they’re up to stable release 2.6. PyTorch, was, I believe, at about 1.2 at the time when I started my research and, most recently, their stable release is 1.9.

So you’re seeing some pretty heavy iteration improvements in these tools. It’s great because it’s driving a lot of the AI machine learning development, going hand in hand with deployment of these tools as you’re seeing more and more of these free resources becoming available.

Robert J. Marks: Now, the interesting thing: This is available to anybody in the world. Adversaries of the United States, at least politically and militarily, like China and Iran, can plug in, get this free software, and do all this artificial intelligence for free.

Justin Bui: Yeah, that’s right. It’s a double-edged sword in a way. But I think in an ideal world, anyways, what you’re doing is you’re providing the masses with the tools and the opportunities to push the envelope forward. And I think it’s a good thing because it makes the accessibility and the learnability of the techniques much more grounded.

Whereas before, it was pretty heavily academic and very computationally intense or required a lot of subject matter expertise. A lot more of the innovation now is who can get to the finish line first. So it should, in a way, encourage some more competition.

Justin Bui: One caveat, of course, to free resources is that they are constrained. On most systems, you’re typically limited to a fixed number of training or running hours. You get a fixed amount of memory.

Most systems provide between two and 16 gigs of RAM to use, which sounds like quite a bit. Most people probably have 16 or 32 gigs of RAM on their personal computers. But if you’re loading a data set that contains 150 gigabytes of DICOM data, for example, or other medical data, well, that’s not going to fit in memory. You’ll find out very quickly that these systems break.
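
One common workaround, sketched below in plain Python, is to stream the data in fixed-size chunks rather than load it all into memory at once. The file name and chunk size here are hypothetical:

```python
def stream_chunks(path, chunk_bytes=64 * 1024 * 1024):
    """Yield a file in fixed-size chunks so peak memory stays bounded,
    no matter how large the file is on disk."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_bytes)
            if not chunk:          # end of file
                return
            yield chunk

# Hypothetical usage: total bytes processed without ever holding
# the whole dataset in RAM.
# total = sum(len(c) for c in stream_chunks("scans.dcm"))
```

Deep learning frameworks build the same idea into their data loaders, which batch and prefetch from disk so training fits within a few gigabytes of RAM.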

Robert J. Marks: However, if you do have the computation and memory resources yourself, you can download the software and run it on your own system, with limitations dictated only by the resources you have locally, right?

Justin Bui: Yeah, that’s correct. One of the nice things about the open source tools is that if you want to build yourself a small supercluster with a couple of terabytes of RAM and a whole bunch of processors, you’re, of course, welcome to do that. You have almost no limitations other than making sure that you have compatible drivers and that all of your system software plays nicely.

Next: Have a software design idea? Kaggle may make it happen


Here are Parts 1 and 2 of Episode 159, featuring Robert J. Marks and Justin Bui: If not Hal or Skynet, what’s really happening in AI today? Justin Bui talks with Robert J. Marks about the remarkable AI software resources that are free to download and use. Free AI software means that much more innovation now depends on who gets to the finish line first. Marks and Bui think that will spark creative competition.

and

Have a software design idea? Kaggle could help it happen for free. Okay, not exactly. You have to do the work. But maybe you don’t have to invent the software. Computer engineer Justin Bui discourages “keyboard engineering” (trying to do it all yourself). Chances are, many solutions already exist at open source venues.

In Episode 160, Sam Haug joined Dr. Marks and Dr. Bui for a look at what happens when AI fails. Sometimes the results are amusing. Sometimes not. They look at five instances, from famous but trivial right up to one that nearly ended the world as we know it. As AI grows more complex, risks grow too.

In Episode 161, Part 1, Marks, Haug, and Bui discuss the Iron Law of Complexity: Complexity adds but its problems multiply. That’s why more complexity doesn’t mean more things will go right; without planning, it means the exact opposite. They also discuss how programmers can use domain expertise to reduce the number of errors and false starts.

and

In Part 2 of Episode 161, they look at the Pareto tradeoff and the knowns and unknowns:
Navigating the knowns and the unknowns, computer engineers must choose between levels of cost and risk against a background with some uncertainty. Constraints underlie any engineering design — even the human body.

You may also wish to read: Harvard U Press computer science author gives AI a reality check. Erik Larson told COSM 2021 about real limits in getting machines that don’t live in the real world to understand it. Computers, he said, have a very hard time understanding many things intuitive to humans and there is no clear programming path to changing that.

and

Jonathan Bartlett: An interview with the author. “Learn to Program with Assembly” teaches programmers the language needed for a better understanding of their computer. Knowing assembly language as a programmer is like understanding the mechanics of a race car as a NASCAR driver, says Bartlett.

Show Notes

  • 00:44 | The Homunculus
  • 03:21 | Introducing Justin Bui
  • 04:10 | AI Software
  • 06:04 | Fast AI
  • 12:58 | Deepfake Technology
  • 20:03 | Transfer Learning
  • 23:25 | Rapture of the Nerds
  • 28:59 | Little Faith in AGI

Additional Resources

Podcast Transcript Download

