A new article by AI researcher Arlie Coles at The American Mind aims to “demystify” artificial intelligence, particularly the claim that AI creators have no idea what they’re creating. Coles argues that we understand AI far better than the doomsayers let on. Part of AI’s cloudy reputation stems from its mathematical complexity, which Coles finds understandable. But that’s no reason not to try to understand what AI is and gauge its benefits and capacities accurately. She writes,
We do know what we’re building and how it works, and it’s not too late for us to speak forthrightly about AI so that the general public, not just those with math or computer science Ph.D.s, can grasp the big intuitions. The same way everyone successfully learned how to use Google search, everyone is capable of getting a feel for modern AI technology and making wise decisions in the face of marketing, regulatory capture, and l’appel du vide of a good old extinction event. But specialists and non-specialists must respect each other enough to meet in the middle once again.

–Arlie Coles, “Demystifying AI,” The American Mind
Coles goes on to describe the “neural network” structure of AI systems like ChatGPT, noting that such systems are, at bottom, “number-mappers.” She details how these systems work and how, for all their sophistication, they remain dependent on their human programmers. She also notes that the original AI researchers did not equate the brain and mind with the machines they were developing, and she reiterates that humans and machines are categorically distinct.
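The “number-mapper” idea can be made concrete with a minimal sketch. The code below is not Coles’s example and not a real trained model; the layer sizes and weight values are arbitrary illustrative numbers chosen here. It only shows the core intuition: a neural network turns input numbers into output numbers through weighted sums and a simple nonlinearity, all of it ordinary arithmetic specified by humans.

```python
# A toy neural network as a "number-mapper": numbers in, numbers out.
# All weights and biases below are arbitrary illustrative values,
# not learned parameters from any real system.

def relu(x):
    # A common nonlinearity: negative values become zero.
    return max(0.0, x)

def layer(inputs, weights, biases):
    # Each output is a weighted sum of the inputs plus a bias,
    # passed through the nonlinearity.
    return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def tiny_network(inputs):
    # Two fixed layers: the structure and every number in it were
    # chosen by a person -- there is nothing occult happening.
    hidden = layer(inputs, [[0.5, -0.2], [0.1, 0.8]], [0.0, 0.1])
    return layer(hidden, [[1.0, -1.0]], [0.0])

print(tiny_network([1.0, 2.0]))  # two numbers in, one number out
```

Real systems like ChatGPT differ in scale (billions of weights, set by training rather than by hand), but the underlying operation is the same kind of arithmetic mapping.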
Coles has written a helpful article that puts AI hype in perspective and helps readers more accurately understand what AI is and what it can (and can’t) do.