When AI Goes Wrong
AI is supposed to do what it is designed to do. But what if it doesn’t? What if AI begins behaving in bizarre and unpredictable ways? The more complex the system, the more ways it can go wrong. Robert J. Marks discusses artificial general intelligence (AGI) with Justin Bui and Samuel Haug.
Show Notes
- 00:37 | Introducing Justin Bui and Samuel Haug
- 01:18 | The Rest of the Story
- 02:50 | Could AI Win at Jeopardy?
- 05:39 | IBM’s Deep Blue
- 07:52 | Deep Convolutional Neural Network
- 10:31 | Self-Driving Cars
- 16:33 | Unexpected Contingencies
Additional Resources
- Haug, Samuel, Robert J. Marks, and William A. Dembski. “Exponential Contingency Explosion: Implications for Artificial General Intelligence.” IEEE Transactions on Systems, Man, and Cybernetics: Systems (2021).
- Marks, Robert J., II. “This Year’s (2018’s) Top Ten AI Exaggerations, Hyperbole, and Failures: Part II.” Mind Matters News.
- Vespoli, Lauren. “Where Watson Went Wrong.” MM&M, September 8, 2021.
- Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. W. W. Norton & Company, 2018.