AI Language Models: Real Intelligence Or Creative Thievery?
Are AI language models like ChatGPT, Grok, and Perplexity actually intelligent? Or are they getting away with a creative kind of thievery? Welcome to the Mind Matters News podcast, your source for discussions on all things philosophy of mind and artificial intelligence. Today, guest host Patrick Flynn continues his conversation with Dr. Eric Holloway and Dr. Robert J. Marks. The trio talks about AI, plagiarism, and the illusion of intelligence, and specifically why AI-generated language is not truly creative and depends on a kind of artistic thievery to get the job done. Dr. Holloway and Dr. Marks explain how these models are essentially advanced language prediction engines, not thinking entities. The discussion covers Searle’s Chinese Room thought experiment, the problem of “model collapse” that prevents true AI creativity, and the legal issues surrounding AI plagiarism. The conclusion? While these language models are impressive, they are fundamentally limited in their ability to match human-level creativity and understanding. This is Part 2 of a three-part discussion.