Large Language Models Are an Unfortunate Detour in AI
Gary Smith: Even though LLMs have no way of assessing the truth or falsity of the text they generate, the responses sound convincing.

For decades, computer scientists have struggled to construct systems possessing artificial general intelligence (AGI) that rivals the human brain, including the ability to use analogies, account for context, and understand cause and effect. Marvin Minsky (1927–2016) was hardly alone in his overly optimistic 1970 prediction that, "In from three to eight years we will have a machine with the general intelligence of an average human being." AGI turned out to be immensely more difficult than imagined, and researchers turned their attention to bite-size projects that were doable (and profitable). Recently, large language models (LLMs), most notably OpenAI's GPT-3, have fueled a resurgence of hope that AGI is almost here. GPT-3 was trained by breaking 450 gigabytes of text data into tokens.
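That tokenization step can be illustrated concretely. Below is a minimal sketch using the open-source tiktoken library's GPT-2 byte-pair encoding, the scheme GPT-3's tokenizer is based on; the sample sentence is arbitrary and purely illustrative.

```python
# A minimal sketch of byte-pair-encoding tokenization, using tiktoken's
# GPT-2 encoding (the scheme GPT-3's tokenizer is based on).
# The sample text is purely illustrative.
import tiktoken

enc = tiktoken.get_encoding("gpt2")

text = "Large language models break text into tokens."
token_ids = enc.encode(text)  # text -> list of integer token ids
print(token_ids)

# Each id maps back to a chunk of text, often a word or word fragment.
pieces = [enc.decode([tid]) for tid in token_ids]
print(pieces)  # e.g., ['Large', ' language', ' models', ' break', ...]
```

On ordinary English text, this encoding typically yields roughly one token per three to four characters, which gives a sense of how many tokens 450 gigabytes of training text works out to.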