Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Large language models (lack of accountability)


At Salon, Funk and Smith Take On “Stealth AI Research”

All we know for sure about the claims that Google AI's LaMDA shows human-like understanding is that, since 2020, three researchers who expressed doubts or concerns have been fired

Yesterday at Salon, Jeffrey Funk and Gary N. Smith took a critical look at "stealth research" in artificial intelligence. Stealth research? They explain:

"A lot of stealth research today involves artificial intelligence (AI), which Sundar Pichai, Alphabet's CEO, has compared to mankind's harnessing of fire and electricity — a comparison that itself attests to the overhyped atmosphere that surrounds AI research. For many companies, press releases are more important than peer review.

"Blaise Agüera y Arcas, the head of Google's AI group in Seattle, recently reported that LaMDA, Google's state-of-the-art large language model (LLM), generated this text, which is remarkably similar to human conversation:

Blaise: How do you know if a thing loves you back?
LaMDA: There isn't an easy answer."