The Federal Trade Commission (F.T.C.) sent a letter to OpenAI, the San Francisco company responsible for creating ChatGPT, the large language model chatbot that captured the world’s imagination in November 2022. Per the New York Times, the F.T.C. is investigating the AI company’s methods of data acquisition and also plans to assess the potential harms of AI to society, citing concerns over false information and job replacement. Cecilia Kang and Cade Metz report:
In a 20-page letter sent to the San Francisco company this week, the agency said it was also looking into OpenAI’s security practices. The F.T.C. asked the company dozens of questions in its letter, including how the start-up trains its A.I. models and treats personal data.
The investigation was earlier reported by The Washington Post and confirmed by a person familiar with the investigation.
The F.T.C.’s investigation poses the first major regulatory threat to OpenAI. Sam Altman, the start-up’s co-founder, testified in Congress in May and said he invited A.I. legislation to oversee the fast-growing industry, which is under scrutiny because of how the technology can potentially kill jobs and spread disinformation.
– Cecilia Kang & Cade Metz, F.T.C. Is Investigating ChatGPT Maker – The New York Times (nytimes.com)
ChatGPT, which is built on a neural network, is trained to learn patterns from information and predict outcomes. It has been used by lawyers, students, and computer programmers, but it makes mistakes, as the Times article notes, and can’t be trusted with tasks that require complexity and context. Mind Matters contributor Gary Smith has written extensively on this topic in the past; now the Commission is moving forward with an investigation.