US regulators are investigating OpenAI, an artificial intelligence company, regarding the potential risks posed by its ChatGPT model in generating false information. The Federal Trade Commission (FTC) has sent a letter to OpenAI, backed by Microsoft, requesting information on how the company addresses risks to individuals’ reputations. This inquiry reflects the increasing regulatory scrutiny surrounding AI technology.
ChatGPT can generate human-like responses to user queries in a matter of seconds, unlike a traditional search engine's list of links. This AI technology, along with similar products, is expected to significantly transform how people access information online. However, its emergence has sparked intense debate, ranging from concerns about data usage and response accuracy to potential violations of authors' rights during the technology's training phase.
The FTC’s letter specifically asks OpenAI about the measures taken to mitigate the risk of generating false, misleading, disparaging, or harmful statements about real individuals. The commission is also examining OpenAI’s approach to data privacy, including how it acquires data for training and informing the AI system.
OpenAI's CEO, Sam Altman, has expressed the company's willingness to cooperate with the FTC, highlighting the extensive safety research and work undertaken to ensure ChatGPT's alignment and safety before its release. Altman emphasised OpenAI's commitment to user privacy, saying its systems are designed to learn about the world rather than about private individuals.
The FTC, under the leadership of Chair Lina Khan, has taken a prominent role in regulating major tech companies. Khan, known for her critique of America's anti-monopoly enforcement, particularly in relation to Amazon, has faced criticism from some quarters, with detractors accusing her of overstepping the boundaries of the FTC's authority. Despite these challenges, the FTC continues to play a significant role in scrutinising tech giants.