Explosive Growth of ChatGPT Prompts FTC to Investigate Data Usage Risks
ChatGPT experienced explosive growth when it arrived on the scene late last year. Now, however, it has also drawn serious interest from government authorities: the US Federal Trade Commission (FTC) is requiring OpenAI to explain how it mitigates data usage risks and reputational harm to individuals.
That is according to The Washington Post. This week, the FTC reportedly sent a 20-page document to OpenAI’s headquarters in San Francisco, full of questions about these alleged risks.
As is now known worldwide, chatbots are capable of a lot. Thanks to generative AI, applications like ChatGPT and Google Bard can communicate in fluent, human-like language. Nevertheless, there are many drawbacks, which regularly raise concerns among parties such as the EU and the US government. Just think of the huge datasets that Big Tech collects to feed its AI models, which leads to all kinds of copyright problems.
FTC Concerns of a Different Nature
The issues the FTC is concerned about are of a somewhat different nature than what we’ve seen so far. The pain point usually lies with privacy and copyright, while this investigation also covers the so-called hallucinations that such chatbots struggle with. Because a large language model is not chasing truth, it can be quite wrong when it comes to facts. LLMs predict the next word based on a large set of parameters, using a user’s prompt as the spark. Fact-checking is not built into that process.
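To make the next-word mechanism concrete, here is a deliberately tiny sketch: a bigram model that "predicts" the next word purely from co-occurrence counts in a toy corpus. Real LLMs use billions of learned parameters rather than raw counts, and the corpus and function names below are illustrative, but the key property is the same: the model produces a statistically plausible continuation, with no notion of truth anywhere in the loop.

```python
import collections

# Toy corpus; a real model trains on vastly more text.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word.
bigrams = collections.defaultdict(collections.Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

def generate(prompt_word, length=5):
    """Greedily extend a one-word prompt: fluent-looking, truth-agnostic."""
    out = [prompt_word]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)
```

The output is grammatical-sounding because the statistics favor common continuations, not because anything was verified; that gap, scaled up, is what a hallucination is.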
The Washington Post also reports that political legislation to regulate AI is still months away. It’s a difficult issue to fathom, even though OpenAI CEO Sam Altman appeared before the US Congress for a hearing in May.
Generative AI and Factual Accuracy
Attempts to ground generative AI in factual accuracy are in full development. Chatbots are only a partial example of what the underlying innovation is capable of. Potentially, an LLM can sift through a huge corpus of data to gain insights faster and more effectively than a human alone – with sufficient training, development and the right input.
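A minimal sketch of that "sift through a corpus" idea, under stated assumptions: production systems use vector embeddings and feed the retrieved text to an LLM, but even naive word-overlap scoring shows the retrieval step that grounds an answer in source documents. The document texts and identifiers here are invented for illustration.

```python
# Tiny document store; ids and contents are made up for this example.
documents = {
    "ftc": "the ftc sent openai a 20 page document with questions about risks",
    "bard": "google bard is a generative ai chatbot",
    "mat": "the cat sat on the mat",
}

def retrieve(query, docs):
    """Return the id of the document sharing the most words with the query.

    Real retrieval uses embeddings and similarity search; word overlap
    is the simplest possible stand-in for the same ranking idea.
    """
    query_words = set(query.lower().split())
    return max(docs, key=lambda doc_id: len(query_words & set(docs[doc_id].split())))
```

Grounding a chatbot this way means its answer can be checked against the retrieved document, rather than relying on whatever the model's parameters happen to encode.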