OpenAI, the company behind the ChatGPT AI chatbot, has announced the launch of a bug bounty program. Researchers are promised rewards of up to $20,000 for vulnerabilities found in ChatGPT and other OpenAI products and assets. Registered security researchers will be able to search for bugs in the company’s product line and receive rewards for reporting them through the Bugcrowd crowdsourcing platform. The amount of the reward will depend on the severity and potential impact of the discovered issues, ranging from $200 for minor bugs to $20,000 for extremely serious vulnerabilities.
The OpenAI Application Programming Interface (API) and the ChatGPT chatbot are part of the bug bounty program. However, the company is asking researchers to report chatbot AI issues via a separate form if the bugs do not impact security. OpenAI states, “Language model security issues do not fit well into a bug bounty program because they are not separate, isolated bugs that can be fixed directly. Solving these problems often requires serious research and a broader approach. To make sure these issues are properly fixed, please report them using the dedicated form, rather than submitting them through the bug bounty program. By reporting them properly, you allow our researchers to use these reports to improve the model.”
Issues that fall outside the scope of the bounty program include jailbreaks and security bypasses that ChatGPT users use to force the chatbot to ignore rules set by OpenAI engineers.
Last month, ChatGPT users suffered a data breach in which users saw other people’s AI requests, and some ChatGPT Plus subscribers saw other people’s personal data, including the subscriber’s name, email address, and billing address, as well as the last four digits of their bank card number and its expiry date. It was later revealed that the failure occurred due to an error in the open-source Redis client library. Although the company does not link the launch of the bug bounty program to this incident, it is likely that the problem that caused the leak could have been discovered earlier and the leak avoided.