OpenAI has launched a bug bounty program in collaboration with the security startup Bugcrowd to uncover security flaws in its systems. The program targets technical security vulnerabilities in OpenAI's AI models and tools, such as ChatGPT, as well as in related third-party software and tooling. It does not cover problems with model output, such as incorrect answers, biased responses, or "hallucinations".
Participants in the program must adhere to a number of strict requirements, which allow OpenAI to distinguish ethical hacking from malicious activity. These include following the program's policies, reporting discovered vulnerabilities, and refraining from violating privacy, disrupting systems, destroying data, or degrading the user experience. Discovered vulnerabilities must remain confidential until OpenAI authorizes disclosure, which is intended to happen within 90 days of receiving the bug report.
Through the program, security researchers can earn between $200 and $20,000 for the vulnerabilities, bugs, or other security flaws they discover, with rewards scaling by severity: the more serious the bug, the higher the payout.
Separately, OpenAI has introduced plugin functionality for ChatGPT to further extend the capabilities of its AI models.