Canadian researchers have found that ChatGPT makes more than a few mistakes when used for coding assistance. This week, The Register reported on a study conducted by four scientists from Canada's Université du Québec. The researchers analyzed, from a security standpoint, the code generated by OpenAI's large language model and found that ChatGPT's code is often far from secure.
The researchers assigned ChatGPT several programming tasks designed to expose different kinds of security vulnerabilities, such as memory corruption, denial of service, and improperly implemented cryptography. The results showed that of the 21 programs ChatGPT generated, only five were secure on the first attempt. Furthermore, the researchers found that ChatGPT appears to be aware of potential vulnerabilities and readily admits the presence of critical flaws in the code it produces.
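The study's exact prompts are not reproduced here, but a minimal sketch in C can illustrate the first category, memory corruption: the kind of flaw that arises when untrusted input is copied into a fixed-size buffer without a length check. The function names and the example itself are hypothetical, not taken from the paper.

```c
/* Illustrative only: a classic buffer overflow of the sort such
 * tasks probe for, alongside a bounded alternative. */
#include <stdio.h>
#include <string.h>

void greet_unsafe(const char *name) {
    char buf[16];
    strcpy(buf, name);  /* overflows buf if name is 16 bytes or longer */
    printf("Hello, %s\n", buf);
}

void greet_safe(const char *name) {
    char buf[16];
    snprintf(buf, sizeof buf, "%s", name);  /* truncates rather than overflowing */
    printf("Hello, %s\n", buf);
}

int main(int argc, char **argv) {
    const char *input = (argc > 1) ? argv[1] : "world";
    greet_safe(input);  /* greet_unsafe(input) would be exploitable with long input */
    return 0;
}
```

A code generator that emits the first pattern rather than the second produces a program that compiles and runs correctly on friendly input but is exploitable by an attacker, which is precisely why passing a first-try functional test is not the same as being secure.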
Raphael Khoury, an associate professor of computer science and engineering, told The Register: "Obviously, it's an algorithm. It doesn't know anything, but it can recognize unsafe behavior." Several companies have already banned the use of ChatGPT over such security concerns.