Kaspersky Lab conducted an experiment to test how well the ChatGPT chatbot could recognize phishing links. The company’s specialists tested GPT-3.5-turbo on more than two thousand phishing links, mixed in with legitimate ones. The results showed that ChatGPT correctly identified phishing links 87.2% of the time, with a false positive rate of 23.2%. When asked whether it was safe to follow a link, the detection rate was higher, at 93.8%, but the false positive rate also rose, to 64.3%.
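To make the setup concrete, here is a minimal sketch of what such a per-link query might look like, assuming the OpenAI Python SDK. The prompt wording, the helper function, and the example URL are illustrative assumptions, not the actual methodology used in the study.

```python
# A minimal sketch of a per-link phishing query, assuming the OpenAI
# Python SDK (openai >= 1.0). The prompt wording, helper name, and
# example URL are illustrative, not the study's actual setup.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def looks_like_phishing(url: str) -> bool:
    """Ask gpt-3.5-turbo for a yes/no verdict on a single URL."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep the classification as deterministic as possible
        messages=[{
            "role": "user",
            "content": f"Does this link lead to a phishing site? "
                       f"Answer only 'yes' or 'no'.\n{url}",
        }],
    )
    return response.choices[0].message.content.strip().lower().startswith("yes")

# Hypothetical example URL, not taken from the study's dataset
print(looks_like_phishing("http://secure-login-verify.example.com/account"))
```

Over a labeled test set, the detection rate reported above corresponds to the share of phishing links flagged as malicious, while the false positive rate is the share of legitimate links incorrectly flagged.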
The language model also proved effective at identifying popular brand names in phishing links. Without any additional training, it correctly recognized the names of large companies and corporations, including TikTok, Google, Amazon and Steam, as well as the names of various banks from around the world.
However, ChatGPT was not always able to explain why a particular link was malicious. Many explanations included fictitious details: the AI "hallucinated", giving answers that did not correspond to reality. For example, it claimed to have consulted the WHOIS domain verification service, which it cannot actually access, and reported erroneous information.