ChatGPT User Accounts Compromised by Infostealers
More than 101,000 Credentials Stolen
Over the past year, infostealers have compromised more than 101,000 ChatGPT user accounts, according to Group-IB analysts. The researchers found the stolen credentials in more than 100,000 infostealer logs traded on various underground sites, with publications peaking in May 2023, when roughly 26,800 sets of credentials were posted on the dark web.
In total, between June 2022 and May 2023, nearly 41,000 accounts were compromised in the Asia-Pacific region and nearly 17,000 in Europe, while North America ranked fifth with 4,700 compromised accounts. Countries with notable numbers of compromised ChatGPT accounts include Pakistan, Brazil, Vietnam, Egypt, the United States, France, Morocco, Indonesia, and Bangladesh.
“Many enterprises integrate ChatGPT into their operations,” says Dmitry Shestakov, Group-IB specialist. “Employees conduct confidential correspondence or use the bot to optimize proprietary code. Given that ChatGPT’s default configuration saves all past conversations, this could unintentionally give attackers access to valuable information if they get hold of the credentials.”
According to the company, the number of logs containing ChatGPT credentials is growing steadily. Almost 80% of all logs come from the Raccoon stealer, followed by Vidar (13%) and RedLine (7%).
Growing Demand for ChatGPT Credentials
Experts point out that more and more people are using chatbots to optimize their work, whether it’s software development or business communication. Accordingly, the demand for ChatGPT credentials is also steadily growing.
The increasing number of ChatGPT accounts compromised by infostealers is a cause for concern for businesses and individuals alike. As chatbot use spreads, it is becoming increasingly important to protect both user credentials and the confidential information entered into these tools.

Businesses should make employees aware of the risks of feeding sensitive data into chatbots and enforce basic account hygiene: strong unique passwords, two-factor authentication, regular password changes, and monitoring for suspicious sign-in activity.

Individuals should follow the same practices, and should additionally be wary of suspicious emails or messages that may contain malicious links or attachments.
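As one illustration of the “strong passwords” advice, here is a minimal Python sketch that generates a random password with the standard-library secrets module; the 16-character length and the required character classes are arbitrary choices for the example, not a recommendation from the report:

```python
# Sketch: generating a strong random password with Python's stdlib.
# Length and character-class requirements below are illustrative choices.
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pwd = "".join(secrets.choice(alphabet) for _ in range(length))
        # Reject weak draws (e.g. all lowercase) by requiring one of each class.
        if (any(c.islower() for c in pwd)
                and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd)):
            return pwd

print(generate_password())
```

Using secrets rather than random matters here: secrets draws from the operating system's cryptographically secure source, which is what the module is designed for.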
In short, ChatGPT credentials have become a sought-after commodity on the dark web, and the basic defenses that protect any online account apply here as well: strong unique passwords, two-factor authentication, and vigilance for suspicious activity.