According to media reports, Samsung engineers have been using ChatGPT to quickly fix errors in source code. However, the company has recently encountered three cases of data leaks via the chatbot, including notes from internal meetings and data related to production and profitability. As a result, Samsung is now warning employees about the dangers of using ChatGPT and is considering blocking access to the service altogether.
The Economist reported that in one case, a Samsung developer pasted the source code of a proprietary error-correction program into the chatbot, effectively handing the code to an AI service operated by a third party. In the second case, an employee entered test sequences designed to identify defective chips into ChatGPT and asked it to optimize them. In the third case, an employee used the Naver Clova app to convert a recording of an internal meeting to text, then submitted the transcript to ChatGPT to prepare a presentation.
To prevent similar incidents, Samsung is working on safeguards and is also considering developing its own ChatGPT-like AI service for internal use. The company is warning employees that data sent to ChatGPT is stored on external servers and cannot be “revoked”, increasing the risk of confidential information leaking. Furthermore, ChatGPT learns from the data it receives, so confidential information submitted to it could later surface in responses to third parties.