According to an internal memo, Samsung has introduced a new policy this week, prohibiting employees from using generative artificial intelligence tools like OpenAI's ChatGPT and Google Bard in the workplace.

Samsung said that in April, an engineer accidentally leaked internal source code by uploading it to ChatGPT. The incident raised concerns within the company that data shared with AI platforms could end up in the hands of other users.

Currently, Samsung employees are banned from using AI tools on company devices, including computers, tablets, and smartphones. However, employees can still use AI tools on personal devices for non-work-related activities.

The memo emphasized the importance of adhering to security guidelines and warned that employees who violate the rules and leak company information or data will face disciplinary action, up to and including termination, depending on the severity of the case.

Samsung added that it is working to create a secure environment in which generative AI can safely help employees improve productivity. Until then, use of the tools will remain restricted.

Meanwhile, Samsung is developing its own AI tools for employees to use for tasks such as software development and translation.

Maintaining Distance

Samsung is not the first company to ban employees from using AI tools. Large banks such as JPMorgan Chase, Bank of America, and Citigroup have already imposed restrictions on AI usage and were among the first to limit employee access to ChatGPT.

These banks share similar concerns about third-party software gaining access to sensitive information, fearing that AI tools could leak financial data and expose them to stricter regulatory action.

In January, tech giant Amazon similarly warned employees against using ChatGPT in the workplace, citing data protection concerns.

In April, OpenAI announced new measures to address data-leakage concerns, including an option for users to disable chat history. OpenAI said that with chat history disabled, ChatGPT retains new conversations for 30 days, and the company reviews them only to monitor for abuse before permanently deleting them.

Google said it would strengthen Bard's privacy protections by using automated tools to help remove personally identifiable information from users' conversations. It also said that conversations selected for human review would be kept separate from users' Google accounts and stored for up to three years.