Samsung has banned the use of popular generative AI tools, such as OpenAI’s ChatGPT, Google Bard, and Bing, among its employees.
The South Korean company informed employees of one of its largest divisions, through a memo reviewed by Bloomberg, that a new policy has been put in place. Samsung is concerned that data transmitted to artificial intelligence platforms such as Google Bard and Bing is stored on external servers, making it difficult to retrieve and delete, and could be disclosed to other users.
The company conducted a survey last month to gauge the use of AI tools within the organisation. The survey results indicate that 65 per cent of respondents said that using such services carries a security risk.
“Interest in generative AI platforms such as ChatGPT has been growing internally and externally,” Samsung told staff. “While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.”
According to the memo, the new policy follows the accidental leak of internal source code by Samsung engineers who uploaded it to ChatGPT.
“HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” the memo said. “However, until these measures are prepared, we are temporarily restricting the use of generative AI.”
As per a report by Korean media, Samsung employees uploaded corporate secrets onto ChatGPT.
One employee copied the source code of a semiconductor database download program, while another uploaded program code intended to identify defective equipment. A third employee uploaded meeting records in an attempt to auto-generate meeting minutes.
Samsung has instructed employees who use personal devices to access ChatGPT and similar tools not to share any company-related information or personal data that could reveal the company’s intellectual property. The company emphasises that failure to comply with the new policy could lead to termination.
“We ask that you diligently adhere to our security guideline and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment,” the company writes in the memo.
Samsung is said to be developing in-house AI tools for software development, as well as for translation and summarisation of documents.