Here’s Why Companies Are Restricting the Use of Generative AI Tools for Employees

KEY TAKEAWAYS

Due to concerns over data leakage and doubts about the security and reliability of generative AI tools like ChatGPT, many organizations have banned their use by employees on proprietary systems. Incidents like the data leak at Samsung, where sensitive data was inadvertently uploaded, highlight the potential risks involved. Given the high stakes of safeguarding extremely sensitive and confidential data, some organizations protect their reputation by relying on home-grown generative AI tools instead.

Many organizations have prohibited employees from using generative AI tools on their proprietary systems. The move is mainly driven by concerns over the potential leakage of confidential data, as well as a lack of confidence in the cyber resilience of generative AI tools like ChatGPT.

One such case occurred at Samsung, where employees uploaded code containing sensitive information, leading to the exposure of confidential data.

What Are the Reasons Behind Restrictions on Generative AI Tools?

Many organizations handle highly sensitive and confidential data, and their reputation relies on data protection. Uploading information to generative AI tools poses risks of potential public exposure. Additionally, there are concerns about the security and reliability of these tools, raising doubts about their accuracy.

As such, many organizations have opted to use their own internally developed generative AI tools instead.

Below, we discuss in detail some of the key factors behind generative AI tools getting banned.

Potential data leakage

Some generative AI tools, such as ChatGPT, advise against using sensitive or personal data in prompts. This suggests that these tools cannot guarantee data safety.
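
As a stopgap, some security teams scrub obvious sensitive values out of prompts before they ever reach an external tool. The sketch below is a minimal, hypothetical illustration of that idea in Python; the patterns, placeholder tags, and sample text are all invented and far from a production-grade scrubber.

```python
import re

# Hypothetical illustration: mask obvious sensitive values before a prompt
# leaves the company network. These patterns are assumptions for the sketch,
# not an exhaustive or production-grade PII scrubber.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tags."""
    for tag, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{tag}]", prompt)
    return prompt

raw = "Contact jane.doe@example.com, key sk-abc123def456ghi789, SSN 123-45-6789."
print(redact(raw))
# Contact [EMAIL], key [API_KEY], SSN [SSN].
```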

Top executives are concerned about the possibility of sensitive data being exposed to the public. Verizon, Samsung, and Northrop Grumman are among the companies imposing a blanket ban on the use of generative AI tools.

Meanwhile, Apple has been working on creating its own generative AI tool but doesn’t allow the use of external ones.

Inaccurate responses

Generative AI tools often produce inaccurate responses. Experts point out that such tools can hallucinate, generating output that does not match the facts.

For example, if a developer submits a code snippet as a prompt and expects the tool to identify errors, it may fail to do so. In the context of critical software deliveries, such oversights can be costly.
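
To illustrate the kind of defect that can slip through, consider the hypothetical Python snippet below; it is not drawn from any real incident. The mutable default argument is a classic subtle bug that a reviewer, human or AI, can easily overlook.

```python
# Hypothetical example: a subtle bug a generative AI reviewer could miss.
# The function looks correct at a glance, but the default list `errors=[]`
# is created once and shared across calls, so earlier results leak into
# later ones.
def collect_errors(line, errors=[]):  # bug: mutable default argument
    if "ERROR" in line:
        errors.append(line)
    return errors

print(collect_errors("ERROR: disk full"))  # ['ERROR: disk full']
print(collect_errors("ERROR: timeout"))    # ['ERROR: disk full', 'ERROR: timeout']  <- surprise
```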

It’s fair to say that generative AI tools have not yet given enterprises enough assurance regarding their reliability and robustness.

Risk of biased output

For reputable companies, it’s crucial to avoid any perception of bias or discrimination, as it can cause irreparable damage to their brand.

Consider a scenario where a respected publishing house’s official blog publishes content that shows bias against certain racial groups. The problem lies in the training methodology of tools like ChatGPT, which relies on content written by human beings, some of which is biased or discriminatory. As such, it’s no surprise that ChatGPT can produce biased output.

To OpenAI’s credit, it asks its users to report inadequate responses. However, that’s not enough. Critics point out that OpenAI was hasty in releasing ChatGPT without adequately addressing this fundamental problem, possibly due to commercial considerations.

Consequences of the Infamous Samsung Case

Samsung has explicitly banned the use of ChatGPT and other generative AI tools in any of its work, prohibiting employees from using company data or code as prompts due to privacy and confidentiality concerns.

Bloomberg reported that a memo was sent to the staff on the usage of company resources vis-a-vis generative AI tools.

According to the memo, “While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.”

Samsung added:

“We ask that you diligently adhere to our security guidelines, and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment.”

Even if employees are not directly submitting company content or code, there is still a risk that abstracted versions of that code or content could be accessed and misused.

What Approach Should Organizations Take?

While corporations’ reactions to generative AI tools reflect a legitimate concern for safeguarding their business interests, the tendency to ban the tools outright may be equivalent to burying their heads in the sand.

Generative AI is too powerful a phenomenon for businesses to afford to overlook.

How can they manage generative AI tools?

  • Some companies, like Apple, have already recognized the significance of generative AI and pursued the development of their own tools. This approach has pros and cons, however: not all companies have the financial resources or capacity to embark on such endeavors. In such cases, it may be more practical to leverage existing generative AI technology.
  • It’s essential to establish a framework that enables the utilization of generative AI technology while effectively managing security risks. One approach could involve providing comprehensive training to employees on prompt engineering. This not only ensures optimal output but also safeguards the security of the information used as prompts.
  • Organizations may run pilot projects that extensively leverage generative AI tools while using only dummy data. Before that, however, companies need to ensure that employees are trained in prompt engineering. This includes educating them on the fundamentals of prompting, the basic elements of a prompt, and zero-shot and few-shot prompting techniques (a minimal sketch follows this list). The results of these initiatives can show organizations the path forward.
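
To make the last point concrete, here is a minimal sketch of how zero-shot and few-shot prompts might be assembled in Python using dummy data only. The ticket texts, labels, and function names are invented for illustration; the assembled string would be sent to whichever generative AI tool the pilot project has approved.

```python
# Dummy examples for few-shot prompting -- invented data, safe for a pilot.
DUMMY_EXAMPLES = [
    ("Payment page returns a 500 error", "bug"),
    ("Please add dark mode to the dashboard", "feature request"),
]

def zero_shot(ticket: str) -> str:
    # Zero-shot: the task is described, but no examples are given.
    return f"Classify this support ticket as 'bug' or 'feature request':\n{ticket}"

def few_shot(ticket: str) -> str:
    # Few-shot: a handful of labeled examples precede the new input.
    shots = "\n".join(f"Ticket: {t}\nLabel: {label}" for t, label in DUMMY_EXAMPLES)
    return f"{shots}\nTicket: {ticket}\nLabel:"

print(zero_shot("App crashes on login"))
print(few_shot("App crashes on login"))
```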

In any case, ChatGPT marks a huge step forward in natural language processing. As Michael Jordan, a Professor of Computer Science at the University of California, Berkeley, said:

“ChatGPT is a remarkable achievement in the field of natural language processing and has the potential to transform the way we communicate with machines.”

The Bottom Line

Companies understand the significance of generative AI technology, and it would be naive for them to turn a blind eye to it. However, the security risks and biases exhibited by these tools cannot be overlooked.

It’s crucial for pioneering organizations like OpenAI to address these concerns at their core, which should be a collaborative effort involving various stakeholders.

Kaushik Pal
Technology Specialist

Kaushik is a Technical Architect and Software Consultant with over 23 years of experience in software analysis, development, architecture, design, testing, and training. He has an interest in new technologies and areas of innovation, focusing on web architecture, web technologies, Java/J2EE, open source software, WebRTC, big data, and semantic technologies. He has demonstrated expertise in requirements analysis, architecture design and implementation, technical use cases, and software development. His experience spans industries including insurance, banking, airlines, shipping, document management, and product development. He has worked on technologies ranging from large scale (IBM S/390),…
