
Personal LLM Accounts Drive Shadow AI Data Leak Risks

Tappy Admin
January 7, 2026

The growing workplace adoption of generative AI applications such as large language models (LLMs) is adding to the potential for cybersecurity breaches, as companies struggle to stay on top of how these tools are being used.

One of the biggest challenges currently facing IT and security departments is “Shadow AI”: employees accessing AI platforms such as ChatGPT, Google Gemini, or Microsoft Copilot through their personal accounts.

Nearly half (47%) of people using generative AI at work do so through personal accounts and applications, according to Netskope's 2026 Cloud and Threat Report.

This leaves organizations with little visibility into, or control over, how workers use those personal generative AI accounts in the workplace.

The result is a growing set of cybersecurity risks and data policy violations, through which confidential corporate information can leak.

At the same time, the number of prompts being submitted to generative AI applications keeps climbing.

"Despite the average number of users tripling, the volume of data delivered to SaaS gen AI apps has increased six fold, from 3,000 to 18,000 prompts per month. However, the top 25% of businesses are above 70,000 prompts per month, while the top 1% are above 1.4 million prompts per month," stated Netskope in its report.

Generative AI Data Policy Violations Average 223 Per Month

With corporate interest in generative AI rising sharply, the potential for enormous data leaks is also on the rise. Even more troubling, the Netskope report shows that data policy violations related to the use of AI tools have doubled over the past year.

The data reveals that the most “AI forward” companies, the top 25% of adopters, are experiencing an astonishing 2,100 security incidents per month. Workers are inadvertently sharing everything from source code and intellectual property to login credentials with LLMs.
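To make that failure mode concrete, here is a minimal, hypothetical sketch of the kind of client-side prompt check a company might run before text reaches an external LLM. The patterns, names, and example below are illustrative assumptions, not anything described in Netskope's report; real data loss prevention tooling uses far richer detection.

```python
import re

# Hypothetical patterns for secrets that should never reach an external LLM.
# Production DLP systems add entropy checks, ML classifiers, and many more rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID format
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)(password|passwd|pwd)\s*[:=]\s*\S+"),  # inline credentials
]

def flag_sensitive_prompt(prompt: str) -> list[str]:
    """Return the patterns matched in the prompt, empty if it looks clean."""
    return [p.pattern for p in SECRET_PATTERNS if p.search(prompt)]

if __name__ == "__main__":
    prompt = "Debug this config: password = hunter2, then retry the request."
    hits = flag_sensitive_prompt(prompt)
    if hits:
        print("Blocked: prompt matches sensitive patterns:", hits)
    else:
        print("Prompt allowed.")
```

A check like this would block the credential-laden prompt above while letting ordinary questions through, which is roughly how the policy violations counted in the report get detected.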

The most frustrating scenario for IT teams continues to be “Shadow AI” activity, in which employees use personal accounts or unofficial tools that lack company approval. There is a glimmer of optimism on the horizon, though: the share of generative AI use in the office running through personal accounts has fallen from 78% to 47% this year. While data volumes continue to grow, corporate controls are finally bringing AI out of the shadows.