Saeree ERP - Complete ERP Solution for Thai Organizations

Article: Shadow AI — The Silent Threat in Organizations

What Is Shadow AI — When 90% of Employees Use AI Without IT Knowing
26 February

Now that AI has become a tool as accessible as opening a browser, employees across entire organizations have begun using it — whether it's ChatGPT, Google Gemini, Claude, or other AI tools — all without IT ever knowing, without approval, and without any policy in place. This phenomenon is called "Shadow AI", and it is creating invisible risks for organizations worldwide, including those in Thailand.

What Is Shadow AI?

Shadow AI refers to the use of AI tools by employees without approval, without review, and outside the oversight of IT or information security departments. The term "Shadow" comes from the same concept as Shadow IT, which refers to using software or IT services outside of officially sanctioned channels.

However, Shadow AI is far more dangerous than Shadow IT because modern AI tools can:

  • Ingest large volumes of data for processing — Users often copy entire screens or files into AI without thinking
  • Retain submitted data — Some AI platforms store data as training data
  • Produce convincing but potentially inaccurate results — Employees use these for decision-making without verification
  • Be accessed from personal devices — No installation needed, no VPN required, no permission necessary

Alarming Statistics: Shadow AI in Organizations Worldwide

Data from multiple cybersecurity reports reveals that Shadow AI is no longer a minor issue:

  • 90% of AI usage in organizations occurs without IT's knowledge
  • 65% of Shadow AI data leak incidents involve personally identifiable information (PII)
  • 40% of incidents involve intellectual property that was inadvertently exposed
  • Gartner predicts that by 2026, 40% of enterprise applications will have embedded AI Agents
  • Only 6% of organizations worldwide have an advanced AI security strategy

These numbers clearly show that the gap between AI usage and AI governance is enormous, and this gap is exactly where Shadow AI thrives.

Real-World Shadow AI Scenarios in Organizations

Consider these scenarios that could happen in your organization every day:

Scenario 1: HR Employee Uses ChatGPT to Draft a Warning Letter

An HR employee copies employee data — full name, position, salary, and details of problematic behavior — into ChatGPT to help draft a warning letter. All of this sensitive personal data is immediately sent to overseas servers.

Scenario 2: Sales Team Uses AI to Summarize Customer Data

A sales employee uploads an Excel file containing customer names, purchase volumes, special pricing conditions, and credit limits into an AI tool to help summarize and rank VIP customers. Confidential business data is exposed without anyone realizing.

Scenario 3: Development Team Uses an AI Coding Assistant

A programmer copies source code containing API keys, database connection strings, and proprietary business logic into an AI coding assistant for debugging or refactoring — effectively exposing the organization's intellectual property.
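One common mitigation for this scenario is a pre-flight secret scan that checks a snippet before it leaves the developer's machine. The sketch below is a minimal illustration, assuming a few regex patterns for API keys, passwords, and connection strings; a real secret scanner would use a far larger pattern set.

```python
import re

# Illustrative patterns only, not an exhaustive secret scanner.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[=:]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
    re.compile(r"(?i)(password|passwd|pwd)\s*[=:]\s*['\"][^'\"]+['\"]"),
    re.compile(r"(?i)postgres(ql)?://\S+:\S+@\S+"),  # connection string with credentials
]

def contains_secrets(snippet: str) -> list[str]:
    """Return the secret-like fragments found in the snippet."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(snippet))
    return hits

code = 'API_KEY = "sk_live_abcdef1234567890xyz"\nurl = "https://example.com"'
assert contains_secrets(code)        # the hard-coded key is flagged
assert contains_secrets("x = 1") == []
```

A check like this can run as a pre-commit hook or a clipboard guard, blocking the paste rather than relying on the developer to remember the policy.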

Scenario 4: Accounting Team Uses AI to Translate Contracts

An accounting employee pastes a trade contract with foreign partners — containing pricing information, special terms, and legal obligations — into an AI tool for translation. Business data covered by non-disclosure agreements (NDAs) could be leaked.

Real Case Study: Samsung banned all employees from using ChatGPT after discovering that engineers had copied confidential source code into ChatGPT three times in less than a month. Meanwhile, Amazon also warned employees about feeding confidential data into AI after internal data was found to have leaked through AI chatbots.

How Shadow AI Impacts PDPA and Personal Data Protection Laws

For organizations in Thailand, Shadow AI is not just a data security issue — it is also a legal issue directly under Thailand's Personal Data Protection Act (PDPA).

  • Lawful Basis for Processing: Sending personal data to foreign AI platforms may lack any legal basis for processing
  • Cross-Border Data Transfer: AI servers are located abroad, so sending personal data constitutes a cross-border data transfer
  • Security Measures: The organization has no control over data flowing into AI tools used by employees
  • Data Subject Rights: Data subjects are never informed of, or asked to consent to, their data being processed by AI

PDPA penalties can reach up to 5 million baht in administrative fines, with potential additional criminal penalties and civil liability for damages. All of this could result from a single employee sending customer data into ChatGPT without thinking.

Why Organizations Can't Just "Ban" AI — They Must "Manage" It

Many organizations respond to Shadow AI by banning all AI tools. However, experience from Samsung, Amazon, and other organizations shows that banning is not the solution because:

  • Employees use personal devices — Block it on company computers, and they'll just use their phones
  • AI is already embedded in everyday tools — Microsoft 365 Copilot, Google Workspace AI, and Notion AI all have built-in AI
  • Gartner predicts 40% of enterprise apps will have AI Agents by 2026 — Banning AI effectively means banning essential work software
  • Organizations that don't use AI will fall behind — AI genuinely boosts productivity; a total ban means abandoning business opportunities

The real solution is to build a Governance Framework that allows employees to use AI safely and effectively. Read more about creating AI policies in our article on How to Use AI Safely in Organizations — Essential AI Governance Policies.

3-Phase Governance Framework for Managing Shadow AI

The approach recommended by AI security experts is a 3-Phase Governance Framework designed so that organizations of all sizes can start immediately:

Phase 1: Foundation — Weeks 1-4

Objective: Give the organization visibility into where Shadow AI exists and establish initial rules.

  • Survey AI tools currently used by employees — Create an AI Tool Inventory identifying which departments use what tools and what data they're sending
  • Classify data (Data Classification) — Define which data levels must never be sent to AI, such as personal data, financial data, and intellectual property
  • Issue the first AI policy — It doesn't need to be perfect; just establish clear Do's & Don'ts
  • Communicate to all employees — Don't just send an email; conduct training sessions with Q&A
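A lightweight way to start the data classification step is a keyword-based rule set that maps field names to sensitivity levels. The level names and keywords below are illustrative assumptions, not a recommended scheme; a real policy would come from the organization's own classification standard.

```python
# Most restrictive level listed first; first match wins.
CLASSIFICATION_RULES = {
    "restricted": ["salary", "id_card", "credit_limit"],   # never send to AI
    "confidential": ["customer_name", "pricing"],          # approved tools only
    "internal": ["project_code"],                          # internal AI allowed
}

def classify_fields(fields):
    """Map each field name to the first (most restrictive) matching level."""
    result = {}
    for field in fields:
        level = "public"
        for rule_level, keywords in CLASSIFICATION_RULES.items():
            if any(k in field.lower() for k in keywords):
                level = rule_level
                break
        result[field] = level
    return result

print(classify_fields(["Salary", "customer_name", "notes"]))
# {'Salary': 'restricted', 'customer_name': 'confidential', 'notes': 'public'}
```

Even this crude mapping gives the AI policy something concrete to point at: "restricted" fields must never appear in a prompt, regardless of which tool is used.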

Phase 2: Operationalization — Months 2-3

Objective: Turn paper policies into working processes.

  • Create an AI Approved List — A list of AI tools that have been vetted and approved for use
  • Establish an approval process — Define the steps employees must follow when they want to use a new AI tool
  • Deploy monitoring tools — Detect unauthorized AI usage through network monitoring and DLP (Data Loss Prevention)
  • Conduct in-depth employee training — Ensure each department understands which data types must never be sent to AI
  • Define an Incident Response Plan — Outline how to respond if data leaks through AI
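For the monitoring step, the real tools are a DLP platform or a secure web gateway; as a conceptual sketch, flagging requests to known AI services in a proxy log might look like the following. The domain list and the "timestamp user domain" log format are assumptions for illustration.

```python
# Hypothetical set of AI endpoints an organization wants to surface.
AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def flag_ai_traffic(log_lines):
    """Return (user, domain) pairs for requests that hit an AI service."""
    flagged = []
    for line in log_lines:
        # assumed log format: "<timestamp> <user> <domain>"
        _, user, domain = line.split()
        if domain in AI_DOMAINS:
            flagged.append((user, domain))
    return flagged

logs = [
    "2026-02-26T09:00:01 alice chat.openai.com",
    "2026-02-26T09:00:05 bob intranet.example.local",
]
assert flag_ai_traffic(logs) == [("alice", "chat.openai.com")]
```

The goal of such visibility is not to punish users but to feed Phase 1's AI Tool Inventory: knowing which teams already depend on which tools tells you what to vet for the Approved List.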

Phase 3: Continuous Improvement — Month 4 Onward

Objective: Make AI Governance part of the organizational culture, not just a one-time project.

  • Review and update policies quarterly — AI technology changes rapidly; policies must keep pace
  • Measure policy compliance — Are Shadow AI incidents decreasing? Is employee compliance improving?
  • Expand the list of approved AI tools — Add more vetted tools for employees to choose from
  • Monitor new laws and standards — PDPA may issue additional AI-related regulations, and the EU AI Act may affect organizations trading with Europe
  • Exchange knowledge with other organizations — Join AI Governance communities

How ERP Systems Help Reduce Shadow AI Risk

One of the main reasons employees feed data into AI is that the organization's systems don't meet their needs. Data is scattered across dozens of Excel files, reports can't be generated on time, and analysis is too slow — so they turn to external AI. An ERP system with proper data management can reduce these risks across multiple dimensions:

  • Data Governance: Data is stored in one place with consistent standards, so employees don't need to pull data from multiple sources to combine in AI
  • Access Control: Data access is granted by role; sensitive data is only accessible to authorized personnel, reducing the chance of leaks through AI
  • Audit Trail: Every access, modification, or export is logged, so you can trace who accessed what data and when
  • Reports & Dashboards: The system generates reports, analyses, and summaries itself, reducing the need to send data to external AI for analysis
  • Data Export Control: Exports of certain data types can be restricted, preventing employees from downloading data to paste into AI

Read more about data security in ERP systems and choosing the right AI tools for your organization.

Checklist: Is Your Organization Ready to Handle Shadow AI?

Check how many of these your organization has already achieved:

  • A written AI usage policy that has been communicated to all employees
  • A list of approved AI tools (AI Approved List)
  • Data classification defining which data levels must not be sent to AI
  • Employee training on safe AI usage at least once a year
  • Monitoring tools to detect unauthorized AI usage
  • An approval process for when employees want to use new AI tools
  • An Incident Response Plan for data leaks through AI
  • An ERP or central information system that reduces the need for employees to feed data into external AI

If your organization can check fewer than 4 of these items, there are still gaps where Shadow AI can cause damage.
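The checklist can be turned into a quick self-assessment script. The item keys below are shortened from the list above, the sample answers are placeholders, and the threshold of 4 mirrors the text.

```python
# Placeholder answers; replace with your organization's actual status.
checklist = {
    "written_ai_policy": True,
    "approved_tool_list": False,
    "data_classification": True,
    "annual_training": False,
    "monitoring_tools": False,
    "approval_process": False,
    "incident_response_plan": False,
    "central_erp_system": True,
}

score = sum(checklist.values())
if score < 4:
    print(f"{score}/8 - gaps remain where Shadow AI can cause damage")
else:
    print(f"{score}/8 - baseline controls are in place")
```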

Shadow AI is not a technology problem — it is a management problem. When an organization lacks clear rules, employees will find ways to use AI on their own. And every time they send data into AI without controls, the risk falls on the entire organization.

- Saeree ERP Team

Summary: Manage Shadow AI Before It's Too Late

  1. Accept that Shadow AI exists in your organization — 90% of AI usage already happens without IT's knowledge; don't assume you're the exception
  2. Don't ban — build a governance framework — You can't prohibit it, but you can manage it
  3. Start with the Foundation phase — Survey, classify data, and issue your first policy
  4. Invest in an ERP system with Data Governance — Reduce the reasons employees need to rely on external AI
  5. Make AI Governance an ongoing effort — Not a one-time project, but part of organizational culture

If your organization is looking for a system to manage data systematically, reduce data leak risks, and provide a verifiable audit trail, you can schedule a demo or contact our consultants for an organizational readiness assessment.


Interested in ERP for your organization?

Consult with Grand Linux Solution experts — free of charge

Request a Free Demo

Tel. 02-347-7730 | sale@grandlinux.com


About the Author

ERP expert team from Grand Linux Solution Co., Ltd. — providing comprehensive ERP consulting and implementation services.