As AI tools become as easy to access as opening a browser, employees across organizations have started using them — ChatGPT, Google Gemini, Claude, and others — without IT ever knowing, approving, or having any policy in place. This phenomenon is called "Shadow AI", and it is creating invisible risks for organizations around the world, including those in Thailand.
What Is Shadow AI?
Shadow AI refers to employees within an organization using AI tools for work without authorization, oversight, or supervision from IT or information security teams. The word "Shadow" comes from the same concept as Shadow IT — the use of software or IT services outside officially approved channels.
But Shadow AI is far more dangerous than Shadow IT, because modern AI tools can:
- Process large volumes of data at once — users often paste entire screens or files into AI without a second thought
- Retain data fed into them — some AI platforms store inputs as training data
- Generate convincing but potentially inaccurate outputs — employees act on them without verification
- Be accessed from personal devices — no installation, no VPN, no approval needed
Alarming Statistics: Shadow AI in Organizations Worldwide
Reports from multiple cybersecurity firms paint a clear picture — Shadow AI is no longer a minor problem:
| Statistic | Detail |
|---|---|
| 90% | of AI usage inside organizations occurs without IT knowledge |
| 65% | of Shadow AI data incidents involve personally identifiable information (PII) |
| 40% | of Shadow AI data incidents involve exposure of intellectual property |
| 40% | of enterprise applications will have embedded AI Agents by 2026, according to Gartner |
| Only 6% | of organizations worldwide have an Advanced-level AI security strategy |
These numbers make it clear: the gap between AI adoption and AI governance is enormous — and that gap is exactly where Shadow AI thrives.
Real-World Shadow AI Scenarios in Your Organization
Consider these situations that could be happening in your organization every day:
Scenario 1: An HR Employee Uses ChatGPT to Draft a Warning Letter
An HR employee copies employee data — full name, position, salary, behavioral details — and pastes it into ChatGPT to draft a formal warning letter. All of that sensitive personal information is instantly sent to overseas servers.
Scenario 2: A Sales Rep Uses AI to Summarize Customer Data
A salesperson takes an Excel file with customer lists, purchase volumes, special pricing terms, and credit limits and feeds it into AI to identify VIP customers. Confidential business data is exposed without anyone realizing it.
Scenario 3: A Development Team Uses an AI Coding Assistant
A programmer pastes source code containing API keys, database connection strings, and proprietary business logic into an AI coding assistant for debugging or refactoring — exposing the organization's intellectual property.
Scenario 4: An Accountant Uses AI to Translate Contract Documents
An accountant takes a commercial agreement with an overseas partner — containing pricing terms, special conditions, and legal obligations — and pastes it into AI for translation. Data covered by an NDA may be leaked.
Real-World Case: Samsung banned employee use of generative AI tools such as ChatGPT on company devices after discovering that engineers had pasted confidential source code into ChatGPT three times within a single month. Amazon similarly warned employees against sharing sensitive data with AI chatbots after internal information was found to have leaked through them.
How Shadow AI Impacts PDPA and Personal Data Protection Laws
For organizations in Thailand, Shadow AI is not just a data security issue — it is a direct legal risk under the Personal Data Protection Act (PDPA):
| PDPA Principle Violated | Shadow AI Behavior That Triggers It |
|---|---|
| Legal Basis for Processing | Sending personal data to overseas AI platforms may lack a lawful basis under PDPA |
| Cross-Border Data Transfer | AI servers are overseas — sending personal data to them constitutes a cross-border data transfer |
| Security Measures | Organizations have no controls over data flow to AI tools employees use independently |
| Data Subject Rights | Data subjects never knew or consented to their data being processed by AI tools |
PDPA penalties can reach 5 million baht in administrative fines, with criminal charges and civil damages also possible — and all of it can be triggered by a single employee thoughtlessly pasting customer data into ChatGPT.
Why Organizations Can't Simply "Ban" AI — But Must Govern It
Many organizations respond to Shadow AI by banning all AI tools entirely. But experience from Samsung, Amazon, and others shows that banning doesn't work, because:
- Employees use personal devices — ban it on company computers, and they'll just use their phones
- AI is already embedded in tools they use daily — Microsoft 365 Copilot, Google Workspace AI, and Notion AI come built into the productivity suites employees already rely on
- Gartner predicts 40% of enterprise apps will have AI Agents by 2026 — banning AI means banning the software needed to do their jobs
- Organizations that avoid AI fall behind — AI genuinely boosts productivity; a blanket ban means throwing away competitive advantage
The real solution is to build a Governance Framework that allows employees to use AI safely and effectively. Read more about building AI policies in How to Use AI Safely in Organizations — Essential AI Governance Policies.
3-Phase Governance Framework for Managing Shadow AI
AI security experts recommend a 3-Phase Governance Framework designed to be actionable for organizations of any size:
Phase 1: Foundation (Weeks 1–4)
Goal: Give the organization visibility into where Shadow AI exists and establish initial ground rules.
- Survey AI tools employees are using — conduct an AI Tool Inventory to find out which departments use what, and what data they send in
- Classify data (Data Classification) — define which data levels must never be sent to AI: personal data, financial data, intellectual property (see the classification sketch after this list)
- Issue a first-version AI Policy — it doesn't need to be perfect; just establish clear Do's and Don'ts
- Communicate to all employees — not just an email; run a training session with a Q&A
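A data classification rule only prevents leaks if tooling can apply it before data leaves the organization. The short Python sketch below shows one way a first-pass classification check could be encoded; the level names (RESTRICTED, CONFIDENTIAL, INTERNAL) and the regex patterns for Thai national IDs, credentials, emails, and phone numbers are illustrative assumptions, not a complete PDPA-grade detector. In practice this kind of check would sit inside a DLP gateway or browser extension rather than a standalone script.

```python
import re

# Illustrative classification levels and detection patterns; real policies
# need far more care (checksums, context, file types) than these regexes.
PATTERNS = {
    "RESTRICTED": [
        re.compile(r"\b\d-\d{4}-\d{5}-\d{2}-\d\b|\b\d{13}\b"),           # Thai national ID formats
        re.compile(r"\b(?:api[_-]?key|secret|password)\s*[:=]", re.IGNORECASE),  # credentials
    ],
    "CONFIDENTIAL": [
        re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),                          # email addresses
        re.compile(r"\b0\d{1,2}[- ]?\d{3}[- ]?\d{4}\b"),                 # Thai phone numbers
    ],
}

def classify(text: str) -> str:
    """Return the most sensitive level whose patterns appear in the text."""
    for level in ("RESTRICTED", "CONFIDENTIAL"):
        if any(p.search(text) for p in PATTERNS[level]):
            return level
    return "INTERNAL"

if __name__ == "__main__":
    sample = "Draft a warning letter for employee 1234567890123, salary 45,000 THB."
    print(classify(sample))  # RESTRICTED: must never be pasted into an external AI tool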
Phase 2: Operationalization (Months 2–3)
Goal: Turn the paper policy into a working operational process.
- Create an AI Approved List — a catalog of AI tools that have been reviewed and authorized for use
- Establish an approval process — define what steps employees must follow when they want to use a new AI tool
- Deploy monitoring tools — detect unauthorized AI usage via network monitoring and DLP (Data Loss Prevention); a minimal detection sketch follows this list
- Run in-depth training by department — ensure each team understands which data types must never go into AI
- Define an Incident Response Plan — establish what happens when a data leak occurs through AI
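To make the monitoring step concrete, here is a minimal sketch of how proxy or firewall logs could be screened for traffic to AI services that are not on the Approved List. The domain lists, the CSV log format, and the column names (timestamp, user, host) are assumptions to adapt to whatever your proxy, firewall, or CASB actually exports.

```python
import csv

# Hypothetical domain lists; maintain them from your AI Approved List and
# from vendor threat-intel feeds. The entries below are examples only.
KNOWN_AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}
APPROVED_AI_DOMAINS = {"copilot.microsoft.com"}   # tools already vetted by IT

def flag_shadow_ai(proxy_log_path: str):
    """Yield (timestamp, user, domain) for AI traffic outside the approved list.

    Assumes a CSV proxy log with 'timestamp', 'user' and 'host' columns;
    adjust the field names to match your own log export.
    """
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if host in KNOWN_AI_DOMAINS and host not in APPROVED_AI_DOMAINS:
                yield row["timestamp"], row["user"], host

if __name__ == "__main__":
    for ts, user, domain in flag_shadow_ai("proxy_log.csv"):
        print(f"{ts}  {user}  unapproved AI tool: {domain}")
```

Flagged events like these feed directly into the Incident Response Plan and, later, into the compliance metrics tracked in Phase 3.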
Phase 3: Continuous Improvement (Month 4 Onwards)
Goal: Make AI Governance part of the organizational culture — not a one-time project.
- Review and update the policy quarterly — AI technology moves fast; policy must keep pace
- Measure compliance — are Shadow AI incidents declining? Are employees following the policy more consistently? (see the metrics sketch after this list)
- Expand the approved AI tool list — add more vetted tools to give employees more safe options
- Track new laws and standards — PDPA may issue further AI-specific guidance, and the EU AI Act may affect organizations trading with Europe
- Join AI Governance communities — share learnings with peer organizations
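One simple way to check whether incidents are declining is to tally the events flagged by the Phase 2 monitoring step per month. The sketch below assumes each detection carries an ISO timestamp; the field names and sample events are illustrative.

```python
from collections import Counter
from datetime import datetime

# Hypothetical input: events produced by the Phase 2 monitoring step,
# each recording when unapproved AI usage was detected.
events = [
    {"detected_at": "2025-01-14T09:20:00", "user": "somchai"},
    {"detected_at": "2025-02-03T13:05:00", "user": "arisa"},
    {"detected_at": "2025-02-21T16:40:00", "user": "somchai"},
]

# Count detections per calendar month.
monthly = Counter(
    datetime.fromisoformat(e["detected_at"]).strftime("%Y-%m") for e in events
)

# A falling month-over-month count is one signal the policy is taking hold;
# pair it with training attendance and approved-tool adoption for a fuller picture.
for month in sorted(monthly):
    print(month, monthly[month])
```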
How ERP Systems Help Reduce Shadow AI Risk
One reason employees feed data into external AI is that the organization's own systems fall short — data is scattered across dozens of Excel files, reports can't be pulled easily, and analysis takes too long. A well-implemented ERP system with proper data management can reduce these risks significantly:
| ERP Capability | How It Reduces Shadow AI Risk |
|---|---|
| Data Governance | All data is stored in one place with consistent standards — employees don't need to pull data from multiple sources and combine them in AI |
| Access Control | Role-based access ensures sensitive data is only accessible to authorized staff, reducing the chance that sensitive data leaks through AI |
| Audit Trail | Every access, edit, or data export is logged — fully traceable to see who accessed what data and when |
| Reports & Dashboards | The system generates reports, analytics, and summaries automatically — reducing the need for employees to feed data into external AI for analysis |
| Data Export Control | Certain data types can be restricted from export, preventing employees from downloading sensitive data to paste into AI |
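To illustrate the Access Control and Audit Trail rows above, the sketch below shows the general pattern: check a role-based permission, then write an append-only audit record for every decision. The role names, permission strings, and log format are hypothetical; a real ERP handles this inside its authorization and logging modules rather than in application code like this.

```python
import json
from datetime import datetime, timezone

# Illustrative role-to-permission map; a real ERP derives this from its
# authorization module rather than a hard-coded dictionary.
ROLE_PERMISSIONS = {
    "sales_rep":  {"customer.read"},
    "hr_manager": {"employee.read", "employee.export"},
}

def access_data(user: str, role: str, permission: str, record_id: str) -> bool:
    """Allow or deny an action and append the decision to an audit trail."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    entry = {
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "permission": permission,
        "record": record_id,
        "allowed": allowed,
    }
    with open("audit_trail.jsonl", "a") as log:   # append-only audit log
        log.write(json.dumps(entry) + "\n")
    return allowed

if __name__ == "__main__":
    # A sales rep can read a customer record but cannot export employee data.
    print(access_data("arisa", "sales_rep", "customer.read", "CUST-0042"))   # True
    print(access_data("arisa", "sales_rep", "employee.export", "EMP-0007"))  # False
```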
Read more about data security in ERP systems and choosing the right AI tools for your organization.
Checklist: Is Your Organization Ready to Handle Shadow AI?
Check how many of these your organization has already done:
- A written AI usage policy that has been communicated to all employees
- An AI Approved List — a catalog of authorized AI tools
- Data Classification that defines which data levels must never be sent to AI
- At least annual employee training on safe AI usage
- Monitoring tools to detect unauthorized AI usage
- A formal approval process when employees want to use a new AI tool
- An Incident Response Plan for data leaks through AI
- An ERP or central information system that reduces employees' need to rely on external AI
If your organization checks fewer than 4 of these boxes, there are gaps where Shadow AI can cause serious damage.
Shadow AI is not a technology problem — it's a management problem. When organizations lack clear rules, employees find their own way to use AI. And every time they send data into AI without oversight, the risk falls entirely on the organization.
— Saeree ERP Team
Conclusion: Address Shadow AI Before It's Too Late
- Accept that Shadow AI exists in your organization — with 90% of AI usage already happening outside IT's view, don't assume you're the exception
- Don't ban — govern — you can't stop it, but you can manage it
- Start with Foundation — survey, classify data, issue a first-version policy
- Invest in an ERP system with Data Governance — reduce the reasons employees need to rely on external AI
- Make AI Governance ongoing — not a one-time project, but an organizational culture
If your organization is looking for a system that manages data systematically, reduces data leakage risks, and provides a traceable Audit Trail, book a Demo or contact our advisory team for an organizational readiness assessment.
