
AI Governance

How to Use AI Safely in Organizations — Essential AI Governance Policies
23 February

Today, many organizations have started using AI in daily work, from summarizing documents and writing emails to data analysis. But when AI is used without governance policies (AI Governance), organizations can face serious problems: confidential data leaks, PDPA violations, and unreliable outputs that directly damage the business.

Why Do Organizations Need AI Governance?

Many people think AI is just a productivity tool that can be used without any rules. But multiple real case studies have proven that using AI without governance policies can cause enormous damage.

Case 1: Samsung Employee Enters Source Code into ChatGPT

In 2023, multiple Samsung Electronics employees entered confidential company source code into ChatGPT for bug fixing and code optimization. The result was that this data was sent to OpenAI's servers and could potentially be used to train future models. Samsung had to immediately ban all external AI usage.

Lesson: Without a clear policy stating "what data cannot be entered into AI," employees will use their own judgment, which often leads to risk.

Case 2: Lawyer Cites Non-Existent Cases

A lawyer in the United States used ChatGPT to search for case references to submit to court. It turned out the AI "fabricated cases entirely" (Hallucination), citing case names, case numbers, and rulings that did not exist at all. The lawyer was sanctioned by the court for filing false documents.

Lesson: AI can generate information that looks credible but is false. Without Human-in-the-Loop verification before use, severe damage can occur.

Case 3: AI Bias in Recruitment (Amazon)

Amazon once developed an AI system for resume screening but had to abandon it after discovering the system had Gender Bias, systematically scoring female applicants lower than male ones. The cause was that the training data came from historical hiring records, which were predominantly male.

Lesson: AI is not automatically neutral. If the training data contains bias, the results will be biased accordingly. A Fairness Audit must always be conducted.

Case 4: PDPA Compliance Risks

When employees enter customers' personal data (names, phone numbers, emails, financial information) into external AI, it constitutes "transferring personal data to a third party" under the Personal Data Protection Act (PDPA), which may be illegal and carries fines of up to 5 million baht.

Summary: Why AI Governance Is Necessary

  • Prevent data leaks — clearly define what data cannot be entered into AI
  • Reduce legal risk — comply with PDPA and related regulations
  • Ensure accuracy — require human verification before using AI output
  • Prevent bias — audit fairness of AI-generated results
  • Build trust — show clients and partners that the organization uses AI responsibly

Components of an AI Governance Policy

A good AI Governance policy must cover 6 key components as follows:

1. Data Prohibited from AI Input

Define a strict list of data that must never be entered into external AI to prevent data leaks and legal violations:

  • Personal Data: full name, national ID number, customer email. Reason prohibited: violates PDPA.
  • Financial Data: financial statements, bank account numbers, cost prices. Reason prohibited: trade secrets.
  • Source Code: internal system code, API Keys, Credentials. Reason prohibited: intellectual property and security risk.
  • Customer Data: customer lists, purchase history, contracts. Reason prohibited: violates PDPA and trade secrets.
  • Trade Secrets: business plans, marketing strategies, production formulas. Reason prohibited: loss of competitive advantage.
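Part of a "prohibited data" policy can be enforced in software before a prompt ever leaves the organization. Below is a minimal sketch of such a pre-submission filter; the regex patterns and the `check_prompt` function are illustrative assumptions, not a vetted DLP rule set, and a real deployment would need far more thorough detection.

```python
import re

# Illustrative patterns only -- a production filter needs a vetted
# DLP (Data Loss Prevention) rule set, not three regexes.
BLOCKED_PATTERNS = {
    "thai_national_id": re.compile(r"\b\d{13}\b"),  # 13-digit national ID
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}\b", re.I),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of blocked data types found in `text`."""
    return [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(text)]

# Usage: refuse to send the prompt to an external AI if anything matches.
hits = check_prompt("Please debug this: sk-abcdef1234567890XYZ, contact a@b.co")
```

A filter like this catches careless mistakes, but it cannot recognize trade secrets or business plans; those still depend on training and human judgment.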

2. Types of AI Permitted

Organizations should classify AI into 3 tiers with different usage policies for each:

  • Public AI (e.g., ChatGPT, Gemini, Claude free versions): do not enter confidential or personal data.
  • Enterprise AI (e.g., ChatGPT Enterprise, Azure OpenAI, Google Vertex AI): permitted only under a DPA (Data Processing Agreement).
  • Self-hosted AI (e.g., Llama, Mistral, models installed on the organization's own servers): most secure, since data never leaves the organization.

3. Approval Process

Clearly define who has authority to approve new AI tools in the organization, with these steps:

  • The requester must submit a request specifying: AI Tool name, purpose, and types of data to be used
  • IT Security assesses risks: Where does data go? Is there a DPA? Is it PDPA-compliant?
  • Management/DPO approves or rejects, with usage conditions
  • Review every 6 months to check whether approved AI tools remain appropriate
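The approval steps above amount to a small state machine, which is worth making explicit so that no tool can skip the security review. The sketch below assumes simple illustrative state names; they are not from any specific framework.

```python
from enum import Enum

# Minimal sketch of the approval workflow as a state machine.
# State names are illustrative.
class State(Enum):
    SUBMITTED = "submitted"          # requester files tool name, purpose, data types
    RISK_ASSESSED = "risk_assessed"  # IT Security review completed
    APPROVED = "approved"            # Management/DPO sign-off
    REJECTED = "rejected"

ALLOWED = {
    State.SUBMITTED: {State.RISK_ASSESSED, State.REJECTED},
    State.RISK_ASSESSED: {State.APPROVED, State.REJECTED},
    State.APPROVED: set(),   # re-enters review at the 6-month check
    State.REJECTED: set(),
}

def advance(current: State, target: State) -> State:
    """Move to `target` only if the workflow permits it."""
    if target not in ALLOWED[current]:
        raise ValueError(f"cannot move from {current.value} to {target.value}")
    return target
```

Encoding the transitions this way makes it impossible for a request to jump straight from "submitted" to "approved" without the IT Security assessment in between.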

4. Human-in-the-Loop

The most important principle of AI Governance is that "AI assists decisions, but humans must verify before use," especially for high-impact tasks:

  • AI drafts documents → Humans must review before sending
  • AI analyzes data → Humans must verify the accuracy of results
  • AI screens applications → Humans make the final decision
  • AI generates reports → Humans must confirm figures and references

Principle: The higher the impact of AI output (finance, legal, hiring), the more rigorous the Human-in-the-Loop process must be.
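This principle can also be enforced in code: high-impact output is simply blocked until a human signs off. Below is a minimal sketch assuming a simple two-tier impact model; the class and set names are illustrative.

```python
from dataclasses import dataclass

# Task categories that require human review before release
# (illustrative list, per the principle above).
HIGH_IMPACT = {"finance", "legal", "hiring"}

@dataclass
class AiOutput:
    task: str                    # e.g. "hiring", "summary"
    content: str
    human_approved: bool = False

def release(output: AiOutput) -> str:
    """Only release high-impact output after explicit human approval."""
    if output.task in HIGH_IMPACT and not output.human_approved:
        raise PermissionError(f"'{output.task}' output requires human review")
    return output.content

draft = AiOutput(task="hiring", content="Shortlist: ...")
# release(draft) would raise PermissionError here.
draft.human_approved = True   # reviewer signs off
release(draft)                # now permitted
```

The gate is deliberately one-directional: low-impact tasks flow through, while finance, legal, and hiring output cannot reach users without a recorded human decision.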

5. Audit Trail

Organizations must record who used which AI, when, and for what purpose to enable retrospective auditing. Data that should be logged includes:

  • User name and AI Tool used
  • Date and time of use
  • Type of task AI assisted with (e.g., document summarization, coding, data analysis)
  • What the output was used for
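The four fields above map directly onto a simple append-only log. The sketch below assumes JSON Lines storage and an illustrative file name (`ai_usage.log`); a real system would write to a tamper-evident store.

```python
import datetime
import json

def log_ai_usage(user: str, tool: str, task: str, output_use: str) -> str:
    """Append one AI-usage record as a JSON line and return it."""
    entry = {
        "user": user,
        "tool": tool,
        "task": task,                 # e.g. "document summarization"
        "output_used_for": output_use,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    line = json.dumps(entry, ensure_ascii=False)
    with open("ai_usage.log", "a", encoding="utf-8") as f:
        f.write(line + "\n")
    return line

log_ai_usage("somsak", "ChatGPT Enterprise",
             "document summarization", "internal meeting notes")
```

JSON Lines keeps each record self-describing, so retrospective audits ("who used which AI, when, and for what") become a one-line query over the log file.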

6. Training & Awareness

All employees must receive training on the organization's AI policy before use, covering:

  • What data is prohibited from AI input and why
  • Which AI tools are permitted and which are not
  • How to verify accuracy of AI outputs
  • Risks of AI Hallucination
  • Reporting procedures when inappropriate AI use is discovered

PDPA and AI — Risk Assessment Matrix

When organizations use AI with personal data, risks must be carefully assessed under the Personal Data Protection Act (PDPA):

  • Entering customer names into ChatGPT (high risk): prohibited; it constitutes transferring personal data to a third party.
  • Using AI to analyze aggregate data (low risk): permitted if the data is anonymized and individuals cannot be identified.
  • Using AI for credit decisions (high risk): must notify the data subject and obtain consent first.
  • Using AI to summarize internal reports with no personal data (no risk): permitted, but do not enter trade secrets into public AI.
  • Using AI to screen job applications (high risk): must notify applicants, keep a Human-in-the-Loop, and audit for bias.
  • Using Enterprise AI with a DPA to process customer data (medium risk): permitted with a robust DPA and a Privacy Impact Assessment.

Warning: PDPA violations carry administrative fines of up to 5 million baht, and data subjects can also file civil claims for damages, with punitive damages of up to twice the actual damages. Careless AI use can unknowingly make your organization a defendant.
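The "anonymize before analysis" recommendation in the matrix above can be approximated in code by replacing direct identifiers with stable tokens. The sketch below uses salted hashing; note that this yields pseudonymous rather than fully anonymous data under PDPA, so treat it as risk reduction, not a legal guarantee. The salt value and `cust_` prefix are illustrative assumptions.

```python
import hashlib

# Illustrative secret -- in practice, keep the salt outside the AI pipeline.
SALT = b"keep-this-secret-outside-the-pipeline"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, irreversible token."""
    digest = hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()
    return f"cust_{digest[:12]}"

rows = [{"customer": "Somchai J.", "total": 1200},
        {"customer": "Somchai J.", "total": 450}]
safe_rows = [{**r, "customer": pseudonymize(r["customer"])} for r in rows]
# The same customer maps to the same token, so per-customer
# aggregation still works on the tokenized data.
```

Because the mapping is deterministic, analyses such as "total spend per customer" remain possible while the actual names never reach the AI service.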

AI Ethics Framework for Thai Organizations

Beyond operational policies, organizations should have an AI Ethics Framework as high-level principles for decision-making in situations not covered by policy:

  • Transparency: be able to explain what the AI does and how it decides. In practice: notify customers when AI processes their data; maintain AI Decision Logs.
  • Fairness: the AI does not discriminate by gender, race, or age. In practice: run Bias Testing before deployment and review results periodically.
  • Accountability: clear responsibility when AI makes errors. In practice: assign an AI Owner for every use case and establish an Escalation Path.
  • Privacy: protect personal data in compliance with PDPA. In practice: conduct a Privacy Impact Assessment (PIA) before using AI with personal data.
  • Security: protect AI systems from attack or manipulation. In practice: test for Adversarial Attacks and check for Prompt Injection.

AI Policy Template for Organizations

For organizations wanting to start building an AI Governance policy, here is a checklist of topics to include:

1. Scope and Objectives — who this policy covers and which AI types it applies to
2. Definitions — what AI, Machine Learning, Generative AI mean in the organizational context
3. Permitted AI Tools — list of evaluated AI tools, categorized by tier (Public/Enterprise/Self-hosted)
4. Data Prohibited from AI — strictly prohibited data with clear examples
5. Approval Process — steps for requesting new AI tool approval and authorized approvers
6. Human-in-the-Loop — define which tasks require human verification before using AI output
7. Audit Trail — methods for logging AI usage and log retention periods
8. PDPA Compliance — measures for complying with the Personal Data Protection Act
9. AI Ethics — organizational AI ethics principles (Transparency, Fairness, Accountability)
10. Training & Awareness — employee training plan, frequency, and required content
11. Incident Response — procedures for data breach incidents or AI malfunctions
12. Review and Updates — review the policy at least annually or when new technology emerges

Saeree ERP and AI Governance

Currently, Saeree ERP does not yet have AI features, but they are in the near-term development roadmap. However, Saeree ERP already has foundational systems that are ready to support AI Governance today:

  • Audit Trail (record who did what and when): a full Audit Trail system automatically logs every transaction and is fully traceable.
  • Access Control (restrict data access permissions): Role-Based Access Control (RBAC) with permissions down to the menu and button level.
  • Data Protection (protect personal data): sensitive data is encrypted, every connection uses HTTPS, and 2FA is supported.
  • Approval Workflow (approval processes): a multi-level Approval Workflow supporting hierarchical approvals.
  • Compliance (legal compliance): designed in line with PDPA and the OWASP Top 10 security standards.

Note: When Saeree ERP develops AI features in the future, the existing foundational systems for Audit Trail, Access Control, and Approval Workflow will ensure AI adoption is safe, transparent, and auditable in accordance with AI Governance principles.

Using AI in organizations is not just about technology — it's about "people, processes, and policies" that must move together. Organizations with good AI Governance can leverage AI at full capacity without risking data leaks, legal violations, or unreliable outputs.

- Saeree ERP Development Team

If your organization needs an ERP system with Audit Trail and Access Control ready to support AI Governance, you can schedule a Demo or contact our consulting team for further discussion.

Interested in ERP for your organization?

Consult with our expert team at Grand Linux Solution — free of charge

Request Free Demo

Call 02-347-7730 | sale@grandlinux.com


About the Author

Paitoon Butri

Network & Server Security Specialist, Grand Linux Solution Co., Ltd.