Thailand's First AI Law Draft and How Organizations Should Prepare
  • 27 February

Thailand is entering a new era of technology governance — the Ministry of Digital Economy and Society (DE) together with the Electronic Transactions Development Agency (ETDA) has unveiled the country's first AI law draft, aiming to balance innovation promotion with public protection. This article summarizes the key points of Thailand's AI law draft, compares it with international AI regulations, and outlines how organizations should prepare.

Why Does Thailand Need an AI Law?

In 2026, AI rapidly penetrated every sector: reports show that over 90% of Thai consumers are aware of AI and more than 80% use it regularly. At the same time, risks from AI usage have increased significantly, including Deepfakes, biased decision-making, and personal data breaches.

Previously, Thailand had no specific legal framework for AI, relying on the Personal Data Protection Act (PDPA) and other laws not designed specifically for AI. The new draft law aims to:

  • Establish a clear regulatory framework — so organizations know what they need to do when using AI
  • Protect citizens' rights and freedoms — prevent AI from being used to violate rights
  • Promote innovation — avoid over-regulation that stifles development
  • Build user confidence — ensure Thai people can use AI safely

Key Points of Thailand's AI Law Draft

The draft law uses a Risk-based Approach similar to the EU AI Act, built on three pillars:

Pillar 1: Deregulation — Removing Legal Barriers

Unlocking legal restrictions that hinder AI adoption, such as copyright exemptions for Text and Data Mining (TDM), which is a critical process for training AI models.

Pillar 2: Promotion — Supporting Development

Supporting AI development through multiple measures:

  • Regulatory Sandbox — a controlled environment for testing AI innovations
  • Tax incentives — for organizations investing in AI development
  • AI Governance Center (AIGC) — a center for consultation and technical support

Pillar 3: Governance — Appropriate Oversight

Governance levels are divided by AI risk:

  • Prohibited: AI that clearly threatens society (e.g., Social Scoring, citizen ranking). Measure: absolutely prohibited from development and use.
  • High-risk: AI in healthcare, finance, justice, and employment. Measures: must have risk management systems, a legal representative in Thailand, and anomaly reporting.
  • General risk: general AI that does not qualify as high-risk (e.g., Chatbots, product recommendations). Measure: voluntary compliance with best practices.

Specifically for Generative AI

The draft law includes additional criminal penalties for using AI to create obscene content or false information that may impact society or electoral processes, with clear enforcement mechanisms for regulating AI-generated content.

Comparison with International AI Laws

Thailand is not the first country to legislate AI — let's see how other countries have approached this:

  • European Union (EU AI Act): strict risk-based regulation with heavy penalties. Status: in effect.
  • South Korea (AI Basic Act): Asia's first AI law, focusing on a policy framework and governance structure. Status: full enforcement on January 22, 2026.
  • Thailand (Draft AI Act): a balance between promotion and governance, using both Soft Law and Hard Law. Status: public consultation phase.

The distinguishing feature of Thailand's AI law draft is its combined Soft Law + Hard Law approach — not as strict as the EU AI Act, but not too lenient either, emphasizing promotion alongside governance.

How Should Organizations Prepare?

Although the draft law is not yet in effect, organizations that start preparing now will have an advantage — both in terms of compliance and credibility.

1. Conduct an AI Inventory

The first step is to know where AI is being used in your organization:

  • Customer service Chatbots
  • Analytics tools with AI/ML capabilities
  • Facial recognition systems
  • Generative AI tools used by employees (e.g., ChatGPT, Gemini)
  • AI embedded in other software (e.g., ERP, CRM systems)

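The inventory above can be kept as a simple structured register. The sketch below is illustrative only: the fields (`category`, `owner`, `processes_personal_data`) are our suggested starting points, not requirements taken from the draft law.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """One entry in an organization's AI inventory (illustrative fields)."""
    name: str
    category: str                  # e.g. "chatbot", "analytics", "generative-ai"
    owner: str                     # responsible department
    processes_personal_data: bool  # flags entries that also need PDPA review
    risk_level: str = "unassessed"

# Example inventory covering the categories listed above
inventory = [
    AISystem("Support Chatbot", "chatbot", "Customer Service", True),
    AISystem("Sales Forecasting", "analytics", "Finance", False),
    AISystem("ChatGPT (employee use)", "generative-ai", "All", True),
]

# A natural first view for a compliance review: systems touching personal data
needs_review = [s.name for s in inventory if s.processes_personal_data]
print(needs_review)  # → ['Support Chatbot', 'ChatGPT (employee use)']
```

Even a spreadsheet with these columns is a useful start; the point is that every AI system has exactly one entry with a named owner.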
2. Assess Risk Levels

Categorize AI usage according to the risk levels defined in the draft law:

  • High-risk AI? — e.g., AI for credit decisions, employee screening, medical diagnosis
  • General-risk AI? — e.g., general-purpose Chatbots, product recommendations, report analysis
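
A first-pass categorization can follow the draft's three tiers. Note that the domain lists below are placeholders based on the examples in this article; the actual criteria will be defined in the law and its subordinate regulations.

```python
# Placeholder domain lists drawn from the draft's examples, not the legal text.
PROHIBITED_USES = {"social_scoring", "citizen_ranking"}
HIGH_RISK_DOMAINS = {"healthcare", "finance", "justice", "employment"}

def classify_risk(domain: str, use: str) -> str:
    """Map an AI use case to the draft law's risk tier (first-pass triage only)."""
    if use in PROHIBITED_USES:
        return "prohibited"
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"
    return "general"

assert classify_risk("finance", "credit_decision") == "high-risk"
assert classify_risk("retail", "product_recommendation") == "general"
assert classify_risk("government", "social_scoring") == "prohibited"
```

Any system landing in the high-risk tier is where the bulk of compliance work (risk management, reporting, documentation) will concentrate.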

3. Develop AI Governance Policies

Create an AI Governance framework within your organization, including:

  • AI usage policy — define who can use AI, for what purposes, and within what boundaries
  • Review processes — verify that AI operates correctly, without bias or rights violations
  • Risk management — assess and manage risks from AI usage
  • Issue reporting channels — enable employees and users to report anomalies

4. Prepare Your Data Systems

AI requires quality data — and the law requires that data used to train AI must be auditable. Organizations should:

  • Organize data systematically — scattered data across multiple Excel files is hard to audit. ERP systems consolidate data into a single database
  • Maintain audit trails — record who accessed, modified, or used data and when (a core data security and auditability requirement)
  • Comply with PDPA — ensure data used with AI has proper consent and is stored correctly
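
The "maintain audit trails" point can be made concrete with append-only log entries. This is a minimal sketch of one possible record format, assuming JSON lines written to an append-only store; in practice an ERP system generates these records for you.

```python
import json
from datetime import datetime, timezone

def audit_record(user: str, action: str, dataset: str) -> str:
    """Build one audit-trail entry as a JSON line (illustrative format)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,    # e.g. "read", "modify", "export-for-training"
        "dataset": dataset,
    }
    return json.dumps(entry)

# Entries are only ever appended, never rewritten, so the trail stays auditable
line = audit_record("somchai", "export-for-training", "customer_master")
rec = json.loads(line)
assert rec["action"] == "export-for-training"
```

Recording *which data was exported to train which model* is exactly the kind of evidence an auditor would ask for under a traceability requirement.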

ERP Systems and AI Compliance

ERP systems are a critical foundation for AI compliance because they consolidate data from all departments into a single system, provide complete audit trails, and support role-based access control — which are fundamental requirements that the AI law mandates for organizations using high-risk AI.
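
Role-based access control, in its simplest form, is a mapping from roles to permissions. The roles and permission names below are invented for illustration; real ERP systems ship with configurable equivalents.

```python
# Hypothetical role/permission mapping for illustration only.
PERMISSIONS = {
    "data_scientist": {"read:training_data", "run:model"},
    "auditor": {"read:audit_trail", "read:training_data"},
    "sales": {"read:crm"},
}

def can(role: str, permission: str) -> bool:
    """Check whether a role holds a given permission."""
    return permission in PERMISSIONS.get(role, set())

assert can("auditor", "read:audit_trail")
assert not can("sales", "read:training_data")
```

The design point is that access decisions are centralized in one table rather than scattered through application code, which is what makes them reviewable.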

5. Train Your People

The law requires organizations to have "people who understand AI" — they don't need to be developers, but they must understand how AI works, its limitations, and its risks.

Expected Timeline

  • 2025–2026: public consultation, merging multiple draft bills into a single instrument
  • 2026–2027: legislative process and parliamentary consideration
  • 2027–2028: expected enactment, with a transition period for organizations to adapt

Organizations with well-organized data systems, complete audit trails, and AI Governance policies will adapt to the new law much faster than those still managing data in a scattered manner — therefore, implementing an ERP system today is the best preparation for the future.

Summary

Thailand's first AI law draft is an important step toward creating a framework for responsible AI use. Organizations that need to prepare the most are those using AI for decisions that affect people's rights — in finance, employment, and healthcare.

5 things to start doing today:

  1. Survey AI in your organization — conduct an AI Inventory
  2. Assess risk levels — categorize according to the draft law's criteria
  3. Develop AI Governance policies — establish internal rules and guidelines
  4. Prepare data systems — consolidate data with proper audit trails
  5. Train your people — build understanding of AI and its risks

If your organization is planning to organize data systems and workflows to be ready for the AI law, you can consult our advisory team for free.

Interested in an ERP system for your organization?

Consult with Grand Linux Solution experts for free

Request Free Demo

Call 02-347-7730 | sale@grandlinux.com

Saeree ERP Team

About the Author

Expert ERP team from Grand Linux Solution Co., Ltd., providing comprehensive ERP consulting and implementation services.