AI Agent Limitations & Risks — EP.4

Limitations, risks, and how to control AI Agents
7 April

AI Agents are powerful — but they're not a "magic pill." This final episode covers what many don't want to hear: 5 key limitations, 5 critical risks, and most importantly, 7 guardrails every organization must have before deploying AI Agents in production. We also explore the role of ERP systems as the best "guardian" for AI Agents.

Series: AI Agent as a Team Member — Does It Really Work?

5 Key Limitations of AI Agents

Before deciding to let AI Agents handle real work, you must clearly understand these limitations:

1. Hallucination — Generating Convincing but False Information

AI can "fabricate data" convincingly — citing non-existent research papers, generating plausible but false statistics, or answering confidently when it doesn't know the answer. This is extremely dangerous in accounting, legal, and business decision-making contexts.

2. Limited Context Window — Can Only Remember So Much

AI Agents have limited "short-term memory." They can't remember everything from previous conversations. When conversations get too long, AI starts "forgetting" early context, leading to inconsistent responses.

3. No Real Common Sense — Pattern Matching, Not Understanding

AI works by matching patterns from training data — it doesn't truly "understand" the way humans do. In novel situations or those requiring common sense, AI can give shockingly wrong answers.

4. Prompt-Dependent — Garbage In = Garbage Out

AI Agent output quality depends entirely on the instructions (prompts) given. If prompts are unclear, incomplete, or misleading, the output will be wrong accordingly.

5. Accumulating Costs — API Costs Add Up Fast

Using AI Agents via APIs incurs per-token costs that accumulate rapidly, especially when processing large datasets or running multiple iterations. Proper budget planning is essential.
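A back-of-the-envelope calculation makes the accumulation concrete. The sketch below is illustrative only: the call volume, tokens per call, and unit price are assumptions, not real API rates.

```python
# Hypothetical sketch: rough monthly API spend for one AI Agent.
# All numbers below are illustrative assumptions, not real vendor pricing.

def monthly_cost_usd(calls_per_day: int, tokens_per_call: int,
                     price_per_1k_tokens: float, days: int = 30) -> float:
    """Rough monthly cost: calls * tokens per call * unit price."""
    total_tokens = calls_per_day * tokens_per_call * days
    return total_tokens / 1000 * price_per_1k_tokens

# e.g. 500 calls/day at 2,000 tokens each, $0.01 per 1K tokens
cost = monthly_cost_usd(500, 2000, 0.01)
```

Even modest per-call prices compound quickly at scale, which is why a hard budget cap belongs in the guardrails below.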

| Limitation | Impact | Mitigation |
|---|---|---|
| Hallucination | False data used for decisions | Verify all outputs, use RAG + fact-checking |
| Limited Context Window | Forgets context, inconsistent answers | Break tasks into small pieces, summarize context each round |
| No Common Sense | Wrong answers in novel situations | Define clear scope, require human review |
| Prompt-Dependent | Output doesn't match needs | Create standard prompt templates, do A/B testing |
| Accumulating Costs | Budget overruns without awareness | Set budget caps, monitor usage, choose models wisely |

5 Critical Risks to Manage

1. Data Leakage — Unknowingly Sending Confidential Data to AI

When employees send customer data, financial information, or strategic data to AI, that data may be stored on the AI provider's cloud servers or used for model training. Many organizations still overlook this risk.

2. Legal Liability — Who's Responsible When AI Makes Mistakes?

If an AI Agent produces a report with incorrect data and an executive uses it for a board presentation without verification, who bears responsibility? Current legal frameworks are still unclear, but organizations using AI must be accountable for outcomes.

3. PDPA / Data Privacy — Personal Data Sent to Cloud

Sending employee data, customer information, or other personal data to AI for processing may violate data protection regulations (PDPA). Clear policies on what data can and cannot be shared with AI are essential.

4. Dependency Risk — Over-Reliance on AI

As AI Agents handle more work, human skills decline. If AI goes down one day, or the provider changes pricing/terms, how will the organization adapt?

5. Bias and Discrimination — AI Learns from Biased Data

AI learns from human-created data that may contain embedded biases. For example, AI resume screening may favor one gender over another, or AI credit analysis may score certain areas lower.

| Risk | Level | Example Scenario |
|---|---|---|
| Data Leakage | High | Employee sends customer database to ChatGPT for analysis without knowing the data is stored on foreign servers |
| Legal Liability | High | AI generates a report with wrong numbers; executive presents it to the board without verification |
| PDPA / Data Privacy | High | HR sends the entire salary database to AI for analysis without anonymization |
| Dependency Risk | Medium | AI Agent handling bank reconciliation goes down for a full day; accounting team can't work at all |
| Bias / Discrimination | Medium | AI resume screener consistently scores candidates from prestigious universities higher |

How to Control: 7 Essential Guardrails

Guardrails are the "safety rails" that prevent AI Agents from operating outside defined boundaries. Every organization using AI Agents needs at minimum these 7:

1. Human-in-the-Loop — Define Approval Points

Not every AI output should proceed automatically. Clearly define which tasks require human review — such as sending emails to clients, approving POs, or posting journal entries above a threshold.
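One simple way to implement this is a routing function that decides whether an action may proceed automatically or must wait for a person. This is a minimal sketch: the action names and the $1,500 threshold mirror the examples in this article, not a real workflow engine.

```python
# Minimal human-in-the-loop gate (illustrative sketch).
# The threshold and action names are assumptions drawn from the
# examples in this article, not a real ERP workflow configuration.

APPROVAL_THRESHOLD = 1500.0  # e.g. POs above this always need a human

def route_action(action: str, amount: float) -> str:
    """Return 'auto' for low-risk actions, 'human_review' otherwise."""
    always_review = {"send_client_email", "post_journal_entry"}
    if action in always_review or amount > APPROVAL_THRESHOLD:
        return "human_review"
    return "auto"
```

In practice this gate would sit between the agent's proposed action and the system that executes it, so nothing above the threshold ever runs unreviewed.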

2. Data Classification — Define What AI Can/Cannot Access

Classify data into 3 levels: Freely shareable (public data), Shareable but must anonymize (internal data), Never share (confidential data, personal information).
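A classification policy like this can be enforced in code before anything leaves the organization. The sketch below assumes a simple field-to-level mapping; the field names and labels are illustrative.

```python
# Sketch of a three-level data classification check.
# Field names and the mapping are illustrative assumptions.

CLASSIFICATION = {
    "press_release": "public",        # freely shareable
    "sales_summary": "internal",      # shareable after anonymization
    "salary_record": "confidential",  # never send to AI
}

def sharing_rule(field: str) -> str:
    """Return what must happen before this field can reach an AI service."""
    level = CLASSIFICATION.get(field, "confidential")  # unknown = default-deny
    return {"public": "allow",
            "internal": "anonymize_first",
            "confidential": "block"}[level]
```

The default-deny choice matters: any field missing from the policy is treated as confidential until someone classifies it.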

3. Output Validation — Verify AI Work Before Use

Every AI output must be verified before real-world use, especially numbers, references, and recommendations.
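For numeric outputs, part of this verification can be automated. A minimal illustrative check, assuming the report exposes line items and a stated total:

```python
# Illustrative output-validation check: confirm that line items in an
# AI-generated report actually sum to the stated total before the
# report is used. Field layout is an assumption.

def totals_consistent(line_items: list[float], reported_total: float,
                      tolerance: float = 0.01) -> bool:
    """True if the line items sum to the reported total within tolerance."""
    return abs(sum(line_items) - reported_total) <= tolerance
```

Checks like this catch arithmetic hallucinations cheaply; references and recommendations still need a human reviewer.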

4. Audit Trail — Log Every AI Action

Every time an AI Agent acts, log what it did, when, what data it used, and what the output was — ensuring complete traceability at all times.
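At its simplest, an audit trail is an append-only log with a timestamp, the acting agent, the action, its inputs, and its output. The sketch below uses an in-memory list and illustrative field names; a real system would write to durable storage (e.g. an ERP audit log or SIEM).

```python
# Illustrative append-only audit log for agent actions.
# Field names are assumptions, not a real ERP schema.
from datetime import datetime, timezone

audit_log: list[dict] = []

def log_action(agent: str, action: str, inputs: str, output: str) -> dict:
    """Record one agent action with a UTC timestamp; entries are never mutated."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "inputs": inputs,
        "output": output,
    }
    audit_log.append(entry)
    return entry

entry = log_action("accounting_agent", "post_journal_entry",
                   "invoice-042", "JE-1001")
```

Because every entry carries the agent, inputs, and output, any posted transaction can later be traced back to the exact AI action that produced it.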

5. Access Control — Limit AI Like You Limit Employees

AI Agents shouldn't access all organizational data. Set access permissions just as you would for employees — the accounting AI Agent accesses only accounting data, not HR data.
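A role-based check is enough to express this rule. In the sketch below, each agent role maps to the modules it may touch; the role and module names are illustrative assumptions.

```python
# Role-based access sketch for AI Agents.
# Role and module names are illustrative, not a real ERP configuration.

ROLE_PERMISSIONS = {
    "accounting_agent": {"general_ledger", "accounts_payable"},
    "hr_agent": {"payroll", "leave_requests"},
}

def is_allowed(role: str, module: str) -> bool:
    """True only if the role's permission set explicitly includes the module."""
    return module in ROLE_PERMISSIONS.get(role, set())  # default-deny
```

As with data classification, the default is deny: an unknown agent, or an agent asking for a module outside its set, gets nothing.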

6. Regular Review — Review Prompts + Performance Weekly

Weekly reviews should assess whether AI Agents meet defined KPIs, whether prompts remain appropriate, and what errors occurred.

7. Kill Switch — Instant Shutdown When Problems Arise

You must have a way to shut down AI Agents immediately, whether via an emergency button or automated scripts that trigger when anomalies are detected.
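The automated variant is essentially a circuit breaker: track recent outcomes and disable the agent when the error rate crosses a threshold. The 10% threshold below mirrors the example in the guardrail table; the window size is an assumption.

```python
# Circuit-breaker style kill switch (illustrative sketch).
# The 10% threshold follows the example in this article's guardrail
# table; the 100-call window is an assumption.
from collections import deque

class KillSwitch:
    def __init__(self, threshold: float = 0.10, window: int = 100):
        self.threshold = threshold
        self.results = deque(maxlen=window)  # True = the call errored
        self.tripped = False

    def record(self, is_error: bool) -> None:
        """Record one call outcome; trip if the windowed error rate is too high."""
        self.results.append(is_error)
        error_rate = sum(self.results) / len(self.results)
        if error_rate > self.threshold:
            self.tripped = True  # stays off until a human resets it

    def agent_enabled(self) -> bool:
        return not self.tripped
```

Note that the switch latches: once tripped, it stays off until a person investigates and resets it, rather than flapping back on as the error rate drifts down.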

| Guardrail | How to Implement | Tools | Example |
|---|---|---|---|
| Human-in-the-Loop | Define approval workflows | ERP Workflow Engine | POs over $1,500 require human approval |
| Data Classification | Create a data sharing policy | DLP software, organizational policy | Salary data = never send to AI |
| Output Validation | Verification checklist | Automated testing + manual review | Verify report numbers before board presentation |
| Audit Trail | Log every action | ERP Audit Log, SIEM | Record which journal entries AI posted |
| Access Control | Role-based access setup | ERP RBAC, API Gateway | Accounting AI Agent accesses only the GL module |
| Regular Review | Weekly review meetings | Dashboard, KPI tracking | Every Monday: review AI accuracy rate |
| Kill Switch | Emergency button + auto-stop | API rate limit, circuit breaker | Error rate > 10% = auto-disable agent |

ERP Governance + AI Agent — Why ERP Is the Best Guardian

A well-designed ERP system has governance features that support safe AI Agent operations:

| Saeree ERP Feature | How It Supports AI Governance |
|---|---|
| Role-Based Access Control (RBAC) | Set AI Agent permissions just like employee permissions — access only necessary data |
| Approval Workflow | Define Human-in-the-Loop checkpoints — AI drafts but humans must approve before execution |
| Audit Trail | Record every AI-created transaction — 100% traceable and auditable |
| Data Lineage | Track data origin and processing history before AI uses it |
| Budget Control | Prevent AI Agents from approving over-budget expenses — automatic system blocks |
| Segregation of Duties | Separate AI Agents that "create entries" from those that "approve entries" — fraud prevention |

Key Principle: AI Agents operating within a well-governed ERP system are far safer than AI Agents working independently on Excel or Google Sheets, because every action has audit trails, access controls, and approval workflows in place.

Complete Series Summary — All 4 Episodes

Summary: AI Agent as a Team Member — Does It Really Work?

| EP | Topic | Key Takeaway |
|---|---|---|
| EP.1 | What Is an AI Agent? | AI Agent ≠ Chatbot — works continuously, uses tools, makes preliminary decisions |
| EP.2 | Setting Up AI Agent Teams | Start small, measure results, then scale — don't start big |
| EP.3 | What % Can Be Replaced? | 40-80% of routine tasks, not 100% — ERP data is the fuel |
| EP.4 | Limitations & Guardrails | 5 limitations + 5 risks + 7 guardrails = safe AI deployment |

AI Agents are powerful, but power without control is dangerous. Organizations that want to use AI Agents sustainably need both "people who understand AI" and "systems that control AI." A well-governed ERP system is the foundation that makes AI Agents work safely and auditably.

- Sureeraya Limpaibul, Saeree ERP

Interested in ERP for Your Organization?

Consult with Grand Linux Solution experts — free of charge

Request Free Demo

Call 02-347-7730 | sale@grandlinux.com

About the Author

Sureeraya Limpaibul

Managing Director, Grand Linux Solution Co., Ltd. & Founder of Saeree ERP — providing comprehensive ERP consulting and services.