21 March
March 2026 marks a turning point for AI regulation worldwide. The United States is advancing the AI Accountability Act (H.R. 1694) at the federal level, while Washington state has passed 5 AI bills covering everything from deepfakes to chatbot safety for children. The message is clear: "The era of unchecked AI is coming to an end."
30-Second Summary
- AI Accountability Act (H.R. 1694) — Federal bill requiring NTIA to study and report on AI accountability measures within 18 months, covering AI in communications networks, social media, and spectrum sharing
- Colorado AI Act (SB 24-205) — First state law mandating bias audits for high-risk AI, effective June 30, 2026
- NYC Local Law 144 — Requires annual bias audits for AI hiring tools, fines $500-$1,500/day
- Washington passed 5 AI bills — Covering disclosure, child chatbot safety, health insurance, deepfakes, and digital likeness rights
- Global organizations using AI or serving US markets must prepare now
What Is the AI Accountability Act (H.R. 1694)?
The AI Accountability Act is a federal bill introduced in the US House of Representatives on February 27, 2025. It directs the National Telecommunications and Information Administration (NTIA) to conduct a comprehensive study and report on accountability measures for AI systems.
While H.R. 1694 is still in the legislative process, it signals a clear direction: the US federal government is moving toward establishing a national AI governance framework rather than leaving regulation solely to individual states.
| Item | Details |
|---|---|
| Bill Name | AI Accountability Act (H.R. 1694, 119th Congress) |
| Introduced | February 27, 2025 |
| Lead Agency | NTIA (National Telecommunications and Information Administration) |
| Requirements | Study + stakeholder consultation + report with recommendations |
| AI Scope | AI in communications networks, social media platforms, spectrum sharing applications |
| Report Deadline | Within 18 months of enactment |
| Report To | House Committee on Energy and Commerce + Senate Committee on Commerce, Science, and Transportation |
Additionally, the Senate has introduced the AI Accountability and Personal Data Protection Act (S. 2367), which combines AI accountability with personal data protection requirements.
Bias Audits — Already Enforceable in the US
While federal legislation is still in progress, state and city-level laws have already made bias audits mandatory for AI systems used in consequential decisions such as hiring, lending, health insurance, and criminal justice.
| Law | Scope | Key Requirements | Penalties |
|---|---|---|---|
| NYC Local Law 144 | AI hiring tools | Annual independent bias audit + publish summary + notify candidates | $500-$1,500/day |
| Colorado AI Act (SB 24-205) | All high-risk AI systems | Annual impact assessment + risk management + public disclosure on website | AG enforcement (effective June 30, 2026) |
| Illinois AI Video Interview Act | AI video interview analysis | Notify candidates + obtain consent + allow opt-out | AG enforcement fines |
What Is a Bias Audit?
A bias audit is an independent review process that measures whether an AI system exhibits algorithmic discrimination — for example, an AI hiring tool that systematically scores male applicants higher than female applicants, or a lending AI that disproportionately rejects applicants from certain ethnic backgrounds. Audit results must be published publicly for transparency. Organizations using AI in consequential decisions need strong AI governance frameworks to stay compliant.
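The core metric in a hiring-tool bias audit can be sketched in a few lines. NYC Local Law 144 audits compute impact ratios (each group's selection rate divided by the highest group's rate); the 0.8 threshold below is the EEOC's "four-fifths rule" guideline, and the sample numbers are illustrative assumptions, not data from any real system:

```python
# Minimal impact-ratio check for a hiring tool's outcomes.
# Threshold and sample data are illustrative assumptions.

def impact_ratios(selections: dict[str, tuple[int, int]]) -> dict[str, float]:
    """selections maps each group to (number selected, total applicants)."""
    rates = {g: sel / total for g, (sel, total) in selections.items()}
    top = max(rates.values())  # highest selection rate is the baseline
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical outcomes by gender from an AI resume screener
data = {"male": (48, 100), "female": (30, 100)}
for group, ratio in impact_ratios(data).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule as a red flag
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

A real Local Law 144 audit must be performed by an independent auditor and broken out by race/ethnicity and sex categories, but the arithmetic at its heart is this ratio.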
Washington's 5 AI Bills — A Comparative Summary
Washington state passed several AI bills before its March 12, 2026 adjournment. Governor Bob Ferguson has already signed multiple bills into law, covering everything from AI content disclosure to protecting children from chatbots.
| Bill | Topic | Key Requirements | Impact |
|---|---|---|---|
| HB 1170 | AI Disclosure | Embed watermarks or disclosures in AI-generated images, audio, and video so users can detect AI content | AI developers must update systems |
| HB 2225 | Chatbot Safety for Children | Hourly AI reminders to users + suicidal ideation detection + block explicit content for minors + ban simulated romantic relationships with children | Chatbot companies must overhaul protocols |
| SB 5395 | AI in Health Insurance | Increased transparency and accountability for AI use in prior authorization decisions by health insurers | Insurance companies must disclose AI use |
| SB 5105 | AI Deepfakes + Minors | Protect children from AI-generated deepfakes + preventive measures | Platforms must implement detection systems |
| SB 5886 | Digital Likeness Rights | Prohibit use of a person's digital likeness without consent. Courts can issue injunctions. Effective June 10, 2026 | AI content creators must verify rights |
Why This Matters — Real Cases of AI Bias
These AI laws did not emerge from theoretical concerns. They were driven by real incidents where AI caused measurable harm to people.
| Case | What Happened | Outcome |
|---|---|---|
| Amazon AI Hiring Tool (2018) | AI recruitment tool systematically scored female applicants lower than males because it learned from historically male-dominated hiring data | Amazon scrapped the system + reputational damage |
| Apple Card Gender Bias (2019) | AI approved credit limits 20x higher for husbands than wives, even when filing joint tax returns | NY regulators investigated Goldman Sachs + regulatory scrutiny |
| COMPAS Recidivism (Ongoing) | AI recidivism scoring rated Black defendants higher risk than white defendants with similar records | Lawsuits + global ethics debate |
| UnitedHealth AI Denial (2023) | Health insurance AI automatically denied claims at high error rates, leaving patients without treatment | Lawsuits + inspiration for SB 5395 |
| Character.AI + Youth (2024) | Children formed deep emotional bonds with AI chatbots, leading to mental health impacts and self-harm cases | Direct catalyst for HB 2225 |
Key Lesson: Every case above resulted from organizations deploying AI without adequate audit processes — no impact assessments, no bias testing, no human oversight. Organizations with strong AI governance frameworks can prevent these problems before they occur.
Comparing AI Laws: US vs EU AI Act vs Thailand
The world is entering the era of AI regulation simultaneously, but each region takes a different approach. Organizations operating across borders must understand these differences (read more about Thailand's AI regulations).
| Topic | United States | EU AI Act | Thailand |
|---|---|---|---|
| Approach | Sectoral — each state/industry creates own rules | Comprehensive — single law covering all AI | Guideline-based — best practices + PDPA |
| AI Risk Classification | High-Risk (Colorado) / AEDT (NYC) | Unacceptable / High / Limited / Minimal Risk | No legal classification yet |
| Bias Audit | Mandatory (NYC, Colorado) | Mandatory for High-Risk AI | Recommended (not yet mandatory) |
| Penalties | $500-$1,500/day (NYC) + AG enforcement | Up to 35M EUR or 7% global revenue | PDPA: up to 5M THB (~$140K USD) |
| Child Protection | Yes (Washington HB 2225, COPPA) | Yes (prohibits certain AI for children) | PDPA protects children's data |
| Deepfakes | Yes (Washington SB 5105, federal TAKE IT DOWN Act) | Yes (must label AI-generated content) | Computer Crime Act (partial coverage) |
| Status | Enforced (NYC) + effective soon (Colorado Jun 2026) | Partially effective Feb 2025 + full Aug 2026 | AI Ethics Guidelines (MDES) + Draft AI Act |
How Organizations Should Prepare — 8-Point Checklist
Even though these laws are US-based, any organization that uses AI for decision-making, exports AI services internationally, or uses US-based AI platforms is affected. Preparing early reduces risk and creates competitive advantage (also read about PDPA compliance in accounting systems).
Organizational Compliance Checklist
- Inventory all AI systems in use — Create a comprehensive AI inventory across departments (HR, finance, sales, customer service)
- Assess risk levels for each AI system — AI making decisions about people (hiring, lending, healthcare) = high risk
- Conduct impact assessments — Analyze AI impact on different user groups following Colorado AI Act guidelines
- Test for bias — Check whether AI discriminates based on gender, age, ethnicity, or any protected category
- Establish an AI policy — Define who approves AI use, what data cannot be fed to AI, and who is accountable when AI errs
- Build audit trail capabilities — Record every AI-involved decision with traceability and review capability
- Train employees — Ensure staff understand AI limitations, how to verify outputs, and when to override
- Monitor legislation continuously — Track US, EU, and local AI regulations, which are evolving rapidly
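The first two checklist items (inventory + risk assessment) can start as a simple data structure. A minimal sketch, loosely modeled on the Colorado AI Act's "high-risk" concept; the field names and classification rule are assumptions for illustration, not legal definitions:

```python
# Illustrative AI inventory with a simple risk tier.
# Domains and the classification rule are assumptions, not legal text.
from dataclasses import dataclass

CONSEQUENTIAL_DOMAINS = {"hiring", "lending", "healthcare", "insurance", "housing"}

@dataclass
class AISystem:
    name: str
    department: str
    domain: str                 # what kind of decisions it touches
    decides_about_people: bool  # does it influence outcomes for individuals?

    @property
    def risk_tier(self) -> str:
        # High risk = consequential decisions about people (cf. Colorado SB 24-205)
        if self.decides_about_people and self.domain in CONSEQUENTIAL_DOMAINS:
            return "high"
        return "limited"

inventory = [
    AISystem("resume-screener", "HR", "hiring", True),
    AISystem("chat-faq-bot", "support", "customer service", False),
]
high_risk = [s.name for s in inventory if s.risk_tier == "high"]
```

Even a spreadsheet with these four columns puts an organization ahead of most: you cannot audit, bias-test, or report on AI systems you have not catalogued.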
AI Governance + ERP = Full Auditability
The core requirement of every AI law is transparency and auditability — exactly what a well-designed ERP system provides out of the box. Whether it is AI governance frameworks or human-AI collaboration strategies, having the right data infrastructure is essential.
| Legal Requirement | How ERP Helps |
|---|---|
| Decision audit trail | ERP records every transaction with timestamp, approver, and rationale |
| Impact assessment | ERP data enables immediate analysis of AI impact across different groups |
| Data quality for bias testing | ERP stores data systematically, enabling accurate bias testing |
| Human oversight | Multi-level approval workflows with defined human checkpoints |
| Compliance reporting | Auto-generate compliance reports from recorded data |
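The "decision audit trail" row above can be made concrete with a minimal log-record sketch. The schema and field names are illustrative assumptions, not Saeree ERP's actual API:

```python
# Minimal append-only audit record for an AI-involved decision:
# timestamp, approver, and rationale, as the table above describes.
# Field names are illustrative, not a real ERP schema.
import json
from datetime import datetime, timezone

def log_ai_decision(system: str, subject_id: str, outcome: str,
                    rationale: str, human_approver: str) -> str:
    """Return a JSON record suitable for an append-only audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "subject_id": subject_id,
        "outcome": outcome,
        "rationale": rationale,           # why the AI recommended this
        "human_approver": human_approver, # the human checkpoint
    }
    return json.dumps(record, sort_keys=True)

entry = log_ai_decision("resume-screener", "cand-1042", "advance",
                        "skills match >= 0.85", "hr.manager@example.com")
```

The key design choice is that every record names a human approver: regulators asking "who reviewed this decision?" get an answer from the log itself, not from memory.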
Saeree ERP Supports AI Governance
Saeree ERP is built with comprehensive audit trails for every transaction, multi-level approval workflows, and APIs for AI system integration. When AI laws demand auditability, organizations with ERP are ready to comply without starting from scratch.
Summary — The Age of Auditable AI
| Issue | What Is Happening | What You Should Do |
|---|---|---|
| Federal AI law | AI Accountability Act is progressing | Monitor NTIA study outcomes |
| Bias audits | Already enforced in NYC + Colorado effective June 2026 | Start bias testing today |
| Child AI protection | Washington passed chatbot safety law | Review AI interacting with young users |
| Deepfake + digital rights | Digital likeness protection laws enacted | Exercise caution with AI-generated content of individuals |
| Global impact | Indirect effects worldwide + Thailand drafting AI Act | Prepare AI inventory + policy + audit trail |
"The era of unchecked AI is ending. Regulations are advancing. Organizations that prepare now will lead. Those that wait will be left scrambling to catch up."
— Saeree ERP Team
If your organization needs to build a data foundation that supports auditability for AI compliance and future AI adoption, contact our Saeree ERP team for a free, no-obligation consultation.
References
- H.R. 1694 - AI Accountability Act, 119th Congress — Congress.gov
- S. 2367 - AI Accountability and Personal Data Protection Act — Congress.gov
- SB24-205 Consumer Protections for Artificial Intelligence — Colorado General Assembly
- Automated Employment Decision Tools (AEDT) — NYC DCWP
- Washington Legislature Passes Consumer-Facing Interactive AI Bill — Troutman Pepper
- Washington Deepfake Law Signed by Gov. Ferguson — NBC Right Now
