3 March
On 28 February 2026, London's King's Cross — the city's technology hub — became the site of what has been called "the largest anti-AI protest ever." Hundreds of protesters marched through the main streets, carrying signs demanding a halt to unregulated AI development, amid growing global concerns ranging from deepfakes and job losses to existential risks to humanity.
Who Organized This Protest? Pause AI and Pull the Plug
This protest was organized by two key organizations that have been active in AI safety advocacy:
- Pause AI — A global organization calling for a pause on developing frontier AI capabilities until adequate regulations are in place
- Pull the Plug — An activist group focused on raising public awareness about AI's risks to society
Hundreds of protesters marched through King's Cross, home to several major tech companies including Google DeepMind, with the goal of sending a warning to AI companies and governments worldwide.
What Were They Protesting? Concerns by Category
The protesters' demands covered multiple levels of concern, from problems already occurring today to far-future risks:
| Type of Concern | Details | Urgency Level |
|---|---|---|
| Online Slop (AI-generated junk content) | AI-generated content flooding the internet, making it harder to find real information | Already happening |
| Deepfakes used for harm | Fake images and videos used to harm, deceive, or blackmail people, especially women | Already happening |
| Job losses | AI replacing human jobs across many industries, especially creative and service roles | Already happening |
| Killer Robots | Autonomous weapons using AI to make lethal decisions without human oversight | Approaching risk |
| Human extinction | Concerns that advanced AI may become uncontrollable and pose an existential threat to humanity | Long-term risk |
The diversity of these concerns shows that the anti-AI movement does not come from a single group; it is a convergence of many groups with different motivations, from artists worried about copyright to scientists concerned about AI governance.
Timeline: The Growing Global Anti-AI Movement
The London protest did not happen in isolation — it is part of a growing global wave of concern about AI:
| Period | Key Events |
|---|---|
| Mar 2023 | Open letter "Pause Giant AI Experiments" signed by Elon Musk, Steve Wozniak, and over 1,000 researchers calling for a 6-month pause on advanced AI development |
| Nov 2023 | AI Safety Summit at Bletchley Park, UK — leaders from 28 countries signed the Bletchley Declaration on AI safety |
| Mar 2024 | EU passed the AI Act — the world's first AI law establishing a risk-based regulatory framework |
| 2025 | Protests by artists and writers worldwide over AI and copyright; AI lawsuits filed in multiple countries |
| Feb 2026 | Largest anti-AI protest at King's Cross, London + UN establishes new AI advisory panel |
UN Response: The New AI Advisory Panel
Around the same time, the United Nations (UN) established a new AI Advisory Panel to respond to growing concerns about AI's impact on global society.
Role of the UN's AI Advisory Panel
- Produce scientific reports on AI risks
- Build a "credible evidence base" for policymakers
- Provide information to support each government's own decisions, without setting policy directly
The UN's approach is similar to the Intergovernmental Panel on Climate Change (IPCC), which collects scientific evidence but does not mandate action from any country. The establishment of this panel signals that AI concerns have escalated from "a tech industry issue" to "a global agenda item."
> When the UN establishes a dedicated advisory panel for AI, it means the world views AI as an issue on par with climate change — no longer just a technology matter.
>
> — Significance of the AI Advisory Panel establishment
Analysis Table: AI Concerns and Their Legitimacy
Not all concerns carry equal weight. Here is an analysis of how legitimate each concern is at present:
| Concern | Legitimacy | Evidence |
|---|---|---|
| Deepfakes harming people | Very high | Thousands of real cases worldwide, including deepfake pornography of children and women |
| Junk content flooding the internet | Very high | The volume of AI-generated content is growing rapidly, reducing the quality of information online |
| Job losses | Medium-High | Some positions have been replaced, but AI also creates new jobs. The net impact remains unclear. |
| Autonomous weapons | Medium-High | Already being developed and deployed in some countries. The Anthropic vs Pentagon conflict is a clear example. |
| Uncontrollable AI / Extinction | No evidence yet | Theoretical, not an actual threat — but many researchers warn it should not be ignored |
Business Perspective: How Does AI Fear Affect AI Adoption in Organizations?
Anti-AI protests are not just news; they directly affect organizational decisions about adopting technology. Growing fear leads to:
- Executives hesitating to invest in AI projects for fear of backlash from employees and the public
- Employees resisting AI in daily work, viewing it as a threat to their positions
- Stricter regulations, as governments rush to pass AI laws faster than the technology matures
- Customer worries about privacy and the accuracy of AI-generated data
For organizations considering AI in ERP or management systems, the key is to distinguish between irrational fears and legitimate concerns. Understanding the appropriate roles of AI and humans will lead to better decisions.
A Balanced View: Legitimate Concerns vs Overreactions
Not every fear about AI is exaggerated, and not every criticism is anti-technology. What matters is seeing the full picture:
Legitimate concerns that deserve attention:
- Deepfakes are already being used to harm real people worldwide — laws must keep up
- Some workers are genuinely losing income with no transition plan — upskilling policies are needed
- AI should not be used to make life-and-death decisions without human oversight
- Cybersecurity must keep pace with AI adoption in organizations
Reactions that may be overblown:
- AI will make humans extinct soon — there is no evidence to support this
- AI will take every job — history shows that new technologies typically create new jobs as well
- All AI development must stop — unrealistic and could put countries at a disadvantage
Lessons for Thai Organizations
Although the protest took place in London, these lessons are directly relevant to Thai organizations, especially those considering using AI to replace employees:
1. Communicate Transparently with Employees
When deploying AI in an organization, communicate clearly what AI will be used for and what it will not be used for. Uncertainty breeds fear; transparency reduces it.
2. Establish a Clear AI Governance Policy
Organizations with a clear AI governance policy will build confidence among employees, customers, and partners that AI will be used responsibly.
3. Invest in Employee Upskilling
Instead of replacing people with AI, train employees to use AI as a productivity tool. Shift from "AI replacing humans" to "AI assisting humans."
4. Choose Technology with Built-in Security
Good ERP systems and enterprise software should have robust security and access control — with or without AI. Organizational data must always be properly protected.
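The access-control principle above can be sketched in a few lines. This is a minimal illustration of role-based access control; the role and permission names here are hypothetical, and production ERP systems use far richer policy engines:

```python
# Minimal role-based access control (RBAC) sketch.
# Role and permission names are hypothetical examples, not a real ERP schema.

ROLE_PERMISSIONS = {
    "hr_manager": {"read_payroll", "edit_payroll"},
    "accountant": {"read_payroll"},
    "ai_assistant": {"read_inventory"},  # an AI agent gets the narrowest scope
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True only if the role explicitly holds the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("accountant", "read_payroll"))    # True
print(is_allowed("ai_assistant", "edit_payroll"))  # False
```

The point of the sketch is the deny-by-default design: any role or permission not explicitly listed is refused, which is exactly the posture the text recommends when granting AI tools access to organizational data.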
5. Closely Monitor Thai AI Regulations
Thailand is also developing an AI legal framework. Organizations that prepare in advance will have an advantage when the law takes effect.
Conclusion: A Protest Voice That Should Not Be Ignored
The anti-AI protest in London may have had "only" a few hundred participants, but what they represent is far larger — a growing concern among people worldwide that AI is developing faster than society can keep up.
The UN responded by establishing a new AI advisory panel. Governments worldwide are passing laws, and organizations must adapt. For Thai businesses, the key is not choosing to "accept AI" or "reject AI" but rather adopting AI responsibly, transparently, and with consideration for all stakeholders affected.
