
What is Claude MCP? Model Context Protocol explained in depth

27 April

Before MCP, every developer who wanted to connect AI to an external system (GitHub, Slack, databases, etc.) had to write a new integration each time. This created an N×M problem — N AI tools and M data sources required N×M custom integrations. Model Context Protocol (MCP) is the open standard Anthropic introduced to solve this — this article takes a deep dive into what MCP is, how it works, and why it has become the standard accepted by Google, OpenAI, Cursor, and the entire AI industry.

Quick Summary — What is MCP?

MCP (Model Context Protocol) is an open standard introduced by Anthropic in late 2024 for connecting LLMs (Claude, GPT, Gemini) to external tools and data sources via JSON-RPC 2.0. Think of it as the USB-C of AI — one plug, every device. Today it sees 97 million downloads/month and has more than 10,000 public servers. In December 2025, Anthropic donated MCP to the Linux Foundation under the Agentic AI Foundation.

The Problem MCP Solves: N×M Integration Hell

Before MCP, connecting an AI Tool to a data source was a tangled affair — every vendor had to write its own integration:

  • 5 AI Tools (Claude, ChatGPT, Cursor, Copilot, Gemini)
  • 10 data sources (GitHub, Slack, Notion, Postgres, Google Drive, etc.)
  • Required 5 × 10 = 50 integrations for every pair to work together

The more AI Tools and data sources we have, the more integrations grow as N×M — not scalable, full of duplicate code, bugs, and security risks. Anthropic created MCP as a shared standard — write the MCP server once, use it with any AI that supports MCP. This reduces the equation from N×M to N+M.
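The arithmetic is simple but worth making concrete. A throwaway sketch, using the tool and source counts from the example above:

```python
# Before MCP: every (AI tool, data source) pair needs its own integration.
def integrations_without_mcp(n_tools: int, m_sources: int) -> int:
    return n_tools * m_sources

# With MCP: each AI tool implements one MCP client, each data source
# one MCP server, and any client can talk to any server.
def integrations_with_mcp(n_tools: int, m_sources: int) -> int:
    return n_tools + m_sources

print(integrations_without_mcp(5, 10))  # 50 custom integrations
print(integrations_with_mcp(5, 10))     # 15 implementations total
```

At 5 tools and 10 sources the saving is already large, and the gap widens as either side of the ecosystem grows.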

Architecture: Client-Server over JSON-RPC 2.0

MCP uses a Client-Server architecture communicating over JSON-RPC 2.0 — the same protocol family as the Language Server Protocol (LSP) that VS Code and other IDEs use to connect with language servers.

  • MCP Host: the application the user opens, with the LLM inside. Examples: Claude Desktop, Claude Code, Cursor, thClaws.
  • MCP Client: the Host's representative that talks to each Server (one client per server). It lives inside the Host and is not directly visible to the user.
  • MCP Server: an application that exposes tools/resources/prompts for Clients to call. Examples: the GitHub, Slack, and Filesystem MCP servers.

The 3 Capabilities an MCP Server Exposes

Each MCP Server can expose 3 types of capability for the Host to use:

  • Tools: functions the AI can call; they may have side effects (modify data, send messages). Examples: create_issue, send_slack, query_db, run_command.
  • Resources: data the AI can read; read-only, no side effects. Examples: file://path, postgres://table, gdrive://doc.
  • Prompts: pre-built prompt templates the Host can pick. Examples: "summarize_pr", "explain_query", "code_review".
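As a sketch of what a single Tool looks like on the wire, here is a create_issue entry as it might appear in a tools/list result, built as a plain Python dict. The field names (name, description, inputSchema) follow the published MCP tool shape; the specific properties and descriptions are illustrative, not taken from the real GitHub server.

```python
import json

# A hypothetical create_issue tool: a name, a human-readable description,
# and a JSON Schema telling the LLM which arguments the tool accepts.
create_issue_tool = {
    "name": "create_issue",
    "description": "Create a new issue in a GitHub repository",
    "inputSchema": {
        "type": "object",
        "properties": {
            "repo": {"type": "string", "description": "owner/name"},
            "title": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["repo", "title"],
    },
}

# The server would return this inside a tools/list response.
print(json.dumps({"tools": [create_issue_tool]}, indent=2))
```

The inputSchema is what lets any MCP-aware LLM call the tool correctly without a hand-written integration: the schema, not custom glue code, describes the contract.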

Handshake & Discovery — Getting Started

When the Host starts up (e.g. Claude Desktop boots), it talks to the configured MCP Servers in this sequence:

  1. Initialize — Client sends an initialize request with version and supported capabilities
  2. Server Response — Server replies with its own version and capabilities
  3. List — Client requests the list of tools/resources/prompts via tools/list, resources/list, prompts/list
  4. Register — the Host stores these capabilities in memory and offers them to the LLM during the conversation
  5. Invoke — when the LLM decides to call a tool, the Client sends tools/call to the Server and receives the result back
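The first and last steps of that sequence can be sketched as raw JSON-RPC 2.0 messages, built here as plain Python dicts. The method names come from the MCP spec; the protocolVersion string, client name, and tool arguments are illustrative placeholders.

```python
import json

# Step 1: the Client opens the session with an initialize request.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",  # illustrative spec-version date
        "clientInfo": {"name": "example-host", "version": "0.1.0"},
        "capabilities": {},
    },
}

# Step 5: once the LLM decides to call a tool, the Client sends tools/call.
tools_call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_issue",
        "arguments": {"repo": "octocat/hello-world", "title": "Bug report"},
    },
}

for msg in (initialize, tools_call):
    print(json.dumps(msg))
```

Every message carries the "jsonrpc": "2.0" envelope and an id so responses can be matched to requests; the interesting part is only the method and params.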

Transport Modes — How Clients Talk to Servers

MCP supports 3 ways for Clients and Servers to communicate, each suited to different environments:

  • stdio (local server on the same machine): the Host spawns the Server's process and talks via stdin/stdout; simplest and most secure.
  • HTTP + SSE (remote server, legacy): HTTP POST for requests, Server-Sent Events for streaming; an older approach, now superseded by Streamable HTTP.
  • Streamable HTTP (remote server, modern): a single HTTP endpoint supporting both request/response and streaming; the current recommended standard.
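For the stdio transport, framing is deliberately minimal: each JSON-RPC message is serialized and written as one line over the child process's stdin/stdout. A minimal sketch of that framing, demonstrated with an in-memory stream standing in for a real server process:

```python
import io
import json

def write_message(stream, msg: dict) -> None:
    # stdio transport: one JSON-RPC message per line, newline-delimited.
    stream.write(json.dumps(msg) + "\n")

def read_message(stream) -> dict:
    return json.loads(stream.readline())

# An in-memory pipe stands in for the spawned server's stdin/stdout here.
pipe = io.StringIO()
write_message(pipe, {"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
pipe.seek(0)
print(read_message(pipe)["method"])  # tools/list
```

In a real Host the stream would be the pipes of a spawned subprocess; the framing logic is the same.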

MCP Servers Available Today

There are now over 10,000 MCP servers in the ecosystem — from Anthropic, vendor-official, and community sources. Popular categories:

  • Code & DevOps: GitHub, GitLab, Git, Docker, Kubernetes, Sentry
  • Communication: Slack, Discord, Email (Gmail/IMAP), Teams
  • Data: Postgres, MySQL, MongoDB, Redis, BigQuery, Snowflake
  • Productivity: Google Drive, Google Calendar, Notion, Linear, Jira
  • Browser & Web: Puppeteer, Playwright, Brave Search, Fetch
  • Filesystem: Local filesystem, S3, GCS, Azure Blob

Example: Configuring an MCP Server in Claude Desktop

Adding an MCP server in Claude Desktop is done via a config file at ~/Library/Application Support/Claude/claude_desktop_config.json (that is the macOS path; on Windows the file lives under %APPDATA%\Claude):

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxxxxxxxxxxx"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem",
               "/Users/me/projects"]
    }
  }
}

When Claude Desktop starts, it spawns both servers via stdio and begins the handshake automatically. The user can immediately tell Claude things like "create an issue on GitHub repo X" or "read a file in /Users/me/projects/foo".
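Under the hood, each config entry boils down to an argv list plus extra environment variables for a child process. A rough sketch of how a Host might turn the "github" entry above into a spawn call (the config dict mirrors the JSON above; the actual spawn is left commented out since it needs npx installed):

```python
import os

# The "github" entry from the config above, as a Python dict.
server_cfg = {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-github"],
    "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxxxxxxxxxxx"},
}

# Build the command line and merge the server's env over the Host's own.
argv = [server_cfg["command"], *server_cfg["args"]]
env = {**os.environ, **server_cfg.get("env", {})}

print(argv)
# A real Host would now spawn the process and speak MCP over its stdio:
# subprocess.Popen(argv, env=env, stdin=PIPE, stdout=PIPE)
```

This is why stdio servers need no networking setup at all: the config names a command, and the Host owns the child process for the lifetime of the session.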

Adoption in 2026 — The AI Industry Standard

MCP is no longer just an Anthropic project — it has become a de facto industry standard adopted by every major player:

  • 97 million downloads/month of the MCP SDK across the ecosystem (March 2026)
  • 10,000+ public MCP servers available
  • December 2025: Anthropic donated MCP to the Linux Foundation under the Agentic AI Foundation — co-founded by Anthropic, Block, and OpenAI, with support from Google, Microsoft, and AWS
  • Adopted by: Claude (Anthropic), ChatGPT (OpenAI), Gemini (Google), Cursor, Sourcegraph Amp, Factory, GitHub Copilot, thClaws

MCP vs Claude Skills — How Are They Different?

MCP and Claude Skills are complementary mechanisms — they are not replacements for one another:

  • Purpose: MCP connects AI to external tools/data sources; Skills bundle workflows + instructions + scripts for specialized tasks.
  • Form: an MCP server is a separate process speaking JSON-RPC; a Skill is a folder containing SKILL.md + scripts + assets.
  • Triggering: the LLM calls an MCP tool by name and schema; a Skill auto-triggers via whenToUse, or by slash command.
  • Used together: a Skill can call tools from MCP servers internally; it is a higher-level layer above MCP, encapsulating workflows that use MCP tools.

Summary — Claude MCP at a Glance

  • What it is: an open standard from Anthropic for plug-and-play connection of AI tools to external data/tool sources.
  • Analogy: the USB-C of AI (one plug, every device).
  • Technical: Client-Server architecture over JSON-RPC 2.0 (like LSP); supports stdio, HTTP+SSE, and Streamable HTTP.
  • 3 capabilities: Tools (callable) / Resources (readable) / Prompts (preset templates).
  • Ecosystem: 97M downloads/month, 10K+ servers; supported by Anthropic, OpenAI, Google, Cursor, Microsoft, and AWS.
  • Governance: Linux Foundation (Agentic AI Foundation) since December 2025.

MCP changed AI Integration from N×M to N+M — write a tool once and use it everywhere. It is the single standard that lets the AI Agent ecosystem grow exponentially in 2026.

- Saeree ERP Team


Interested in ERP for your organization?

Consult with our expert team at Grand Linux Solution — free of charge

Request Free Demo

Call 02-347-7730 | sale@grandlinux.com


About the Author

Paitoon Butri

Network & Server Security Specialist, Grand Linux Solution Co., Ltd.