The Model Context Protocol (MCP) is doing for AI what the internet did for information

By standardising how AI systems access tools and data, MCP is doing for AI what the internet did for information: opening up a universe of possibilities through connectivity.

Nishaan Vigneswaran
12 min read

Large language models (LLMs) have demonstrated striking capabilities in understanding and generating text, but they have historically operated in isolation. Their knowledge was fixed at training time, and they couldn’t access fresh data or interact with external systems. In practice, this meant an LLM couldn’t perform tasks like booking a meeting or updating a database record on its own. This limitation has been a major hurdle in making AI truly useful in real-world workflows.

The Model Context Protocol (MCP) is emerging as a solution to this problem. Introduced as an open standard by Anthropic in late 2024, MCP provides a secure, standardised “language” for AI models to communicate with external tools, data sources, and services. In essence, it acts as a bridge that allows AI assistants to move beyond static knowledge and become dynamic agents that can retrieve current information and take actions in the world. By adopting MCP, AI systems are no longer confined to their training data – they can now safely tap into live databases, call APIs, control applications, and more, all through a unified protocol.

An MCP server is a software component that interfaces between an AI (the MCP client) and some external resource or tool. These servers expose standardised interfaces (commands, queries, or data) that an AI agent can invoke. You can think of MCP as providing a universal adapter or USB-C port for AI applications – a single standard plug that connects the AI to any external system. Just as USB-C standardised how devices connect and exchange data, MCP standardises how AI models integrate with files, databases, web services, and more.

What are MCP servers?

MCP follows a client–server architecture designed specifically for AI-tool interactions. In an MCP setup, an AI application (MCP host) – for example, a chatbot interface or an AI-powered IDE – runs an MCP client component that manages connections to one or more MCP servers. Each MCP server is essentially a wrapper around an external tool, service, or data source, exposing its capabilities to AI in a standardised way. The server could interface with anything from a local filesystem to a cloud API. The MCP client and server communicate over a defined protocol (built on JSON-RPC 2.0), exchanging requests and responses that let the AI discover available actions and invoke them securely.

At a high level, an MCP server provides context and actions to the AI. On startup, the AI’s MCP client will query the server to learn what it can do. MCP formalises this by defining a set of primitives that servers can offer:

  • Tools: Discrete operations or functions that the AI can call to perform actions (e.g. create a file, query a database, call an API). For example, a GitHub MCP server might offer tools to create an issue or run a code search, and a calendar server might offer a tool to schedule a meeting.
  • Resources: Data sources that the server can provide to the AI for context (e.g. contents of a document, rows from a database, results from a web search). These are typically read-only information the AI can retrieve. For instance, a database MCP server could expose a resource representing a particular table or query result.
  • Prompts: Predefined prompt templates or workflows that help shape the AI’s interactions. These can include system instructions or examples that the server wants the AI to use when interacting with its tools. Prompts help standardise how the AI asks for certain actions.

Using standardised methods (such as tools/list, tools/call, and resources/read), an AI agent can discover what a server offers and then use those capabilities. This design allows new tools to be plugged in dynamically: if a new MCP server is connected, the AI can list its tools and immediately gain new capabilities without any additional integration code. The AI no longer needs built-in knowledge of how to use a specific API – it just needs to understand the general MCP interface. The MCP server handles translating the AI’s high-level request into the actual API calls or file operations on the backend.
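As a rough sketch of that discovery round-trip (the method names come from the MCP specification, which frames messages as JSON-RPC 2.0; the tool name, schema, and result payload below are invented for illustration):

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope, the framing MCP messages use."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# 1. Ask the server what tools it offers.
list_req = make_request(1, "tools/list")

# 2. A server might answer with a result shaped like this: a list of tool
#    descriptors, each with a JSON Schema describing its input.
list_result = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "database_query",  # hypothetical tool name
                "description": "Run a read-only SQL query",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

# 3. Invoke the discovered tool by name, with schema-conforming arguments.
call_req = make_request(
    2, "tools/call",
    {"name": "database_query", "arguments": {"query": "SELECT * FROM sales"}},
)

print(json.dumps(call_req, indent=2))
```

The point of the uniform envelope is that step 3 looks the same for every tool on every server – only the name and arguments change.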

MCP servers are modular and interchangeable. Because all servers speak the same protocol, developers can add or swap out integrations easily, without changing the AI’s core logic. This modularity is very similar to how web browsers use plug-ins, or how the Language Server Protocol standardised the way code editors use language-specific servers. In the past, connecting an AI to a new service required bespoke integration work (each service had different APIs, auth, data formats, etc.). That led to a combinatorial explosion of connectors – a costly “N×M” problem where every new tool or model required new glue code. MCP greatly simplifies this by providing a single, open standard for integration, avoiding fragmented one-off solutions.

Security and control are also built into the MCP design. The protocol supports both local and remote servers, with appropriate authentication. For local tools (running on the same machine as the AI), MCP can use a lightweight STDIO transport (standard input/output streams) for speed. For remote services, MCP uses an HTTP-based transport with support for OAuth, API keys, and other auth mechanisms. Each server exposes only a limited set of actions that it was designed for, and often requires credentials or permissions to use sensitive operations. There are also features for human-in-the-loop confirmation – an MCP server can request user approval or additional input before proceeding with a potentially sensitive action.
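For the local case, a minimal sketch of how a stdio-style transport can frame those JSON-RPC messages – one JSON object per line, simulated here with an in-memory pipe rather than real process pipes (the helper names are mine, and real transports add error handling):

```python
import io
import json

def write_message(stream, message):
    """Frame one JSON-RPC message as a single line of JSON."""
    stream.write(json.dumps(message) + "\n")

def read_message(stream):
    """Read the next newline-delimited JSON-RPC message, or None at EOF."""
    line = stream.readline()
    return json.loads(line) if line else None

# Simulate the client side of a local stdio connection with an in-memory pipe.
pipe = io.StringIO()
write_message(pipe, {"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
pipe.seek(0)
msg = read_message(pipe)
```

A remote HTTP transport carries the same message bodies; only the framing and the authentication layer around them differ.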

MCP servers turn isolated AI models into fully integrated assistants

Once an AI application has one or more MCP server connections set up, the workflow for using them looks like this. Imagine a user asks an AI assistant: “Find the latest sales report in our database and email it to my manager.” Normally, an LLM alone could not do this – it lacks direct database access or email abilities. But with MCP, the AI can orchestrate a solution:

  1. Tool Discovery: The AI (via its MCP client) recognises it needs external tools – one to query the database and one to send an email. It queries its connected MCP servers to see what is available. Suppose it finds a database_query tool on a database server and an email_sender tool on an email server.
  2. Invoking Tools: The AI then issues a structured request to call the database_query tool, supplying parameters (e.g. “sales_Q3_report”) in the format the server expects. The MCP client forwards this JSON-RPC request to the appropriate server. The database MCP server receives it, runs the actual SQL query on the company’s database (securely), and returns the results (perhaps as a file or JSON data) back to the AI.
  3. Chaining Actions: Now the AI has the report data. Next, it calls the email_sender tool on the email server, providing the manager’s address and the report content. The email server executes the action (sending the email through an SMTP service or API) and returns a confirmation to the AI.
  4. Result: The AI confirms back to the user: “I have found the latest sales report and emailed it to your manager.” From the user’s perspective, the AI handled a multi-step task seamlessly in one go.
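Stripped of the protocol plumbing, the steps above reduce to chaining tool calls, with each result feeding the next request. A toy sketch – the handlers, their canned outputs, and the email address are all invented; a real MCP client would route each call as a tools/call request to the server that registered that name:

```python
# Stand-in tool handlers, playing the role of two separate MCP servers.
def database_query(query: str) -> dict:
    """Pretend database server: returns a canned report for any query."""
    return {"report": "Q3 sales: $1.2M", "source": query}

def email_sender(to: str, body: str) -> dict:
    """Pretend email server: reports success without sending anything."""
    return {"status": "sent", "to": to}

TOOLS = {"database_query": database_query, "email_sender": email_sender}

def call_tool(name: str, **arguments):
    """Dispatch by tool name, the way an MCP client routes tools/call."""
    return TOOLS[name](**arguments)

# Step 1: fetch the report. Step 2: feed its content into the email tool.
report = call_tool("database_query", query="sales_Q3_report")
receipt = call_tool("email_sender", to="manager@example.com",
                    body=report["report"])
```

Swapping in a different database or a Slack sender changes only the entries in the tool registry; the orchestration logic is untouched.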

This scenario highlights how MCP enables multi-tool orchestration. The AI effectively did tool usage planning: it broke the user request into two sub-tasks and leveraged two different MCP servers (each specialised for a domain) to accomplish them. All of this happens through the uniform interface of MCP, without hard-coding the AI to any specific database or email system. The same AI could switch out the underlying tools (say, use a different database or send via Slack) if those MCP servers are connected, and the process would be analogous.

MCP supports real-time notifications and streaming. For long-running actions, an MCP server can stream progress updates or result chunks back to the client (using Server-Sent Events over HTTP, for example). And if a server’s capabilities change (perhaps new tools become available), it can notify the client, so the AI always has an up-to-date view of what it can do. This dynamic discovery and update mechanism is crucial for building adaptive AI agents that can respond to changes in their toolset. It also means an AI can maintain context across tool usage – for example, results from one tool can inform the next action – without the developer writing glue code for that context passing (the MCP client and server handle packaging the necessary data).
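Those change notifications are plain JSON-RPC notifications: requests without an id, so no response is expected. A sketch of the client-side handling (the method name follows the spec’s tool-list-changed notification; the dispatch helper is my own):

```python
def make_notification(method, params=None):
    """A JSON-RPC notification: like a request, but with no 'id' field."""
    msg = {"jsonrpc": "2.0", "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# Server-side: announce that the available tools have changed.
note = make_notification("notifications/tools/list_changed")

def needs_tool_refresh(message: dict) -> bool:
    """Client-side check: should we re-run tools/list after this message?"""
    return "id" not in message and message.get("method", "").endswith("list_changed")
```

On seeing such a notification, the client simply re-runs tools/list, which is what keeps the AI’s view of its toolset current without any polling.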

MCP servers turn isolated AI models into fully integrated assistants. They handle the gritty details of connecting to external systems (APIs, databases, file systems), abstracting them into high-level actions the AI can invoke. This combination of standardisation, dynamic discovery, and secure execution is what makes MCP a game-changer.

A handful of real-world examples of MCP servers in use

Atlassian’s conceptual diagram illustrating an LLM (Claude) using an Atlassian Remote MCP Server to access Jira and Confluence data.
A number of MCP servers have emerged that connect AI to important platforms and data sources. These real-world examples show how broadly applicable MCP is – from cloud documentation to code repositories to enterprise knowledge bases:

  • AWS Documentation MCP Server: AWS has developed an MCP server that provides AI agents with direct access to AWS’s vast documentation and best-practices guides. This server exposes tools for reading AWS docs (fetching a given documentation page and returning it in a convenient format) and searching the documentation for relevant content. For instance, an AI assistant for developers could query this server to “look up how to configure an S3 bucket policy” and get an up-to-date answer straight from official AWS docs. The server can even provide recommendations for related content, acting like a smart documentation search engine for the AI.
  • GitHub MCP Server: GitHub has an official MCP server that connects AI assistants directly with the GitHub platform. This gives AI the ability to read and write code in repositories, manage issues and pull requests, analyse code changes, and automate development workflows – all through natural language commands.
  • Atlassian Jira/Confluence MCP Server: In the enterprise collaboration space, Atlassian has introduced a Remote MCP Server that brings Jira and Confluence (two widely used project tracking and documentation tools) into AI workflows. This server allows an AI like Anthropic’s Claude to summarise project information and documentation, create new Jira issues or Confluence pages, and even perform bulk or multi-step actions across those tools. For example, a product manager could ask the AI to “Summarise the open tasks for Project X and create a Confluence page with the summary,” and the AI would fetch all relevant Jira tickets, generate a summary, and publish it – all via MCP calls. The integration is permission-aware: it respects the user’s existing access controls and keeps data within those boundaries.

These examples are just the tip of the iceberg. Anthropic’s open-source repository of connectors includes servers for Google Drive (for file storage), Slack (for messaging), Git version control, SQL databases like Postgres, web browsers (using Puppeteer), and more. Virtually any service can be wrapped in an MCP server: from CRM systems to IoT devices, if it has an API or interface, it can be made AI-accessible.

MCP servers unlock a host of benefits that will fundamentally change how we use AI

By enabling AI to interact with external systems in a standardised way, MCP servers unlock a host of benefits that will fundamentally change how we use AI:

  • Real-Time Knowledge and Reduced Hallucination: With MCP, AI models are no longer limited to stale training data. They can fetch live, authoritative information on demand – whether it’s the latest documentation, a current database record, or breaking news. This greatly improves the accuracy and relevance of AI responses.
  • Dynamic Tool Use and Automation: MCP transforms LLMs from passive explainers into active agents that can perform tasks. An AI augmented with MCP can execute multi-step operations autonomously – like the example of querying a database then sending an email.
  • Seamless Integration, Lower Development Overhead: From a developer’s perspective, MCP drastically simplifies the integration of AI with new systems. It solves the “N × M integrations” nightmare – you don’t need custom adapters for every combination of AI model and service anymore.
  • Human-AI Collaboration and Workflow Improvement: As the Atlassian example showed, an AI agent can pull information from one system and inject it into another, or perform batch operations across systems, all through a single conversational interface.
  • Security and Governance Built-In: Because MCP is designed as an enterprise-grade protocol, it incorporates security measures that will be critical for world-scale adoption. All remote MCP servers use secure channels and require authentication (often OAuth tokens or API keys), and they respect the permission scopes of the user. For example, when an AI connects to the Atlassian Remote MCP Server, it operates within the user’s Jira/Confluence permissions and cannot access data the user couldn’t normally access.
  • Open Ecosystem and Innovation: MCP’s development as an open-source standard (stewarded by a community rather than a single vendor) is encouraging broad participation. Many industry players are rallying around MCP, contributing connectors and integrating it into their products.

By standardising how AI systems access tools and data, MCP is doing for AI what the internet did for information: opening up a universe of possibilities through connectivity. We are likely on the cusp of an AI revolution where any task that can be described can potentially be automated or assisted by an AI agent, thanks to the rich toolset provided by MCP servers. The excitement from industry leaders is palpable; as Atlassian’s CTO put it, “MCP hits every note – open by design, a healthy ecosystem, and key for human-AI collaboration… we can’t wait to see what customers do with it”. The coming years will show ever more inventive uses of MCP, and as the ecosystem matures, the line between what AI can do autonomously and what humans can do will blur in productive, empowering ways.

About the Author