The Model Context Protocol
Learning Objectives
- You know what the Model Context Protocol (MCP) is at a high level.
- You understand why a standard protocol is useful in the AI tooling ecosystem.
- You can explain the roles of host, client, and server in an MCP-style setup.
Towards standards
Early efforts at AI tool integration were often one-off. A host application would define its own way to describe tools, a model provider would define its own way to request them, and every new connection required custom code on both sides. Without a shared standard, every integration is a custom integration, and the glue code multiplies with each new pairing of application and tool.
The Model Context Protocol (MCP) addresses this by providing a shared protocol for exposing capabilities to AI systems. Instead of repeatedly inventing one-off interfaces, tools and resources can be described in a standard way.
The basic idea
At a high level, an MCP setup often includes three roles. There is a host application where the user is working. Inside that host, there is an MCP client that manages communication. The client can then connect to one or more MCP servers that expose tools, resources, or prompts.
Figure 1 summarizes the relationship.
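The relationship between the three roles can also be sketched in code. The sketch below is a hypothetical illustration, not the real MCP API: the class names, the list_tools and call_tool methods, and the issue-lookup tool are all invented here to show how a host-side client can aggregate capabilities from several servers.

```python
# A hypothetical sketch of the three MCP roles; names and methods
# are invented for illustration, not taken from the real protocol.

class McpServer:
    """A server exposes capabilities (here, just tools) by name."""
    def __init__(self, name, tools):
        self.name = name
        self._tools = tools  # maps tool name -> callable

    def list_tools(self):
        return sorted(self._tools)

    def call_tool(self, tool_name, **kwargs):
        return self._tools[tool_name](**kwargs)


class McpClient:
    """The client lives inside the host and manages server connections."""
    def __init__(self):
        self.servers = []

    def connect(self, server):
        self.servers.append(server)

    def all_tools(self):
        # The host asks the client; the client asks each connected server.
        return {s.name: s.list_tools() for s in self.servers}


# The "host" is the application the user works in; here we only
# simulate its wiring: one client connected to one server.
issue_server = McpServer(
    "issues",
    {"lookup_issue": lambda issue_id: f"Issue {issue_id}: open"},
)
client = McpClient()
client.connect(issue_server)

print(client.all_tools())  # {'issues': ['lookup_issue']}
print(issue_server.call_tool("lookup_issue", issue_id=42))  # Issue 42: open
```

The point of the sketch is the direction of the arrows: the host never talks to servers directly, and a server never needs to know which host it serves. That separation is what lets one server work with many different hosts.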
Tools, resources, and prompts
MCP discussions often mention three kinds of exposed capabilities. Tools represent actions or callable operations. Resources represent retrievable context or data. Prompts represent reusable prompt templates or prompt assets.
This is broader than simple function calling. Function calling focuses on “call this operation”. MCP also provides a way to think about reusable context and prompt assets as first-class capabilities.
An example from this course helps make the distinction clearer. In the tool-connected assistant tutorial that we’ll work on later in this part, a local issue lookup function would be a tool. A file of project guidelines that the assistant can read would be closer to a resource. A reusable system prompt for how the assistant should answer issue questions would be closer to a prompt asset. MCP gives names and structure to all three kinds of capability.
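The three kinds of capability can be made concrete with a small sketch based on that example. Everything below is invented for illustration: the dict layout is not the actual MCP wire format, and the function name, resource URI, and prompt template are hypothetical.

```python
# A hypothetical capability listing for the issue-assistant example.
# The dict structure is illustrative only, not the real MCP format.

def lookup_issue(issue_id: int) -> str:
    """A callable action: in MCP terms, a tool."""
    return f"Issue {issue_id}: open"

capabilities = {
    # Tools: actions the model can ask to have run.
    "tools": {
        "lookup_issue": lookup_issue,
    },
    # Resources: retrievable context, identified here by an invented URI.
    "resources": {
        "file:///project/guidelines.md": "Always link the related issue.",
    },
    # Prompts: reusable prompt templates with placeholders to fill in.
    "prompts": {
        "answer_issue_question": (
            "You are a project assistant. Answer questions about issue "
            "{issue_id} using the project guidelines."
        ),
    },
}

# Each kind is used differently:
# a tool is called, a resource is read, a prompt is filled in.
print(capabilities["tools"]["lookup_issue"](7))
print(capabilities["resources"]["file:///project/guidelines.md"])
print(capabilities["prompts"]["answer_issue_question"].format(issue_id=7))
```

Notice that only the tool involves executing code; the resource and the prompt are plain data. That is the sense in which MCP treats reusable context and prompt assets as first-class capabilities rather than squeezing everything into function calls.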
Tool use is an integration problem
You do not need to implement an MCP server to understand the core idea: the reason MCP exists is architectural. Tool use is an integration problem. When every host application and every tool provider invents its own incompatible interface, engineering effort is wasted on one-off connections. Standardization reduces that duplicated effort.
That is why MCP has become relevant in discussions around coding tools, local assistants, and broader agent-like workflows.
You can also connect MCP back to the roles used in earlier chapters. If a learner uses an editor with an assistant extension, the editor is the host. The extension or internal integration layer is the MCP client. A local filesystem server, a documentation server, or an issue-tracker server would each be MCP servers. Seeing the mapping this way makes MCP less abstract.
At this stage, it is enough to understand MCP as a useful standardization effort around model-accessible capabilities. The practical coding in this part stays closer to direct function calling, so the examples remain small and transparent.