Large Language Models (LLMs) are already incredibly powerful. They can generate human-like text, answer complex questions, and even write code. LLMs are great at creating content we can easily read and use, but what if we could elevate them beyond content creation? What if we could give them tools they could understand, use, and execute on their own, armed with nothing more than a simple description of each tool? Imagine giving your LLM superpowers simply by providing it with context about the tools you use and have.
This isn’t a futuristic dream; it’s the reality brought forth by Anthropic’s Model Context Protocol (MCP). Think of it as the USB-C of AI: a universal standard that lets LLMs plug into and use the vast digital world around them.
In this blog, we will dive into what the Model Context Protocol is, explore what makes it so revolutionary, and most importantly, learn how to create and use one with fastmcp to give your LLMs these incredible new capabilities.
What is MCP (Model Context Protocol)?
The Model Context Protocol is an open standard designed to give LLMs a structured, consistent, and secure way to access and interact with external “context”. Before MCP, integrating LLMs with outside systems often meant a patchwork of custom APIs and one-off solutions, which led to complexity, limited interoperability, and security worries.
MCP changes the game by formalizing this interaction. It uses a client-server setup where:
- MCP Hosts/Clients: These are AI applications such as an LLM chatbot, an AI-powered IDE, or a custom AI agent. They need access to outside capabilities and send requests to MCP servers.
- MCP Servers: These are lightweight services that offer specific capabilities through a standard interface. These capabilities, illustrated in the sketch after this list, can be:
  - Tools: Executable functions that let the LLM perform actions, such as calling an API, interacting with a database, sending an email, or creating a calendar event.
  - Resources: Structured data streams that give information to the LLM, such as database records or real-time sensor data.
  - Prompts: Reusable templates that guide how the LLM interacts.
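To make these three building blocks concrete, here is a minimal fastmcp sketch showing how each might be declared (the server name, resource URI, and function bodies are illustrative, not prescribed by the protocol):

from fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """A Tool: an executable function the LLM can call."""
    return a + b

@mcp.resource("data://greeting")
def greeting() -> str:
    """A Resource: structured, read-only context for the LLM."""
    return "Hello from an MCP resource!"

@mcp.prompt()
def summarize(text: str) -> str:
    """A Prompt: a reusable template guiding the interaction."""
    return f"Summarize the following text:\n\n{text}"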
Why is this revolutionary?
- Standardization: No need to reinvent the wheel for every integration. MCP provides a universal language for AI models to communicate with any compliant tool or data source, similar to how REST APIs standardized web service communication and allowed a massive ecosystem of interconnected applications to grow.
- Enhanced Context Awareness: Instead of relying solely on information from their training data or limited prompts, LLMs can now dynamically pull in real-time, relevant context, helping them produce more accurate, relevant, and powerful responses and actions.
- Dynamic Tool Discovery: LLMs can not only use tools but find them as they need them. An LLM can figure out which capabilities are available for the current task and intelligently pick the best tool to reach its goal.
- Security and Control: MCP is built with security in mind, with mechanisms for authentication, authorization, and permission management. This ensures LLMs only access and act on what they’re allowed to, which is crucial for sensitive business data and operations.
- Modular and Scalable AI: By offloading complex logic and external interactions to specialized MCP servers, the LLM itself stays leaner and can focus on its main job of understanding and generating language. This makes AI systems easier to build, maintain, and scale.
How to build one?
Now that we understand the magic, let’s talk about how to build one. While the raw MCP protocol can involve a fair bit of setup, thankfully there is a Python library called fastmcp that makes it easy. Make sure you have Python 3.11+ installed; we will use uv as our Python environment manager, but pip works too. We will build a fastmcp based server to define our tools and a fastapi based client that will interact with our fastmcp server with Gemini!
uv pip install fastmcp
or
pip install fastmcp
After that you need to install the following dependencies.
uv pip install -r requirements.txt
or
pip install -r requirements.txt
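For reference, a minimal requirements.txt for this walkthrough might look like the following (the exact package set is an assumption based on what the server and client sketches below import):

fastmcp
fastapi
uvicorn
httpx
google-genai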
Next is the fastmcp based server, where we define some simple tools.
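Here is a minimal sketch of such a server, assuming fastmcp 2.x; the two tools (add, and get_cat_fact, which calls the public catfact.ninja API) are illustrative, chosen to match the sample request later in this post:

# server.py — a minimal fastmcp server exposing two simple tools
import httpx
from fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the result."""
    return a + b

@mcp.tool()
async def get_cat_fact() -> str:
    """Fetch a random cat fact from the public catfact.ninja API."""
    async with httpx.AsyncClient() as http:
        resp = await http.get("https://catfact.ninja/fact", timeout=10)
        resp.raise_for_status()
        return resp.json()["fact"]

if __name__ == "__main__":
    # Serve over SSE so the FastAPI client below can reach us over HTTP.
    mcp.run(transport="sse", host="127.0.0.1", port=8001)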
Finally, here is our fastapi based client that uses Gemini to talk to our fastmcp server.
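Below is a minimal sketch of the client, assuming the google-genai SDK and its (experimental) support for passing an MCP client session directly to Gemini as a tool; the server URL, model name, and GEMINI_API_KEY environment variable are all assumptions for illustration:

# client.py — a FastAPI app that routes chat messages through Gemini + MCP tools
import os

from fastapi import FastAPI
from fastmcp import Client
from google import genai
from google.genai import types
from pydantic import BaseModel

app = FastAPI()
gemini = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
MCP_SERVER_URL = "http://127.0.0.1:8001/sse"  # where server.py is listening

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
async def chat(req: ChatRequest):
    # Open a session to the fastmcp server and hand it to Gemini as a toolbox;
    # google-genai calls the MCP tools automatically when Gemini asks for them.
    async with Client(MCP_SERVER_URL) as mcp_client:
        response = await gemini.aio.models.generate_content(
            model="gemini-2.0-flash",
            contents=req.message,
            config=types.GenerateContentConfig(tools=[mcp_client.session]),
        )
    return {"reply": response.text}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="127.0.0.1", port=8000)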
You first need to run the fastmcp server with python <fastmcp-server>.py, then start the client with python <fastapi-client>.py. Finally, test it by using your API testing software to send requests to http://localhost:8000/chat.
Sample Request:
{
  "message": "Give me a cat fact"
}
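If you prefer the command line, the same request can be sent with curl (with the client sketched above, the reply comes back as a JSON object):

curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Give me a cat fact"}'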
Conclusion
The Model Context Protocol is not just another technical standard; it is a big shift in how we build and interact with AI. By giving LLMs a structured, secure, and interoperable way to access external tools and data, MCP transforms them from advanced text generators into dynamic, action-oriented agents. With libraries like fastmcp, giving LLMs these “superpowers” has never been easier. Whether you are building a sophisticated AI system or a personal LLM assistant that does more than chat, exploring MCP and fastmcp is a crucial step toward unlocking the next generation of AI capabilities. Start building and witness the magic of truly context-aware LLMs!