
Building an MCP Agent with UV, Python & mcp-use

Model Context Protocol (MCP) is an open protocol designed to enable AI agents to interact with external tools and data in a standardized way. MCP is composed of three components: server, client, and host.


MCP host

The MCP host acts as the interface between the user and the agent (such as Claude Desktop or an IDE) and connects to external tools or data through MCP clients and servers. Anthropic's Claude Desktop is the best-known host, but it requires a separate desktop app, a license, and API key management, creating a dependency on the Claude ecosystem. mcp-use is an open-source Python/Node package that connects LangChain LLMs (e.g., GPT-4, Claude, Groq) to MCP servers in just six lines of code, removing that dependency and supporting multi-server and multi-model setups.

MCP client

The MCP client manages the MCP protocol within the host and connects to the MCP servers that provide the functions the host's services need. Using a JSON file that lists server names and connection methods, it establishes the connections and supplies tools or data as requested by the MCP host.

MCP server

The MCP server handles the actual connection to and management of the resources (such as databases, web data, or local files) needed by the MCP host. It exposes the available tools and manages session-level context. In this example, we use Astral's UV to build the Python runtime and package-management environment. UV is written in Rust, offers fast installation and dependency resolution, and lets you execute scripts immediately with uv run, making it ideal for container and CI environments.

Overall Architecture

[Figure: Agent system architecture with MCP]

Development Environment Setup

As shown in the system architecture, this document explains how to set up an environment that serves the MCP server with UV, and how to build the MCP host and client with the open-source package mcp-use instead of Claude Desktop.

# uv install
curl -LsSf https://astral.sh/uv/install.sh | sh

# initialize uv project 
uv init "Your MCP Server Project Directory" 

# install required python packages for the uv project
uv add "mcp[cli]" mcp-use langchain-openai langchain-community python-dotenv

server.py – A Simple MCP Server for Wikipedia or Arxiv Search

This code shows how to implement an MCP server called "doc-server" that uses Wikipedia and Arxiv API wrappers provided by LangChain to search for and return content related to a user’s query.

# file name is 'server.py'

from mcp.server.fastmcp import FastMCP
from langchain_community.utilities import WikipediaAPIWrapper, ArxivAPIWrapper
from langchain_community.tools import WikipediaQueryRun, ArxivQueryRun

mcp = FastMCP("doc-server")

# Wikipedia and Arxiv tools, each limited to one result of at most 500 characters
wiki_wrapper = WikipediaAPIWrapper(top_k_results=1, doc_content_chars_max=500)
wiki_tool = WikipediaQueryRun(api_wrapper=wiki_wrapper)

arxiv_wrapper = ArxivAPIWrapper(top_k_results=1, doc_content_chars_max=500)
arxiv_tool = ArxivQueryRun(api_wrapper=arxiv_wrapper)

@mcp.tool()
async def get_info(searchterm: str) -> str:
    """Search Wikipedia for the given term and return a short summary."""
    try:
        return wiki_tool.run(searchterm)
    except Exception as e:
        return f"Error fetching Wikipedia information: {str(e)}"

@mcp.tool()
async def get_research_paper(searchterm: str) -> str:
    """Search Arxiv for the given term and return paper information."""
    try:
        return arxiv_tool.run(searchterm)
    except Exception as e:
        return f"Error fetching research paper information: {str(e)}"

if __name__ == "__main__":
    mcp.run(transport='sse')      # serve over SSE (default port 8000)
    # mcp.run(transport='stdio')  # for local stdio serving
   

Running doc-server with uv

You can run the above server.py using the uv run command. This will expose the functions defined in the file so they can be used by an MCP client. In this example, uv serves the doc-server on port 8000 of localhost.

(mcp_env)$ uv run server.py
INFO:     Started server process [79863]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
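Before wiring up the full agent, you can sanity-check the running server with a minimal MCP client session. This is a sketch using the mcp Python SDK's sse_client helper; the file name smoke_test.py and the query string are placeholders, while the tool names match server.py:

# file name is 'smoke_test.py'
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    # Connect to the SSE endpoint exposed by doc-server
    async with sse_client("http://127.0.0.1:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # List the tools the server exposes
            tools = await session.list_tools()
            print("Tools:", [tool.name for tool in tools.tools])
            # Call the Wikipedia tool defined in server.py
            result = await session.call_tool("get_info", {"searchterm": "Model Context Protocol"})
            print(result.content)

if __name__ == "__main__":
    asyncio.run(main())

Run it with uv run smoke_test.py while server.py is still serving.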

Writing the doc_server.json File

Here is an example of the JSON file needed to create an MCP client that uses the functions of the 'doc-server'. Since the server is exposed over SSE, the connection only needs the url of the SSE endpoint (a stdio server would use command and args instead). You could also define this configuration as a dictionary directly within your client code for the same result, as shown after the file.

{
    "mcpServers": {
        "doc_server": {
            "url": "http://127.0.0.1:8000/sse"
        }
    }
}
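
A minimal sketch of the inline-dictionary variant, assuming mcp-use's MCPClient.from_dict constructor and the same SSE endpoint as above:

from mcp_use import MCPClient

config = {
    "mcpServers": {
        "doc_server": {
            "url": "http://127.0.0.1:8000/sse"
        }
    }
}

# Equivalent to MCPClient.from_config_file("doc_server.json")
client = MCPClient.from_dict(config)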

Creating MCP Host and Client Code Using mcp-use

This example demonstrates how to build an MCP host and client using mcp-use and Azure OpenAI as the GenAI backend. As shown below, mcp-use allows you to quickly set up an MCP host and client that can invoke custom or external MCP server functions.

import os
import asyncio
from dotenv import load_dotenv
from langchain_openai import AzureChatOpenAI
from mcp_use import MCPAgent, MCPClient

async def main():
    # Load environment variables
    load_dotenv(".env")

    # Create MCPClient from config file
    client = MCPClient.from_config_file("doc_server.json")

    # Create LLM
    llm = AzureChatOpenAI(
        azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT"),
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
        api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
        verbose=False,
        temperature=0.0,
    )

    # Create agent with the client
    agent = MCPAgent(llm=llm, client=client, max_steps=30)

    # Run the query (agent.run is a coroutine, so it must be awaited)
    result = await agent.run(
        "Tell me about the ReciproCAM paper.",
        max_steps=30,
    )
    print(f"\nResult: {result}")

if __name__ == "__main__":
    asyncio.run(main())

Output Example

[Output screenshot omitted]

Conclusion

mcp-use is an open-source, multi-LLM, multi-server host solution that removes the dependency on Claude Desktop. UV offers an ultra-fast package-management and execution environment, making it ideal for reproducible MCP server deployments.

 



