
What is MCP? 🧩

The Model Context Protocol (MCP) is an open standard that defines how applications supply context (tools, data, and prompts) to AI models, enabling more consistent and predictable interactions across different systems. Often referred to as “the USB-C port for AI,” MCP provides a uniform way for Large Language Models (LLMs) to interface with external tools, data, and functionality.

MCP allows developers to:

  • Expose data through Resources (similar to GET endpoints)
  • Provide functionality through Tools (similar to POST endpoints)
  • Define interaction patterns through Prompts (reusable templates)
  • Create standardized interfaces between LLMs and external capabilities
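
To make these ideas concrete, here is a minimal sketch of the first three concepts using FastMCP (the SDK we use throughout this tutorial); the URI scheme and function names are invented for illustration:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sketch-server")

# A Resource: read-only data addressed by a URI template (like a GET endpoint)
@mcp.resource("greeting://{name}")
def greeting(name: str) -> str:
    return f"Hello, {name}!"

# A Tool: an action the model can invoke with arguments (like a POST endpoint)
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

# A Prompt: a reusable template the client can fill in
@mcp.prompt()
def review(code: str) -> str:
    return f"Please review this code:\n\n{code}"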

Getting Started with fastMCP 🚀

One of the easiest ways to begin working with MCP is FastMCP, the high-level interface included in the official MCP Python SDK. FastMCP provides a Pythonic way to build and interact with MCP servers with minimal boilerplate code.

In this tutorial, we’ll create a simple “Hello MCP” application that demonstrates how to build an MCP server with tools that a language model can use through AWS Bedrock.

Prerequisites 🛠️

For this tutorial, I’m using:

  • VSCode with Q Developer CLI
  • AWS Bedrock with Anthropic’s Claude 3.5 Haiku model
  • Configured AWS credentials

Make sure you have Python 3.10 or newer installed (the MCP Python SDK requires it) and the necessary AWS permissions to access Bedrock models.


Setting Up the Project 📁

Let’s start by creating a project directory from your terminal:

mkdir hello_mcp
cd hello_mcp
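
Next, install the two dependencies the tutorial uses: the official MCP Python SDK (published on PyPI as mcp, which includes FastMCP) and boto3 for AWS:

python3 -m pip install mcp boto3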

Building Our MCP Server 🖥️

First, let’s create our MCP server file (mcp_server.py), which will expose tools for our LLM to use:

cat > mcp_server.py << 'EOL'
from mcp.server.fastmcp import FastMCP
import random

# Initialize FastMCP server
mcp = FastMCP("hello-MCP-server")

# Define a greeting tool
@mcp.tool()
async def greet(name: str) -> str:
    """Greet a person with their name and explain what MCP is."""
    return f"Hello {name}! Welcome to MCP. Model Context Protocol (MCP) is an open standard that standardizes how AI models represent and process context, enabling more consistent and predictable interactions across different systems."

# Rock-Paper-Scissors game tool
@mcp.tool()
async def play_rock_paper_scissors(player_move: str) -> str:
    """
    Play Rock-Paper-Scissors against the computer.
    
    Args:
        player_move: Your choice of 'rock', 'paper', or 'scissors'
    
    Returns:
        The result of the game
    """
    # Normalize and validate input
    player_move = player_move.lower().strip()
    valid_moves = ['rock', 'paper', 'scissors']
    
    if player_move not in valid_moves:
        return f"Invalid move. Please choose one of: {', '.join(valid_moves)}"
    
    # Computer makes a random move
    computer_move = random.choice(valid_moves)
    
    # Determine winner
    if player_move == computer_move:
        result = "It's a tie!"
    elif (player_move == 'rock' and computer_move == 'scissors') or \
         (player_move == 'paper' and computer_move == 'rock') or \
         (player_move == 'scissors' and computer_move == 'paper'):
        result = "You win!"
    else:
        result = "Computer wins!"
    
    return f"You chose {player_move}, computer chose {computer_move}. {result}"

# Start the server
if __name__ == "__main__":
    mcp.run()
EOL

This server exposes two tools:

  1. greet - A function that greets users and explains what MCP is
  2. play_rock_paper_scissors - A tool that lets users play the classic game against the computer
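
Before involving a model at all, you can sanity-check the tools by calling them directly. This assumes FastMCP’s @mcp.tool() decorator returns the original coroutine unchanged, which is the case in current versions of the SDK:

import asyncio

# Import the decorated tool functions straight from our server module.
from mcp_server import greet, play_rock_paper_scissors

print(asyncio.run(greet("Alex")))
print(asyncio.run(play_rock_paper_scissors("paper")))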

Creating an MCP Client with AWS Bedrock 🧠

Next, let’s create our client file (mcp_client.py), which will connect to our MCP server and use AWS Bedrock to interact with Claude:

cat > mcp_client.py << 'EOL'
import asyncio
import os
import sys
from contextlib import AsyncExitStack
from typing import Any, List

import boto3
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


class BedrockAgent:
    def __init__(self):
        # Initialize session and client objects
        self.session = None
        self.exit_stack = AsyncExitStack()
        self.model_id = "us.anthropic.claude-3-5-haiku-20241022-v1:0"
        self.bedrock_runtime = None

    async def connect_to_server(self, server_script_path: str):
        """Connect to an MCP server"""
        if not server_script_path.endswith(".py"):
            raise ValueError("Server script must be a Python file with .py extension")

        print(f"Starting the MCP server: {server_script_path}...")

        # Use environment variables for server process
        env = os.environ.copy()

        # Start the server as a subprocess
        server_params = StdioServerParameters(
            command="python3", args=[server_script_path], env=env
        )

        stdio_transport = await self.exit_stack.enter_async_context(
            stdio_client(server_params)
        )
        self.stdio, self.write = stdio_transport
        self.session = await self.exit_stack.enter_async_context(
            ClientSession(self.stdio, self.write)
        )

        # Initialize the session
        await self.session.initialize()

        # List available tools
        response = await self.session.list_tools()
        print(
            "\nConnected to server with tools:", [tool.name for tool in response.tools]
        )
        return response.tools

    async def initialize_bedrock(self):
        """Initialize the Amazon Bedrock client"""
        print("Initializing Amazon Bedrock client...")
        try:
            self.bedrock_runtime = boto3.client(
                "bedrock-runtime", region_name="us-west-2"
            )
            print(f"Using model: {self.model_id}")
            return True
        except Exception as e:
            print(f"Error initializing Bedrock client: {str(e)}")
            print(
                "Make sure you have the necessary AWS credentials and permissions set up."
            )
            return False

    def extract_tool_result(self, tool_result):
        """Extract content from a CallToolResult object or other result types"""
        try:
            # If it's a string, number, bool, etc., return it directly
            if isinstance(tool_result, (str, int, float, bool)):
                return tool_result

            # If it has a content attribute (like CallToolResult)
            if hasattr(tool_result, "content"):
                content = tool_result.content

                # If content is a list (like TextContent objects)
                if isinstance(content, list) and content:
                    # If the first item has a text attribute
                    if hasattr(content[0], "text"):
                        return content[0].text
                    # Otherwise return the list
                    return content

                # For other content types
                return str(content)

            # If it's already a dict or list, return as is
            if isinstance(tool_result, (dict, list)):
                return tool_result

            # Fallback to string representation
            return str(tool_result)

        except Exception as e:
            print(f"Error extracting tool result: {e}")
            return str(tool_result)

    async def process_query(self, query: str, available_tools: List[Any]):
        """Process a user query using Bedrock and the MCP tools"""
        if not self.bedrock_runtime:
            return "Bedrock client not initialized"

        # Format tools for Bedrock
        tool_list = [
            {
                "toolSpec": {
                    "name": tool.name,
                    "description": tool.description,
                    "inputSchema": {"json": tool.inputSchema},
                }
            }
            for tool in available_tools
        ]

        # Create the system message - Updated to match our actual tools
        system_prompt = """You are a helpful assistant with access to greeting and game tools.
When asked to greet someone, use the greet tool.
When asked to play rock-paper-scissors, use the play_rock_paper_scissors tool.
Always include the full text of any greeting or game result in your response to make sure the user can see it.
Respond in a friendly and helpful manner. Keep your answers brief but informative."""

        # Initialize messages array
        messages = [{"role": "user", "content": [{"text": query}]}]

        try:
            # Call Amazon Bedrock with the user query
            print("Sending query to Bedrock...")
            response = self.bedrock_runtime.converse(
                modelId=self.model_id,
                messages=messages,
                inferenceConfig={"temperature": 0.7},
                toolConfig={"tools": tool_list},
                system=[{"text": system_prompt}],
            )

            # Extract the assistant's response
            response_message = response["output"]["message"]
            final_responses = []
            tool_results = {}

            # Process each content block in the response
            for content_block in response_message["content"]:
                if "text" in content_block:
                    # Add text responses to our final output
                    final_responses.append(content_block["text"])

                elif "toolUse" in content_block:
                    # Handle tool usage
                    tool_use = content_block["toolUse"]
                    tool_name = tool_use["name"]
                    tool_input = tool_use["input"]
                    tool_use_id = tool_use["toolUseId"]

                    print(f"Calling tool: {tool_name} with input: {tool_input}")
                    final_responses.append(f"[Calling tool {tool_name}]")

                    # Call the tool through MCP session
                    raw_tool_result = await self.session.call_tool(
                        tool_name, tool_input
                    )

                    # Extract the actual content from the tool result
                    extracted_result = self.extract_tool_result(raw_tool_result)
                    print(f"Raw tool result type: {type(raw_tool_result)}")
                    print(f"Extracted result: {extracted_result}")

                    # Save the result for later display
                    tool_results[tool_name] = extracted_result

                    # Create follow-up message with tool result
                    tool_result_message = {
                        "role": "user",
                        "content": [
                            {
                                "toolResult": {
                                    "toolUseId": tool_use_id,
                                    "content": [{"json": {"result": extracted_result}}],
                                }
                            }
                        ],
                    }

                    # Add the AI message and tool result to messages
                    messages.append(response_message)
                    messages.append(tool_result_message)

                    # Make another call to get the final response
                    follow_up_response = self.bedrock_runtime.converse(
                        modelId=self.model_id,
                        messages=messages,
                        inferenceConfig={"temperature": 0.7},
                        toolConfig={"tools": tool_list},
                        system=[{"text": system_prompt}],
                    )

                    # Add the follow-up response to our final output
                    follow_up_text = follow_up_response["output"]["message"]["content"][
                        0
                    ]["text"]
                    final_responses.append(follow_up_text)

            # Compose the final response with explicit tool results
            final_text = "\n".join(final_responses)

            # If we have tool results but they're not obviously included in the response,
            # add them explicitly
            for tool_name, result in tool_results.items():
                if tool_name == "greet" and result not in final_text:
                    final_text += f"\n\nGreeting: {result}"
                elif tool_name == "play_rock_paper_scissors" and result not in final_text:
                    final_text += f"\n\nGame result: {result}"

            return final_text

        except Exception as e:
            print(f"Error in Bedrock API call: {str(e)}")
            import traceback

            traceback.print_exc()
            return f"Error: {str(e)}"

    async def chat_loop(self, available_tools: List[Any]):
        """Run an interactive chat loop"""
        print("\nYou can now chat with the agent. Type 'exit' to quit.")

        while True:
            try:
                # Get user input
                user_query = input("\nYou: ")
                if user_query.lower() in ["exit", "quit"]:
                    break

                # Process the query
                response = await self.process_query(user_query, available_tools)
                print("\nAssistant:", response)

            except Exception as e:
                print(f"\nError: {str(e)}")

    async def cleanup(self):
        """Clean up resources"""
        await self.exit_stack.aclose()
        print("\nShutting down and cleaning up resources...")


async def main():
    if len(sys.argv) < 2:
        print("Usage: python hello_world_mcp_client.py <path_to_server_script>")
        print("Example: python hello_world_mcp_client.py hello_world_mcp_server.py")
        sys.exit(1)

    server_script_path = sys.argv[1]
    agent = BedrockAgent()

    try:
        # Connect to the MCP server and get available tools
        available_tools = await agent.connect_to_server(server_script_path)

        # Initialize the Bedrock client
        if await agent.initialize_bedrock():
            # Run the chat loop
            await agent.chat_loop(available_tools)
    except Exception as e:
        print(f"Error: {str(e)}")
    finally:
        # Clean up resources
        await agent.cleanup()


if __name__ == "__main__":
    asyncio.run(main())
EOL

The full client code is quite extensive but handles:

  • Starting the MCP server as a subprocess
  • Establishing an MCP session
  • Connecting to AWS Bedrock
  • Processing user queries by sending them to Claude
  • Invoking tools when the model requests them
  • Returning results back to the model

Running the Example 🏃‍♂️

To run this example:

python3 mcp_client.py mcp_server.py

The client will start the server, connect to AWS Bedrock, and begin an interactive chat session.
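
If your credentials are set up correctly, the startup output should look roughly like this (pieced together from the client’s print statements):

Starting the MCP server: mcp_server.py...

Connected to server with tools: ['greet', 'play_rock_paper_scissors']
Initializing Amazon Bedrock client...
Using model: us.anthropic.claude-3-5-haiku-20241022-v1:0

You can now chat with the agent. Type 'exit' to quit.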

Try these example prompts:

  • “Please greet me, my name is Alex”
  • “Let’s play rock paper scissors. I choose paper”
  • “Explain what MCP is”


How It Works 🔍

Let’s break down the key components:

The Server Side 🏗️

The MCP server is built using FastMCP, which makes it easy to define tools using Python decorators:

@mcp.tool()
async def greet(name: str) -> str:
    """Greet a person with their name and explain what MCP is."""
    return f"Hello {name}! Welcome to MCP. ..."

Each tool:

  • Has a clear signature with type hints
  • Includes documentation via docstrings
  • Performs a specific task that the LLM can invoke
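
FastMCP converts those type hints and docstrings into JSON schemas automatically, which is exactly what our client later forwards to Bedrock as inputSchema. A small script can show what gets generated (list_tools() is the server’s async listing method in the official SDK):

import asyncio

from mcp_server import mcp

async def show_tools():
    # Print each tool's name and the JSON schema derived from its signature.
    for tool in await mcp.list_tools():
        print(tool.name, "->", tool.inputSchema)

asyncio.run(show_tools())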

The Client Side 🔄

The client coordinates between:

  1. The MCP Server - Which hosts our tools
  2. AWS Bedrock - Which provides access to Claude 3.5 Haiku
  3. The User Interface - Which allows for interactive chat

When a user submits a query, the client:

  1. Sends the query to Claude via Bedrock
  2. Provides Claude with information about available tools
  3. Intercepts tool calls, executing them via the MCP server
  4. Returns the results back to Claude
  5. Presents Claude’s final response to the user
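
Condensed to its essentials, that round trip looks like this (an excerpt-style sketch, not a standalone script; the names come from process_query in the client above):

# Steps 1-2: send the query and the tool specs to Claude via Converse.
response = bedrock_runtime.converse(
    modelId=model_id,
    messages=messages,
    toolConfig={"tools": tool_list},
    system=[{"text": system_prompt}],
)

# Step 3: if Claude requested a tool, execute it on the MCP server.
for block in response["output"]["message"]["content"]:
    if "toolUse" in block:
        use = block["toolUse"]
        result = await session.call_tool(use["name"], use["input"])

        # Step 4: echo the result back to Claude, keyed by the same toolUseId,
        # then call converse() again for the final answer (step 5).
        messages.append(response["output"]["message"])
        messages.append({
            "role": "user",
            "content": [{"toolResult": {
                "toolUseId": use["toolUseId"],
                "content": [{"json": {"result": str(result)}}],
            }}],
        })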

Why MCP Matters ✨

MCP solves several important challenges in LLM application development:

  1. Standardization: Creates a common interface across different LLM providers
  2. Separation of Concerns: Separates tool implementation from the AI model
  3. Type Safety: Provides clear contracts for data exchange
  4. Security: Limits LLM access to only predefined capabilities
  5. Portability: Makes it easy to swap out LLM providers
  6. Composability: Allows building complex applications from simple components

Next Steps 🛤️

Now that you’ve built your first MCP application, here are some ways to expand:

  1. Add More Tools: Create tools for databases, APIs, or specialized calculations
  2. Implement Resources: Add data sources LLMs can reference
  3. Define Prompts: Create reusable prompt templates
  4. Try Different Models: Test with different providers like OpenAI, Anthropic, or Cohere

Conclusion 🎯

The Model Context Protocol represents a significant step forward in standardizing how AI models interact with external tools and data. With fastMCP, you can quickly build powerful applications that leverage the reasoning capabilities of LLMs while adding your own custom functionality.

By adopting MCP early, you’re future-proofing your AI applications and joining the growing ecosystem of interoperable AI tools. The simple example we’ve built today is just the beginning—MCP can support everything from simple prototypes to complex enterprise applications.

Happy building! 🤖🔌🚀

