Using your first MCP server

Now that you’ve deployed your first MCP server and confirmed it’s working, you can connect it to an LLM provider like OpenAI.

In this guide, you’ll learn how to build a chat-enabled app that automatically handles tool calls from your Metorial-powered MCP server.

What you will learn

How to use a Metorial MCP server

How to use the Metorial SDKs

1. Install the SDKs

Run the installer for your language of choice:

TypeScript

npm install metorial @metorial/openai openai

Python

pip install metorial openai

2. Configure Clients

Instantiate both clients with your API keys; your MCP server deployment ID comes into play in the next step.

TypeScript

import Metorial from 'metorial';
import OpenAI from 'openai';

const metorial = new Metorial({
  apiKey: 'metorial_sk_...'
});
const openai = new OpenAI({
  apiKey: '...your-openai-api-key...'
});

Python

from metorial import Metorial
from openai import OpenAI

metorial = Metorial(api_key="metorial_sk_...")
openai = OpenAI(api_key="...your-openai-api-key...")

3. Fetch Your Server Tools

Create a provider session that exposes your deployed MCP tools.

TypeScript

import { metorialOpenAI } from '@metorial/openai';

const session = await metorial.withProviderSession(
  metorialOpenAI.chatCompletions,
  { serverDeployments: ['...server-deployment-id...'] }
);

Python

from metorial import metorial_openai  # adapter import; path may vary by SDK version

session = metorial.with_provider_session(
    metorial_openai.chat_completions,
    {"server_deployments": ["...server-deployment-id..."]}
)
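
Before wiring up the chat loop, you can optionally confirm the session actually exposes your deployed tools. A minimal sketch, assuming session.tools holds OpenAI-format tool definitions (the same array you pass as tools: in the next steps):

TypeScript

// Print the name of each tool the MCP server exposes.
for (const tool of session.tools) {
  console.log(tool.function.name);
}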

4. Send Your First Prompt

Kick off the loop by sending an initial message.

TypeScript

let messages = [
  { role: "user", content: "Summarize the README.md file of the metorial/websocket-explorer repository on GitHub." }
];

Python

messages = [
    {"role": "user", "content": "Summarize the README.md file of the metorial/websocket-explorer repository on GitHub."}
]

5. Loop & Handle Tool Calls

  1. Send messages to OpenAI, passing tools: session.tools (TS) or tools=session.tools (Py).
  2. If the assistant’s response contains tool_calls, invoke them through the session:

TypeScript

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages,
  tools: session.tools
});

const choice = response.choices[0]!;
const toolCalls = choice.message.tool_calls;

// Execute the requested MCP tools through the Metorial session.
const toolResults = await session.callTools(toolCalls);

Python

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=session.tools
)

choice = response.choices[0]
tool_calls = choice.message.tool_calls

# Execute the requested MCP tools through the Metorial session.
tool_results = session.call_tools(tool_calls)
  3. Append both the assistant’s tool call message and the tool results to messages.
  4. Repeat until the assistant’s response has no more tool_calls; a complete loop is sketched below.
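
Putting the four steps together, here is a minimal sketch of the whole loop in TypeScript. It assumes session.callTools returns OpenAI-format role: "tool" messages that can be appended directly to the history; adjust if your SDK version returns a different shape:

TypeScript

// Minimal agent loop: call OpenAI until no more tool calls come back.
let messages: any[] = [
  { role: "user", content: "Summarize the README.md file of the metorial/websocket-explorer repository on GitHub." }
];

while (true) {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages,
    tools: session.tools
  });

  const choice = response.choices[0]!;
  const toolCalls = choice.message.tool_calls;

  // No tool calls left: the assistant has produced its final answer.
  if (!toolCalls || toolCalls.length === 0) {
    console.log(choice.message.content);
    break;
  }

  // Append the tool call request, then the tool results (assumption:
  // callTools returns ready-to-append tool messages).
  messages.push(choice.message);
  const toolResults = await session.callTools(toolCalls);
  messages.push(...toolResults);
}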

6. Display the Final Output

Once there are no more tool calls, your assistant’s final reply is in:

TypeScript

console.log(choice.message.content);

Python

print(choice.message.content)

What’s Next?

You now have a production-ready MCP server to use in your AI apps. Next, you’ll learn about the dev tooling available.