Step 1: Install Concierge

Start by installing the concierge-sdk package. This gives you the Concierge CLI for scaffolding projects, free cloud hosting, and built-in debugging tools.

Terminal
pip install concierge-sdk
Step 2: Scaffold Your Project

Create a new MCP server with one command. The --chatgpt flag adds widget support for building ChatGPT Apps.

Terminal
concierge init --chatgpt

This creates a new folder with everything you need:

Project Structure
my-app/
├── main.py           # Your MCP server with widgets
├── assets/           # Widget source files (JSX, CSS)
├── requirements.txt  # Dependencies
├── settings.json     # Deploy config
└── README.md
Terminal
cd my-app
Step 3: Deploy to Production

Deploy your app to the cloud with a single command. You'll get a live HTTPS URL instantly.

Terminal
concierge deploy

You're Live!

Your app is now running at https://<project>.getconcierge.app/mcp

Want to see logs? Add the --logs flag:

Terminal
concierge deploy --logs
Step 4: Test & Evaluate

Before connecting to ChatGPT, test your tools and widgets with Concierge. Run AI-powered evaluations to ensure everything works correctly.

Open Concierge

getconcierge.app

1. Connect your server: Enter your deployment URL or select from your deployed servers.

2. Test tools manually: Browse tools and resources, and execute them with custom inputs.

3. Run evaluation: Use the Evaluate tab to run AI-powered test scenarios.

Step 5: Connect to ChatGPT

Add your MCP server as a ChatGPT connector to use it in conversations.

1. Enable Developer Mode: Go to Settings → Apps & Connectors → Advanced settings and enable Developer mode.

2. Create a Connector: Click Create, paste your URL https://<project>.getconcierge.app/mcp, and add a name and description.

3. Start Chatting: Open a new chat, click the More menu, add your connector, and start using your tools!

Step 6: Build Widgets

Widgets are interactive UI components that render inside ChatGPT. The @mcp.widget() decorator binds HTML to a tool, so when ChatGPT calls it, the widget renders automatically.

main.py — Widget Example
from concierge import Concierge

mcp = Concierge("my-app", stateless_http=True)

PIZZA_HTML = """<!DOCTYPE html>
<html><body style="font-family: system-ui; background: #1a1a2e; 
      color: white; text-align: center; padding: 32px;">
  <h1>Pizza Map</h1>
  <p>Finding the best pizza near you!</p>
</body></html>"""

@mcp.widget(
    uri="widget://pizza-map",
    html=PIZZA_HTML,
    title="Show Pizza Map",
)
async def pizza_map(topping: str) -> dict:
    """Show pizza spots for a given topping."""
    return {"topping": topping}

(Rendered widget preview: "Pizza Map" with the caption "Finding pepperoni near you")
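Under the hood, a decorator like this simply attaches widget metadata to the tool function so the server can serve the right HTML when the tool is called. A conceptual plain-Python sketch of the pattern (not the concierge-sdk implementation):

```python
# Conceptual sketch: how a widget decorator can bind metadata to a tool
# function. This is NOT the concierge-sdk source, just the general pattern.
def widget(uri: str, html: str, title: str):
    def wrap(fn):
        # Attach widget metadata so the server can look it up later.
        fn.widget_meta = {"uri": uri, "html": html, "title": title}
        return fn
    return wrap

@widget(uri="widget://pizza-map", html="<h1>Pizza Map</h1>", title="Show Pizza Map")
def pizza_map(topping: str) -> dict:
    """Show pizza spots for a given topping."""
    return {"topping": topping}

print(pizza_map.widget_meta["uri"])      # the server reads this when the tool runs
print(pizza_map("pepperoni")["topping"])  # the tool result becomes the widget's data
```

The tool stays an ordinary function; the metadata rides along on the function object, which is why binding a widget to a tool takes nothing more than the decorator call.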

Widget Parameters

Parameter    Type            Description
uri          str (required)  Unique identifier for the widget
html         str             Inline HTML content for the widget
entrypoint   str             Path to HTML in assets/entrypoints/ (for JSX widgets)
url          str             External URL to embed in an iframe
title        str             Display name shown in the ChatGPT UI

Pro tip: Use entrypoint for React/JSX widgets. Concierge auto-builds them with Vite + Tailwind on deploy.

Congratulations!

You've built your first ChatGPT App. Now go build something amazing!

Join Discord · GitHub

Step 1: Install Concierge

Start by installing the concierge-sdk package. This gives you the Concierge CLI for scaffolding projects and free cloud hosting.

Terminal
pip install concierge-sdk
Step 2: Scaffold Your Project

Create a new MCP server with one command:

Terminal
concierge init

This creates a new folder with everything you need:

Project Structure
my-server/
├── main.py           # Your MCP server
├── requirements.txt  # Dependencies
├── settings.json     # Deploy config
└── README.md
Terminal
cd my-server
Step 3: Deploy to Production

Deploy your MCP server to the cloud with a single command:

Terminal
concierge deploy

You're Live!

Your MCP server is now running at https://<project>.getconcierge.app/mcp

Step 4: Test & Evaluate

Test your tools with Concierge before connecting to clients. Run AI-powered evaluations to validate behavior.

Open Concierge

getconcierge.app

1. Connect your server: Enter your deployment URL or select from your deployed servers.

2. Test tools manually: Browse and execute tools with custom inputs.

3. Run evaluation: Use the Evaluate tab for AI-powered testing.

Step 5: Define Tools

Add tools to your MCP server using the @mcp.tool() decorator:

main.py
from concierge import Concierge
from pydantic import Field

mcp = Concierge("my-server", stateless_http=True)

@mcp.tool()
async def add(
    a: int = Field(description="First number"),
    b: int = Field(description="Second number")
) -> int:
    """Add two numbers together."""
    return a + b

@mcp.tool()
async def get_weather(city: str) -> dict:
    """Get weather for a city."""
    return {"city": city, "temp": "72°F", "condition": "Sunny"}

if __name__ == "__main__":
    import uvicorn
    app = mcp.streamable_http_app()
    uvicorn.run(app, host="0.0.0.0", port=8000)
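Because tools are ordinary async functions, you can unit-test their logic before deploying. A minimal sketch using only the standard library (the function bodies mirror the ones above, copied out so they run without the SDK installed):

```python
import asyncio

# Standalone copies of the tool logic from main.py, testable without the SDK.
async def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

async def get_weather(city: str) -> dict:
    """Get weather for a city."""
    return {"city": city, "temp": "72°F", "condition": "Sunny"}

async def main():
    assert await add(2, 3) == 5
    weather = await get_weather("Austin")
    assert weather["condition"] == "Sunny"
    print("all tool checks passed")

asyncio.run(main())
```

Keeping tool bodies free of framework-specific state makes this kind of direct testing possible, and it is the fastest feedback loop you have before a deploy.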
Step 6: Connect to Clients

Use your MCP server with any MCP-compatible client:

  • Claude Desktop: Add to claude_desktop_config.json under mcpServers
  • Cursor: Configure in Cursor settings → MCP Servers
  • ChatGPT: Use concierge init --chatgpt for widget support
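For Claude Desktop, one common shape for the mcpServers entry is to bridge to the remote URL through a local process, since the config launches commands rather than connecting to URLs directly. The mcp-remote package used below is an assumption on our part; check the current Claude Desktop docs for the recommended bridge:

```json
{
  "mcpServers": {
    "my-server": {
      "command": "npx",
      "args": ["mcp-remote", "https://<project>.getconcierge.app/mcp"]
    }
  }
}
```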

You're all set!

Your MCP server is ready. Deploy changes anytime with concierge deploy

Join Discord · GitHub

Step 1: Connect to Your Server

Open the Concierge Dashboard and connect to your MCP server.

1. Add Server: Enter your server URL or select from your deployed servers.

2. Open Evaluation: Click the "Evaluate" tab in the studio sidebar.

Step 2: Configure LLM

Choose your preferred LLM provider to power the evaluation scenarios.

Supported Providers

  • OpenAI: GPT-4o, GPT-4o-mini, GPT-4-turbo
  • Anthropic: Claude Sonnet 4, Claude Haiku 4
  • Google: Gemini 2.5 Pro, Gemini 2.5 Flash

Step 3: Run Evaluation

The evaluation runs automated test scenarios against your MCP tools.

What Gets Tested

  🔧 Tool Discovery: Automatically discovers all tools, resources, and prompts exposed by your server.

  🤖 AI-Generated Scenarios: The LLM generates realistic test prompts based on your tool descriptions.

  💬 Multi-Turn Conversations: Simulates real user conversations with tool invocations.

  📊 Performance Metrics: Measures response times, success rates, and tool coverage.
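The metrics in the last item reduce to simple aggregates over the recorded tool calls. A toy sketch with hypothetical data (illustrating the kind of numbers reported, not Concierge's actual implementation):

```python
# Hypothetical tool-call records, as an evaluation run might collect them.
calls = [
    {"tool": "add", "ok": True, "ms": 42},
    {"tool": "add", "ok": True, "ms": 55},
    {"tool": "get_weather", "ok": False, "ms": 310},
]
available_tools = {"add", "get_weather", "pizza_map"}

# Success rate: fraction of calls that completed without error.
success_rate = sum(c["ok"] for c in calls) / len(calls)
# Average latency across all calls, in milliseconds.
avg_latency = sum(c["ms"] for c in calls) / len(calls)
# Coverage: distinct tools exercised vs. tools the server exposes.
coverage = len({c["tool"] for c in calls}) / len(available_tools)

print(f"success={success_rate:.0%} avg_latency={avg_latency:.0f}ms coverage={coverage:.0%}")
```

Here pizza_map was never invoked, so coverage stays below 100% even though two of three calls succeeded; that is exactly the gap the coverage analysis in the results view is meant to surface.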

Step 4: Review Results

Get a comprehensive evaluation summary with actionable insights.

Evaluation Summary Includes

  • Conversation Transcript — Full ChatGPT-style view of all interactions
  • Tool Call History — Every tool invocation with arguments and results
  • Success/Failure Rates — Per-tool and overall success metrics
  • Response Times — Latency measurements for each tool call
  • Coverage Analysis — Which tools were tested vs. available
  • Per-Tool Scores — Individual ratings for each tool's performance

Advanced Features

Sampling Support

For tools that require server-side AI sampling (e.g., ctx.session.create_message()), Concierge automatically handles sampling requests using your configured LLM.

Elicitation Support

Tools that request additional user input via elicitation are fully supported. Dynamic forms are rendered for you to provide the required information.

Ready to Evaluate?

Connect to your MCP server and start testing with AI-powered evaluations.
