Data object for passing log arguments to client-side handlers. This provides an interface to match the Python standard library logging, for compatibility with structured logging.
Context object providing access to MCP capabilities. This provides a cleaner interface to MCP's RequestContext functionality. It gets injected into tool and resource functions that request it via type hints.

To use context in a tool function, add a parameter with the Context type annotation:

```python
@server.tool
async def my_tool(x: int, ctx: Context) -> str:
    # Log messages to the client
    await ctx.info(f"Processing {x}")
    await ctx.debug("Debug info")
    await ctx.warning("Warning message")
    await ctx.error("Error message")

    # Report progress
    await ctx.report_progress(50, 100, "Processing")

    # Access resources
    data = await ctx.read_resource("resource://data")

    # Get request info
    request_id = ctx.request_id
    client_id = ctx.client_id

    # Manage state across the session (persists across requests)
    await ctx.set_state("key", "value")
    value = await ctx.get_state("key")

    # Store non-serializable values for the current request only
    await ctx.set_state("client", http_client, serializable=False)

    return str(x)
```
State Management:
Context provides session-scoped state that persists across requests within
the same MCP session. State is automatically keyed by session, ensuring
isolation between different clients.

State set during on_initialize middleware will persist to subsequent tool
calls when using the same session object (STDIO, SSE, single-server HTTP).
For distributed/serverless HTTP deployments where different machines handle
the init and tool calls, state is isolated by the mcp-session-id header.

The context parameter name can be anything as long as it's annotated with Context. The context is optional - tools that don't need it can omit the parameter.

Methods:
True when this context is running in a background task (Docket worker). When True, certain operations like elicit() and sample() will use
task-aware implementations that can pause the task and wait for
client input.
Get the request ID that originated this execution, if available. In foreground request mode, this is the current request_id.
In background task mode, this is the request_id captured when the task
was submitted, if one was available.
Access to the underlying request context. Returns None when the MCP session has not been established yet. Returns the full RequestContext once the MCP session is available.

For HTTP request access in middleware, use get_http_request() from fastmcp.server.dependencies, which works whether or not the MCP session is available.

Example in middleware:

```python
async def on_request(self, context, call_next):
    ctx = context.fastmcp_context
    if ctx.request_context:
        # MCP session available - can access session_id, request_id, etc.
        session_id = ctx.session_id
    else:
        # MCP session not available yet - use HTTP helpers
        from fastmcp.server.dependencies import get_http_request
        request = get_http_request()
    return await call_next(context)
```
Access the server's lifespan context. Returns the context dict yielded by the server's lifespan function. Returns an empty dict if no lifespan was configured or if the MCP session is not yet established.

In background tasks (Docket workers), where request_context is not available, falls back to reading from the FastMCP server's lifespan result directly.

Example:

```python
@server.tool
def my_tool(ctx: Context) -> str:
    db = ctx.lifespan_context.get("db")
    if db:
        return db.query("SELECT 1")
    return "No database connection"
```
Get the current transport type. Returns the transport type used to run this server: "stdio", "sse", or "streamable-http". Returns None if called outside of a server context.
Check whether the connected client supports a given MCP extension. Inspects the extensions extra field on ClientCapabilities sent by the client during initialization.

Returns False when no session is available (e.g., outside a request context) or when the client did not advertise the extension.

Example:

```python
from fastmcp.apps.config import UI_EXTENSION_ID

@mcp.tool
async def my_tool(ctx: Context) -> str:
    if ctx.client_supports_extension(UI_EXTENSION_ID):
        return "UI-capable client"
    return "text-only client"
```
Get the MCP session ID for ALL transports. Returns the session ID that can be used as a key for session-based data storage (e.g., Redis) to share data between tool calls within the same client session.

Returns:
    The session ID for StreamableHTTP transports, or a generated ID for other transports.
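Since the session ID is meant to serve as a storage key, the pattern can be sketched without a live server. The `mcp:session:` prefix, the `session_key` helper, and the dict standing in for a Redis client are all illustrative assumptions; only the idea of keying by `ctx.session_id` comes from the docs above.

```python
# Illustrative only: namespace external storage keys by the MCP session ID
# so data is shared across tool calls within one client session.
# `session_id` stands in for the value returned by ctx.session_id.
def session_key(session_id: str, name: str) -> str:
    return f"mcp:session:{session_id}:{name}"

store: dict[str, str] = {}  # stand-in for a Redis client
store[session_key("abc123", "cart")] = "widget"
store[session_key("abc123", "user")] = "alice"

# Both entries live under the same per-session namespace.
assert all(k.startswith("mcp:session:abc123:") for k in store)
```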
Access to the underlying session for advanced usage. In request mode: returns the session from the active request context. In background task mode: returns the session stored at Context creation.

Raises RuntimeError if no session is available.
Send a DEBUG-level message to the connected MCP Client. Messages sent to Clients are also logged to the fastmcp.server.context.to_client logger with a level of DEBUG.
Send an INFO-level message to the connected MCP Client. Messages sent to Clients are also logged to the fastmcp.server.context.to_client logger with a level of DEBUG.
Send a WARNING-level message to the connected MCP Client. Messages sent to Clients are also logged to the fastmcp.server.context.to_client logger with a level of DEBUG.
Send an ERROR-level message to the connected MCP Client. Messages sent to Clients are also logged to the fastmcp.server.context.to_client logger with a level of DEBUG.
Close the current response stream to trigger client reconnection. When using StreamableHTTP transport with an EventStore configured, this
method gracefully closes the HTTP connection for the current request.
The client will automatically reconnect (after retry_interval milliseconds)
and resume receiving events from where it left off via the EventStore.

This is useful for long-running operations to avoid load balancer timeouts.
Instead of holding a connection open for minutes, you can periodically close
and let the client reconnect.
Make a single LLM sampling call. This is a stateless function that makes exactly one LLM call and optionally executes any requested tools. Use this for fine-grained control over the sampling loop.

Args:
messages: The message(s) to send. Can be a string, list of strings,
or list of SamplingMessage objects.
system_prompt: Optional system prompt for the LLM.
temperature: Optional sampling temperature.
max_tokens: Maximum tokens to generate. Defaults to 512.
model_preferences: Optional model preferences.
tools: Optional list of tools the LLM can use.
tool_choice: Tool choice mode ("auto", "required", or "none").
execute_tools: If True (default), execute tool calls and append results
to history. If False, return immediately with tool_calls available
in the step for manual execution.
mask_error_details: If True, mask detailed error messages from tool
execution. When None (default), uses the global settings value.
Tools can raise ToolError to bypass masking.
tool_concurrency: Controls parallel execution of tools:
    - None (default): Sequential execution (one at a time)
    - 0: Unlimited parallel execution
    - N > 0: Execute at most N tools concurrently
    If any tool has sequential=True, all tools execute sequentially
    regardless of this setting.
Returns:
    SampleStep containing:
    - .response: The raw LLM response
    - .history: Messages including input, assistant response, and tool results
    - .is_tool_use: True if the LLM requested tool execution
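The manual loop that execute_tools=False enables can be sketched without a live client. The SampleStep fields mirror the description above; `fake_sample_step` is a stand-in for `ctx.sample_step`, and the tool result string is invented for illustration.

```python
from dataclasses import dataclass, field

# Stand-in for the SampleStep described above (.response, .history,
# .is_tool_use). `fake_sample_step` simulates ctx.sample_step called with
# execute_tools=False, so the caller drives the tool loop manually.
@dataclass
class SampleStep:
    response: str
    history: list = field(default_factory=list)
    is_tool_use: bool = False

def fake_sample_step(messages: list[str]) -> SampleStep:
    # First call "requests" a tool; once a tool result is present,
    # the model returns a final text response.
    if not any("tool result" in m for m in messages):
        return SampleStep(response="", history=messages, is_tool_use=True)
    return SampleStep(response="done", history=messages)

history = ["what time is it?"]
step = fake_sample_step(history)
while step.is_tool_use:
    # Execute the requested tool ourselves, append the result, re-sample.
    history = step.history + ["tool result: 12:00"]
    step = fake_sample_step(history)

assert step.response == "done"
```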
Send a sampling request to the client and await the response. This method runs to completion automatically. When tools are provided, it executes a tool loop: if the LLM returns a tool use request, the tools are executed and the results are sent back to the LLM. This continues until the LLM provides a final text response.

When result_type is specified, a synthetic final_response tool is created. The LLM calls this tool to provide the structured response, which is validated against the result_type and returned as .result.

For fine-grained control over the sampling loop, use sample_step() instead.

Args:
messages: The message(s) to send. Can be a string, list of strings,
or list of SamplingMessage objects.
system_prompt: Optional system prompt for the LLM.
temperature: Optional sampling temperature.
max_tokens: Maximum tokens to generate. Defaults to 512.
model_preferences: Optional model preferences.
tools: Optional list of tools the LLM can use. Accepts plain
functions or SamplingTools.
result_type: Optional type for structured output. When specified,
a synthetic final_response tool is created and the LLM’s
response is validated against this type.
mask_error_details: If True, mask detailed error messages from tool
execution. When None (default), uses the global settings value.
Tools can raise ToolError to bypass masking.
tool_concurrency: Controls parallel execution of tools:
    - None (default): Sequential execution (one at a time)
    - 0: Unlimited parallel execution
    - N > 0: Execute at most N tools concurrently
    If any tool has sequential=True, all tools execute sequentially
    regardless of this setting.
Returns:
    SamplingResult[T] containing:
    - .text: The text representation (raw text or JSON for structured)
    - .result: The typed result (str for text, parsed object for structured)
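What result_type validation amounts to can be sketched with a plain dataclass. The `Weather` type and the JSON payload are invented for illustration; they stand in for a result_type passed to sample() and for the arguments the LLM would send to the synthetic final_response tool.

```python
import json
from dataclasses import dataclass

# Illustrative result_type; with ctx.sample(..., result_type=Weather),
# SamplingResult.result would be a Weather instance and .text its JSON.
@dataclass
class Weather:
    city: str
    temp_c: float

# Stand-in for the arguments the LLM passes to final_response.
raw_tool_args = '{"city": "Oslo", "temp_c": -3.5}'

# "Validated against the result_type" means parsing the tool arguments
# into the requested type (FastMCP's actual validation may be stricter).
result = Weather(**json.loads(raw_tool_args))
assert result == Weather(city="Oslo", temp_c=-3.5)
```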
Send an elicitation request to the client and await the response. Call this method at any time to request additional information from the user through the client. The client must support elicitation, or the request will error.

Note that the MCP protocol only supports simple object schemas with primitive types. You can provide a dataclass, TypedDict, or BaseModel to comply. If you provide a primitive type, an object schema with a single "value" field will be generated for the MCP interaction and automatically deconstructed into the primitive type upon response.

If the response_type is None, the generated schema will be that of an empty object in order to comply with the MCP protocol requirements. Clients must send an empty object ({}) in response.

Args:
message: A human-readable message explaining what information is needed
response_type: The type of the response, which should be a primitive
type or dataclass or BaseModel. If it is a primitive type, an
object schema with a single "value" field will be generated.
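The primitive-wrapping behavior can be sketched as follows. The exact JSON Schema FastMCP generates may differ; this only illustrates the single-"value"-field shape the text above describes, and both helper functions are invented for the sketch.

```python
# Sketch: a primitive response_type travels on the wire as an object
# schema with one "value" field, then is unwrapped back to the primitive.
def wrap_schema(primitive: type) -> dict:
    json_type = {int: "integer", str: "string", bool: "boolean"}[primitive]
    return {
        "type": "object",
        "properties": {"value": {"type": json_type}},
        "required": ["value"],
    }

def unwrap(payload: dict):
    # Deconstruct the object response back into the primitive value.
    return payload["value"]

assert wrap_schema(int)["properties"]["value"]["type"] == "integer"
assert unwrap({"value": 42}) == 42
```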
Set a value in the state store. By default, values are stored in the session-scoped state store and persist across requests within the same MCP session. Values must be JSON-serializable (dicts, lists, strings, numbers, etc.).

For non-serializable values (e.g., HTTP clients, database connections), pass serializable=False. These values are stored in a request-scoped dict and only live for the current MCP request (tool call, resource read, or prompt render). They will not be available in subsequent requests.

The key is automatically prefixed with the session identifier.
Get a value from the state store. Checks request-scoped state first (set with serializable=False), then falls back to the session-scoped state store.

Returns None if the key is not found.
Enable components matching criteria for this session only. Session rules override global transforms. Rules accumulate - each call adds a new rule to the session. Later marks override earlier ones (Visibility transform semantics).

Sends notifications to this session only: ToolListChangedNotification, ResourceListChangedNotification, and PromptListChangedNotification.

Args:
names: Component names or URIs to match.
keys: Component keys to match (e.g., ).
version: Component version spec to match.
tags: Tags to match (component must have at least one).
components: Component types to match (e.g., ).
match_all: If True, matches all components regardless of other criteria.
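A toy model of the accumulate-and-override semantics described above: each enable/disable call appends a rule, and the last matching rule wins. The rule shape, matching logic, and tool names here are illustrative assumptions, not FastMCP internals.

```python
# Toy model of session visibility rules: rules accumulate, later marks
# override earlier ones for the components they match.
rules: list[tuple[str, set[str]]] = []  # (mark, component names)

def add_rule(mark: str, names: set[str]) -> None:
    rules.append((mark, names))  # each enable/disable call appends a rule

def visible(name: str, default: bool = True) -> bool:
    result = default
    for mark, names in rules:  # later matching rules override earlier ones
        if name in names:
            result = mark == "enable"
    return result

add_rule("disable", {"debug_tool"})
add_rule("enable", {"debug_tool"})  # later mark wins

assert visible("debug_tool") is True
assert visible("other_tool") is True  # untouched by session rules
```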
Disable components matching criteria for this session only. Session rules override global transforms. Rules accumulate - each call adds a new rule to the session. Later marks override earlier ones (Visibility transform semantics).

Sends notifications to this session only: ToolListChangedNotification, ResourceListChangedNotification, and PromptListChangedNotification.

Args:
names: Component names or URIs to match.
keys: Component keys to match (e.g., ).
version: Component version spec to match.
tags: Tags to match (component must have at least one).
components: Component types to match (e.g., ).
match_all: If True, matches all components regardless of other criteria.
Clear all session visibility rules. Use this to reset session visibility back to global defaults.

Sends notifications to this session only: ToolListChangedNotification, ResourceListChangedNotification, and PromptListChangedNotification.