Mixpeek exposes its entire platform as agent-callable tools. Connect via MCP for zero-code setup, use the built-in Agent Runtime for stateful conversations, or wire retrievers into LangChain, OpenAI, or any framework via REST.

MCP (Model Context Protocol)

The fastest way to connect an AI agent to Mixpeek. Four hosted servers expose different tool scopes:
| Scope | URL | Tools |
|---|---|---|
| Full | https://mcp.mixpeek.com/mcp | 48 |
| Ingestion | https://mcp.mixpeek.com/ingestion/mcp | 20 |
| Retrieval | https://mcp.mixpeek.com/retrieval/mcp | 11 |
| Admin | https://mcp.mixpeek.com/admin/mcp | 17 |
Add the server to your MCP client configuration:

```json
{
  "mcpServers": {
    "mixpeek": {
      "url": "https://mcp.mixpeek.com/mcp",
      "headers": { "Authorization": "Bearer YOUR_API_KEY" }
    }
  }
}
```

Per-Retriever Server

For a focused search agent, scope the MCP server to a single retriever. It reads your retriever’s input_schema and generates a typed search tool:
```bash
pip install mixpeek-mcp-retriever

mixpeek-mcp-retriever \
  --retriever-id ret_xxx \
  --namespace-id ns_xxx \
  --api-key YOUR_API_KEY
```
The server exposes three tools: `search` (typed to your schema), `describe` (retriever metadata), and `explain` (pipeline walkthrough). Full MCP reference →

Agent Sessions

Mixpeek’s built-in agent runtime gives you stateful, multi-turn conversations backed by your data. Each session runs as a dedicated process with tool access, conversation memory, and SSE streaming.
```bash
# Create a session
curl -X POST "https://api.mixpeek.com/v1/agents/sessions" \
  -H "Authorization: Bearer $MIXPEEK_API_KEY" \
  -H "X-Namespace: $NAMESPACE_ID" \
  -H "Content-Type: application/json" \
  -d '{
    "agent_config": {
      "system_prompt": "You help users search and analyze video content.",
      "available_tools": ["execute_retriever", "list_collections", "get_taxonomy"]
    }
  }'

# Send a message (SSE streaming)
curl -N -X POST "https://api.mixpeek.com/v1/agents/sessions/$SESSION_ID/messages" \
  -H "Authorization: Bearer $MIXPEEK_API_KEY" \
  -H "X-Namespace: $NAMESPACE_ID" \
  -H "Content-Type: application/json" \
  -d '{ "content": "Find videos about machine learning", "stream": true }'
```
The agent reasons through an analyze → plan → execute → synthesize workflow, calling tools as needed and streaming events back:
| Event | Description |
|---|---|
| `thinking` | Agent is analyzing or planning |
| `tool_call` | Agent is calling a tool |
| `tool_result` | Tool execution result |
| `message` | Response text chunk |
| `done` | Processing complete |
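A client consuming the stream splits it on the standard `event:` / `data:` SSE framing and dispatches on these event names. A minimal sketch, assuming JSON data payloads; the payload fields shown are illustrative, not taken from the API reference:

```python
import json

def parse_sse(raw: str):
    """Parse a raw SSE stream into (event, payload) pairs.

    Assumes each event is an `event:` line followed by a `data:` line,
    separated by blank lines (the standard SSE framing).
    """
    events = []
    event_type = None
    for line in raw.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:") and event_type:
            payload = json.loads(line[len("data:"):].strip())
            events.append((event_type, payload))
            event_type = None
    return events

# A fabricated stream illustrating the event types above
sample = (
    "event: thinking\n"
    'data: {"step": "analyze"}\n\n'
    "event: message\n"
    'data: {"text": "Here are the videos"}\n\n'
    "event: done\n"
    "data: {}\n\n"
)
for name, payload in parse_sse(sample):
    print(name, payload)
```

In a real client you would feed `curl -N` output (or an HTTP streaming response body) into the same parser line by line instead of buffering the whole stream.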

Available Tools

| Tool | Description |
|---|---|
| `execute_retriever` | Search documents via a retriever pipeline |
| `search_retrievers` | Find available retrievers |
| `get_retriever` | Get retriever configuration |
| `list_collections` | List collections in the namespace |
| `get_collection` | Get collection details |
| `list_taxonomies` | List taxonomies |
| `get_taxonomy` | Get taxonomy details |
| `list_clusters` | List cluster configurations |
| `get_object` | Get object metadata |
Sessions persist for 7 days and automatically rehydrate after idle periods. Agent Sessions API →

LangChain

The langchain-mixpeek package provides a retriever, individual tools, and a full toolkit:
```bash
pip install langchain-mixpeek
```

```python
from langchain_mixpeek import MixpeekToolkit
from langgraph.prebuilt import create_react_agent
from langchain_anthropic import ChatAnthropic

toolkit = MixpeekToolkit(
    api_key="mxp_...",
    namespace="my-namespace",
    bucket_id="bkt_...",
    collection_id="col_...",
    retriever_id="ret_...",
)

agent = create_react_agent(
    ChatAnthropic(model="claude-sonnet-4-20250514"),
    toolkit.get_tools(),
)

result = agent.invoke({
    "messages": [("user", "Find product demos and summarize what's shown")]
})
```

Toolkit Tools

| Tool | What it does |
|---|---|
| `mixpeek_search` | Search video, images, audio, documents |
| `mixpeek_ingest` | Upload content (text, images, video, audio, PDFs) |
| `mixpeek_process` | Trigger feature extraction |
| `mixpeek_classify` | Run taxonomy classification |
| `mixpeek_cluster` | Group similar documents |
| `mixpeek_alert` | Set up monitoring (webhook, Slack, email) |
Scope tools to what your agent needs with `toolkit.get_tools(actions=["search", "ingest"])`. Full LangChain guide →

OpenAI Function Calling

Define a Mixpeek retriever as an OpenAI function schema:
```python
from openai import OpenAI
from mixpeek import Mixpeek

openai_client = OpenAI()
mixpeek_client = Mixpeek(api_key="mxp_...")

tools = [{
    "type": "function",
    "function": {
        "name": "search_mixpeek",
        "description": "Search video, image, and audio content",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search query"},
                "limit": {"type": "integer", "description": "Max results"}
            },
            "required": ["query"]
        }
    }
}]

response = openai_client.chat.completions.create(
    model="gpt-4o", messages=messages, tools=tools
)

# Handle tool_calls by calling mixpeek_client.retrievers.execute()
```
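The tool-call handling step in that last comment can be sketched as a small dispatcher. Two simplifying assumptions: tool calls are shown as plain dicts rather than SDK objects to keep the sketch self-contained, and `execute` stands in for whatever callable actually runs the retriever (e.g. a wrapper around `mixpeek_client.retrievers.execute`):

```python
import json

def handle_tool_calls(tool_calls, execute):
    """Turn each search_mixpeek tool call into a role="tool" message
    that the next chat.completions.create() call can consume."""
    messages = []
    for call in tool_calls:
        if call["function"]["name"] != "search_mixpeek":
            continue
        args = json.loads(call["function"]["arguments"])
        results = execute(query=args["query"], limit=args.get("limit", 10))
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(results),
        })
    return messages

# Exercise the dispatcher with a stubbed retriever
fake_call = {
    "id": "call_1",
    "function": {"name": "search_mixpeek", "arguments": '{"query": "demos"}'},
}
stub = lambda query, limit: [{"document_id": "doc_1", "score": 0.92}]
tool_messages = handle_tool_calls([fake_call], stub)
```

Appending these `role: "tool"` messages to the conversation and calling the model again completes the function-calling round trip.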
Works with both the Chat Completions API and Assistants API. Full OpenAI guide →

Any Framework (REST)

The same pattern works with CrewAI, LlamaIndex, Haystack, Autogen, or plain HTTP — wrap the retriever execute endpoint as a tool:
```python
import requests

def search(query: str, limit: int = 10) -> list:
    """Execute a Mixpeek retriever and return its result list.

    RETRIEVER_ID, API_KEY, and NAMESPACE_ID are placeholders for your
    own values.
    """
    resp = requests.post(
        f"https://api.mixpeek.com/v1/retrievers/{RETRIEVER_ID}/execute",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "X-Namespace": NAMESPACE_ID,
            "Content-Type": "application/json",
        },
        json={"inputs": {"query_text": query}, "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()  # surface HTTP errors instead of returning bad JSON
    return resp.json()["results"]
```
Retriever Execute API →
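One way to make that wrapper portable across frameworks is to derive a JSON-schema tool definition from the function signature itself; most frameworks accept some variant of this shape. A sketch, assuming the OpenAI-style schema layout shown earlier is what your framework expects:

```python
import inspect

def to_tool_schema(fn):
    """Build an OpenAI-style tool schema from a function's
    signature and docstring."""
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {"type": type_map.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value => required argument
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": props, "required": required},
    }

def search(query: str, limit: int = 10) -> list:
    """Search documents via a Mixpeek retriever."""
    ...

schema = to_tool_schema(search)
```

The same `search` function can then be registered once and described to each framework from the generated schema, instead of hand-writing a schema per framework.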

Choosing an Integration

| I want to… | Use |
|---|---|
| Connect Claude or Cursor with no code | MCP |
| Build a stateful conversational agent | Agent Sessions |
| Build a LangChain/LangGraph agent | LangChain |
| Add tools to GPT models | OpenAI Function Calling |
| Use any other framework | REST |