MCP Server
Give AI agents perception over multimodal data via the Model Context Protocol
A Model Context Protocol (MCP) server that exposes Mixpeek's multimodal search, classification, and metadata extraction as agent-callable tools. Any MCP-compatible AI agent (Claude, ChatGPT, Cursor, Windsurf, Cline) can discover and invoke these tools to search video, classify images, extract metadata from documents, and detect brands or faces -- all through natural language.
Built for AI agent developers, platform teams building agent toolkits, and enterprises adding multimodal perception to their AI workflows.
Get Started
Integrations
Works with any AI agent or IDE that supports the Model Context Protocol standard.
pip install mixpeek-mcp

Use Cases
Give Claude, ChatGPT, or custom agents the ability to search your video archive
Enable AI agents to classify content against brand safety or IAB taxonomies
Build compliance review agents that reason across documents, images, and recordings
Add visual product search to customer support agents
Create media intelligence agents that monitor and alert on content signals
Example
Claude Desktop config (mcp.json)
{
  "mcpServers": {
    "mixpeek": {
      "command": "python",
      "args": ["-m", "mixpeek_mcp"],
      "env": {
        "MIXPEEK_API_KEY": "your_api_key",
        "MIXPEEK_NAMESPACE": "your_namespace"
      }
    }
  }
}
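If you prefer to script the setup, the same entry can be generated and written programmatically. A minimal sketch; the API key and namespace values are placeholders that you must replace with your own credentials:

```python
import json

# Build the Claude Desktop mcp.json entry shown above.
# MIXPEEK_API_KEY / MIXPEEK_NAMESPACE are placeholder values.
config = {
    "mcpServers": {
        "mixpeek": {
            "command": "python",
            "args": ["-m", "mixpeek_mcp"],
            "env": {
                "MIXPEEK_API_KEY": "your_api_key",
                "MIXPEEK_NAMESPACE": "your_namespace",
            },
        }
    }
}

# Serialize for writing to mcp.json.
mcp_json = json.dumps(config, indent=2)
```

Writing this string to your Claude Desktop configuration file registers the server so it is launched on startup.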
Agent invocation example
Agent: "Find all video segments showing a red Nike logo"

Tool call: search
  query: "red Nike logo"
  modalities: ["video", "image"]

Result: 3 matches found (confidence: 0.91-0.96)
  warehouse-footage.mp4 @ 02:34-02:41
  product-shoot.mp4 @ 00:12-00:18
  storefront-cam.jpg (full frame)
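Under the hood, an MCP tool invocation like the one above travels as a JSON-RPC 2.0 `tools/call` request. A minimal sketch of the wire message an agent sends; the `search` tool name and its `query`/`modalities` arguments follow the example, but the exact parameter schema is defined by the server's published tool listing:

```python
import json

def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# The agent invocation above, as a wire message:
message = build_tool_call(
    1, "search", {"query": "red Nike logo", "modalities": ["video", "image"]}
)
```

Agents never hand-build these payloads themselves; the MCP client library does it after discovering the server's tools, which is why any MCP-compatible agent can call Mixpeek without custom integration code.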
Ready to integrate?
Get started with the MCP Server in minutes. Check out the documentation or explore the source code on GitHub.
