Get a Mixpeek API key from mixpeek.com/start, then pick your integration path:

- LangChain Agent
- MCP (Claude)
- REST API

## Documentation Index

Fetch the complete documentation index at: https://docs.mixpeek.com/docs/llms.txt
Use this file to discover all available pages before exploring further.
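If you are scripting discovery (for example, feeding pages to an LLM), the index is a plain-text file; a minimal fetch sketch:

```python
import requests

# llms.txt lists every documentation page; fetch it once to discover URLs.
index = requests.get("https://docs.mixpeek.com/docs/llms.txt", timeout=10)
index.raise_for_status()
print(index.text)
```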
## LangChain Agent

Build a LangChain agent with a `video_search` tool backed by Mixpeek.

```bash
pip install mixpeek langchain langchain-openai
export MIXPEEK_API_KEY="sk_live_replace_me"
export OPENAI_API_KEY="sk-replace_me"
```
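Since the key is already exported, the client can also read it from the environment instead of hardcoding a placeholder as the snippets below do; a small sketch:

```python
import os
from mixpeek import Mixpeek

# Reuse the MIXPEEK_API_KEY exported above; avoids committing keys to code.
client = Mixpeek(api_key=os.environ["MIXPEEK_API_KEY"])
```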
### Create namespace, bucket, and collection
```python
from mixpeek import Mixpeek

client = Mixpeek(api_key="YOUR_MIXPEEK_API_KEY")

# Namespace: an isolated environment with its own feature extractors.
ns = client.namespaces.create(
    namespace_name="agent-video-demo",
    feature_extractors=[
        {"feature_extractor_name": "multimodal_extractor", "version": "v1"}
    ]
)

# Bucket: raw video files land here.
bucket = client.buckets.create(
    bucket_name="demo-videos",
    namespace_id=ns.namespace_id,
    schema={"properties": {"video_url": {"type": "url", "required": True}}}
)

# Collection: scene-level documents extracted from the bucket's videos.
col = client.collections.create(
    collection_name="video-scenes",
    namespace_id=ns.namespace_id,
    source={"type": "bucket", "bucket_id": bucket.bucket_id},
    feature_extractor={
        "feature_extractor_name": "multimodal_extractor",
        "version": "v1",
        "input_mappings": {"video": "payload.video_url"},
        "parameters": {"split_method": "scene", "run_transcription": True, "run_multimodal_embedding": True}
    }
)
```
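The identifiers returned by these three calls are threaded through every later request, so it can help to log them once; a trivial check using the objects created above:

```python
# Keep these IDs handy; every ingestion and retrieval call below needs them.
print("namespace_id: ", ns.namespace_id)
print("bucket_id:    ", bucket.bucket_id)
print("collection_id:", col.collection_id)
```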
### Upload and process
```python
obj = client.objects.create(
    bucket_id=bucket.bucket_id,
    namespace_id=ns.namespace_id,
    key_prefix="/samples",
    blobs=[{"property": "video_url", "type": "video", "url": "https://storage.googleapis.com/mixpeek-public-demo/videos/sample-product-demo.mp4"}]
)
batch = client.batches.create(bucket_id=bucket.bucket_id, namespace_id=ns.namespace_id, object_ids=[obj.object_id])
result = client.batches.submit(bucket_id=bucket.bucket_id, batch_id=batch.batch_id, namespace_id=ns.namespace_id)
```
Poll `client.tasks.get(task_id=result.task_id)` until `status == "COMPLETED"` (1-5 min).
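A minimal polling loop, assuming the returned task object exposes a `status` field as the status string above suggests (the failure states named here are assumptions):

```python
import time

# Poll until ingestion finishes; scene splitting and embedding take 1-5 min.
while True:
    task = client.tasks.get(task_id=result.task_id)
    if task.status == "COMPLETED":
        break
    if task.status in ("FAILED", "CANCELLED"):  # assumed failure states
        raise RuntimeError(f"Ingestion did not complete: {task.status}")
    time.sleep(10)
```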
### Create a retriever and wire it as a tool

```python
from langchain.tools import Tool
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

ret = client.retrievers.create(
    retriever_name="agent-video-search",
    namespace_id=ns.namespace_id,
    input_schema={"properties": {"query_text": {"type": "text", "required": True}}},
    collection_ids=[col.collection_id],
    stages=[{
        "stage_type": "filter", "stage_id": "feature_search",
        "parameters": {
            "feature_uri": "mixpeek://multimodal_extractor@v1/multimodal_embedding",
            "input": {"text": "{{INPUT.query_text}}"}, "limit": 20
        }
    }]
)

def search_video(query: str) -> str:
    """Search indexed video scenes and return timestamped matches."""
    results = client.retrievers.execute(
        retriever_id=ret.retriever_id, namespace_id=ns.namespace_id,
        inputs={"query_text": query}, limit=5
    )
    return "\n".join(
        f"[{r.metadata.get('start_time', '?')}s] (score: {r.score:.3f}) {r.metadata.get('description', '')}"
        for r in results.results
    ) or "No results found."

video_tool = Tool(name="video_search", description="Search indexed video by natural language", func=search_video)

llm = ChatOpenAI(model="gpt-4o", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You answer questions about video content. Use video_search to find relevant moments."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])
agent = create_openai_tools_agent(llm, [video_tool], prompt)
executor = AgentExecutor(agent=agent, tools=[video_tool], verbose=True)

print(executor.invoke({"input": "What product features are shown?"})["output"])
```
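Before running the full agent loop, it can be worth calling the tool function directly to confirm retrieval works end to end; a quick check using the `search_video` function defined above:

```python
# Exercise the retriever without the LLM in the loop.
print(search_video("product demo highlights"))
```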
## MCP (Claude)

Add Mixpeek as a tool in Claude Desktop or Claude Code; no code required. After updating the config, restart Claude and ask: "What Mixpeek tools do you have access to?"

See the full MCP reference for per-retriever servers and troubleshooting.
**Claude Desktop**: open `claude_desktop_config.json` and add:

```json
{
  "mcpServers": {
    "mixpeek-retrieval": {
      "url": "https://mcp.mixpeek.com/retrieval/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
```
**Claude Code**: add `.mcp.json` to your project root:

```json
{
  "mcpServers": {
    "mixpeek-retrieval": {
      "url": "https://mcp.mixpeek.com/retrieval/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
```
Scoped MCP servers expose subsets of the full tool surface:

| Scope | URL | Tools |
|---|---|---|
| Full | https://mcp.mixpeek.com/mcp | 48 |
| Ingestion | https://mcp.mixpeek.com/ingestion/mcp | 20 |
| Retrieval | https://mcp.mixpeek.com/retrieval/mcp | 11 |
| Admin | https://mcp.mixpeek.com/admin/mcp | 17 |
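For example, to give Claude both ingestion and retrieval tools while leaving out admin, you could register two scoped servers side by side; same shape as the configs above, with illustrative server names:

```json
{
  "mcpServers": {
    "mixpeek-ingestion": {
      "url": "https://mcp.mixpeek.com/ingestion/mcp",
      "headers": { "Authorization": "Bearer YOUR_API_KEY" }
    },
    "mixpeek-retrieval": {
      "url": "https://mcp.mixpeek.com/retrieval/mcp",
      "headers": { "Authorization": "Bearer YOUR_API_KEY" }
    }
  }
}
```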
## REST API

Call `POST /v1/retrievers/{id}/execute` from any language.

```python
import requests

HEADERS = {
    "Authorization": f"Bearer {MIXPEEK_API_KEY}",
    "X-Namespace": MIXPEEK_NAMESPACE,
    "Content-Type": "application/json",
}
resp = requests.post(
    f"https://api.mixpeek.com/v1/retrievers/{RETRIEVER_ID}/execute",
    headers=HEADERS,
    json={"inputs": {"query_text": "safety regulations"}, "limit": 10},
)
results = resp.json()["results"]
```
Each result contains `score`, `metadata`, and `content`. Wrap this call as a tool in any agent framework: OpenAI function calling, CrewAI, LlamaIndex, or plain HTTP.

See the REST integration guide and API reference for full details.
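As one concrete option, here is a minimal sketch of exposing the retriever via OpenAI function calling. The helper name `mixpeek_search` and the environment-variable names are illustrative; the HTTP call mirrors the example above, and the tool schema follows the standard OpenAI tools format:

```python
import os
import requests

def mixpeek_search(query_text: str, limit: int = 10) -> list:
    """Call the Mixpeek retriever over plain HTTP (mirrors the example above)."""
    resp = requests.post(
        f"https://api.mixpeek.com/v1/retrievers/{os.environ['RETRIEVER_ID']}/execute",
        headers={
            "Authorization": f"Bearer {os.environ['MIXPEEK_API_KEY']}",
            "X-Namespace": os.environ["MIXPEEK_NAMESPACE"],
            "Content-Type": "application/json",
        },
        json={"inputs": {"query_text": query_text}, "limit": limit},
    )
    resp.raise_for_status()
    return resp.json()["results"]

# JSON schema handed to the OpenAI chat completions API as a callable tool;
# when the model calls "video_search", dispatch to mixpeek_search() above.
tools = [{
    "type": "function",
    "function": {
        "name": "video_search",
        "description": "Search indexed video content by natural language.",
        "parameters": {
            "type": "object",
            "properties": {"query_text": {"type": "string"}},
            "required": ["query_text"],
        },
    },
}]
```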
