Bring your own embeddings, upsert them directly, and query with dense, sparse, BM25, or hybrid search. No collections or extractors required.

Quickstart

1. Create a namespace

curl -X POST "https://api.mixpeek.com/v1/namespaces/standalone" \
  -H "Authorization: Bearer $MIXPEEK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "namespace_id": "product-search",
    "mode": "standalone",
    "vector_configs": [
      {"name": "text_embedding", "dimension": 1536, "metric": "cosine"}
    ]
  }'
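
Standalone mode assumes you produce the embeddings yourself. As a minimal sketch (the provider and model here are assumptions, not part of Mixpeek; any model that outputs 1536-dimension vectors matches the config above), you could generate an embedding with OpenAI's embeddings endpoint and save it for the next step:

# Assumption: text-embedding-3-small returns 1536-dimension vectors,
# matching the namespace config above. Requires jq.
curl -s "https://api.openai.com/v1/embeddings" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "text-embedding-3-small", "input": "Wireless Headphones"}' \
  | jq -c '.data[0].embedding' > embedding.json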
2. Upsert documents with your vectors

curl -X POST "https://api.mixpeek.com/v1/namespaces/product-search/documents/direct" \
  -H "Authorization: Bearer $MIXPEEK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "documents": [{
      "document_id": "prod-001",
      "vectors": {"text_embedding": [0.12, -0.34, 0.56, "...1536 floats"]},
      "payload": {"title": "Wireless Headphones", "category": "audio", "price": 79.99}
    }]
  }'
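
The vector must match the dimension configured on the namespace (1536 here); a dimension mismatch is a common cause of rejected upserts. A quick sanity check on the embedding from the previous sketch, assuming jq is installed:

# Confirm the vector length matches the namespace's configured dimension
jq 'length' embedding.json   # expect: 1536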
3. Search

curl -X POST "https://api.mixpeek.com/v1/search" \
  -H "Authorization: Bearer $MIXPEEK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "namespace_id": "product-search",
    "queries": [{
      "vector_name": "text_embedding",
      "vector": [0.15, -0.28, 0.44, "...query embedding"],
      "top_k": 10,
      "filters": {"category": "audio"}
    }]
  }'
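
The same contract applies at query time: embed the query text with the same model used at ingest and pass the vector in the request. A hedged end-to-end sketch combining the embedding call from step 1 with the search request above (the model choice and query text are assumptions):

#!/usr/bin/env bash
# Embed the query with the same model used at ingest, then search Mixpeek.
# Assumes OPENAI_API_KEY and MIXPEEK_API_KEY are set and jq is installed.
QUERY_VECTOR=$(curl -s "https://api.openai.com/v1/embeddings" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "text-embedding-3-small", "input": "noise cancelling headphones"}' \
  | jq -c '.data[0].embedding')

curl -X POST "https://api.mixpeek.com/v1/search" \
  -H "Authorization: Bearer $MIXPEEK_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{
    \"namespace_id\": \"product-search\",
    \"queries\": [{
      \"vector_name\": \"text_embedding\",
      \"vector\": $QUERY_VECTOR,
      \"top_k\": 10,
      \"filters\": {\"category\": \"audio\"}
    }]
  }"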

Architecture

[Figure: standalone vector store architecture. Your pipeline feeds vectors into Mixpeek's sharded storage with dense, BM25, and payload indexes.]

Standalone vs Managed

Every namespace runs in one of two modes. Start standalone and promote when you're ready; no reindexing is required.

|                   | Standalone                                    | Managed                                    |
| ----------------- | --------------------------------------------- | ------------------------------------------ |
| Query latency     | Lower; no embedding at query time             | +50-200ms for auto-embedding               |
| Embedding cost    | You pay your provider directly                | Included in platform pricing               |
| Model flexibility | Any model, any fine-tune                      | Bound to registered inference services     |
| Write path        | Direct upsert only                            | Collections auto-process + direct upsert   |
| Search input      | Pre-computed vectors, text (BM25), sparse     | Also accepts raw text/URLs (auto-embedded) |
| Best for          | Existing ML infra, low latency, custom models | End-to-end processing, file pipelines      |

Start standalone if you already have embeddings. Promotion is additive: all existing data is preserved.

Features

Dense, sparse, BM25, and hybrid search. Payload filtering and indexes. Document versioning, namespace cloning, and usage metering. Everything works identically in both modes.

Next Steps

- Namespaces: vector indexes, metrics, BM25
- Documents & Search: upsert, query, manage
- Promote: standalone → managed