Bring your own embeddings, upsert them directly, and query with dense, sparse, BM25, or hybrid search. No collections or extractors required.

Documentation Index
Fetch the complete documentation index at: https://docs.mixpeek.com/docs/llms.txt
Use this file to discover all available pages before exploring further.
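The bring-your-own-embeddings flow can be sketched in plain Python: an in-memory stand-in for a namespace that accepts pre-computed vectors with payloads, then answers a dense query with a payload filter. The `Namespace`, `upsert`, and `query` names below are illustrative only, not the Mixpeek API.

```python
import math

# Illustrative in-memory namespace: you compute embeddings yourself
# (any model), upsert vectors with payloads, and query by vector.
class Namespace:
    def __init__(self):
        self.docs = {}  # doc_id -> (vector, payload)

    def upsert(self, doc_id, vector, payload):
        self.docs[doc_id] = (vector, payload)

    def query(self, vector, top_k=3, payload_filter=None):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb)

        # Apply the payload filter first, then rank by cosine similarity.
        hits = [
            (doc_id, cosine(vector, vec), payload)
            for doc_id, (vec, payload) in self.docs.items()
            if payload_filter is None
            or all(payload.get(k) == v for k, v in payload_filter.items())
        ]
        hits.sort(key=lambda h: h[1], reverse=True)
        return hits[:top_k]

ns = Namespace()
ns.upsert("a", [1.0, 0.0], {"lang": "en"})
ns.upsert("b", [0.9, 0.1], {"lang": "fr"})
ns.upsert("c", [0.0, 1.0], {"lang": "en"})

results = ns.query([1.0, 0.0], top_k=2, payload_filter={"lang": "en"})
print([doc_id for doc_id, _, _ in results])  # → ['a', 'c']
```

The key property of standalone mode is visible here: the query path never calls an embedding model, so latency is just the similarity scan (or, in a real index, an ANN lookup).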
Quickstart
Architecture
Standalone vs Managed
Every namespace runs in one of two modes. Start standalone and promote when you're ready, with no reindexing.

| | Standalone | Managed |
|---|---|---|
| Query latency | Lower — no embedding at query time | +50-200ms for auto-embedding |
| Embedding cost | You pay your provider directly | Included in platform pricing |
| Model flexibility | Any model, any fine-tune | Bound to registered inference services |
| Write path | Direct upsert only | Collections auto-process + direct upsert |
| Search input | Pre-computed vectors, text (BM25), sparse | Also accepts raw text/URLs (auto-embedded) |
| Best for | Existing ML infra, low-latency, custom models | End-to-end processing, file pipelines |
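Both modes support hybrid search, which merges a keyword (BM25-style) ranking with a dense-vector ranking. A common fusion technique is reciprocal rank fusion (RRF), sketched below with hard-coded rankings; this is a generic illustration of the idea, not how Mixpeek fuses results internally.

```python
# Reciprocal rank fusion: each ranking contributes 1 / (k + rank + 1)
# to a document's fused score; documents strong in both rankings win.
def rrf(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

bm25_ranking = ["doc3", "doc1", "doc2"]   # keyword-match order
dense_ranking = ["doc1", "doc2", "doc3"]  # vector-similarity order

fused = rrf([bm25_ranking, dense_ranking])
print(fused)  # → ['doc1', 'doc3', 'doc2']
```

`doc1` wins because it ranks highly in both lists, even though it tops only one of them; that robustness to disagreement between retrievers is why rank-based fusion is popular for hybrid search.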
Features
Dense, sparse, BM25, and hybrid search. Payload filtering and indexes. Document versioning, namespace cloning, and usage metering. Everything works identically in both modes.

Next Steps
Namespaces: vector indexes, metrics, BM25
Documents & Search: upsert, query, manage
Promote: standalone → managed

