Collections

Collections bind a bucket to a feature extractor. When you submit a batch, the engine runs the extractor against each object and produces searchable documents.
curl -X POST "https://api.mixpeek.com/v1/collections" \
  -H "Authorization: Bearer $MIXPEEK_API_KEY" \
  -H "X-Namespace: $NAMESPACE_ID" \
  -H "Content-Type: application/json" \
  -d '{
    "collection_name": "product-embeddings",
    "source": { "type": "bucket", "bucket_id": "'$BUCKET_ID'" },
    "feature_extractor": {
      "feature_extractor_name": "multimodal_extractor",
      "version": "v1",
      "input_mappings": { "image": "payload.hero_image", "text": "payload.product_text" }
    }
  }'
A single object can feed multiple collections — each running a different extractor. Documents retain lineage to the source object via root_object_id. Collection API →
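For instance, a second collection over the same bucket could run a text-only extractor alongside the multimodal one. This sketch reuses the create-collection call above; the `text-embeddings` collection name is illustrative, and `text_extractor@v1` is the built-in text extractor:

```shell
curl -X POST "https://api.mixpeek.com/v1/collections" \
  -H "Authorization: Bearer $MIXPEEK_API_KEY" \
  -H "X-Namespace: $NAMESPACE_ID" \
  -H "Content-Type: application/json" \
  -d '{
    "collection_name": "text-embeddings",
    "source": { "type": "bucket", "bucket_id": "'$BUCKET_ID'" },
    "feature_extractor": {
      "feature_extractor_name": "text_extractor",
      "version": "v1",
      "input_mappings": { "text": "payload.product_text" }
    }
  }'
```

Submitting one batch now produces two documents per object, each with lineage back to the same root_object_id.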

Feature URIs

Every extracted feature is addressed by a URI that pins it to a specific extractor version:
mixpeek://{extractor_name}@{version}/{output_name}
Examples:
  • mixpeek://multimodal_extractor@v1/multimodal_embedding
  • mixpeek://text_extractor@v1/text_embedding
  • mixpeek://face_detector@v1/face_embedding
Feature URIs are referenced by retriever stages, taxonomies, and clustering jobs. Because the URI pins an exact extractor version, queries are guaranteed to use the same model that produced the index: swap the URI, re-embed, and everything downstream stays consistent.
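Because the URI format is fixed, its components can be pulled apart with plain shell parameter expansion — a minimal sketch using one of the example URIs above:

```shell
# Split mixpeek://{extractor_name}@{version}/{output_name} into parts.
uri="mixpeek://multimodal_extractor@v1/multimodal_embedding"
rest="${uri#mixpeek://}"                         # drop the scheme
extractor="${rest%%@*}"                          # part before the @
version="${rest#*@}"; version="${version%%/*}"   # between @ and /
output="${rest#*/}"                              # part after the /
echo "$extractor $version $output"               # multimodal_extractor v1 multimodal_embedding
```

Pinning on the `@{version}` component is what makes swapping extractors an explicit, auditable change rather than a silent one.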

Tiered Pipelines

When a batch is submitted, the engine runs a DAG of extractors:
  1. Tier 1 collections process raw objects from the bucket
  2. Tier 2 collections consume Tier 1 documents as input
  3. Each tier waits for dependencies before executing
Example: video → scenes (Tier 1) → faces per scene (Tier 2) → expressions per face (Tier 3)
Collections define the pipeline through their source and feature_extractor configuration. Dependencies are resolved automatically.
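The resolution rule can be sketched as a tiny loop: a collection becomes runnable once its source has been processed, and bucket-sourced collections are runnable immediately. This is a toy illustration of the ordering for the video pipeline above, not the engine's actual scheduler; the collection names are taken from the example:

```shell
# Each collection maps to its source; the bucket itself needs no processing.
declare -A src=( [scenes]=bucket [faces]=scenes [expressions]=faces )
ran=" bucket "
order=()
while [ "${#order[@]}" -lt "${#src[@]}" ]; do
  for c in "${!src[@]}"; do
    case "$ran" in *" $c "*) continue ;; esac        # already scheduled
    case "$ran" in
      *" ${src[$c]} "*) order+=("$c"); ran="${ran}${c} " ;;  # dependency met
    esac
  done
done
echo "${order[@]}"   # scenes faces expressions
```

In practice you only declare each collection's source and feature_extractor; the engine derives this ordering for you.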

Built-in Extractors

Extractor         | Modality            | Output
Multimodal        | Video, image, audio | Vertex AI 1408D embeddings, transcripts, scene descriptions
Text              | Text                | E5-Large 1024D embeddings
Image             | Image               | SigLIP 768D embeddings, descriptions, structured extraction
Face Identity     | Video, image        | ArcFace 512D face embeddings, bounding boxes
Document          | PDF, DOCX           | Text chunks, OCR, embeddings
Gemini Multi-file | Any                 | Gemini-powered cross-file analysis
Web Scraper       | URLs                | Scraped text content + embeddings
Course Content    | Video               | Lecture segments, slides, transcripts
Passthrough       | Any                 | Forward metadata without extraction
See the full Extractor Reference for configuration details.

Custom Plugins

For extraction logic beyond built-in models, build custom plugins:
pip install mixpeek
mixpeek plugin init my-plugin     # Scaffold from template
mixpeek plugin test my-plugin     # Validate locally
mixpeek plugin publish my-plugin  # Upload and deploy
Plugins run on managed infrastructure with access to GPU/CPU resources, HuggingFace models, and LLM services. They support batch processing, real-time endpoints, and custom model loading. See the full plugins guide for manifest format, pipeline hooks, security constraints, and deployment lifecycle. Plugin API →