Mixpeek Recipes
Composable ML patterns for unstructured data retrieval & analysis
12 recipes available
Multimodal Enrichment
Turn raw media into structured intelligence
Semantic Multimodal Retrieval
Find anything across video, image, audio, and documents
Multimodal Deduplication
Detect reuse, reposts, and synthetic variants
Hierarchical Taxonomy Classification
Auto-label content into structured ontologies
Multimodal RAG
LLMs that cite real clips, frames, and documents
Unsupervised Clustering & Theme Discovery
Reveal structure you didn't know existed
Content Moderation & Policy Enforcement
Automated compliance at platform scale
Temporal Event Detection
Know when something happens
Dataset Versioning
Rebuild any dataset state deterministically
Scalable Multimodal Processing
Make everything work at scale
Cross-Modal Join
Link video, images, text, and logs
Dataset Audit & Drift Detection
Trust your data over time
What are Mixpeek Recipes?
Mixpeek Recipes are practical blueprints for multimodal retrieval pipelines. Each one demonstrates how to combine feature extractors, retriever stages, and enrichment resources to solve a real ML problem.
Composable Patterns
Each recipe shows how to combine extractors, stages, and enrichment resources. Copy the pattern and customize for your use case.
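To make the composable pattern concrete, here is a minimal, self-contained Python sketch of the idea: an extractor derives features, a retriever stage filters on them, and an enrichment step attaches a structured label. All names and logic here are illustrative toy stand-ins, not the actual Mixpeek SDK API; in a real recipe each step would call Mixpeek resources instead.

```python
# Hypothetical sketch of the composable pattern: extractor -> retriever
# stage -> enrichment, chained into one pipeline. The functions and the
# toy "embedding" below are illustrative, not real Mixpeek calls.

def extract_features(doc: dict) -> dict:
    # Feature extractor: derive a toy feature vector (word lengths).
    doc["features"] = [len(w) for w in doc["text"].split()]
    return doc

def retrieve_stage(docs: list, min_len: int) -> list:
    # Retriever stage: keep only documents whose features pass a threshold.
    return [d for d in docs if max(d["features"]) >= min_len]

def enrich(doc: dict) -> dict:
    # Enrichment: attach a structured label derived from the features.
    doc["label"] = "long-form" if sum(doc["features"]) > 20 else "short-form"
    return doc

def pipeline(docs: list) -> list:
    # Compose the three steps; swapping any step customizes the recipe.
    staged = [extract_features(d) for d in docs]
    staged = retrieve_stage(staged, min_len=5)
    return [enrich(d) for d in staged]

docs = [{"text": "multimodal retrieval pipelines"}, {"text": "a cat"}]
results = pipeline(docs)
```

Because each step takes and returns plain documents, any step can be swapped out independently, which is what makes the patterns copy-and-customize friendly.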
ML-Native Workflows
Semantic search, anomaly detection, dataset engineering: recipes are organized by the ML pattern you're trying to implement.
Production Ready
Clone templates directly into Mixpeek Studio. Each recipe includes Python snippets, retriever stages, and enrichment configurations.
