As the former lead of MongoDB's Search Team, Ethan noticed that the most common problem customers faced was building indexing and search infrastructure on top of their S3 buckets. Mixpeek was born.
Contextual advertising is changing. To adapt, businesses need to understand how multimodal AI works, why taxonomies matter, and what this means for the future of advertising.
Migrate from IAB Content Taxonomy 2.x to 3.0 in minutes with this free open-source mapper. It runs locally, ships a demo UI and a pip/npm CLI, and offers several matching methods (TF-IDF, BM25, KNN, LLM re-rank) for accurate results.
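To give a flavor of the TF-IDF method named above, here is a minimal sketch (not the mapper's actual code) that scores 2.x category names against 3.0 candidates with scikit-learn; the category strings are illustrative placeholders, not the real taxonomy files:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder labels standing in for the real IAB taxonomy files
v2_labels = ["Automotive/Auto Parts", "Food & Drink/Barbecues & Grilling"]
v3_labels = ["Automotive > Auto Body Styles", "Automotive > Auto Parts",
             "Food & Drink > Cooking > Grilling"]

# Character n-grams are forgiving of punctuation/format differences
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
m = vec.fit_transform(v3_labels)                  # index the 3.0 candidates
scores = cosine_similarity(vec.transform(v2_labels), m)

for label, row in zip(v2_labels, scores):
    print(label, "->", v3_labels[row.argmax()])   # best 3.0 match per 2.x label
```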
Accelerate your migration to IAB 3.0 and map messy internal taxonomies with AI. Learn how semantic tagging boosts monetization, RAG precision, and multimodal understanding—without lifting a finger.
Intentflow is an open-source UX engine to trigger modals, tooltips, and banners using YAML, flags, and LLMs—built for growth teams.
Move beyond text-only search. Learn to build AI agents that reason across documents, videos, images, and audio for comprehensive multimodal research and analysis.
Milo the Meerkat is the official mascot of Mixpeek.
Learn how to build a scalable ASR pipeline using Ray and Whisper, with batching, GPU optimization, and real-world tips from production deployments.
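A minimal sketch of the batched pattern the post describes, assuming the openai-whisper package and Ray Data (exact `map_batches` arguments vary by Ray version; the file paths are placeholders):

```python
import ray
import whisper  # assumption: the openai-whisper package


class WhisperTranscriber:
    """One model per actor: load once, reuse across batches (the key GPU win)."""

    def __init__(self):
        self.model = whisper.load_model("base")  # pick a larger model on GPU

    def __call__(self, batch):
        texts = [self.model.transcribe(path)["text"] for path in batch["path"]]
        return {"path": batch["path"], "text": texts}


ds = ray.data.from_items([{"path": p} for p in ["clip1.wav", "clip2.wav"]])
out = ds.map_batches(WhisperTranscriber, batch_size=8, concurrency=2)
print(out.take_all())
```

Loading the model in `__init__` rather than per call is what makes the batching pay off: each actor amortizes the load across every batch it processes.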
Why the future of AI isn’t about bigger models — it’s about better data.
Late interaction models enable precise retrieval from multimodal data like PDFs and images by comparing query tokens with token or patch embeddings—ideal for RAG, search, and document understanding.
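The core scoring step is simple to sketch; below is a ColBERT-style MaxSim in plain NumPy, with random vectors standing in for real token/patch embeddings:

```python
import numpy as np

def maxsim_score(query_emb, doc_emb):
    """Late interaction: for each query token, take its best-matching
    document token (or image patch), then sum those per-token maxima."""
    sim = query_emb @ doc_emb.T            # (num_query_tokens, num_doc_tokens)
    return sim.max(axis=1).sum()

q = np.random.randn(8, 128)    # 8 query token embeddings
d = np.random.randn(200, 128)  # 200 token/patch embeddings from a PDF page
print(maxsim_score(q, d))
```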
Segmentation turns raw video into searchable chunks—objects, actions, scenes—boosting precision in multimodal search. It bridges unstructured content with structured queries like “man falling in warehouse,” enabling faster, more accurate retrieval across large datasets.
Even with data-driven targeting, most ads still miss. Contextual AI changes that—boosting relevance, clicks, and ROI without cookies.
As Google phases out third-party cookies, advertisers face declining performance from behavioral targeting. Learn how Contextual AI offers a privacy-safe, high-precision alternative.
📢 Quick Take (TL;DR) * Major multimodal model releases: Meta unveiled Llama 4 Scout & Maverick – open Mixture-of-Experts models with native text+image (and even video/audio) support – and Microsoft introduced Phi-4-Multimodal, a compact 3.8B-parameter model integrating vision, text, and speech (Phi-4-Mini Technical Report, arXiv:2503.01743).
By applying the classic group_by pattern to structured video data at index time, you can turn raw frames into searchable, analyzable DataFrames aligned with how your users explore footage.
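A hedged illustration with pandas, using made-up frame records; the point is that the group_by runs at index time, so queries hit pre-aggregated chunks instead of raw frames:

```python
import pandas as pd

# Hypothetical index-time records: one row per detection in a frame
frames = pd.DataFrame([
    {"video_id": "v1", "ts": 0.0, "label": "person",   "action": "walking"},
    {"video_id": "v1", "ts": 1.5, "label": "person",   "action": "falling"},
    {"video_id": "v2", "ts": 0.5, "label": "forklift", "action": "moving"},
])

# group_by pattern: collapse raw frames into per-video, per-label chunks
chunks = (frames.groupby(["video_id", "label"])
                .agg(start=("ts", "min"), end=("ts", "max"),
                     actions=("action", lambda s: sorted(set(s))))
                .reset_index())
print(chunks)
```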
Researchers are introducing new methods that replace embeddings with discrete IDs for faster cross-modal search.
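The papers in question learn semantic IDs end to end; as a rough stand-in, this sketch quantizes embeddings into short discrete codes with per-sub-vector k-means (essentially product quantization), so lookup becomes a hash probe rather than a dense similarity scan:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
vecs = rng.normal(size=(1000, 128)).astype("float32")  # stand-in embeddings

# One codebook per 32-dim sub-vector; each vector becomes 4 discrete IDs.
codebooks = [KMeans(n_clusters=16, n_init="auto").fit(vecs[:, i*32:(i+1)*32])
             for i in range(4)]

def to_code(v):
    """Map one embedding to a tuple of codebook IDs (its discrete code)."""
    return tuple(int(cb.predict(v[None, i*32:(i+1)*32])[0])
                 for i, cb in enumerate(codebooks))

# Build an inverted index keyed by discrete code
index = {}
for j, v in enumerate(vecs):
    index.setdefault(to_code(v), []).append(j)

# Lookup: hash the query's code instead of scanning all embeddings
print(index.get(to_code(vecs[0]), [])[:5])
```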