Building an Exploratory Multimodal Retriever with the National Gallery
About this video
Discover how to build a powerful exploratory image board using multimodal search across 120,000 images from the National Gallery. This walkthrough demonstrates how to combine text search, reverse image search, and document-based queries into a unified retrieval experience using hybrid search with Reciprocal Rank Fusion (RRF).

Live Demo: https://mxp.co/r/npg

What you'll learn:
• Building exploratory search interfaces for visual content
• Combining text, image, and document reference queries
• Implementing hybrid search with RRF for optimal results
• Using Google SigLIP embeddings for image understanding
• Creating multi-stage retriever pipelines with feature search
• Capturing user signals for recommendation systems
• Architecture patterns: Objects → Buckets → Collections → Retrievers

Real-world demo: visual curation across 120k images and 12 GB of data, with text + image + document hybrid queries. Full source code is available in the Mixpeek showcase repository.
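Reciprocal Rank Fusion, mentioned above, merges the separately ranked result lists from the text, image, and document queries by giving each item a score of 1/(k + rank) in every list it appears in, then sorting by the summed score. A minimal sketch follows; the function name, the constant k=60, and the sample result lists are illustrative assumptions, not taken from the Mixpeek codebase.

```python
def rrf_fuse(rankings, k=60):
    """Fuse several ranked lists with Reciprocal Rank Fusion.

    Each list is ordered best-first. An item's fused score is the sum of
    1 / (k + rank) over every list it appears in, so items ranked highly
    by multiple query modes rise to the top. k=60 is the value commonly
    used in the RRF literature (an assumption here, not a Mixpeek detail).
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical top hits from the three query modes (illustrative IDs):
text_hits = ["a", "b", "c"]   # full-text query
image_hits = ["b", "a", "d"]  # reverse image search
doc_hits = ["b", "c", "e"]    # document-reference query

fused = rrf_fuse([text_hits, image_hits, doc_hits])
print(fused)  # "b" wins: it ranks well in all three lists
```

Because RRF only needs ranks, not comparable scores, it sidesteps the problem of normalizing similarity values across embedding spaces and keyword scorers, which is why it suits this kind of text + image + document hybrid.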
