Set-and-forget media intelligence for your entire Iconik library
Connect Iconik to Mixpeek and every asset in your DAM becomes searchable by what's inside it — scenes, faces, objects, spoken words, and on-screen text. Sync runs continuously in the background. New uploads are indexed automatically. Deleted assets are cleaned up. No manual tagging, no maintenance.
What teams see after connecting Iconik to Mixpeek
Find the exact shot without scrubbing
A producer searching for 'red dress rooftop sunset' gets the frame — not a list of filenames someone tagged months ago
Verify talent and brand usage across libraries
Compliance teams search by face or logo across every asset to confirm licensing, usage rights, and brand guidelines before publish
Rediscover assets that were buried and forgotten
Thousands of hours of footage sit unused because nobody remembers they exist. Mixpeek makes the entire backlog searchable by content
Eliminate the manual tagging bottleneck
New uploads are decomposed and indexed automatically — editors stop spending hours tagging every asset before it's findable
Trust that your index matches your DAM
Continuous sync, modification detection, and deletion reconciliation keep search results accurate as your Iconik library evolves
From connected to searching
Connect your Iconik account, scope the sync to the collections that matter, and start finding assets by what's inside them
Media teams store thousands of assets in Iconik — videos, images, audio — but finding the right one means remembering filenames, hoping someone tagged it correctly, or scrubbing through footage manually. Metadata is only as good as whoever entered it at upload time, and it goes stale fast. When a producer needs a specific shot, a compliance team needs to verify talent usage, or a creative director wants to find every frame containing a product — they're stuck with keyword search against hand-entered tags. The asset library grows every day, but discoverability doesn't keep up.
Mixpeek connects directly to Iconik via connection sync. Once configured, it polls your Iconik account on a schedule, downloads proxy files, and runs multimodal extractors — visual embeddings, object detection, face recognition, OCR, and speech transcription — across every asset. The entire pipeline is set-and-forget: new assets are indexed automatically on the next sync cycle, modified assets are re-processed, and deleted assets are cleaned up via reconciliation or real-time webhooks. Your team searches across scenes, faces, spoken words, and on-screen text from a single query — no manual tagging required.
How the components connect, step by step
Iconik Connection Sync
Poll + Filters
Mixpeek connects to your Iconik account using App ID and Auth Token. Sync runs on a configurable schedule, pulling new and modified assets. Provider filters scope sync to specific collections, statuses, or media types.
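A sync configuration along these lines captures the pieces described above: credentials, a poll schedule, and provider filters. The field names and schema here are illustrative assumptions, not Mixpeek's exact API; check the Mixpeek docs for the real shape.

```python
# Hypothetical sync configuration for a Mixpeek-to-Iconik connection.
# Every field name below is illustrative; consult the Mixpeek docs
# for the actual schema.
iconik_sync_config = {
    "provider": "iconik",
    "credentials": {
        "app_id": "YOUR_ICONIK_APP_ID",       # generated in Iconik admin settings
        "auth_token": "YOUR_ICONIK_AUTH_TOKEN",
    },
    "schedule": "*/15 * * * *",               # poll every 15 minutes (cron syntax)
    "filters": {                              # scope the sync to what matters
        "collection_ids": ["marketing-2024"], # placeholder collection ID
        "statuses": ["ACTIVE"],               # skip archived assets
        "media_types": ["video", "image"],
    },
    "skip_duplicates": True,                  # only re-process modified assets
}
```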
Asset Resolution
3 API Calls per Asset
For each asset, Mixpeek fetches core metadata, custom metadata fields, and resolves the best proxy download URL — pre-signed links that work without additional auth.
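The three calls per asset can be sketched as request builders against Iconik's REST API. The endpoint paths follow the general shape of Iconik's public API but should be verified against the Iconik API reference; the base URL and IDs are placeholders.

```python
# Sketch of the three per-asset Iconik API calls: core metadata, custom
# metadata fields, and the proxy listing (which carries pre-signed
# download URLs). Paths are illustrative; verify against Iconik's docs.
BASE_URL = "https://app.iconik.io"

def iconik_asset_requests(asset_id: str, app_id: str, auth_token: str):
    """Build the (url, headers) pairs for the three per-asset lookups."""
    headers = {"App-ID": app_id, "Auth-Token": auth_token}
    return [
        # 1. Core asset metadata: title, media type, timestamps
        (f"{BASE_URL}/API/assets/v1/assets/{asset_id}/", headers),
        # 2. Custom metadata fields attached to the asset
        (f"{BASE_URL}/API/metadata/v1/assets/{asset_id}/", headers),
        # 3. Proxy files, including pre-signed download links that
        #    work without additional auth
        (f"{BASE_URL}/API/files/v1/assets/{asset_id}/proxies/", headers),
    ]
```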
Multimodal Extraction
Extractors
Downloaded proxy files are decomposed into frames and audio segments. Extractors run in parallel: visual embeddings, object detection, face identity, OCR, and speech transcription.
Feature Indexing
Collections
Extracted features are stored in Mixpeek collections with full lineage back to the source Iconik asset ID, timestamps, and metadata fields.
Search Retriever
Visual + Text + Face
A retriever combines vector similarity, face identity matching, metadata filters, and full-text search across transcripts and OCR. Find any scene, face, or spoken word in seconds.
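A retriever combining those stages might be declared like this. The stage names and parameters are assumptions for illustration, not Mixpeek's exact retriever schema, and the person name is a made-up example.

```python
# Illustrative retriever definition combining the search stages above.
# Stage types and parameters are assumptions, not the exact Mixpeek schema.
retriever_query = {
    "query": "red dress rooftop sunset",
    "stages": [
        # Vector similarity against visual embeddings of frames
        {"type": "vector_search", "modality": "visual", "top_k": 100},
        # Face identity matching ("Jane Doe" is a hypothetical example)
        {"type": "face_match", "person": "Jane Doe", "optional": True},
        # Metadata filter carried over from Iconik fields
        {"type": "metadata_filter", "field": "media_type", "value": "video"},
        # Full-text search across transcripts and OCR output
        {"type": "text_search", "fields": ["transcript", "ocr_text"]},
    ],
}
```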
Lifecycle Sync
Webhooks + Reconciliation
Iconik webhooks notify Mixpeek of deletions and updates in real time. Reconciliation on each sync cycle catches anything webhooks missed — keeping your index perfectly in sync with your DAM.
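The two lifecycle paths reduce to simple logic: reconciliation is a set difference between indexed IDs and the IDs still present in Iconik, while a webhook handler deletes a single asset as soon as the event arrives. Mixpeek's real implementation is internal; this is a minimal sketch of the idea, and the webhook event shape is an assumption.

```python
# Minimal sketch of deletion reconciliation: anything indexed in Mixpeek
# that no longer exists in Iconik is stale and should be removed.
def reconcile_deletions(indexed_ids: set[str], source_ids: set[str]) -> set[str]:
    """Return IDs present in the search index but gone from Iconik."""
    return indexed_ids - source_ids

# Webhooks cover the real-time path; the handler only needs the asset ID.
# The event field names here are illustrative, not Iconik's exact payload.
def handle_iconik_webhook(event: dict, delete_fn) -> None:
    if event.get("operation") == "DELETE":
        delete_fn(event["object_id"])
```

Running reconciliation on every sync cycle means a missed webhook is at worst a one-cycle delay, not a permanently stale result.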
Create a connection with your Iconik App ID and Auth Token, point a sync config at your account, and Mixpeek handles the rest. Provider filters let you scope sync to specific collections, statuses, or media types — so you index only what matters. Modification detection (skip_duplicates) ensures re-syncs only process assets that changed since the last cycle, keeping API usage and processing costs low. Reconciliation automatically removes objects whose source assets were deleted from Iconik. For real-time responsiveness, configure Iconik webhooks to notify Mixpeek of deletions and updates instantly — no waiting for the next poll. Three API calls per asset resolve core metadata, custom metadata fields, and the best available proxy download URL. Extracted features are indexed into collections and queried through retrievers with visual search, face identity, and full-text stages across transcripts and OCR output.
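Modification detection of the skip_duplicates kind can work by comparing each asset's last-modified timestamp against the timestamp recorded at the previous successful sync, and only re-processing what changed. This is a sketch of the principle under assumed field names, not Mixpeek's internal code.

```python
# Sketch of skip_duplicates-style modification detection. Field names
# ("id", "date_modified") are illustrative assumptions.
def needs_processing(asset: dict, last_synced: dict[str, str]) -> bool:
    """True if the asset is new or was modified since its last indexed version."""
    prev = last_synced.get(asset["id"])
    if prev is None:
        return True                       # never seen before: index it
    return asset["date_modified"] > prev  # ISO-8601 strings compare chronologically

assets = [
    {"id": "a1", "date_modified": "2024-06-01T10:00:00Z"},  # unchanged
    {"id": "a2", "date_modified": "2024-06-03T09:30:00Z"},  # modified
]
last_synced = {"a1": "2024-06-01T10:00:00Z", "a2": "2024-06-02T00:00:00Z"}
to_process = [a["id"] for a in assets if needs_processing(a, last_synced)]
```

Only the modified asset is queued, which is what keeps re-sync API usage and processing costs proportional to change volume rather than library size.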
Get started with Mixpeek + Iconik in minutes. Read the docs, create a free account, or schedule a walkthrough with our team.