
    Iconik

    Set-and-forget media intelligence for your entire Iconik library

    Connect Iconik to Mixpeek and every asset in your DAM becomes searchable by what's inside it — scenes, faces, objects, spoken words, and on-screen text. Sync runs continuously in the background. New uploads are indexed automatically. Deleted assets are cleaned up. No manual tagging, no maintenance.

    Outcomes

    Measurable impact from day one

    What teams see after connecting Iconik to Mixpeek

    Find the exact shot without scrubbing

    A producer searching for 'red dress rooftop sunset' gets the frame — not a list of filenames someone tagged months ago

    Verify talent and brand usage across libraries

    Compliance teams search by face or logo across every asset to confirm licensing, usage rights, and brand guidelines before publish

    Unlock assets that were buried and forgotten

    Thousands of hours of footage sit unused because nobody remembers they exist. Mixpeek makes the entire backlog searchable by content

    Skip the manual tagging bottleneck

    New uploads are decomposed and indexed automatically — editors stop spending hours tagging every asset before it's findable

    Trust that your index matches your DAM

    Continuous sync, modification detection, and deletion reconciliation keep search results accurate as your Iconik library evolves

    <4 hrs

    From connected to searching

    Connect your Iconik account, scope the sync to the collections that matter, and start finding assets by what's inside them

    Making your DAM searchable

    Before → After
    Manual tagging at upload time → Auto-extracted on every sync cycle
    Tags go stale, inconsistent → Always current, always complete
    Can't search by face or speech → Face, speech, OCR, visual search
    Deleted assets leave ghost entries → Webhooks + reconciliation clean up

    The Problem

    Media teams store thousands of assets in Iconik — videos, images, audio — but finding the right one means remembering filenames, hoping someone tagged it correctly, or scrubbing through footage manually. Metadata is only as good as whoever entered it at upload time, and it goes stale fast. When a producer needs a specific shot, a compliance team needs to verify talent usage, or a creative director wants to find every frame containing a product — they're stuck with keyword search against hand-entered tags. The asset library grows every day, but discoverability doesn't keep up.

    The Solution

    Mixpeek connects directly to Iconik via connection sync. Once configured, it polls your Iconik account on a schedule, downloads proxy files, and runs multimodal extractors — visual embeddings, object detection, face recognition, OCR, and speech transcription — across every asset. The entire pipeline is set-and-forget: new assets are indexed automatically on the next sync cycle, modified assets are re-processed, and deleted assets are cleaned up via reconciliation or real-time webhooks. Your team searches across scenes, faces, spoken words, and on-screen text from a single query — no manual tagging required.

    Pipeline Architecture


    1. Iconik Connection Sync (Poll + Filters)

    Mixpeek connects to your Iconik account using App ID and Auth Token. Sync runs on a configurable schedule, pulling new and modified assets. Provider filters scope sync to specific collections, statuses, or media types.
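
    As a concrete sketch of that setup, the snippet below registers a hypothetical connection. The endpoint path and payload field names are illustrative assumptions rather than Mixpeek's literal API schema; the credentials, schedule, filters, and skip_duplicates option are the ones described on this page:

```python
import requests

MIXPEEK_API = "https://api.mixpeek.com"  # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_MIXPEEK_API_KEY"}

# Hypothetical connection + sync config. Field names are illustrative;
# the credential and filter options mirror the ones described above.
connection = {
    "provider": "iconik",
    "credentials": {
        "app_id": "YOUR_ICONIK_APP_ID",
        "auth_token": "YOUR_ICONIK_AUTH_TOKEN",
    },
    "sync": {
        "schedule": "0 * * * *",  # poll hourly (cron syntax)
        "filters": {
            "collections": ["marketing-footage"],  # scope to what matters
            "statuses": ["ACTIVE"],
            "media_types": ["video", "image"],
        },
        "skip_duplicates": True,  # only re-process modified assets
    },
}

resp = requests.post(f"{MIXPEEK_API}/connections", json=connection, headers=HEADERS)
resp.raise_for_status()
print("connection created:", resp.json())
```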

    2. Asset Resolution (3 API Calls per Asset)

    For each asset, Mixpeek fetches core metadata and custom metadata fields, then resolves the best proxy download URL: a pre-signed link that works without additional auth.
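
    A minimal sketch of those three calls, assuming Iconik's standard App-ID and Auth-Token headers; the endpoint paths follow Iconik's public REST API, but response field names such as `objects` and `resolution` should be verified against your account:

```python
import requests

ICONIK_API = "https://app.iconik.io"
HEADERS = {
    "App-ID": "YOUR_ICONIK_APP_ID",
    "Auth-Token": "YOUR_ICONIK_AUTH_TOKEN",
}

def resolve_asset(asset_id: str, view_id: str) -> dict:
    """Fetch core metadata, custom metadata fields, and the best
    proxy download URL for one Iconik asset (three calls total)."""
    # 1. Core asset metadata: title, media type, duration, timestamps
    core = requests.get(
        f"{ICONIK_API}/API/assets/v1/assets/{asset_id}/", headers=HEADERS
    ).json()

    # 2. Custom metadata fields, scoped to a metadata view
    custom = requests.get(
        f"{ICONIK_API}/API/metadata/v1/assets/{asset_id}/views/{view_id}/",
        headers=HEADERS,
    ).json()

    # 3. Proxy listing; each entry carries a pre-signed `url` that
    #    downloads without additional auth
    proxies = requests.get(
        f"{ICONIK_API}/API/files/v1/assets/{asset_id}/proxies/", headers=HEADERS
    ).json().get("objects", [])
    best = max(
        proxies,
        key=lambda p: (p.get("resolution") or {}).get("width") or 0,
        default=None,
    )

    return {
        "core": core,
        "custom_fields": custom,
        "proxy_url": best["url"] if best else None,
    }
```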

    3. Multimodal Extraction (Extractors)

    Downloaded proxy files are decomposed into frames and audio segments. Extractors run in parallel: visual embeddings, object detection, face identity, OCR, and speech transcription.
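
    Mixpeek's decomposition runs internally, but the idea can be sketched with ffmpeg: sample frames for the visual extractors and pull a mono audio track for transcription. The sampling rate and output layout below are illustrative:

```python
import subprocess
from pathlib import Path

def decompose(proxy_path: str, out_dir: str, fps: float = 1.0) -> None:
    """Split a downloaded proxy into sampled frames and an audio track,
    ready for parallel extractors (embeddings, detection, OCR, ASR)."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    # Sample one frame per second for the visual extractors
    subprocess.run(
        ["ffmpeg", "-i", proxy_path, "-vf", f"fps={fps}",
         str(out / "frame_%06d.jpg")],
        check=True,
    )
    # Extract mono 16 kHz audio for speech transcription
    subprocess.run(
        ["ffmpeg", "-i", proxy_path, "-vn", "-ac", "1", "-ar", "16000",
         str(out / "audio.wav")],
        check=True,
    )
```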

    4. Feature Indexing (Collections)

    Extracted features are stored in Mixpeek collections with full lineage back to the source Iconik asset ID, timestamps, and metadata fields.
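
    To make the lineage concrete, here is a hypothetical shape for one indexed feature document; the field names are illustrative, not Mixpeek's literal schema, but each feature points back to its source Iconik asset:

```python
# Hypothetical feature document; field names are illustrative.
feature_doc = {
    "collection": "iconik-library",
    "feature_type": "speech_transcript",  # or face, object, ocr, embedding
    "text": "...the rooftop shot at sunset...",
    "embedding": [0.012, -0.087, 0.144],  # truncated vector, illustrative
    "start_time": 42.0,                   # seconds into the source asset
    "end_time": 47.5,
    "source": {
        "provider": "iconik",
        "asset_id": "YOUR_ICONIK_ASSET_UUID",
        "custom_metadata": {"project": "summer-campaign"},
        "synced_at": "2025-01-15T09:30:00Z",
    },
}
```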

    5. Search Retriever (Visual + Text + Face)

    A retriever combines vector similarity, face identity matching, metadata filters, and full-text search across transcripts and OCR. Find any scene, face, or spoken word in seconds.
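
    A sketch of what such a multi-stage query could look like; the endpoint and stage schema are illustrative assumptions, with the stage types taken from the description above:

```python
import requests

MIXPEEK_API = "https://api.mixpeek.com"  # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_MIXPEEK_API_KEY"}

# Hypothetical multi-stage query: vector similarity, a face-identity
# stage, full-text over transcripts and OCR, plus metadata filters.
query = {
    "collection": "iconik-library",
    "stages": [
        {"type": "vector", "input": "red dress rooftop sunset"},
        {"type": "face", "person_id": "talent-jane-doe"},  # hypothetical ID
        {"type": "text", "fields": ["transcript", "ocr"], "input": "sunset"},
    ],
    "filters": {"media_type": "video"},
    "limit": 10,
}

results = requests.post(
    f"{MIXPEEK_API}/retrievers/search", json=query, headers=HEADERS
).json()
for hit in results.get("results", []):
    print(hit["source"]["asset_id"], hit.get("start_time"))
```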

    6. Lifecycle Sync (Webhooks + Reconciliation)

    Iconik webhooks notify Mixpeek of deletions and updates in real time. Reconciliation on each sync cycle catches anything webhooks missed — keeping your index perfectly in sync with your DAM.
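
    A minimal receiver for those events might look like the following; the route, payload fields, and helper names are hypothetical, shown here with FastAPI:

```python
from fastapi import FastAPI, Request

app = FastAPI()

def remove_indexed_features(asset_id: str) -> None:
    """Stub: delete every Mixpeek object whose lineage points at asset_id."""

def enqueue_resync(asset_id: str) -> None:
    """Stub: queue the asset for re-processing on the next sync cycle."""

@app.post("/webhooks/iconik")
async def iconik_webhook(request: Request) -> dict:
    # Payload field names here are assumptions; check the event shape
    # your Iconik webhook subscription actually delivers.
    event = await request.json()
    asset_id = event.get("object_id")
    operation = event.get("event_type", "")

    if asset_id and "delete" in operation:
        remove_indexed_features(asset_id)  # no ghost entries between polls
    elif asset_id and "update" in operation:
        enqueue_resync(asset_id)           # pick up metadata or proxy changes
    return {"ok": True}
```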

    Iconik Integration Deep Dive

    Create a connection with your Iconik App ID and Auth Token, point a sync config at your account, and Mixpeek handles the rest. Provider filters let you scope sync to specific collections, statuses, or media types, so you index only what matters. Modification detection (skip_duplicates) ensures re-syncs only process assets that changed since the last cycle, keeping API usage and processing costs low. Reconciliation automatically removes objects whose source assets were deleted from Iconik, and for real-time responsiveness you can configure Iconik webhooks to notify Mixpeek of deletions and updates instantly, with no waiting for the next poll.

    Three API calls per asset resolve core metadata, custom metadata fields, and the best available proxy download URL. Extracted features are indexed into collections and served through retrievers with visual search, face identity, and full-text stages across transcripts and OCR output. The lifecycle logic reduces to simple timestamp and set operations, sketched below.
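
    A hedged sketch of that logic: the `date_modified` field name is an assumption about Iconik's asset schema, and the set reconciliation mirrors the behavior described above.

```python
def should_process(asset: dict, last_synced: dict[str, str]) -> bool:
    """skip_duplicates-style modification detection: re-process an asset
    only if its modification timestamp moved past the last sync.
    (`date_modified` is assumed to be the Iconik field name.)"""
    prev = last_synced.get(asset["id"])
    return prev is None or asset.get("date_modified", "") > prev

def reconcile(iconik_ids: set[str], indexed_ids: set[str]) -> tuple[set[str], set[str]]:
    """Per-cycle reconciliation: stale objects (indexed in Mixpeek but
    gone from Iconik) get removed; missing ones get queued."""
    stale = indexed_ids - iconik_ids    # deletions webhooks may have missed
    missing = iconik_ids - indexed_ids  # new assets not yet extracted
    return stale, missing
```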

    dam
    media-asset-management
    video
    images
    audio
    search
    iconik
    sync

    Ready to integrate?

    Get started with Mixpeek + Iconik in minutes. Read the docs, create a free account, or schedule a walkthrough with our team.