Philip’s expertise in RAG, data engineering, and AI agents makes him a natural fit to write about scaling multimodal data warehousing systems.
Week of Nov 24-30, 2025: Alibaba's 6B Z-Image impresses, Tencent's 1B HunyuanOCR beats larger models and APIs, VisionRAG uses 6-9x less memory than ColPali, and RynnVLA-002 boosts real-world robot success by 50%.
Week of Nov 17-23, 2025: Nano Banana Pro creates coherent visualizations, SAM 3 segments by concept not pixels, HunyuanVideo 1.5 leads open-source video, and Step-Audio-R1 matches Gemini 3 Pro on audio reasoning.
Week of Nov 10-16, 2025: Pelican-VL gives humanoid robots spatial intelligence, DeepMind teaches AI to see like humans, Marble creates 3D worlds from single images, and Meta opens speech recognition to 1,600+ languages.
Multimodal Monday #32: AMER shows 4-21% gains on complex queries by generating multiple embeddings, Adobe MotionStream hits 29 fps with interactive motion controls, Step-Audio-EditX edits voice emotion and style through text prompts, and GEN-0 trains robots for general skills.
Google's Latent Sketchpad lets models sketch thoughts before acting, Amazon Nova MME unifies search, Emu3.5 matches Google's Nano Banana locally, and BEAR reveals why AI fails physical tasks.
Multimodal Monday #30: WALT and UltraCUA make websites API-smart, Seed3D 1.0 builds 3D assets from one image, DeepSeek-OCR compresses docs 10x with 97% accuracy via optical mapping, and AGILE lifts VLM accuracy from 9.5% to 82.8% with interactive puzzles.
Multimodal Monday #29: Claude Haiku 4.5 runs twice as fast at one-third cost, Trace Anything maps videos to 3D trajectories for motion search, and VIST3A stitches text-to-3D without retraining.
Multimodal Monday #28: Fast-dLLM v2 diffuses text 2.5x faster, Omni-Embed-Nemotron hunts across modalities, and Think-Then-Embed reasons to top MMEB-V2.
Multimodal Monday #27: ModernVBERT's 250M-param model beats models 10x its size, DocPruner slashes storage 60%, and Claude Sonnet 4.5 codes for 30+ hours. Scale reimagined!
Multimodal Monday #26: MetaEmbed scales retrieval on-the-fly, EmbeddingGemma beats giants with 308M params, and Veo3 develops reasoning.
AI reads intentions in video, Moondream delivers frontier performance at 2B params, and Alibaba's open-source model matches OpenAI. Understanding "why" changes everything!
RecA boosts quality 17% with just 27 GPU-hours, RenderFormer replaces graphics pipelines with transformers, and Lucy-14B delivers instant video. Alignment beats retraining!
Multimodal Monday #23: REFRAG speeds RAG by 30x, WebWatcher crushes GPT-4o by 27%, and embeddings hit theoretical limits. Efficiency wins big!
Multimodal Monday #22: MLLMs fail basic rotations, Intern-S1 beats GPT on science, and MultiTrust-X exposes vulnerabilities. Trust rebuilds AI!
Multimodal Monday #21: Text crushes visuals in recommendations, GPT-5 beats doctors by 24-29%, and Spotify's AI evaluates podcasts. AI surpasses human limits!