Model ID: HuggingFaceTB/SmolVLM2-256M-Video-Instruct
Tags: transformers, onnx, safetensors, smolvlm, image-text-to-text, conversational, en, endpoints_compatible, region:us
Datasets: HuggingFaceM4/the_cauldron, HuggingFaceM4/Docmatix, lmms-lab/LLaVA-OneVision-Data, lmms-lab/M4-Instruct-Data, HuggingFaceFV/finevideo, MAmmoTH-VL/MAmmoTH-VL-Instruct-12M, lmms-lab/LLaVA-Video-178K, orrzohar/Video-STaR, Mutonix/Vript, TIGER-Lab/VISTA-400K, Enxin/MovieChat-1K_train, ShareGPT4Video/ShareGPT4Video
arXiv: 2504.05299
Base model: HuggingFaceTB/SmolVLM-256M-Instruct (quantized from)
License: apache-2.0
Use SmolVLM2-256M-Video-Instruct on Mixpeek
Build multimodal processing pipelines with this model and others. Extract features, run inference, and set up retrieval, all through the Mixpeek pipeline builder.
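As a minimal sketch of how this image-text-to-text model is typically prompted, the snippet below builds a chat-template message list in the Hugging Face conversational format (one image paired with a question). The `build_messages` helper and the example URL are illustrative, not part of Mixpeek or the model card; the commented-out lines show, under the assumption that `transformers` is installed, roughly how the messages would be passed to the model for inference.

```python
def build_messages(image_url: str, question: str) -> list:
    """Return a chat-template message list pairing one image with a text question.

    This follows the conversational format used by image-text-to-text models:
    a single user turn whose content mixes an image entry and a text entry.
    """
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": question},
            ],
        }
    ]


# Example: one video frame plus a question about it (URL is a placeholder).
messages = build_messages(
    "https://example.com/frame.jpg",
    "What is happening in this image?",
)

# Rough inference sketch (not executed here; requires `transformers` and a download):
# from transformers import AutoProcessor, AutoModelForImageTextToText
# model_id = "HuggingFaceTB/SmolVLM2-256M-Video-Instruct"
# processor = AutoProcessor.from_pretrained(model_id)
# model = AutoModelForImageTextToText.from_pretrained(model_id)
# inputs = processor.apply_chat_template(
#     messages, add_generation_prompt=True,
#     tokenize=True, return_dict=True, return_tensors="pt",
# )
# out = model.generate(**inputs, max_new_tokens=64)
# print(processor.batch_decode(out, skip_special_tokens=True)[0])
```

The message structure, rather than the model call, is the portable part: the same list can feed any conversational vision-language model that accepts the chat-template convention.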
Open Pipeline Builder

Specification
Organization: HuggingFaceTB
Task: Image-Text-to-Text
Library: transformers
License: apache-2.0
Downloads/month: 122K
Likes: 102
View on Hugging Face for the model card, files, and community discussion.