Model ID: lmms-lab/LLaVA-OneVision-1.5-4B-Base
Tags: transformers, safetensors, feature-extraction, image-text-to-text, conversational, custom_code, en, dataset:lmms-lab/LLaVA-One-Vision-1.5-Mid-Training-85M, arxiv:2509.23661, base_model:DeepGlint-AI/rice-vit-large-patch14-560, base_model:finetune:DeepGlint-AI/rice-vit-large-patch14-560, license:apache-2.0, region:us
Use LLaVA-OneVision-1.5-4B-Base on Mixpeek
Build multimodal processing pipelines with this model and others. Extract features, run inference, and set up retrieval, all through the Mixpeek pipeline builder.
Open Pipeline Builder

Specification
Organization: lmms-lab
Task: Image Text To Text
Library: transformers
License: apache-2.0
Downloads/mo: 39K
Likes: 1
View on HuggingFace for the model card, files, and community discussion.