Identifier
Model ID: amitha/mllava-llama2-en-zh
Tags: transformers, safetensors, llava_llama, visual-question-answering, llava, vlm, custom_code, en, zh, dataset:LinkSoul/Chinese-LLaVA-Vision-Instructions, arxiv:2406.11665, license:apache-2.0, region:us
Use mllava-llama2-en-zh on Mixpeek
Build multimodal processing pipelines with this model and others: extract features, run inference, and set up retrieval, all through the Mixpeek pipeline builder.
Specification
Organization: amitha
Task: Visual Question Answering
Library: transformers
License: apache-2.0
Downloads/month: 17
View on Hugging Face for the model card, files, and community discussion.