INFRL-Qwen2.5-VL-72B-Preview-q8-with-bf16-output-and-bf16-embedding.gguf
by GeorgyGUF
20 downloads/month
Identifier
Model ID
GeorgyGUF/INFRL-Qwen2.5-VL-72B-Preview-q8-with-bf16-output-and-bf16-embedding.gguf

Tags
transformers, gguf, multimodal, visual-question-answering, en, base_model:Qwen/Qwen2.5-VL-72B-Instruct, base_model:quantized:Qwen/Qwen2.5-VL-72B-Instruct, license:apache-2.0, endpoints_compatible, region:us, conversational
Use INFRL-Qwen2.5-VL-72B-Preview-q8-with-bf16-output-and-bf16-embedding.gguf on Mixpeek
Build multimodal processing pipelines with this model and others. Extract features, run inference, and set up retrieval, all through the Mixpeek pipeline builder.
Specification
Organization: GeorgyGUF
Task: Visual Question Answering
Library: transformers
License: apache-2.0
Downloads/mo: 20
View on HuggingFace for the model card, files, and community discussion.
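Outside of Mixpeek, a GGUF file like this one can also be run locally. A minimal sketch, assuming llama.cpp is installed and that a matching vision-projector (mmproj) file is available; the local file paths, the mmproj filename, and the sample image are assumptions, not taken from this page:

```shell
# Download the quantized weights from the Hugging Face repo named on this page.
# Note: a q8 quantization of a 72B model is still roughly 70+ GB on disk.
huggingface-cli download \
  GeorgyGUF/INFRL-Qwen2.5-VL-72B-Preview-q8-with-bf16-output-and-bf16-embedding.gguf

# Run visual question answering with llama.cpp's multimodal CLI.
# mmproj.gguf and photo.jpg are placeholder paths for illustration only.
llama-mtmd-cli \
  -m INFRL-Qwen2.5-VL-72B-Preview-q8-with-bf16-output-and-bf16-embedding.gguf \
  --mmproj mmproj.gguf \
  --image photo.jpg \
  -p "Describe this image."
```

The q8 quantization with bf16 output and embedding tensors (as the filename indicates) trades a small amount of disk space for better fidelity in the layers most sensitive to quantization error.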