
    Human_LLaVA

    by OpenFace-CQUPT

    Identifier
    Model ID
    OpenFace-CQUPT/Human_LLaVA

    Tags

    transformers, safetensors, llava, image-text-to-text, AIGC, LLaVA, visual-question-answering, dataset:OpenFace-CQUPT/FaceCaption-15M, base_model:meta-llama/Meta-Llama-3-8B-Instruct, base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct, doi:10.57967/hf/3092, license:llama3, endpoints_compatible, region:us

    Use Human_LLaVA on Mixpeek

    Build multimodal processing pipelines with this model and others. Extract features, run inference, and set up retrieval, all through the Mixpeek pipeline builder.

    Open Pipeline Builder
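Before wiring the model into a pipeline, you can try it locally. Below is a minimal sketch of a VQA query against Human_LLaVA using the Hugging Face `transformers` library (the model is tagged `transformers` / `llava` / `image-text-to-text`). The chat message format is an assumption based on the standard `image-text-to-text` pipeline API; the `answer` helper and the example question are illustrative only, so check the model card for the exact prompt template before relying on this.

```python
# Hedged sketch: one visual-question-answering call to Human_LLaVA via the
# transformers `image-text-to-text` pipeline. Message layout follows the
# library's standard chat format (an assumption, not taken from the model card).

MODEL_ID = "OpenFace-CQUPT/Human_LLaVA"

def build_messages(question: str, image_url: str) -> list:
    """Build a chat-style message list pairing one image with one question."""
    return [{
        "role": "user",
        "content": [
            {"type": "image", "url": image_url},
            {"type": "text", "text": question},
        ],
    }]

def answer(question: str, image_url: str) -> str:
    """Run a single VQA query. Downloads the model weights on first call
    (the base model is 8B parameters, so expect a large download)."""
    from transformers import pipeline  # deferred: heavy import + weight download
    vqa = pipeline("image-text-to-text", model=MODEL_ID)
    result = vqa(text=build_messages(question, image_url), max_new_tokens=128)
    return result[0]["generated_text"]
```

For example, `answer("What expression does this person have?", url)` would return the model's free-text description of the face in the image at `url`.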