Identifier
Model ID: HPAI-BSC/Llama3-Aloe-8B-Alpha
Tags: transformers, pytorch, safetensors, llama, text-generation, biology, medical, question-answering, en, dataset:argilla/dpo-mix-7k, dataset:nvidia/HelpSteer, dataset:jondurbin/airoboros-3.2, dataset:hkust-nlp/deita-10k-v0, dataset:LDJnr/Capybara, dataset:HPAI-BSC/CareQA, dataset:GBaker/MedQA-USMLE-4-options, dataset:lukaemon/mmlu, dataset:bigbio/pubmed_qa, dataset:openlifescienceai/medmcqa, dataset:bigbio/med_qa, dataset:HPAI-BSC/better-safe-than-sorry, dataset:HPAI-BSC/pubmedqa-cot, dataset:HPAI-BSC/medmcqa-cot, dataset:HPAI-BSC/medqa-cot, arxiv:2405.01886, license:cc-by-nc-4.0, text-generation-inference, endpoints_compatible, region:us
Use Llama3-Aloe-8B-Alpha on Mixpeek
Build multimodal processing pipelines with this model and others. Extract features, run inference, and set up retrieval, all through the Mixpeek pipeline builder.
Open Pipeline Builder

Specification
Organization: HPAI-BSC
Task: Question Answering
Library: transformers
License: cc-by-nc-4.0
Downloads/mo: 567
Likes: 65
View on HuggingFace
See model card, files, and community discussion
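Since the card lists `transformers` as the library and text-generation among the tags, inference can be sketched with the standard transformers pipeline API. This is a minimal, hedged example, not the authors' own usage code: the system prompt, generation settings, and the `build_messages` helper are assumptions; only the model ID comes from the card above. Running it requires the transformers package and enough GPU memory for an 8B model.

```python
# Minimal sketch of running inference with HPAI-BSC/Llama3-Aloe-8B-Alpha
# via the transformers text-generation pipeline. Helper names and prompt
# content are illustrative assumptions, not from the model card.

def build_messages(question: str) -> list[dict]:
    """Wrap a user question in the chat message format used by Llama 3 models."""
    return [
        {"role": "system", "content": "You are a helpful medical assistant."},
        {"role": "user", "content": question},
    ]

def main() -> None:
    # Deferred import: transformers is a heavy optional dependency, and
    # loading the 8B checkpoint downloads several GB of weights.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="HPAI-BSC/Llama3-Aloe-8B-Alpha",
        device_map="auto",
    )
    out = generator(build_messages("What is hypertension?"), max_new_tokens=128)
    print(out[0]["generated_text"])

if __name__ == "__main__":
    main()
```

Note the license tag above (cc-by-nc-4.0): the weights are for non-commercial use.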
Related Question Answering Models
ahotrod/electra_large_discriminator_squad2_512 (865K downloads/mo)
deepset/roberta-base-squad2 (604K downloads/mo)
google-bert/bert-large-uncased-whole-word-masking-finetuned-squad (377K downloads/mo)
deepset/roberta-large-squad2 (284K downloads/mo)
distilbert/distilbert-base-cased-distilled-squad (233K downloads/mo)
deepset/bert-large-uncased-whole-word-masking-squad2 (186K downloads/mo)
timpal0l/mdeberta-v3-base-squad2 (163K downloads/mo)
monologg/koelectra-small-v2-distilled-korquad-384 (155K downloads/mo)