A LLaVA model fine-tuned from Llama 3 Instruct, with improved scores on several benchmarks.
vision · 8b · 233.3K Pulls · Updated 7 months ago
44c161b1f465 · 5.5GB

model: arch llama · parameters 8.03B · quantization Q4_K_M · 4.9GB

projector: arch clip · parameters 312M · quantization F16 · 624MB

params (124B, truncated in page view):
{
  "num_ctx": 4096,
  "num_keep": 4,
  "stop": [
    "<|start_header_id|>",
    "<|en

template (254B, truncated in page view):
{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .P
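The template above follows the Llama 3 Instruct chat format (the page truncates it, so the user/assistant turns are not shown). As a rough sketch of what the rendered prompt looks like, assuming the standard Llama 3 layout for the missing portion:

```python
def render_prompt(system: str, prompt: str) -> str:
    """Illustrative rendering of the Llama 3 Instruct chat template.

    The real template is a Go template evaluated by Ollama; this Python
    sketch only approximates its output for a single system + user turn.
    """
    out = ""
    if system:  # mirrors the {{ if .System }} guard in the template
        out += f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
    # Assumed (truncated in the page): user turn plus assistant header,
    # per the standard Llama 3 Instruct format.
    out += f"<|start_header_id|>user<|end_header_id|>\n\n{prompt}<|eot_id|>"
    out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out

print(render_prompt("You are a helpful assistant.", "Describe the image."))
```

The `stop` parameters above cut generation at these header tokens, so the model's reply ends cleanly at the close of its assistant turn.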
Readme
llava-llama3 is a LLaVA model fine-tuned from Llama 3 Instruct and CLIP-ViT-Large-patch14-336 with ShareGPT4V-PT and InternVL-SFT by XTuner.