A strong, economical, and efficient Mixture-of-Experts language model.
Tags: 16b · 236b
72.7K Pulls · Updated 6 months ago
8ba51e080b59 · 148GB
model: arch deepseek2 · parameters 236B · quantization Q4_1 · 148GB
params (32B):
{
  "stop": [
    "User:",
    "Assistant:"
  ]
}
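The same stop sequences can be set when deriving a custom model; a minimal Modelfile sketch (the FROM tag below assumes the 16b variant):

```
FROM deepseek-v2:16b
PARAMETER stop "User:"
PARAMETER stop "Assistant:"
```

Built with `ollama create my-deepseek -f Modelfile`, the derived model stops generating whenever it emits one of these turn markers.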
template (111B):
{{ if .System }}{{ .System }}
{{ end }}{{ if .Prompt }}User: {{ .Prompt }}
{{ end }}Assistant: {{ .Response }}
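The Go template above renders the raw prompt that is sent to the model: an optional system line, then the user turn prefixed with `User:`, then the `Assistant:` cue. A small Python sketch of the same logic (an illustrative re-implementation, not code shipped with Ollama):

```python
def render_prompt(prompt: str, system: str = "") -> str:
    """Render a chat turn the way the template above does.

    Mirrors the Go template's `{{ if }}` guards: each optional
    section is emitted only when its field is non-empty.
    """
    out = ""
    if system:                      # {{ if .System }}
        out += system + "\n"
    if prompt:                      # {{ if .Prompt }}
        out += f"User: {prompt}\n"
    out += "Assistant:"             # model completion follows this cue
    return out

print(render_prompt("Why is the sky blue?", system="You are concise."))
```

Running it shows the exact text the stop parameters guard against: generation halts as soon as the model tries to start a new `User:` or `Assistant:` turn.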
license (14kB):
DEEPSEEK LICENSE AGREEMENT
Version 1.0, 23 October 2023
Copyright (c) 2023 DeepSeek
Readme
Note: this model requires Ollama 0.1.40.
DeepSeek-V2 is a strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference.
Note: this model is bilingual in English and Chinese.
The model comes in two sizes:

- 16B Lite: `ollama run deepseek-v2:16b`
- 236B: `ollama run deepseek-v2:236b`