Mistral
Mistral Nemo 12B
by ollama
Mistral Nemo is a high-performance 12B model developed in collaboration between Mistral AI and NVIDIA.
Providers Supporting This Model

All providers expose the model under the mistral-nemo model ID; fields marked -- are not listed.

Provider | Model ID     | Maximum Context Length | Maximum Output Length | Input Price | Output Price
Ollama   | mistral-nemo | 128K                   | --                    | --          | --
GitHub   | mistral-nemo | 128K                   | 4K                    | --          | --
Novita   | mistral-nemo | --                     | --                    | --          | --
Mistral  | mistral-nemo | --                     | --                    | --          | --
Higress  | mistral-nemo | 128K                   | 4K                    | --          | --
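As a minimal sketch of using the model through one of the providers above: the snippet below assumes the official ollama Python client is installed (pip install ollama) and that a local Ollama server has already pulled the model with `ollama pull mistral-nemo`.

```python
# Minimal sketch: chat with mistral-nemo through a locally running Ollama server.
# Assumes `pip install ollama` and that `ollama pull mistral-nemo` has been run.
import ollama

response = ollama.chat(
    model="mistral-nemo",
    messages=[
        {"role": "user", "content": "Summarize what makes a 128K context window useful."},
    ],
)

# The reply text is available under message.content in the response.
print(response["message"]["content"])
```

Other providers in the table (GitHub, Novita, Mistral, Higress) serve the same model ID through their own endpoints, so the same prompt can be sent via their respective clients.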
Related Recommendations

- Llama 3.1 8B (Meta, via ollama), 128K context: Llama 3.1 is a leading model launched by Meta, supporting up to 405B parameters, applicable in complex dialogues, multilingual translation, and data analysis.
- Llama 3.1 70B (Meta, via ollama), 128K context
- Llama 3.1 405B (Meta, via ollama), 128K context
- Code Llama 7B (Meta, via ollama), 16K context: Code Llama is an LLM focused on code generation and discussion, combining extensive programming language support, suitable for developer environments.
- Code Llama 13B (Meta, via ollama), 16K context
- Code Llama 34B (Meta, via ollama), 16K context
- Code Llama 70B (Meta, via ollama), 16K context
- QwQ 32B (Qwen, via ollama), 128K context: QwQ is an experimental research model focused on improving AI reasoning capabilities.