Unitalk

StripedHyena Nous (7B)

By Together AI
StripedHyena Nous (7B) is a chat model built on the StripedHyena architecture, a hybrid of attention and gated convolutions designed for efficient long-context processing and faster, lower-memory inference than comparable Transformer models.

Providers Supporting This Model

Together AI
Model ID: togethercomputer/StripedHyena-Nous-7B
Maximum Context Length: 32K
Maximum Output Length: --
Input Price: --
Output Price: --
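For reference, the sketch below shows one way to query this model through Together AI's chat completions API. It is a minimal example, assuming the official together Python SDK is installed and a TOGETHER_API_KEY environment variable is set; the prompt, max_tokens, and temperature values are illustrative only.

# Minimal sketch: calling StripedHyena Nous (7B) via Together AI's
# chat completions endpoint. Assumes the `together` Python SDK and a
# TOGETHER_API_KEY environment variable; parameter values are illustrative.
from together import Together

client = Together()  # picks up TOGETHER_API_KEY from the environment

response = client.chat.completions.create(
    model="togethercomputer/StripedHyena-Nous-7B",
    messages=[
        {
            "role": "user",
            "content": "Summarize the StripedHyena architecture in two sentences.",
        }
    ],
    max_tokens=256,   # the listing above gives no output limit, so keep requests modest
    temperature=0.7,
)

print(response.choices[0].message.content)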

Related Recommendations

togetherai
Meta

Llama 3.2 3B Instruct Turbo

Llama 3.2 3B Instruct is a lightweight, multilingual text-only model optimized for instruction following, dialogue, and summarization, making it well suited for edge and on-device deployments.
128K
togetherai
Meta

Llama 3.2 11B Vision Instruct Turbo (Free)

Llama 3.2 is designed for tasks involving both visual and textual data. It excels in tasks like image description and visual question answering, bridging the gap between language generation and visual reasoning.
128K
togetherai
Meta

Llama 3.2 11B Vision Instruct Turbo

Llama 3.2 is designed for tasks involving both visual and textual data. It excels in tasks like image description and visual question answering, bridging the gap between language generation and visual reasoning.
128K
togetherai
Meta

Llama 3.2 90B Vision Instruct Turbo

Llama 3.2 is designed for tasks involving both visual and textual data. It excels in tasks like image description and visual question answering, bridging the gap between language generation and visual reasoning.
128K
togetherai
Meta

Llama 3.1 8B Instruct Turbo

The Llama 3.1 8B Instruct Turbo model uses FP8 quantization and supports up to 131,072 context tokens, making it a standout among open-source models: it excels at complex tasks and performs strongly on industry benchmarks.
128K
togetherai
Meta

Llama 3.1 70B Instruct Turbo

The Llama 3.1 70B Instruct Turbo model is fine-tuned for high-load applications and quantized to FP8 for greater computational efficiency while preserving accuracy, delivering strong performance in complex scenarios.
128K
togetherai
Meta

Llama 3.1 405B Instruct Turbo

The Llama 3.1 405B Instruct Turbo model offers a very large context window for data-heavy workloads and excels in large-scale AI applications.
130K
togetherai
Meta

Llama 3.1 Nemotron 70B

Llama 3.1 Nemotron 70B is a large language model customized by NVIDIA to improve the helpfulness of LLM-generated responses to user queries. As of October 1, 2024, it ranked first on all three automatic alignment benchmarks it was evaluated on: Arena Hard, AlpacaEval 2 LC, and MT-Bench (GPT-4-Turbo judged). The model was trained with RLHF (specifically REINFORCE), using the Llama-3.1-Nemotron-70B-Reward model and HelpSteer2-Preference prompts, starting from the Llama-3.1-70B-Instruct base model.
32K