GitHub
@GitHub
31 models
With GitHub Models, developers can become AI engineers and build using industry-leading AI models.

Supported Models

Maximum Context Length    Maximum Output Length    Input Price    Output Price
200K                      97K                      $15.00         $60.00
--                        64K                      $3.00          $12.00
--                        32K                      $15.00         $60.00
--                        16K                      $0.15          $0.60
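Assuming the prices above are quoted in USD per million tokens (the convention most providers use, though it is not stated explicitly here), the cost of a single request can be estimated as:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Estimate the cost of one request, with prices in USD per million tokens."""
    return (input_tokens / 1_000_000) * input_price \
         + (output_tokens / 1_000_000) * output_price

# Example: 10,000 input tokens and 2,000 output tokens at $15.00 / $60.00
cost = request_cost(10_000, 2_000, 15.00, 60.00)
print(f"${cost:.2f}")  # $0.27
```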

Using GitHub Models in LobeChat


GitHub Models is a new feature recently launched by GitHub, designed to provide developers with a free platform to access and experiment with various AI models. GitHub Models offers an interactive sandbox environment where users can test different model parameters and prompts, and observe the responses of the models. The platform supports advanced language models, including OpenAI's GPT-4o, Meta's Llama 3.1, and Mistral's Large 2, covering a wide range of applications from large-scale language models to task-specific models.

This article will guide you on how to use GitHub Models in LobeChat.

Rate Limits for GitHub Models

Currently, the usage of the Playground and free API is subject to limits on the number of requests per minute, the number of requests per day, the number of tokens per request, and the number of concurrent requests. If you hit the rate limit, you will need to wait for the limit to reset before making further requests. The rate limits vary for different models (low, high, and embedding models). For model type information, please refer to the GitHub Marketplace.


These limits are subject to change at any time. For specific information, please refer to the GitHub Official Documentation.
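When a request exceeds these limits, the API typically responds with HTTP 429. One common client-side strategy, sketched below with a hypothetical `RateLimitError` exception standing in for that response, is to retry with exponential backoff:

```python
import time


class RateLimitError(Exception):
    """Raised when the API answers with HTTP 429 (rate limit exceeded)."""


def with_backoff(call, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Run `call`, retrying on RateLimitError with exponentially growing waits."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, ...
```

The `sleep` parameter is injectable only so the behavior is easy to test; in normal use the default `time.sleep` applies.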


Configuration Guide for GitHub Models

Step 1: Obtain a GitHub Access Token

  • Log in to GitHub and open the Access Tokens page.
  • Create and configure a new access token.
  • Copy and save the generated token from the results returned.
  • During the testing phase of GitHub Models, users must apply to join the waitlist to gain access.

  • Please store the access token securely, as it will only be displayed once. If you accidentally lose it, you will need to create a new token.
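Once you have a token, you can sanity-check it outside LobeChat with a direct API call. The sketch below assumes the OpenAI-compatible endpoint `https://models.inference.ai.azure.com` that GitHub Models exposed during its preview; verify the current URL and model IDs against the GitHub documentation before relying on them:

```python
import json
import urllib.request

# Assumed preview endpoint for GitHub Models; confirm against GitHub's docs.
ENDPOINT = "https://models.inference.ai.azure.com/chat/completions"


def build_chat_request(token: str, model: str = "gpt-4o",
                       prompt: str = "Hello") -> urllib.request.Request:
    """Build (but do not send) a chat-completion request authorized with the token."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# To actually send it:
#   urllib.request.urlopen(build_chat_request("<your access token>"))
```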

Step 2: Configure GitHub Models in LobeChat

  • Navigate to the Settings interface in LobeChat.
  • Under Language Models, find the GitHub settings.
  • Enter the access token you obtained.
  • Select a GitHub model for your AI assistant to start the conversation.

You are now ready to use the models provided by GitHub for conversations within LobeChat.
