Supported Models
DeepMyst provides a unified API for accessing various language models with built-in token optimization. The platform supports models from multiple providers through a single, consistent interface.

Enabled Providers
The following providers are enabled by default for all users:

OpenAI
- `gpt-5.2-pro` - GPT-5.2 Pro
- `gpt-5.2` - GPT-5.2
- `gpt-5` - GPT-5
- `gpt-5-chat-latest` - GPT-5 Chat Latest
- `gpt-5-mini` - GPT-5 Mini
- `gpt-5-nano` - GPT-5 Nano
- `gpt-4.1` - GPT-4.1
- `chatgpt-4o-latest` - ChatGPT-4o Latest
- `o3-deep-research` - O3 Deep Research
- `o4-mini-deep-research` - O4 Mini Deep Research
- `gpt-audio` - GPT Audio
- `gpt-audio-mini` - GPT Audio Mini
- `gpt-realtime` - GPT Realtime
- `gpt-realtime-mini` - GPT Realtime Mini
- `gpt-oss-120b` - GPT OSS 120B
- `gpt-oss-20b` - GPT OSS 20B
Claude (Anthropic)
- `claude-opus-4-5` - Claude Opus 4.5
- `claude-opus-4-1` - Claude Opus 4.1
- `claude-opus-4` - Claude Opus 4
- `claude-sonnet-4-5` - Claude Sonnet 4.5
- `claude-sonnet-4` - Claude Sonnet 4
- `claude-haiku-4-5` - Claude Haiku 4.5
- `claude-haiku-3` - Claude Haiku 3
Gemini (Google)
- `gemini-3-pro-preview` - Gemini 3 Pro Preview
- `gemini-3-flash-preview` - Gemini 3 Flash Preview
- `gemini-2.5-pro` - Gemini 2.5 Pro
- `gemini-2.5-flash` - Gemini 2.5 Flash
- `gemini-2.5-flash-lite` - Gemini 2.5 Flash Lite
- `gemini-2.0-flash` - Gemini 2.0 Flash
- `gemini-2.0-flash-lite` - Gemini 2.0 Flash Lite
Grok (xAI)
- `grok-4` - Grok 4
- `grok-4-fast` - Grok 4 Fast
- `grok-4.1-fast` - Grok 4.1 Fast
- `grok-3` - Grok 3
- `grok-3-mini` - Grok 3 Mini
- `grok-2-vision` - Grok 2 Vision
- `grok-2-vision-latest` - Grok 2 Vision Latest
- `grok-code-fast-1` - Grok Code Fast
Groq
- `llama-4-maverick` - Llama 4 Maverick
- `llama-4-scout` - Llama 4 Scout
- `llama-3.3-70b-versatile` - Llama 3.3 70B Versatile
- `llama-3.1-8b-instant` - Llama 3.1 8B Instant
- `llama-guard-4-12b` - Llama Guard 4 12B
DeepSeek
- `deepseek-chat` - DeepSeek Chat
- `deepseek-reasoner` - DeepSeek Reasoner
AWS Bedrock
- `bedrock-claude-3-7-sonnet` - Bedrock Claude 3.7 Sonnet
- `bedrock-claude-3-5-sonnet` - Bedrock Claude 3.5 Sonnet
- `bedrock-claude-3-opus` - Bedrock Claude 3 Opus
- `bedrock-claude-3-sonnet` - Bedrock Claude 3 Sonnet
- `bedrock-claude-3-haiku` - Bedrock Claude 3 Haiku
- `bedrock-claude-v2` - Bedrock Claude V2
- `bedrock-claude-instant` - Bedrock Claude Instant
- `bedrock-deepseek-r1` - Bedrock DeepSeek R1
- `bedrock-llama3-1-405b` - Bedrock Llama 3.1 405B
- `bedrock-llama3-1-70b` - Bedrock Llama 3.1 70B
- `bedrock-llama3-1-8b` - Bedrock Llama 3.1 8B
- `bedrock-llama3-70b` - Bedrock Llama 3 70B
- `bedrock-mixtral-8x7b` - Bedrock Mixtral 8x7B
- `bedrock-mistral-7b` - Bedrock Mistral 7B
- `bedrock-titan-express` - Bedrock Titan Express
OpenRouter
Access 400+ models through the OpenRouter integration. Use the `openrouter/` prefix:
- `openrouter-auto` - Auto-router (selects best model)
- `openrouter/*` - Wildcard access to all OpenRouter models
Popular OpenRouter model IDs include:

- `openrouter/openai/gpt-5.2` - GPT-5.2
- `openrouter/openai/gpt-5` - GPT-5
- `openrouter/openai/gpt-4.1` - GPT-4.1
- `openrouter/openai/o3-mini` - O3 Mini
- `openrouter/anthropic/claude-opus-4.5` - Claude Opus 4.5
- `openrouter/anthropic/claude-sonnet-4.5` - Claude Sonnet 4.5
- `openrouter/anthropic/claude-haiku-4.5` - Claude Haiku 4.5
- `openrouter/google/gemini-3-pro-preview` - Gemini 3 Pro
- `openrouter/google/gemini-2.5-pro` - Gemini 2.5 Pro
- `openrouter/deepseek/deepseek-r1` - DeepSeek R1
- `openrouter/deepseek/deepseek-v3.2` - DeepSeek V3.2
- `openrouter/x-ai/grok-4` - Grok 4
- `openrouter/mistralai/mistral-large-2512` - Mistral Large
- `openrouter/qwen/qwen3-coder` - Qwen 3 Coder
Additional Providers
The following providers are available upon request. Contact us to enable these for your account:

| Provider | Models Available |
|---|---|
| Anyscale | Various open-source models |
| Azure AI | Azure-hosted AI models |
| Azure OpenAI | azure-gpt-4o, azure-gpt-4o-mini, azure-gpt-4-turbo, azure-gpt-4, azure-o1, azure-o1-mini |
| Baseten | baseten-llama-3-1-70b, baseten-mistral-7b |
| Cerebras | cerebras-llama3-3-70b, cerebras-llama3-1-70b, cerebras-llama3-1-8b |
| Cohere | command-a-03-2025, command-r-plus, command-r, command-nightly, command-light |
| Databricks | databricks-dbrx-instruct, databricks-llama-3-1-70b, databricks-mixtral-8x7b |
| Google AI Studio | Direct Google AI Studio access |
| Gradient AI | Gradient-hosted models |
| Heroku | Heroku-deployed models |
| HuggingFace | huggingface-gemma-7b, huggingface-llama-3-1-8b, huggingface-mistral-7b, huggingface-phi-3-mini, huggingface-qwen2-72b |
| IBM Watsonx | watsonx-llama-3-1-70b, watsonx-mixtral-8x7b, watsonx-granite-13b |
| Meta Llama | Direct Meta Llama API access |
| Mistral AI | mistral-large-latest, mistral-medium-latest, mistral-small-latest, codestral-latest, magistral-medium, open-mixtral-8x22b |
| Moonshot AI | moonshot-v1-32k, moonshot-v1-8k |
| NVIDIA NIM | nvidia-llama-3-1-405b, nvidia-llama-3-1-70b, nvidia-llama-3-1-8b, nvidia-mixtral-8x7b |
| Oracle Cloud | oci-cohere-command-r-plus, oci-cohere-command-r |
| Perplexity AI | sonar, sonar-pro, sonar-reasoning, sonar-reasoning-pro, sonar-deep-research |
| Replicate | replicate-llama-2-70b-chat, replicate-mistral-7b-instruct, replicate-mixtral-8x7b |
| AWS SageMaker | sagemaker-llama-3-1-70b, sagemaker-mistral-7b |
| Snowflake | snowflake-arctic-instruct, snowflake-llama-3-1-405b |
| Together AI | together-llama-2-70b-chat, together-wizardlm-70b, together-codellama-34b |
| Vertex AI | vertex-gemini-2-0-flash, vertex-gemini-1-5-pro, vertex-claude-3-5-sonnet, vertex-llama3-1-405b |
| Voyage AI | voyage-large-2, voyage-code-2, voyage-lite-02-instruct |
Using Models with Direct API Requests
Standard Request
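The example for this tab is not included here, so the following is a minimal sketch. It assumes DeepMyst exposes an OpenAI-compatible `/v1/chat/completions` endpoint at `https://api.deepmyst.com/v1` with Bearer-token auth; confirm the actual base URL and auth scheme in your dashboard.

```shell
# Standard (non-optimized) chat completion.
# NOTE: the base URL below is an assumption; use the one from your dashboard.
curl https://api.deepmyst.com/v1/chat/completions \
  -H "Authorization: Bearer $DEEPMYST_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5-mini",
    "messages": [
      {"role": "user", "content": "Explain token optimization in one sentence."}
    ]
  }'
```

Any model ID from the lists above can be used in the `model` field.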
Optimized Request
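No example survives for this tab; the sketch below follows the `-optimize` suffix convention described under Model Selection Guidance, and assumes an OpenAI-compatible endpoint at `https://api.deepmyst.com/v1` (check your dashboard for the real base URL).

```shell
# Token-optimized request: append -optimize to the model ID.
# NOTE: base URL is an assumption; use the one from your dashboard.
curl https://api.deepmyst.com/v1/chat/completions \
  -H "Authorization: Bearer $DEEPMYST_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5-mini-optimize",
    "messages": [
      {"role": "user", "content": "Summarize this release note in two sentences."}
    ]
  }'
```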
Streaming Request
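The streaming example is likewise missing; this sketch assumes the OpenAI-compatible convention of `"stream": true` with server-sent events, and the same assumed base URL `https://api.deepmyst.com/v1`.

```shell
# Streaming chat completion: tokens arrive as server-sent events.
# -N disables curl's output buffering so chunks print as they arrive.
# NOTE: base URL is an assumption; use the one from your dashboard.
curl -N https://api.deepmyst.com/v1/chat/completions \
  -H "Authorization: Bearer $DEEPMYST_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5-mini",
    "messages": [
      {"role": "user", "content": "Write a haiku about routers."}
    ],
    "stream": true
  }'
```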
Using Models with OpenAI Library
You can use the OpenAI SDK with DeepMyst by changing only the base URL. This lets you keep familiar OpenAI patterns while accessing all supported models.

Installation
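The original install snippet is missing. Assuming the Python SDK (the same approach works with the OpenAI SDKs for other languages):

```shell
# Install the official OpenAI Python SDK.
pip install openai
```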
Configuration
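A minimal configuration sketch. The base URL shown is an assumption, since it is not stated on this page; use the value from your DeepMyst dashboard.

```python
from openai import OpenAI

# Point the OpenAI client at DeepMyst instead of api.openai.com.
# NOTE: base_url is an assumption; replace with the URL from your dashboard.
client = OpenAI(
    api_key="YOUR_DEEPMYST_API_KEY",
    base_url="https://api.deepmyst.com/v1",
)
```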
Standard Request
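A sketch of a standard (non-optimized) request through the SDK, assuming the hypothetical base URL above is replaced with your real one. Any model ID from the lists above should work.

```python
from openai import OpenAI

# NOTE: base_url is an assumption; use the one from your dashboard.
client = OpenAI(
    api_key="YOUR_DEEPMYST_API_KEY",
    base_url="https://api.deepmyst.com/v1",
)

response = client.chat.completions.create(
    model="claude-sonnet-4-5",
    messages=[
        {"role": "user", "content": "What are the benefits of a unified model API?"}
    ],
)
print(response.choices[0].message.content)
```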
Optimized Request
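A sketch of an optimized request, following the `-optimize` suffix convention described under Model Selection Guidance (base URL again assumed):

```python
from openai import OpenAI

# NOTE: base_url is an assumption; use the one from your dashboard.
client = OpenAI(
    api_key="YOUR_DEEPMYST_API_KEY",
    base_url="https://api.deepmyst.com/v1",
)

# Appending -optimize enables DeepMyst's token optimization for this call.
response = client.chat.completions.create(
    model="claude-sonnet-4-5-optimize",
    messages=[
        {"role": "user", "content": "Condense this paragraph to one sentence."}
    ],
)
print(response.choices[0].message.content)
```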
Streaming Request
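A streaming sketch using the SDK's standard `stream=True` pattern (base URL assumed, as in the other examples):

```python
from openai import OpenAI

# NOTE: base_url is an assumption; use the one from your dashboard.
client = OpenAI(
    api_key="YOUR_DEEPMYST_API_KEY",
    base_url="https://api.deepmyst.com/v1",
)

# With stream=True the SDK yields incremental chunks instead of one response.
stream = client.chat.completions.create(
    model="gpt-5-mini",
    messages=[{"role": "user", "content": "Write a haiku about routers."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```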
Model Selection Guidance
- Use the `-optimize` suffix when token efficiency is important
- Choose smaller models (mini, nano, flash variants) for faster responses and lower costs
- Choose larger models (opus, pro variants) for more complex reasoning tasks
- For high-throughput applications, consider models like `llama-3.1-8b-instant` or `gemini-2.0-flash-lite`
- Consider using the router to automatically route to the best model

