Supported Models
DeepMyst provides a unified API for accessing various language models with built-in token optimization. The platform supports models from multiple providers through a single, consistent interface.
Available Models
DeepMyst currently supports the following models:
OpenAI Models
- gpt-4o-mini - GPT-4o Mini
- gpt-4o - GPT-4o
- o1 - OpenAI o1
- o1-mini - OpenAI o1-mini
- o3-mini - OpenAI o3-mini
- chatgpt-4o-latest - ChatGPT-4o Latest
Anthropic Models
- claude-3-7-sonnet-20250219 - Claude 3.7 Sonnet
- claude-3-5-sonnet-latest - Claude 3.5 Sonnet
- claude-3-5-haiku-latest - Claude 3.5 Haiku
- claude-3-opus-latest - Claude 3 Opus
Google Models
- gemini-2.0-flash - Gemini 2.0 Flash
- gemini-2.0-flash-lite-preview-02-05 - Gemini 2.0 Flash Lite
- gemini-1.5-pro - Gemini 1.5 Pro
- gemini-1.5-flash - Gemini 1.5 Flash
- gemini-1.5-flash-8b - Gemini 1.5 Flash 8B
Groq Models
- llama-3.1-8b-instant - Llama 3.1 8B Instant
- llama-3.3-70b-versatile - Llama 3.3 70B Versatile
- llama-guard-3-8b - Llama Guard 3 8B
- mixtral-8x7b-32768 - Mixtral 8x7B 32K
- gemma2-9b-it - Gemma2 9B IT
- qwen-2.5-32b - Qwen 2.5 32B
- deepseek-r1-distill-qwen-32b - DeepSeek R1 Distill Qwen 32B
- deepseek-r1-distill-llama-70b-specdec - DeepSeek R1 Distill Llama 70B SpecDec
- deepseek-r1-distill-llama-70b - DeepSeek R1 Distill Llama 70B
Using Models with Direct API Requests
Standard Request
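A standard request posts an OpenAI-style chat-completions payload directly to the DeepMyst endpoint. The sketch below assumes an OpenAI-compatible API at `https://api.deepmyst.com/v1` (verify the exact base URL in your DeepMyst dashboard) and an API key in the `DEEPMYST_API_KEY` environment variable.

```python
# Minimal sketch of a direct chat-completions request to DeepMyst.
# The endpoint URL is an assumption -- check your DeepMyst dashboard.
import json
import os
import urllib.request

API_URL = "https://api.deepmyst.com/v1/chat/completions"  # assumed endpoint

def build_request(model: str, messages: list) -> urllib.request.Request:
    """Assemble a POST request with the standard OpenAI-style JSON payload."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('DEEPMYST_API_KEY', '')}",
        },
    )

req = build_request("gpt-4o-mini", [{"role": "user", "content": "Hello!"}])

# Only send the request when a key is configured (avoids errors in dry runs).
if os.environ.get("DEEPMYST_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Any model ID from the list above can be passed as `model`.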
Optimized Request
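An optimized request is identical to a standard one except that the model ID carries the `-optimize` suffix, which enables DeepMyst's token optimization. The endpoint URL below is an assumption, as above.

```python
# Sketch of an optimized request: append "-optimize" to the model id.
# The endpoint URL is an assumption -- check your DeepMyst dashboard.
import json
import os
import urllib.request

API_URL = "https://api.deepmyst.com/v1/chat/completions"  # assumed endpoint

payload = {
    "model": "gpt-4o-optimize",  # "-optimize" suffix turns on token optimization
    "messages": [{"role": "user", "content": "Explain token optimization briefly."}],
}

if os.environ.get("DEEPMYST_API_KEY"):
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['DEEPMYST_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```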
Streaming Request
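A streaming request sets `"stream": true` and reads the response as server-sent events, assuming DeepMyst follows the OpenAI streaming format (`data: ...` lines terminated by `data: [DONE]`). The endpoint URL is again an assumption.

```python
# Sketch of a streaming request, assuming OpenAI-style server-sent events.
# The endpoint URL is an assumption -- check your DeepMyst dashboard.
import json
import os
import urllib.request

API_URL = "https://api.deepmyst.com/v1/chat/completions"  # assumed endpoint

payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Write a haiku."}],
    "stream": True,  # ask the server to stream tokens as they are generated
}

if os.environ.get("DEEPMYST_API_KEY"):
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['DEEPMYST_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # each SSE line arrives as "data: {json chunk}"
            line = raw.decode().strip()
            if line.startswith("data: ") and line != "data: [DONE]":
                chunk = json.loads(line[len("data: "):])
                delta = chunk["choices"][0]["delta"].get("content")
                if delta:
                    print(delta, end="", flush=True)
```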
Using Models with OpenAI Library
You can use the OpenAI SDK with DeepMyst by simply changing the base URL. This allows you to leverage familiar OpenAI patterns while accessing all supported models.
Installation
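Install the official OpenAI Python SDK; no DeepMyst-specific package is needed:

```shell
pip install openai
```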
Configuration
Standard Request
Optimized Request
Streaming Request
Model Selection Guidance
- Use the -optimize suffix when token efficiency is important
- Choose smaller models (mini variants) for faster responses and lower costs
- Choose larger models (opus, pro variants) for more complex reasoning tasks
- For high-throughput applications, consider models like llama-3.1-8b-instant or gemini-1.5-flash
- Consider using the router to automatically route to the best model