Supported Models
Using models from multiple providers with GATE/0
GATE/0 is designed to be model-agnostic, giving you the flexibility to access and switch between a wide range of large language models (LLMs) through a single, unified API. Whether you're using OpenAI’s latest models, Anthropic’s Claude, or models served via AWS Bedrock or Google Vertex AI, GATE/0 makes it seamless.
Built-in Support for Top Providers
GATE/0 natively supports multiple major model providers, including:
- OpenAI
- Azure OpenAI
- Anthropic
- Google Vertex AI
- Amazon Bedrock
- Together.ai
- Cohere
- Mistral
- DeepSeek
This built-in integration means you don’t need to manage multiple SDKs or custom authentication logic. Just set your GATE/0 API key, specify the model, and you’re ready to go.
A Universal API for LLMs
With GATE/0’s universal API, your application code remains consistent, even as you switch between providers or models. This simplifies experimentation, reduces vendor lock-in, and allows for intelligent routing and fallback strategies.
For example, this code works regardless of the underlying model provider:
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.gate0.io/v1",
    api_key="your-gate0-api-key",
)

response = client.chat.completions.create(
    model="openai/gpt-4o",  # provider/model format
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
```
Specifying Model Providers
Some models are offered by multiple providers. For example:
- `gpt-4o` is available via both `openai` and `azure`
- `claude-3-sonnet` is available from `anthropic`, `bedrock`, and `vertex`
To avoid ambiguity, GATE/0 requires you to use the provider-prefixed model name, in the format:
{provider_slug}/{model_name}
The following table lists the available providers and their slugs:
| Provider | Slug |
| --- | --- |
| OpenAI | openai |
| Azure OpenAI | azure |
| Anthropic | anthropic |
| Google Vertex AI | vertex |
| Amazon Bedrock | bedrock |
| Cohere | cohere |
| DeepSeek | deepseek |
Examples:
- `openai/gpt-4o`
- `azure/gpt-4o`
- `bedrock/claude-3-sonnet`
- `vertex/claude-3-haiku`
This convention ensures clarity and precise routing of your requests to the correct backend provider.
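The prefix convention is also easy to validate on the client side before sending a request. The helper below is a hypothetical sketch of our own (not part of any GATE/0 SDK), using the provider slugs from the table above:

```python
# Hypothetical client-side helper -- not part of any GATE/0 SDK.
KNOWN_SLUGS = {"openai", "azure", "anthropic", "vertex", "bedrock", "cohere", "deepseek"}

def split_model(model: str) -> tuple[str, str]:
    """Split a provider-prefixed model name into (provider_slug, model_name)."""
    provider, sep, name = model.partition("/")
    if not sep or not name:
        raise ValueError(f"expected '{{provider_slug}}/{{model_name}}', got {model!r}")
    if provider not in KNOWN_SLUGS:
        raise ValueError(f"unknown provider slug: {provider!r}")
    return provider, name

print(split_model("bedrock/claude-3-sonnet"))  # ('bedrock', 'claude-3-sonnet')
```

A check like this catches a missing or misspelled prefix locally, before the request reaches the gateway.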
Provider-prefixed model names
If you have only a single AI provider configured, you can use the model name without the provider prefix. However, we recommend using the provider prefix for consistency and clarity.
Why It Matters
- Portability: Easily switch providers without changing your app logic.
- Redundancy: Fall back to another provider if one is unavailable.
- Cost Optimization: Choose the provider with the best pricing or performance for your needs.
- Compliance: Use the provider that meets your region’s regulatory requirements.
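Because every provider sits behind the same API, redundancy reduces to retrying the same call with a different prefixed model name. The function below is a minimal sketch of such a fallback loop, written by us for illustration (it is not a built-in GATE/0 feature); `create_fn` stands for any callable with the chat-completions signature, such as `client.chat.completions.create` from the earlier example:

```python
def complete_with_fallback(create_fn, models, messages):
    """Try each provider-prefixed model in order; return the first success.

    `create_fn` is any callable with the chat-completions signature,
    e.g. `client.chat.completions.create` from the example above.
    """
    last_error = None
    for model in models:
        try:
            return create_fn(model=model, messages=messages)
        except Exception as exc:  # in practice, catch the SDK's specific API errors
            last_error = exc
    raise RuntimeError("all providers failed") from last_error

# Usage (with the client from the earlier example):
# response = complete_with_fallback(
#     client.chat.completions.create,
#     ["openai/gpt-4o", "azure/gpt-4o"],
#     [{"role": "user", "content": "Hello!"}],
# )
```

Since the request and response shapes are identical across providers, the fallback needs no provider-specific branching.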
Conclusion
GATE/0 gives you a unified, scalable way to interact with the growing ecosystem of LLMs — without the overhead of juggling multiple APIs. Just plug in the model name with the provider prefix and you’re ready to build.
Need help choosing the right model or provider for your use case? Contact us for personalized recommendations.