What are Models in Agent Cloud?
🚧 These docs are still under construction. Reach out on Discord if you’d like more information on anything about Agent Cloud.
Models attach to an agent to specify which model and provider the agent will use. Models are also used in Datasources to embed data; to learn more about embedding, visit the Datasources page.
When you first register for the cloud platform you will be prompted to configure a simple OpenAI GPT model. If you’ve already done this, there is no need to configure more models unless you intend to use multiple agents.
Models don’t require much to set up: all that’s required is a valid API key for the selected vendor so the platform can access the chosen LLM. The model you choose depends on a variety of factors, including token budget, datasource size (for embedding), task complexity and more.
We support a range of model vendors; select one and configure it accordingly. Different vendors require different configuration for their models. Below is our current list of supported vendors:
OpenAI
Setup for an OpenAI model is very straightforward: all that’s required is a valid OpenAI API key.
We support the following agent-compatible models:
gpt-4o-mini
gpt-4o
gpt-4-turbo
gpt-4
We also support OpenAI embedding models.
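If you want to confirm your key is valid before pasting it into the platform, a quick check outside Agent Cloud is enough. The sketch below is an assumption, not part of the platform itself: it uses the official openai Python package and gpt-4o-mini from the list above, with a placeholder key you would replace with your own.
```python
# Minimal sketch: verify an OpenAI API key works before adding it to Agent Cloud.
# Assumes the official `openai` Python package (pip install openai).
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # the same key you paste into the platform

# A tiny chat completion against one of the agent-compatible models listed above.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```
If this prints a reply, the key is valid and can be used when configuring the model in Agent Cloud.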
Azure OpenAI
Documentation for Azure OpenAI setup is coming soon.
Anthropic
Similar to OpenAI, Anthropic setup only requires a valid Anthropic API key.
We support Anthropic’s agent-compatible models.
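As with OpenAI, you can sanity-check the key outside the platform first. The sketch below assumes the official anthropic Python package; the Claude model name is just an illustrative example, not a statement of which Anthropic models Agent Cloud exposes.
```python
# Minimal sketch: verify an Anthropic API key works before adding it to Agent Cloud.
# Assumes the official `anthropic` Python package (pip install anthropic).
import anthropic

client = anthropic.Anthropic(api_key="sk-ant-...")  # the same key you paste into the platform

# A tiny message against an example Claude model (illustrative only).
message = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=64,
    messages=[{"role": "user", "content": "Say hello"}],
)
print(message.content[0].text)
```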
Ollama
Ollama lets you run models locally; documentation for Ollama setup is coming soon.
FastEmbed
FastEmbed is a library provided by Qdrant that allows for easy embedding of data. It requires no API key; you simply select the model to use.
You can read more about FastEmbed here.
We support the following embedding models:
fast-bge-small-en
fast-bge-base-en
fast-all-MiniLM-L6-v2
fast-multilingual-e5-large
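To see what FastEmbed is doing behind the scenes, here is a minimal sketch using the fastembed Python package directly. This is an assumption about the underlying library rather than platform configuration: fastembed uses Hugging Face-style identifiers, and BAAI/bge-small-en-v1.5 roughly corresponds to the fast-bge-small-en option above.
```python
# Minimal sketch: embed text locally with FastEmbed, no API key required.
# Assumes the `fastembed` Python package (pip install fastembed).
from fastembed import TextEmbedding

# Library model name assumed to correspond to the fast-bge-small-en option above.
model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")

documents = ["Agent Cloud embeds datasource text like this.", "No API key required."]
embeddings = list(model.embed(documents))  # generator of numpy vectors

print(len(embeddings), len(embeddings[0]))  # 2 vectors, 384 dimensions each
```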
Groq
Documentation for Groq setup is coming soon.
Google Vertex
Google AI