# Step-by-step guide: set up a conversational chat app to RAG a Google BigQuery data source (or any other data source)
## Step 1: Add models

Go to the `/models` screen and add two models:

### Add Embedding Model

`text-embedding-3-small`

### Add LLM Model

`gpt-4o-mini`
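The embedding model turns each record into a vector so the RAG pipeline can retrieve the most relevant rows by similarity. A minimal sketch of that retrieval idea in plain Python (the vectors below are illustrative, not real embeddings — `text-embedding-3-small` produces 1536-dimensional vectors):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative 3-dimensional "embeddings" of a stored tweet and a user query.
tweet = [0.9, 0.1, 0.3]
query = [0.8, 0.2, 0.25]
print(cosine_similarity(tweet, query))
```

At query time the vector store (Qdrant here) does exactly this comparison, just at scale and with approximate-nearest-neighbour indexing.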
Run `docker compose logs -f` in your terminal to follow along and catch any errors as they occur.
For advanced debugging, you can also open apps like Qdrant or Airbyte to watch data pass through each system:
| App | Location | Authentication |
|---|---|---|
| Airbyte | http://localhost:8000 | username: `airbyte`, password: `password` |
| Qdrant | http://localhost:6333/dashboard#/collections | N/A |
| RabbitMQ | http://localhost:15672/ | username: `guest`, password: `guest` |
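You can also check Qdrant from the command line instead of the dashboard. A small sketch using only the standard library — the `/collections` endpoint and the response shape shown in `sample` are assumptions based on Qdrant's REST API; verify against the dashboard if yours differs:

```python
import json
from urllib.request import urlopen

def parse_collections(payload: dict) -> list:
    # Qdrant wraps results as {"result": {"collections": [{"name": ...}, ...]}}
    return [c["name"] for c in payload["result"]["collections"]]

def list_qdrant_collections(base_url: str = "http://localhost:6333") -> list:
    """Fetch collection names from a local Qdrant instance (no auth by default)."""
    with urlopen(f"{base_url}/collections") as resp:
        return parse_collections(json.load(resp))

# Assumed response shape for a deployment with one synced collection:
sample = {"result": {"collections": [{"name": "tweets"}]}, "status": "ok"}
print(parse_collections(sample))  # ['tweets']
```

Once your sync (step 2 below) has run, `list_qdrant_collections()` should include a collection for your data source.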
## Step 2: Add a BigQuery data source

For this guide we use a sample Elon Musk tweets dataset. Go to the `/datasources` screen, select **New Connection**, and add a BigQuery data source.
### Provide a name and sync schedule

Select **Manual** for now.

The sync mode is **Full Refresh - Overwrite**. We still need to enable other refresh options, such as **Incremental - Append** or **Full Refresh - Append**, in future versions; these will sync only new data via a provided cursor field (such as a create-date field). You can read more about it here.

### Provide Data Source Credentials
Provide your `keyfile.json` (BigQuery service account key).
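The `keyfile.json` is the JSON key downloaded when you create a Google Cloud service account with BigQuery read access; its general shape (values redacted) looks like:

```json
{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "your-sa@your-project-id.iam.gserviceaccount.com",
  "client_id": "...",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```

Treat this file as a secret — it grants access to your BigQuery project.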
### Select Table and Fields to Sync
### Select field to Embed

Choose `text-embedding-3-small`, which we set up in step 1, or add it here if you haven't done so already.

### Sync will begin
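While the first sync runs, it is worth noting how the incremental modes mentioned above will work: they persist a cursor value and fetch only rows past it on each run. A minimal sketch of that idea (the row and field names are hypothetical):

```python
def incremental_sync(rows, cursor_field, last_cursor):
    """Return only rows whose cursor field is newer than the saved cursor,
    plus the new cursor value to persist for the next run."""
    new_rows = [r for r in rows if r[cursor_field] > last_cursor]
    next_cursor = max((r[cursor_field] for r in new_rows), default=last_cursor)
    return new_rows, next_cursor

rows = [
    {"id": 1, "create_date": "2023-01-01"},
    {"id": 2, "create_date": "2023-02-01"},
]
synced, cursor = incremental_sync(rows, "create_date", "2023-01-15")
print(len(synced), cursor)  # 1 2023-02-01
```

Full Refresh - Overwrite, by contrast, re-syncs every row on every run, which is simpler but more expensive for large tables.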
## Step 3: Create an Agent

Go to the `/agents` screen and create a new Agent.
### Provide a Role, Goal and Backstory
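For example, for the tweets dataset you might fill the form in with something like the following (illustrative values, not prescribed ones):

```python
# Hypothetical example of the three fields the agent form asks for.
agent = {
    "role": "Tweet analyst",
    "goal": "Answer questions about Elon Musk's tweets using the synced BigQuery data",
    "backstory": "A social-media researcher who cites specific tweets when answering.",
}
print(agent["role"])
```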
### Select the model

Choose `gpt-4o-mini`, which we set up in step 1, and optionally add it as the Function Calling model.

### Select the datasource

Select the BigQuery datasource we created in step 2.
## Step 4: Create a Chat App

Go to the `/apps` screen and create a new Chat App:

1. Select the **Conversational** app type.
2. Select the Agent created above.
3. Run the app.