docs/semantic_catalog/more-questions.md
# More Questions to Try
| ID | Complexity | Question |
|----|-----------:|----------|
| 1 | easy | What is the total number of aircraft in the fleet? |
| 2 | easy | How many international airports are in the database? |
| 3 | easy | What is the average flight duration for all flights? |
| 4 | easy | Which airports are located in Tokyo? |
| 5 | easy | What is the maximum range of any aircraft in the fleet? |
| 6 | easy | Which aircraft model has the highest velocity? |
| 7 | easy | List all flights departing from New York's JFK airport in July 2024. |
| 8 | easy | What are the names of all airports located in the United States? |
| 9 | easy | How many flights were delayed in June 2024? |
| 10 | easy | What are the top 10 cities with the most airports? |
| 11 | easy | What is the average age of passengers who traveled in August 2024? |
| 12 | easy | How many Boeing aircraft are in the fleet? |
| 13 | easy | Which airports are located in Asia? |
| 14 | easy | How many unique aircraft models are in the fleet? |
| 15 | easy | Which passenger has the highest number of award points in the frequent flyer program? |
| 16 | easy | Which airports are located in Europe? |
| 17 | easy | What is the most common aircraft model used by Postgres Air? |
| 18 | easy | Which passengers have precheck status on their boarding passes? |
| 19 | easy | How many flights have an actual arrival time earlier than the scheduled arrival time? |
| 20 | easy | What is the average delay time in minutes for flights departing from LAX airport in July 2024? |
| 21 | easy | Which flights had the longest delays in August 2024? |
| 22 | easy | What percentage of flights arrived on time in the first half of 2024? |
| 23 | intermediate | How many flights were operated by each aircraft model in Q2 2024? |
| 24 | easy | List all the booked seats for flight number 4946 from MEX to MLM on July 7th, 2024. |
| 25 | intermediate | Which route (origin-destination pair) had the most flights in the first half of 2024? |
| 26 | easy | Which frequent flyers have reached level 4 status with more than 100,000 award points? |
| 27 | intermediate | How many passengers flew from Tokyo to Delhi in the first half of 2024? |
| 28 | intermediate | Which aircraft model was most frequently used for transatlantic flights in 2024? |
| 29 | intermediate | What is the average age of passengers traveling on international flights? |
| 30 | hard | Which frequent flyers have traveled on more than 10 different flights in 2024? |
| 31 | intermediate | What is the total number of passengers who traveled between London and Paris in July 2024? |
| 32 | easy | Which continents have the most international airports? |
| 33 | intermediate | What is the average booking price for flights to European destinations? |
| 34 | intermediate | What is the distribution of aircraft types used for domestic versus international flights? |
| 35 | intermediate | Find the busiest time of day for departures from each major international airport. |
| 36 | intermediate | Which booking had the most passengers in the second quarter of 2024? |
| 37 | intermediate | What is the average delay time for flights to Tokyo compared to flights to New York? |
| 38 | hard | Which passenger has flown the most miles in the first half of 2024? |
| 39 | intermediate | What is the average number of days between booking and flight departure for international flights? |
| 40 | hard | Which passengers have traveled on both domestic and international flights in the same month? |
| 41 | hard | How many passengers have flown on more than three different aircraft types in 2024? |
| 42 | easy | What are the most common first names among frequent flyers? |
| 43 | intermediate | Which city pairs have the most flights between them in July 2024? |
| 44 | intermediate | For each continent, what is the average flight delay time in minutes? |
| 45 | intermediate | What is the average number of passengers per booking? |
| 46 | intermediate | Which aircraft model has the best on-time arrival performance? |
| 47 | intermediate | Which passengers have made bookings but never actually flown? |
| 48 | intermediate | Which routes have the highest average ticket prices? |
| 49 | intermediate | What is the percentage of children (under 12) traveling on international flights versus domestic flights? |
| 50 | intermediate | Which routes have the highest cancellation rates in 2024? |
| 51 | hard | What is the average connecting time between flights for passengers traveling from Asia to South America? |
| 52 | hard | How many passengers have flown with Postgres Air more than 5 times but are not frequent flyer members? |
| 53 | intermediate | What is the most common departure day of the week for international flights? |
| 54 | intermediate | Who are the top 10 frequent flyers who flew in Q2 2024 that have the most award points? |
| 55 | hard | What is the average age difference between passengers traveling together on the same booking? |
| 56 | intermediate | What is the distribution of flight durations across different aircraft models? |
| 57 | intermediate | What is the average number of boarding passes issued per flight? |
| 58 | intermediate | Which airports have the highest percentage of on-time departures in the summer of 2024? |
| 59 | intermediate | What is the average velocity of aircraft flying international routes compared to domestic routes? |
| 60 | hard | What is the total flight distance traveled by each aircraft in the fleet during 2024? |
| 61 | intermediate | What's the most popular seat position (window, middle, aisle) chosen by passengers on long-haul flights? |
| 62 | hard | Which frequent flyers have traveled on flights with all three aircraft models (Airbus 340, Airbus 380, and Boeing 777-300)? |
| 63 | hard | Which passengers traveled with family members (same last name) on the same flight in June 2024? |
| 64 | easy | Which frequent flyers have the same first name but different last names? |
| 65 | intermediate | How many passengers over 65 years old flew on international flights in March 2024? |
| 66 | intermediate | What percentage of bookings are made by frequent flyers versus regular customers? |
| 67 | intermediate | What is the median age of passengers flying to Asian destinations? |
| 68 | intermediate | How many flights were canceled due to maintenance issues in the first quarter of 2024? |
| 69 | hard | How many passengers flew between Delhi and Tokyo direct versus with connections in 2024? |
| 70 | intermediate | What is the average daily revenue from bookings in May 2024? |
| 71 | hard | What percentage of international flights have at least one passenger under 10 years old? |
| 72 | intermediate | What is the average booking price for flights departing from Tokyo airports? |
| 73 | intermediate | How many flights depart from each continent during weekends (Saturday and Sunday) versus weekdays? |
| 74 | intermediate | What is the average age of frequent flyers in each membership level? |
| 75 | hard | Which passengers have experienced the most flight delays in 2024? |
| 76 | hard | What is the total number of flights between the top 5 busiest airports in the database? |
| 77 | intermediate | Which flight had the highest number of passengers over 70 years old? |
| 78 | hard | What is the most profitable route based on average booking price in 2024? |
| 79 | hard | Which passengers have the longest average flight time in the first half of 2024? |
| 80 | hard | Which flight routes have the highest ratio of business class (seats with row less than 10) to economy class passengers for aircraft in class 0 and 1? |
| 81 | easy | What is the average age of all passengers who have booked flights with Postgres Air? |
| 82 | hard | How many passengers traveled with their children (passengers under 18 with same last name) in August 2024? |
| 83 | hard | What percentage of flights arriving in Tokyo have at least one passenger over 60 years old? |
| 84 | hard | Which flights had the highest percentage of no-show passengers in August 2024? |
| 85 | hard | Which passenger has flown the most total miles in September 2024? |
| 86 | hard | Which routes have the highest proportion of business travelers (indicated by work phone numbers) in August 2024? |
docs/semantic_catalog/quickstart-demo-data.md
# Quickstart with demo data
## Overview
We are going to use an open source postgres database named "postgres air" to demonstrate SQL generation.
There are a few setup steps to take first.
We will use pgai to find database objects in the postgres air database and automatically generate natural language descriptions of them using an LLM.
We will then create a semantic catalog in another postgres database, import our descriptions, and embed them.
Once our semantic catalog is loaded with embedded descriptions, we can start generating SQL to answer our questions.
## Prerequisites
* [Python 3](https://www.python.org/downloads/)
* [docker](https://www.docker.com/products/docker-desktop/)
* An [OpenAI key](https://platform.openai.com/api-keys)
## Quickstart Instructions
This quickstart uses:
* The open source "postgres air" database, to demonstrate SQL generation.
* pgai, to find database objects in the postgres air database and automatically generate natural language descriptions using an LLM.
Using these tools, you create a semantic catalog in another PostgreSQL database, then import and embed the descriptions.
Once the semantic catalog is loaded with embedded descriptions, you start generating SQL to answer questions.
1. **Install pgai**
```bash
pip install "pgai[semantic-catalog]"
pgai --version
```
2. **Run a PostgreSQL container**
```bash
docker run -d --name postgres-air \
-p 127.0.0.1:5555:5432 \
-e POSTGRES_HOST_AUTH_METHOD=trust \
pgvector/pgvector:pg17
```
3. **Load the postgres_air dataset**
1. Download the [postgres_air dataset](https://drive.google.com/file/d/1C7PVxeYvLDr6n_7qjdA2k0vahv__jMEo/view?usp=drive_link) (from the [postgres_air project](https://github.com/hettie-d/postgres_air)), unzip it, and put `postgres_air_2024.sql` in your current directory.
1. Load the postgres_air dataset.
```bash
psql -d "postgres://postgres@localhost:5555/postgres" -v ON_ERROR_STOP=1 -f postgres_air_2024.sql
```
Wait for psql to finish before moving to the next step.
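Optionally, run a quick sanity check that the data loaded. The `postgres_air.flight` table is one of the tables created by the dump:
```bash
psql -d "postgres://postgres@localhost:5555/postgres" \
  -c "select count(*) from postgres_air.flight;"
```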
4. **Create a `.env` file**
In the current working directory, create a `.env` file and define the following variables.
The `TARGET_DB` is the database for which you want an LLM to generate queries.
The `CATALOG_DB` is the database in which you will create a new semantic catalog.
We will use the same database for both purposes and thus only specify the `TARGET_DB`.
```
OPENAI_API_KEY="your-OpenAI-key-goes-here"
TARGET_DB="postgres://postgres@localhost:5555/postgres"
```
5. **Create a semantic catalog**
First, you need a place to house the semantic descriptions of your database.
We create a semantic catalog for this.
You can house multiple semantic catalogs in a single database if you wish.
Each semantic catalog may have one or more embedding configurations.
For now, we only need one semantic catalog with a single embedding configuration.
By default, the first new semantic catalog has the catchy name of _default_.
Run the following command to create the semantic catalog and add a default embedding configuration using OpenAI's `text-embedding-3-small`.
```bash
pgai semantic-catalog create
```
6. **Generate descriptions of the postgres_air database**
Now, we need to populate the semantic catalog with information about your database that would be helpful
to an LLM trying to author SQL statements.
The following command finds database objects in the postgres_air database, generates descriptions for them
using an LLM, and outputs a yaml file containing the content for the semantic catalog.
```bash
pgai semantic-catalog describe -f descriptions.yaml
```
Take a look at `descriptions.yaml`. You can manually edit the descriptions to improve them if you wish.
The semantic catalog can contain:
- database object descriptions - tables, views, functions, procedures
- SQL examples - a SQL statement and description
- facts - standalone pieces of information
Tables and views are described like this:
```yaml
---
schema: postgres_air
name: aircraft
type: table
description: Lists aircraft models with performance characteristics and unique codes.
columns:
- name: model
description: Commercial name of the aircraft model.
- name: range
description: Maximum flight range in kilometers.
- name: class
description: Airframe class category or configuration indicator.
- name: velocity
description: Cruising speed of the aircraft.
- name: code
description: Three-character aircraft code serving as the primary key.
...
```
Functions and procedures look like this:
```yaml
---
schema: postgres_air
name: advance_air_time
args:
- integer
- pg_catalog.text
- boolean
type: procedure
description: Advances every timestamp/timestamptz column in all tables of the specified
schema by a given number of weeks, executing or merely displaying the generated
UPDATE statements according to the p_run flag.
...
```
Facts look like this:
```yaml
---
type: fact
description: The names of cities in the city column of the airport table are in all
capital letters. e.g. "TOKYO"
...
```
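SQL examples pair a working statement with a natural-language description. The `describe` command only generates database object descriptions, so SQL examples are something you add yourself. The exact YAML field names below are an assumption for illustration only; check an exported catalog or the CLI reference for the precise format. The general shape is roughly:
```yaml
---
# illustrative sketch: the field names here are assumed, not taken from pgai's documentation
type: sql_example
sql: |
  SELECT count(*)
  FROM postgres_air.flight
  WHERE status = 'Delayed'
description: Counts the flights currently marked as delayed.
...
```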
The `pgai semantic-catalog describe` command uses an LLM to get you started, but the better the content in your semantic catalog, the better your results will be.
The YAML file makes it easy to put a human editor in the loop. You can store the YAML file in version control and manage it with a git-ops strategy if you wish.
7. **Import the descriptions into the semantic catalog in your database**
The content is useless unless it is available to the LLM, so we need to load it into the database.
The following command will load the contents of the YAML file into the semantic catalog and generate embeddings.
```bash
pgai semantic-catalog import -f descriptions.yaml
```
8. **Now the fun part: search the semantic catalog using natural language**
With a semantic catalog loaded with descriptions, you can now perform a semantic search using a natural
language prompt. This finds the database objects, SQL examples, and/or facts that are relevant to the prompt
provided. For example:
```bash
pgai semantic-catalog search -p "Which passengers have experienced the most flight delays in 2024?"
```
9. **See how these search results are rendered to a prompt for an LLM**
```bash
pgai semantic-catalog search -p "Which passengers have experienced the most flight delays in 2024?" --render
```
10. **More fun, generate SQL statements on the command line**
What we really want are SQL queries. The `generate-sql` command uses the prompt rendered from the semantic search to get an LLM to author a query.
Moreover, the SQL statement is **deterministically checked** using the Postgres query planner (via `EXPLAIN`).
Thus, not only is the syntax validated, but the statement is also verified against the objects that actually exist in the database.
```bash
pgai semantic-catalog generate-sql -p "Which passengers have experienced the most flight delays in 2024?"
```
11. **Generate SQL statements directly from your Python app**
While using the command line to author queries is fun, you're more likely going to want to embed this capability in your app.
The functionality is available as a library too!
Create a `main.py` file with the contents below:
```python
import os
import logging
import asyncio

import psycopg
from dotenv import find_dotenv, load_dotenv

import pgai.semantic_catalog as sc

load_dotenv(dotenv_path=find_dotenv(usecwd=True))

logging.basicConfig(
    level="INFO",
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    handlers=[logging.StreamHandler()],
)


async def main():
    async with await psycopg.AsyncConnection.connect(os.environ["TARGET_DB"]) as con:
        # get a handle to our "default" semantic catalog
        catalog = await sc.from_name(con, "default")
        # generate sql
        response = await catalog.generate_sql(
            con,
            con,
            "openai:gpt-4.1",
            "Which passengers have experienced the most flight delays in 2024?",
        )
        print(response.sql_statement)


if __name__ == "__main__":
    asyncio.run(main())
```
Then, run the following:
```bash
python3 main.py
```
## Try a few more questions
Here is a list of [more questions to try](more-questions.md).
docs/semantic_catalog/README.md
# Semantic Catalog
## What is "text-to-sql"?
Text-to-SQL is a natural language processing technique that converts questions asked in everyday language into SQL queries
that you can execute against relational databases such as PostgreSQL. It enables non-technical users to
interact with data and database systems using plain language in place of complex SQL syntax.
## Want to jump right in?
See [How do I get started?](#how-do-i-get-started)
## The Struggle
Why can't we **just** ask an LLM to write the query?
- The main problem is database schemas by themselves are often missing the context that would be necessary to answer a question. For example, is the `orders` table referring to customer orders or purchase orders?
- LLMs are non-deterministic and unpredictable
- They hallucinate tables, columns, and logic
- Yet precision is critical when querying real data
- Small errors can break trust in production systems
Okay, what if we give it the full context of the database?
We could dump the entire schema to the LLM prompt, add comments and some sample data.
But even that may not work because the **schema may miss crucial context** about business logic and table/column names may not have enough semantic context to allow the LLM to make good decisions.
And, as schemas get larger, this approach will consume too many tokens, too much time, and risks leading the LLM astray with irrelevant details.
## How do you write a SQL query?
**How do human experts approach writing SQL?**
- Understand the question being asked, the *intent*
- Explore the database structure: schemas (DDL), comments
- Inspect the data: sample rows, distributions, NULL patterns
- Infer meaning from data patterns (e.g. deleted_at IS NULL means the row is "active")
- Combine all of the above with context about the company: business rules, logic, etc.
- Look at the application code and logic.
- Ask other teammates for information.
LLMs are capable of writing valid SQL syntax without further training on the SQL language. However, knowing SQL is not
enough to author queries that accurately answer business questions.
**What does an LLM need to replicate the human expert?**
- The *intent*: the user’s natural language question.
- The *structure*: access to DDL and descriptions.
- The *data semantics*: sample rows and facts to ground meaning.
- The *validators*: tools to check syntax, object existence, and logical coherence.
- And then a way to *bridge syntax with semantics*.
LLMs know the language. The postgres catalog contains the definitions of the schema and data. The semantic catalog acts
as a bridge between the structural definitions and a human-level understanding in natural language. Furthermore, the semantic catalog
leverages the Postgres planner to **validate the queries written by the LLM**.
For demanding applications, we use the semantic catalog to include only the schema and sample data that is *relevant* to
the question posed. This light-touch solution reduces the number of tokens consumed and avoids introducing irrelevant
details to the LLM.
Even with perfect context, an LLM may hallucinate tables, columns, functions, etc., creating an invalid SQL query.
By automatically running `EXPLAIN` on the generated SQL, we utilize the realtime state of the database to verify the query.
If the query is invalid, `EXPLAIN` returns detailed error information that is fed back to the LLM with a prompt for correction.
## Creating an Agent that can write accurate queries
We need two tools to create an LLM agent that can write SQL:
1. "search for more context"
2. "provide a SQL statement"
The LLM "knows" whether it has enough information to answer confidently, and if it doesn't have enough, it knows where the gaps in context are.
Giving it a tool to search with allows it to **agentically** build the context it needs.
When the LLM provides a SQL statement, we automatically validate it, checking the work of a non-deterministic LLM with a deterministic planner.
We do this using Postgres’s `EXPLAIN` statement, where we ask the database to return a query plan. That plan gives us:
- **Syntax validation**: Did the query parse correctly?
- **Object validation**: Do all referenced schemas, tables, columns, views, and procedures exist?
- **Estimated cost**: How many rows are expected to be scanned? What operations will be performed?
This is performed against your actual database via its actual query planner. So, it’s the ground truth and prevents hallucinations.
But because we’re just doing query planning, rather than query execution, it’s both inexpensive and safe.
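As a minimal illustration of this check against the demo schema (the failing column name below is hypothetical):
```sql
-- A valid candidate query plans successfully without ever executing:
EXPLAIN (FORMAT JSON)
SELECT flight_no, scheduled_departure
FROM postgres_air.flight
WHERE status = 'Delayed';

-- A hallucinated column is caught at planning time:
--   EXPLAIN SELECT delay_minutes FROM postgres_air.flight;
--   ERROR:  column "delay_minutes" does not exist
```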
As pseudocode:
```
1. Get user question
2. Find related context (RAG)
3. Ask the LLM to either:
a. Respond with a SQL query
GOTO: 4
b. Respond with search parameters for more context
GOTO: 2
4. Validate the SQL query with `EXPLAIN`
a. If valid
RETURN SQL query
b. If NOT valid
Append error info to the prompt and GOTO: 3
```
Or as a diagram:
```mermaid
---
title: Generate SQL
---
flowchart TB
Z[User Question]
A[Embed Search Prompt]
B[Find Context]
E[Render LLM Prompt]
F[LLM]
G[*Provide New Search Prompts]
H[*Provide SQL Answer]
I[Validate with EXPLAIN]
J[Return SQL]
Z-->A
A-->B
B-->E
E-->F
F-->|Context is insufficient|G
G-->A
F-->|Context is sufficient|H
H-->I
I-->|SQL is invalid|F
I-->|SQL is valid|J
```
`*` items are tools called by the LLM
BUT how do we **find context**?
### The semantic catalog: how we provide context
A semantic catalog is a comprehensive knowledge repository that bridges the gap between natural language and database
structures. The term _semantic_ refers to the meaning or interpretation of language and symbols, focusing on the
relationships between words, phrases, and concepts rather than just their literal definitions. The semantic catalog is
a layer that enriches raw database objects with meaningful context that natural language processing models can leverage.
The semantic catalog consists of the following components:
- **Database objects**: with natural language descriptions that provide human-readable context for technical elements such
as tables, columns, and functions. This mapping helps the system understand what users mean when they reference
business concepts rather than technical database terms.
- **SQL examples**: paired with natural language descriptions that demonstrate how specific questions translate into query
structures.
- **Facts**: expressed as natural language statements about the dataset/schema, facts provide additional domain knowledge
that might not be explicitly encoded in the database structure but is crucial for understanding user intent.
We generate vector embeddings of the database objects, SQL examples, and facts.
With these embeddings, your semantic catalog enables vector similarity searches that can retrieve relevant
context when processing new queries. This retrieval-augmented generation (RAG) approach ensures that when constructing
SQL queries from natural language inputs, the agent has access to the most pertinent information about your database
structure, usage patterns, and domain knowledge.
### What does the context "look like"?
LLMs operate on text. We have to render the relevant context from the semantic catalog to present it to the LLM.
It turns out that SQL is the perfect format for this! LLMs are already extensively trained on the SQL language,
and there is no more precise way to describe database contents. We can describe structures, semantic descriptions, and sample data all with SQL.
Below is an example of the rendering we provide for a table.
```sql
<table id="76">
CREATE TABLE postgres_air.flight
( flight_id integer NOT NULL nextval('postgres_air.flight_flight_id_seq'::regclass)
, flight_no text NOT NULL
, scheduled_departure timestamp with time zone NOT NULL
, scheduled_arrival timestamp with time zone NOT NULL
, departure_airport character(3) NOT NULL
, arrival_airport character(3) NOT NULL
, status text NOT NULL
, aircraft_code character(3) NOT NULL
, actual_departure timestamp with time zone
, actual_arrival timestamp with time zone
, update_ts timestamp with time zone
);
ALTER TABLE postgres_air.flight ADD CONSTRAINT flight_pkey PRIMARY KEY (flight_id);
ALTER TABLE postgres_air.flight ADD CONSTRAINT aircraft_code_fk FOREIGN KEY (aircraft_code) REFERENCES postgres_air.aircraft(code);
ALTER TABLE postgres_air.flight ADD CONSTRAINT arrival_airport_fk FOREIGN KEY (arrival_airport) REFERENCES postgres_air.airport(airport_code);
ALTER TABLE postgres_air.flight ADD CONSTRAINT departure_airport_fk FOREIGN KEY (departure_airport) REFERENCES postgres_air.airport(airport_code);
COMMENT ON TABLE postgres_air.flight IS $$The flight table tracks scheduled and actual flight details, including timing, status, airports, and aircraft used.$$;
COMMENT ON COLUMN postgres_air.flight.flight_id IS $$A unique identifier for each flight.$$;
COMMENT ON COLUMN postgres_air.flight.flight_no IS $$The flight number assigned to the flight.$$;
COMMENT ON COLUMN postgres_air.flight.scheduled_departure IS $$Scheduled departure time for the flight.$$;
COMMENT ON COLUMN postgres_air.flight.scheduled_arrival IS $$Scheduled arrival time for the flight.$$;
COMMENT ON COLUMN postgres_air.flight.departure_airport IS $$IATA code of the departure airport.$$;
COMMENT ON COLUMN postgres_air.flight.arrival_airport IS $$IATA code of the arrival airport.$$;
COMMENT ON COLUMN postgres_air.flight.status IS $$The status of the flight, such as 'Delayed' or 'On Time.'$$;
COMMENT ON COLUMN postgres_air.flight.aircraft_code IS $$Aircraft code identifying the model used for the flight.$$;
COMMENT ON COLUMN postgres_air.flight.actual_departure IS $$The actual time of departure, if available.$$;
COMMENT ON COLUMN postgres_air.flight.actual_arrival IS $$The actual time of arrival, if available.$$;
COMMENT ON COLUMN postgres_air.flight.update_ts IS $$Timestamp of the last update to the flight record.$$;
COPY (SELECT * FROM "postgres_air"."flight" LIMIT 3) TO STDOUT WITH (FORMAT TEXT, HEADER true);
/*
flight_id flight_no scheduled_departure scheduled_arrival departure_airport arrival_airport status aircraft_code actual_departure actual_arrival update_ts
181960 4946 2024-07-12 15:05:00+00 2024-07-12 15:55:00+00 MEX MLM Delayed CR2 2024-07-12 17:06:43.32+00 2024-07-12 15:46:30.24+00 2024-07-12 15:45:43.32+00
203092 2167 2024-07-16 00:35:00+00 2024-07-16 07:35:00+00 IAD DUB Delayed 773 2024-07-16 02:37:35.52+00 2024-07-16 07:32:35.64+00 2024-07-16 01:09:35.52+00
203296 2941 2024-07-21 17:25:00+00 2024-07-21 19:05:00+00 ORD ICT Delayed 773 2024-07-21 19:28:52.2+00 2024-07-21 18:56:11.52+00 2024-07-21 18:12:52.2+00
*/
</table>
```
### Features
* The library can automatically generate descriptions of database objects
* The contents of a semantic catalog can be exported to and imported from YAML
* You can search the semantic catalog using natural language prompts
* You can install the semantic catalog in a different database from the one you are generating SQL statements for.
* You can manage multiple, independent semantic catalogs in a single database.
* For A/B testing, you can create multiple embedding configurations on a single semantic catalog.
## How do I get started?
- [I want to create a semantic catalog for my database.](quickstart-your-data.md)
- [I want a walkthrough of capabilities on demo data.](quickstart-demo-data.md)
- Check out the [CLI reference.](cli.md)
docs/semantic_catalog/cli.md
# Semantic Catalog CLI Reference
The pgai semantic catalog feature provides a comprehensive command-line interface for managing semantic catalogs that enable natural language to SQL functionality. This document provides detailed information about each CLI command, their usage, and their purpose in the text-to-SQL workflow.
## Overview
The semantic catalog CLI commands are accessed through the `pgai semantic-catalog` command group. Each command serves a specific purpose in the workflow of creating, managing, and using semantic catalogs for natural language database interactions.
### Basic Workflow
1. **Describe** - Generate natural language descriptions of database objects
2. **Create** - Create a new semantic catalog with embedding configuration
3. **Import** - Import descriptions into the semantic catalog
4. **Vectorize** - Generate embeddings for semantic search capabilities
5. **Search** - Perform semantic searches to find relevant database objects
6. **Generate SQL** - Generate SQL statements from natural language prompts
## Database Connections
A semantic catalog is used to describe and generate SQL for a postgres database.
The semantic catalog itself is also stored in a postgres database. It can be
colocated with the database being described, or it can be stored in a separate
database.
We refer to the database being described as the `target database`. We use the `-d`
or `--db-url` argument or the `TARGET_DB` environment variable to specify this
postgres connection string.
The database containing the semantic catalog is the `catalog database`. We use the `-c`
or `--catalog-db-url` argument or the `CATALOG_DB` environment variable to specify
this postgres connection string.
Some commands only need a connection to one database or the other.
Some commands need a connection to both databases.
If you store the semantic catalog in the target database, you can specify just
the target database connection string (`-d`, `--db-url` or `TARGET_DB`) and omit
the catalog database connection string.
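For example, to keep the catalog in a separate database from the one being described (the connection strings are placeholders):
```bash
export TARGET_DB="postgres://app_user@app-host:5432/app_db"
export CATALOG_DB="postgres://catalog_user@catalog-host:5432/catalog_db"

# create connects to the catalog database
pgai semantic-catalog create

# describe connects to the target database
pgai semantic-catalog describe -f descriptions.yaml

# import accepts both connections (-d/TARGET_DB and -c/CATALOG_DB)
pgai semantic-catalog import -f descriptions.yaml
```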
## Commands
### `pgai semantic-catalog describe`
**Purpose**: Automatically generate natural language descriptions for database objects using AI.
This command connects to a database, analyzes its schema, and uses a large language model to create human-readable descriptions of tables, views, procedures, and other database objects. These descriptions form the foundation of your semantic catalog.
The command creates or appends to a YAML file. You may review and edit the descriptions. The file can be imported into a semantic catalog with the `import` command.
#### Usage
```bash
pgai semantic-catalog describe [OPTIONS]
```
#### Key Options
| Option | Description | Default | Environment Variable |
|--------|-------------|---------|---------------------|
| `-d, --db-url` | Database connection URL | Required | `TARGET_DB` |
| `-m, --model` | LLM model for generating descriptions | `openai:gpt-4.1` | |
| `-f, --yaml-file` | Output YAML file path | stdout | |
| `--include-schema` | Regex pattern to include schemas | | |
| `--exclude-schema` | Regex pattern to exclude schemas | | |
| `--include-table` | Regex pattern to include tables | | |
| `--exclude-table` | Regex pattern to exclude tables | | |
| `--include-view` | Regex pattern to include views | | |
| `--exclude-view` | Regex pattern to exclude views | | |
| `--include-proc` | Regex pattern to include procedures | | |
| `--exclude-proc` | Regex pattern to exclude procedures | | |
| `--include-extension` | Extension names to include objects from | | |
| `-a, --append` | Append to output file instead of overwriting | `false` | |
| `--sample-size` | Number of sample rows per table/view | `3` | |
| `--batch-size` | Objects to process per LLM request | `5` | |
| `--request-limit` | Maximum LLM requests (cost control) | | |
| `--total-tokens-limit` | Maximum LLM tokens (cost control) | | |
| `--dry-run` | List objects without describing them | `false` | |
| `-q, --quiet` | Suppress progress messages | `false` | |
#### Examples
```bash
# Generate descriptions for all objects in a database
pgai semantic-catalog describe -f descriptions.yaml
# Only include specific schemas and exclude system objects
pgai semantic-catalog describe \
--include-schema "public|app_.*" \
--exclude-schema "pg_.*|information_schema" \
-f descriptions.yaml
# Use a different model with cost controls
pgai semantic-catalog describe \
--model anthropic:claude-3-sonnet-20240229 \
--request-limit 100 \
--total-tokens-limit 50000 \
-f descriptions.yaml
# Dry run to see what would be processed
pgai semantic-catalog describe --dry-run
```
#### When to Use
- Starting a new text-to-SQL project with an existing database
- Adding new database objects that need descriptions
- Refreshing descriptions after significant schema changes
- Creating documentation for database objects
---
### `pgai semantic-catalog create`
**Purpose**: Create a new semantic catalog with an embedding configuration.
This command initializes a semantic catalog in your database and sets up the necessary embedding configuration for generating vector embeddings. The catalog requires at least one embedding configuration to enable semantic search.
#### Usage
```bash
pgai semantic-catalog create [OPTIONS]
```
#### Key Options
| Option | Description | Default | Environment Variable |
|--------|-------------|---------|---------------------|
| `-c, --catalog-db-url` | Catalog database connection URL | | `CATALOG_DB` or `TARGET_DB` |
| `-n, --catalog-name` | Name for the semantic catalog | `default` | |
| `-e, --embed-config` | Name for the embedding configuration | | |
| `-p, --provider` | Embedding provider | `openai` | |
| `-m, --model` | Embedding model | `text-embedding-3-small` | |
| `-v, --vector-dimensions` | Vector dimensions | `1536` | |
| `--base-url` | Custom base URL for embedding provider | | |
| `--api-key-name` | Environment variable containing API key | | |
#### Supported Providers and Models
**OpenAI** (requires API key):
- `text-embedding-3-small` (1536 dimensions)
- `text-embedding-3-large` (3072 dimensions)
- `text-embedding-ada-002` (1536 dimensions)
**Ollama** (local inference):
- `nomic-embed-text`
- `mxbai-embed-large`
- Any embedding model available in your Ollama instance
**Sentence Transformers** (local inference):
- `all-MiniLM-L6-v2`
- `all-mpnet-base-v2`
- Any model from Hugging Face sentence-transformers
#### Examples
```bash
# Create a catalog with OpenAI embeddings (default)
pgai semantic-catalog create
# Create a catalog with custom name and embedding configuration
pgai semantic-catalog create \
--catalog-name "production_catalog" \
--embed-config "openai_embeddings"
# Create a catalog with Ollama (local model)
pgai semantic-catalog create \
--provider ollama \
--model nomic-embed-text \
--vector-dimensions 768
# Create a catalog with custom OpenAI base URL
pgai semantic-catalog create \
--provider openai \
--base-url "https://api.openai.com/v1" \
--api-key-name "CUSTOM_OPENAI_KEY"
```
#### When to Use
- Setting up your first semantic catalog
- Creating separate catalogs for different environments (dev, staging, prod)
- Setting up A/B testing with different embedding configurations
- Migrating to a new embedding provider or model
---
### `pgai semantic-catalog import`
**Purpose**: Import database object descriptions from a YAML file into a semantic catalog.
This command reads descriptions (typically generated by the `describe` command) from a YAML file and loads them into your semantic catalog. After importing, it will automatically generate embeddings for the imported items.
#### Usage
```bash
pgai semantic-catalog import [OPTIONS]
```
#### Key Options
| Option | Description | Default | Environment Variable |
|--------|--------------------------------------|---------|---------------------|
| `-d, --db-url` | Target database connection URL | Required | `TARGET_DB` |
| `-c, --catalog-db-url` | Catalog database connection URL | | `CATALOG_DB` |
| `-f, --yaml-file` | Input YAML file path | stdin | |
| `-n, --catalog-name` | Semantic catalog name | `default` | |
| `-e, --embed-config` | Embedding configuration to vectorize | All configs | |
| `-b, --batch-size` | Embeddings per batch | | |
#### Examples
```bash
# Import from a YAML file
pgai semantic-catalog import -f descriptions.yaml
# Import to a specific catalog and vectorize only a specific embedding config
pgai semantic-catalog import \
-f descriptions.yaml \
--catalog-name "my_catalog" \
--embed-config "openai_embeddings"
# Import from stdin
cat descriptions.yaml | pgai semantic-catalog import
# Import with custom batch size for vectorization
pgai semantic-catalog import \
-f descriptions.yaml \
--batch-size 16
```
#### When to Use
- Loading initial descriptions into a new semantic catalog
- Updating descriptions after running the `describe` command
- Migrating descriptions between environments
- Restoring a semantic catalog from backup
---
### `pgai semantic-catalog vectorize`
**Purpose**: Generate vector embeddings for items in the semantic catalog.
This command processes database objects, SQL examples, and facts in your semantic catalog that don't yet have embeddings and generates vector representations using your configured embedding provider. These embeddings enable semantic search capabilities.
#### Usage
```bash
pgai semantic-catalog vectorize [OPTIONS]
```
#### Key Options
| Option | Description | Default | Environment Variable |
|--------|-------------|---------|---------------------|
| `-c, --catalog-db-url` | Catalog database connection URL | | `CATALOG_DB` or `TARGET_DB` |
| `-n, --catalog-name` | Semantic catalog name | `default` | |
| `-e, --embed-config` | Embedding configuration to use | All configs | |
| `-b, --batch-size` | Items per vectorization batch | `32` | |
#### Examples
```bash
# Vectorize all items using all embedding configurations
pgai semantic-catalog vectorize
# Vectorize using a specific embedding configuration
pgai semantic-catalog vectorize --embed-config "openai_embeddings"
# Vectorize with custom batch size
pgai semantic-catalog vectorize --batch-size 16
# Vectorize a specific catalog
pgai semantic-catalog vectorize --catalog-name "production_catalog"
```
#### When to Use
- After importing new descriptions into the catalog
- When adding a new embedding configuration to existing data
- Regenerating embeddings after model updates
- Processing items that failed during initial vectorization
---
### `pgai semantic-catalog search`
**Purpose**: Search the semantic catalog using natural language queries.
This command performs semantic search across database objects, SQL examples, and facts using natural language. It's useful for exploring your database schema, finding relevant examples, and understanding what data is available.
#### Usage
```bash
pgai semantic-catalog search [OPTIONS]
```
#### Key Options
| Option | Description | Default | Environment Variable |
|--------|-------------|---------|---------------------|
| `-d, --db-url` | Target database connection URL | Required | `TARGET_DB` |
| `-c, --catalog-db-url` | Catalog database connection URL | | `CATALOG_DB` |
| `-n, --catalog-name` | Semantic catalog name | `default` | |
| `-e, --embed-config` | Embedding configuration to use | First available | |
| `-p, --prompt` | Natural language search query | Required | |
| `-s, --sample-size` | Sample rows per table/view | `3` | |
| `--render` | Show formatted results for LLM prompts | `false` | |
#### Examples
```bash
# Search for user-related objects
pgai semantic-catalog search --prompt "user accounts and profiles"
# Search with specific question
pgai semantic-catalog search --prompt "How are orders related to customers?"
# Search and see how results would be rendered for LLM
pgai semantic-catalog search \
--prompt "product inventory and stock levels" \
--render
# Search with more sample data
pgai semantic-catalog search \
--prompt "sales data" \
--sample-size 5
```
#### When to Use
- Exploring unfamiliar database schemas
- Finding relevant tables for a specific business question
- Discovering existing SQL examples for similar queries
- Understanding relationships between database objects
- Testing the quality of your semantic catalog
---
### `pgai semantic-catalog generate-sql`
**Purpose**: Generate SQL statements from natural language prompts using the semantic catalog.
This is the primary command for text-to-SQL functionality. It uses the semantic catalog to find relevant context and generates SQL statements that answer your natural language questions.
#### Usage
```bash
pgai semantic-catalog generate-sql [OPTIONS]
```
#### Key Options
| Option | Description | Default | Environment Variable |
|--------|-------------|---------|---------------------|
| `-d, --db-url` | Target database connection URL | Required | `TARGET_DB` |
| `-c, --catalog-db-url` | Catalog database connection URL | | `CATALOG_DB` |
| `-m, --model` | LLM model for SQL generation | `openai:gpt-4.1` | |
| `-n, --catalog-name` | Semantic catalog name | `default` | |
| `-e, --embed-config` | Embedding configuration to use | First available | |
| `-p, --prompt` | Natural language query | Required | |
| `--iteration-limit` | Maximum refinement attempts | `5` | |
| `-s, --sample-size` | Sample rows per table/view | `3` | |
| `--request-limit` | Maximum LLM requests | | |
| `--total-tokens-limit` | Maximum LLM tokens | | |
| `--print-messages` | Show LLM conversation | `false` | |
| `--print-usage` | Show token usage | `false` | |
| `--print-query-plan` | Show query execution plan | `false` | |
| `--save-final-prompt` | Save final LLM prompt to file | | |
#### Examples
```bash
# Generate SQL for a simple question
pgai semantic-catalog generate-sql \
--prompt "Find all users who signed up last month"
# Use a specific model with debugging enabled
pgai semantic-catalog generate-sql \
--model "anthropic:claude-3-opus-20240229" \
--prompt "What are the top 5 products by revenue?" \
--print-usage \
--print-messages
# Generate SQL with cost controls
pgai semantic-catalog generate-sql \
--prompt "Show customer order history" \
--request-limit 10 \
--total-tokens-limit 20000
# Save the final prompt for analysis
pgai semantic-catalog generate-sql \
--prompt "Find inactive customers" \
--save-final-prompt debug_prompt.txt
```
#### When to Use
- Converting business questions to SQL queries
- Exploring data through natural language
- Rapid prototyping of data analysis queries
- Training and education on database querying
- Building natural language interfaces to your database
---
### `pgai semantic-catalog export`
**Purpose**: Export semantic catalog contents to a YAML file.
This command exports all database objects, SQL examples, and facts from a semantic catalog to a YAML file. This is useful for backups, migration between environments, or editing catalog contents (the edited file can then be re-imported).
#### Usage
```bash
pgai semantic-catalog export [OPTIONS]
```
#### Key Options
| Option | Description | Default | Environment Variable |
|--------|-------------|---------|---------------------|
| `-c, --catalog-db-url` | Catalog database connection URL | | `CATALOG_DB` or `TARGET_DB` |
| `-f, --yaml-file` | Output YAML file path | stdout | |
| `-n, --catalog-name` | Semantic catalog name | `default` | |
#### Examples
```bash
# Export to a YAML file
pgai semantic-catalog export -f catalog_backup.yaml
# Export a specific catalog
pgai semantic-catalog export \
--catalog-name "production_catalog" \
-f production_backup.yaml
# Export to stdout and pipe to another command
pgai semantic-catalog export | gzip > catalog_backup.yaml.gz
```
#### When to Use
- Creating backups of semantic catalogs
- Migrating catalogs between environments
- Sharing catalog contents with team members
- Version controlling semantic catalog contents
---
### `pgai semantic-catalog fix`
**Purpose**: Fix database object references in the semantic catalog after database changes.
When database operations like dumps/restores, renames, or schema changes occur, the internal references in your semantic catalog may become outdated. This command updates these references to maintain accuracy.
#### Usage
```bash
pgai semantic-catalog fix [OPTIONS]
```
#### Key Options
| Option | Description | Default |
|--------|-------------|---------|
| `-d, --db-url` | Target database connection URL | Required |
| `-c, --catalog-db-url` | Catalog database connection URL | |
| `-n, --catalog-name` | Semantic catalog name | `default` |
| `-m, --mode` | Fix mode: `fix-ids` or `fix-names` | `fix-ids` |
| `--dry-run` | Show what would be changed | `false` |
#### Fix Modes
**fix-ids**: Updates internal PostgreSQL object IDs
- Use after database dumps/restores
- Use after major schema changes
- Updates classid, objid, objsubid references
**fix-names**: Updates object name identifiers
- Use after object renames
- Use after schema renames
- Updates objnames arrays
#### Examples
```bash
# Fix internal IDs after database restore
pgai semantic-catalog fix --mode fix-ids
# Fix object names after renames
pgai semantic-catalog fix --mode fix-names
# Dry run to see what would be fixed
pgai semantic-catalog fix --mode fix-ids --dry-run
# Fix a specific catalog
pgai semantic-catalog fix \
--catalog-name "production_catalog" \
--mode fix-names
```
#### When to Use
- After database dumps and restores
- After renaming database objects or schemas
- When semantic catalog searches return incorrect results
- After major database schema changes
- When object references become stale
## Best Practices
### 1. Development Workflow
```bash
# 1. Generate descriptions
pgai semantic-catalog describe -f descriptions.yaml
# 2. Review and edit descriptions.yaml if needed
# 3. Create catalog
pgai semantic-catalog create
# 4. Import descriptions
pgai semantic-catalog import -f descriptions.yaml
# 5. Test search functionality
pgai semantic-catalog search --prompt "your test query"
# 6. Generate SQL
pgai semantic-catalog generate-sql --prompt "your business question"
```
### 2. Production Deployment
```bash
# Use separate catalog database
export CATALOG_DB="postgres://user:pass@catalog-host:5432/catalog_db"
export TARGET_DB="postgres://user:pass@app-host:5432/app_db"
# Create named catalog with specific embedding config
pgai semantic-catalog create \
--catalog-name "production" \
--embed-config "production_embeddings"
# Import
pgai semantic-catalog import \
-f descriptions.yaml \
--catalog-name "production"
```
### 3. Maintenance
```bash
# Regular backup
pgai semantic-catalog export \
--catalog-name "production" \
-f "backup-$(date +%Y%m%d).yaml"
# After database changes
pgai semantic-catalog fix --mode fix-ids
# Update descriptions periodically
pgai semantic-catalog describe -f new_descriptions.yaml
pgai semantic-catalog import -f new_descriptions.yaml
```
## Troubleshooting
### Common Issues
**"No embedding configuration found"**
- Run `pgai semantic-catalog create` to set up embedding configuration
- Check that the catalog name and embedding config name are correct
**"Connection refused"**
- Verify database URLs are correct
- Ensure databases are running and accessible
- Check authentication credentials
**"API key not found"**
- Set the appropriate environment variable (e.g., `OPENAI_API_KEY`)
- Use `--api-key-name` to specify custom environment variable
**"Object not found" after database restore**
- Run `pgai semantic-catalog fix --mode fix-ids` to update object references
**Poor SQL generation quality**
- Review and improve object descriptions in YAML files
- Add more SQL Examples and/or Facts to the catalog
- Use higher quality embedding models
- Use a more powerful LLM model
- Increase sample size for more context
### Getting Help
Use the `--help` flag with any command for detailed option information:
```bash
pgai semantic-catalog --help
pgai semantic-catalog describe --help
pgai semantic-catalog generate-sql --help
```
docs/semantic_catalog/quickstart-your-data.md
# Quickstart with your data
## Overview
This quickstart will help you get up and running with the semantic catalog on your own database.
We will first need to create a semantic catalog in a database.
This semantic catalog will house all the semantic descriptions of your database model.
You can put the semantic catalog in the same database you are describing, or in a separate database.
If you are just trying things out, or if you do not have privileges to make changes to the database you are describing, use a separate database for the semantic catalog.
After creating the semantic catalog, we need to populate it with descriptions of your database.
We will use an LLM to bootstrap these descriptions, but you can edit them as you please.
This content is loaded into the semantic catalog and embedded, making it ready for use.
At this point, you can use pgai both as a CLI tool and as a Python library to generate SQL queries from natural language.
## Prerequisites
* [Python 3](https://www.python.org/downloads/)
* An [OpenAI key](https://platform.openai.com/api-keys)
* A PostgreSQL connection string to the database you want to have an LLM build queries for
* (Optionally) a PostgreSQL connection string to a second database to house the semantic catalog
## Quickstart Instructions
1. **Install pgai**
```bash
pip install "pgai[semantic-catalog]"
pgai --version
```
2. **Create a `.env` file**
In the current working directory, create a `.env` file and define the following variables.
The `TARGET_DB` is the database for which you want an LLM to generate queries.
The `CATALOG_DB` is the database in which you will create a new semantic catalog.
If you want to use one database for both purposes, only specify the `TARGET_DB`.
```
OPENAI_API_KEY="your-OpenAI-key-goes-here"
TARGET_DB="postgres://user:password@host:port/database"
CATALOG_DB="postgres://user:password@host:port/database"
```
3. **Create a semantic catalog**
First, you need a place to house the semantic descriptions of your database.
We create a semantic catalog for this.
You can house multiple semantic catalogs in a single database if you wish.
Each semantic catalog may have one or more embedding configurations.
For now, we only need one semantic catalog with a single embedding configuration.
By default, the first new semantic catalog has the catchy name of _default_.
Run the following command to create the semantic catalog and add a default embedding configuration using OpenAI's `text-embedding-3-small`.
It will connect to the `CATALOG_DB`, install pgai into it if it doesn't exist, and configure a semantic catalog.
```bash
pgai semantic-catalog create
```
If you wish to customize the semantic catalog, use the `--help` flag to display your options:
```bash
pgai semantic-catalog create --help
```
4. **Generate descriptions of your database**
Now, we need to populate the semantic catalog with information about your database that would be helpful
to an LLM trying to author SQL statements.
The following command connects to your database, finds database objects, generates descriptions for them
using an LLM, and outputs a yaml file containing the content for the semantic catalog.
```bash
pgai semantic-catalog describe -f descriptions.yaml
```
Take a look at `descriptions.yaml`. You can manually edit the descriptions to improve them if you wish.
The semantic catalog can contain:
- database object descriptions - tables, views, functions, procedures
- SQL examples - a SQL statement and description
- facts - standalone pieces of information
Tables and views are described like this:
```yaml
---
schema: postgres_air
name: aircraft
type: table
description: Lists aircraft models with performance characteristics and unique codes.
columns:
- name: model
description: Commercial name of the aircraft model.
- name: range
description: Maximum flight range in kilometers.
- name: class
description: Airframe class category or configuration indicator.
- name: velocity
description: Cruising speed of the aircraft.
- name: code
description: Three-character aircraft code serving as the primary key.
...
```
Functions and procedures look like this:
```yaml
---
schema: postgres_air
name: advance_air_time
args:
- integer
- pg_catalog.text
- boolean
type: procedure
description: Advances every timestamp/timestamptz column in all tables of the specified
schema by a given number of weeks, executing or merely displaying the generated
UPDATE statements according to the p_run flag.
...
```
Facts look like this:
```yaml
---
type: fact
description: The names of cities in the city column of the airport table are in all
capital letters. e.g. "TOKYO"
...
```
The `pgai semantic-catalog describe` command uses an LLM to get you started, but the better the content in your semantic catalog, the better your results will be.
The YAML file makes it easy to put a human editor in the loop. You can store the YAML file in version control and manage it with a git-ops strategy if you wish.
5. **Import the descriptions into the semantic catalog in your database**
The content is useless unless it is available to the LLM, so we need to load it into the database.
The following command will load the contents of the YAML file into the semantic catalog and generate embeddings.
```bash
pgai semantic-catalog import -f descriptions.yaml
```
6. **Now the fun part: search the semantic catalog using natural language**
With a semantic catalog loaded with descriptions, you can now perform a semantic search using a natural
language prompt. This finds the database objects, SQL examples, and/or facts that are relevant to the prompt
provided. For example:
```bash
pgai semantic-catalog search -p "Your natural language question goes here!"
```
7. **See how these search results are rendered to a prompt for an LLM**
```bash
pgai semantic-catalog search -p "Your natural language question goes here!" --render
```
8. **More fun, generate SQL statements on the command line**
What we really want are SQL queries. The `generate-sql` command uses the prompt rendered from the semantic search to get an LLM to author a query.
Moreover, the SQL statement is **deterministically checked** using the Postgres query planner (via `EXPLAIN`).
Thus, not only is the syntax validated, but the statement is also verified against the objects that actually exist in the database.
```bash
pgai semantic-catalog generate-sql -p "Your natural language question goes here!"
```
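If you want to see how the query was produced, the flags documented in the [CLI reference](cli.md) can surface the LLM conversation and token usage, for example:
```bash
pgai semantic-catalog generate-sql \
  -p "Your natural language question goes here!" \
  --print-messages \
  --print-usage
```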
9. **Generate SQL statements directly from your Python app**
While using the command line to author queries is fun, you're more likely going to want to embed this capability in your app.
The functionality is available as a library too!
Create a `main.py` file with the contents below:
```python
import os
import logging
import asyncio

import psycopg
from dotenv import find_dotenv, load_dotenv

import pgai.semantic_catalog as sc

load_dotenv(dotenv_path=find_dotenv(usecwd=True))

logging.basicConfig(
    level="INFO",
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    handlers=[logging.StreamHandler()],
)


async def main():
    async with await psycopg.AsyncConnection.connect(os.environ["TARGET_DB"]) as con:
        # get a handle to our "default" semantic catalog
        catalog = await sc.from_name(con, "default")
        # generate sql
        response = await catalog.generate_sql(
            con,
            con,
            "openai:gpt-4.1",
            "Your natural language question goes here!",
        )
        print(response.sql_statement)


if __name__ == "__main__":
    asyncio.run(main())
```
Then, run the following:
```bash
python3 main.py
```
docs/utils/chunking.md
# Chunk text with SQL functions
The `ai.chunk_text` and `ai.chunk_text_recursively` functions allow you to split text into smaller chunks.
## Example usage
Given a table like this
```sql
create table blog
( id int not null primary key
, title text
, body text
);
```
You can chunk the text in the `body` column like this
```sql
select
b.id
, b.title
, c.seq
, c.chunk
from blog b
cross join lateral ai.chunk_text(b.body) c
order by b.id, c.seq
;
```
## chunk_text
Splits text into chunks using a separator.
This uses the [CharacterTextSplitter](https://python.langchain.com/api_reference/text_splitters/character/langchain_text_splitters.character.CharacterTextSplitter.html) from the `langchain_text_splitters` Python package.
| Name | Type | Default | Required | Description |
|---------------------|-------|----------|----------|-----------------------------------------------------------|
| input | text | - | ✔ | The text to split into chunks |
| chunk_size | int | *4000 | ✖ | The target size of a chunk in characters |
| chunk_overlap | int | *200 | ✖ | The target amount of overlapping characters in each chunk |
| separator | text | *E'\n\n' | ✖ | The text to split on |
| is_separator_regex | text | false | ✖ | `true` if the separator represents a regular expression |
*defaulted by the underlying Python implementation rather than in SQL
```sql
select *
from ai.chunk_text
($$if two witches watch two watches, which witch watches which watch?$$
, separator=>' '
, chunk_size=>10
, chunk_overlap=>0
);
```
The query above will return the results below:
```
seq | chunk
-----+-----------
0 | if two
1 | witches
2 | watch two
3 | watches,
4 | which
5 | witch
6 | watches
7 | which
8 | watch?
(9 rows)
```
## chunk_text_recursively
Recursively splits text into chunks using multiple separators in sequence.
This uses the [RecursiveCharacterTextSplitter](https://python.langchain.com/api_reference/text_splitters/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) from the `langchain_text_splitters` Python package.
| Name | Type | Default | Required | Description |
|--------------------|--------|---------------------------------|----------|-----------------------------------------------------------|
| input | text | - | ✔ | The text to split into chunks |
| chunk_size | int | *4000 | ✖ | The target size of a chunk in characters |
| chunk_overlap | int | *200 | ✖ | The target amount of overlapping characters in each chunk |
| separators | text[] | *array[E'\n\n', E'\n', ' ', ''] | ✖ | An array of texts to split on |
| is_separator_regex | bool | false | ✖ | `true` if the separators represent regular expressions |
*defaulted by the underlying Python implementation rather than in SQL
```sql
select *
from ai.chunk_text_recursively
($$if two witches watch two watches, which witch watches which watch?$$
, separators=>array[' ', '.', '?']
, chunk_size=>2
, chunk_overlap=>0
);
```
The query above will return the results below:
```
seq | chunk
-----+-----------
0 | if
1 | two
2 | witches
3 | watch
4 | two
5 | watches,
6 | which
7 | witch
8 | watches
9 | which
10 | watch
11 | ?
(12 rows)
```
|
docs/vectorizer/document-embeddings.md
|
Document embeddings in pgai
|
# Document embeddings in pgai
This is a comprehensive walkthrough of how embedding generation for documents works in pgai. If you want to get started quickly, check out the [runnable example](/examples/embeddings_from_documents).
To process documents, you need to:
1. [Set up document storage](#setting-up-document-storage) - This creates a table that stores metadata about the documents: either a reference to the document in an external system, or the document content itself.
2. [Create a vectorizer](#configuring-document-vectorizers) - This defines how the documents are processed and embedded.
Then you can [query the generated embeddings](#querying-document-embeddings) using the usual pgvector semantic search.
In this guide, we'll also cover how to [monitor and troubleshoot](#monitoring-and-troubleshooting) your vectorizers.
If you are storing documents in AWS S3, you can use the [S3 documentation](s3-documents.md) to learn more about how to configure S3 for document storage and synchronize your S3 buckets with your document table.
## Introduction
While RAG (Retrieval Augmented Generation) applications typically require text data, real-world scenarios often involve documents that:
- Are stored in external systems like S3 or local filesystems
- Come in various formats (PDF, DOCX, XLSX, EPUB, etc.)
- Change frequently, requiring synchronization between sources and embeddings
pgai's document vectorization system supports directly embedding documents via a declarative approach that handles loading, parsing, chunking, and embedding files.
## Setting up document storage
The foundation of document management in pgai is a document metadata table in PostgreSQL. Documents can either be stored directly within a table using a BYTEA column, or alternatively, the table can hold URIs pointing to files located in an external storage system such as S3. You can also include any additional metadata required by your application in this table.
If your application already handles documents, it's likely that you already have such a table which can be used as a source for the vectorizer.
If you don't have such a table yet and are storing documents in S3 we have a [guide on how to sync S3 to a document table](s3-documents.md#syncing-s3-to-a-documents-table).
### Minimal document table
A minimal document source table requires only an identifier and a URI pointing to the document; these can be the same column. The `updated_at` column is optional but recommended, as it allows you to trigger re-embedding when the document is updated:
```sql
CREATE TABLE document (
uri TEXT PRIMARY KEY,
updated_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP
);
-- Example records
INSERT INTO document (uri) VALUES
('s3://my-bucket/documents/product-manual.pdf'),
('s3://my-bucket/documents/api-reference.md');
```
### Extended document table
For real applications, you will often want to include additional metadata that you might need to filter or classify documents.
```sql
CREATE TABLE document (
id SERIAL PRIMARY KEY,
title TEXT NOT NULL,
uri TEXT NOT NULL,
content_type TEXT,
created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
owner_id INTEGER,
access_level TEXT,
tags TEXT[]
);
-- Example with rich metadata
INSERT INTO document (title, uri, content_type, owner_id, access_level, tags) VALUES
('Product Manual', 's3://my-bucket/documents/product-manual.pdf', 'application/pdf', 12, 'internal', ARRAY['product', 'reference']),
('API Reference', 's3://my-bucket/documents/api-reference.md', 'text/markdown', 8, 'public', ARRAY['api', 'developer']);
```
#### Storing document content directly
For smaller documents or systems without external storage, you can also store content directly as binary data:
```sql
CREATE TABLE document (
id SERIAL PRIMARY KEY,
file BYTEA
);
-- Example of inserting a binary document
INSERT INTO document (file) VALUES (pg_read_binary_file('/tmp/sample.pdf')::bytea);
```
## Configuring document vectorizers
A vectorizer is a declarative configuration that defines how documents are processed, chunked, and embedded. pgai's vectorizer system automatically keeps document embeddings in sync with source documents. You can find the reference for vectorizers in the [API Reference documentation](./api-reference.md).
### Example vectorizer configuration
Here's a complete vectorizer configuration for documents stored in S3:
```sql
SELECT ai.create_vectorizer(
'document'::regclass,
loading => ai.loading_uri(column_name => 'uri'),
parsing => ai.parsing_auto(), -- Optional: auto-detects parser, this is the default and can also be omitted
chunking => ai.chunking_recursive_character_text_splitter(
chunk_size => 700,
separators => array[E'\n## ', E'\n### ', E'\n#### ', E'\n- ', E'\n1. ', E'\n\n', E'\n', '.', '?', '!', ' ', '', '|']
),
embedding => ai.embedding_openai('text-embedding-3-small', 768),
destination => ai.destination_table('document_embeddings')
);
```
This configuration:
1. Uses the `document` table as the source
2. Loads documents from URIs in the `uri` column
3. Automatically detects and parses document formats
4. Splits text into chunks at common markdown breaking points (headers, paragraphs, etc.)
5. Generates embeddings using OpenAI's `text-embedding-3-small` model
You can see more examples in the [more example vectorizer configurations](#appendix-a-more-example-vectorizer-configurations) section.
### Explanation of the components
#### Loading
pgai supports loading documents from references to external storage systems using the `ai.loading_uri` function or from a BYTEA column using the `ai.loading_column` function.
**1. Loading from URI columns (`ai.loading_uri`)**
```sql
loading => ai.loading_uri(
column_name => 'uri',
retries => 6, -- Optional: number of retry attempts (default: 6)
aws_role_arn => 'arn:aws:iam::123456789012:role/S3AccessRole' -- Optional: for S3 access using role assumption
)
```
This is what you will usually use to load any kind of document. It allows you to download documents from:
- S3 URLs (e.g. `s3://bucket/path/to/file.pdf`)
- HTTP/HTTPS URLs (e.g. `https://example.com/file.pdf`)
- Local files on the worker machine (e.g. `/path/to/file.pdf`)
**Storing documents in AWS S3**: Timescale Cloud and a self-hosted pgai installation support AWS S3 URLs out of the box. Check the [S3 documentation](./s3-documents.md) for more information on how to authenticate and configure S3.
**Other storage options:** We use the [smart_open](https://pypi.org/project/smart-open/) library to connect to the URI. That means any URI that works with smart_open should work (including Google Cloud, Azure, etc.); however, only AWS S3 is supported on Timescale Cloud. In a self-hosted installation, other providers should work, but you need to install the appropriate smart_open dependencies and test them yourself. See the [smart-open documentation](https://pypi.org/project/smart-open/) for details.
**2. Loading from BYTEA columns (`ai.loading_column`)**
```sql
loading => ai.loading_column(
column_name => 'content'
)
```
Alternatively, you can use `loading_column` to load documents directly from a BYTEA column. This is useful if you already have the document content in your database and don't want to use any kind of external storage.
#### Parsing
To make documents LLM-friendly, you need to parse them into markdown. pgai currently supports two different parsers: pymupdf and docling. You won't have to worry about this most of the time, as `ai.parsing_auto` automatically selects the appropriate parser based on the file type, but you can also select one explicitly.
You can find more information about the parsers in the [parsing reference](./api-reference.md#parsing-configuration).
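If you want to force a particular parser rather than rely on auto-detection, pass it explicitly. A minimal sketch, reusing the `document` table from above (the destination name here is illustrative):
```sql
SELECT ai.create_vectorizer(
    'document'::regclass,
    loading => ai.loading_uri(column_name => 'uri'),
    parsing => ai.parsing_docling(), -- force docling instead of auto-detection
    embedding => ai.embedding_openai('text-embedding-3-small', 768),
    destination => ai.destination_table('document_embeddings_docling')
);
```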
#### Chunking
Chunking divides documents into smaller pieces for embedding. Since the content is parsed to markdown, you will want to use a splitter that respects the markdown structure, e.g. a setup like this:
```sql
chunking => ai.chunking_recursive_character_text_splitter(
chunk_size => 700,
chunk_overlap => 150,
separators => array[
E'\n## ', -- Split on header level 2
E'\n### ', -- Split on header level 3
E'\n#### ', -- Split on header level 4
E'\n- ', -- Split on list items
E'\n1. ', -- Split on numbered list items
E'\n\n', -- Split on paragraphs
E'\n', -- Split on lines
'.', -- Split on sentences
'?', '!' -- Split on question/exclamation
]
)
```
This configuration progressively tries more granular separators to achieve the target chunk size, preserving document structure where possible.
For more information about chunking, see the [chunking reference](./api-reference.md#chunking-configuration).
#### Embedding
pgai supports a wide range of embedding providers. You can find the reference for the embedding providers in the [embedding documentation](./api-reference.md#embedding-configuration).
The embedding providers all follow a similar pattern, e.g. this is how you would use the OpenAI embedding provider:
**OpenAI**
```sql
embedding => ai.embedding_openai(
'text-embedding-3-small', -- Model name
768 -- Embedding dimensions
)
```
You can see more examples in the [more example vectorizer configurations](#appendix-a-more-example-vectorizer-configurations) section.
## Querying document embeddings
Once your vectorizer is created, pgai automatically generates a target table with your embeddings and a view that joins the embeddings with the original document table. We configure the view name in the `ai.destination_table('document_embeddings')` [destination configuration](./api-reference.md#destination-configuration). The view contains all the columns from the original document table plus the following additional columns:
| Column | Type | Description |
|----------------|--------|-----------------------------------------------------------------|
| embedding_uuid | UUID | Unique identifier for the embedding |
| chunk | TEXT | The text segment that was embedded |
| embedding | VECTOR | The vector representation of the chunk |
| chunk_seq | INT | Sequence number of the chunk within the document, starting at 0 |
### Simple semantic search
To search for similar documents:
```sql
-- Basic similarity search
SELECT title, chunk, embedding <=> <search_embedding> AS distance
FROM document_embeddings
ORDER BY distance
LIMIT 5;
```
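The `<search_embedding>` placeholder stands for a query vector that your application generates with the same embedding model and dimensions. For example, passed as a bind parameter and cast to `vector`:
```sql
-- $1 is the query embedding computed in your application
SELECT title, chunk, embedding <=> $1::vector AS distance
FROM document_embeddings
ORDER BY distance
LIMIT 5;
```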
### Combine vector similarity with metadata filters
One of the most powerful features of pgai's document approach is the ability to combine vector similarity with traditional SQL filters:
```sql
-- Find recent documentation about configuration
SELECT title, chunk
FROM document_embeddings
WHERE
updated_at > (CURRENT_DATE - INTERVAL '30 days')
AND title ILIKE '%configuration%'
ORDER BY embedding <=> <search_embedding>
LIMIT 5;
```
### Advanced query patterns
**Join with application data:**
```sql
-- Find documents relevant to customers with pending support tickets
SELECT c.name, d.title, e.chunk
FROM customers c
JOIN support_tickets t ON c.id = t.customer_id
JOIN customer_documentation cd ON c.id = cd.customer_id
JOIN document_embeddings e ON cd.document_id = e.id
WHERE t.status = 'pending'
ORDER BY e.embedding <=> <search_embedding>
LIMIT 10;
```
## Monitoring and Troubleshooting
### Monitoring failures and retries
You can use the usual vectorizer monitoring tools to check the status of your vectorizers:
**Check pending items**:
```sql
select * from ai.vectorizer_status;
```
**Check for failed items**:
```sql
-- View all vectorizer errors
SELECT * FROM ai.vectorizer_errors;
-- View errors for a specific vectorizer
SELECT * FROM ai.vectorizer_errors WHERE id = <vectorizer_id>;
```
The error table includes detailed information about what went wrong.
**Check the queue and retry counts**:
```sql
SELECT * FROM ai._vectorizer_q_1;
```
The queue name can be found in the `ai.vectorizer` table.
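For example, you can look up the queue table for each vectorizer with a query along these lines (a sketch; it assumes the `queue_schema` and `queue_table` columns that correspond to the parameters of `ai.create_vectorizer`):
```sql
SELECT id, queue_schema, queue_table
FROM ai.vectorizer;
```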
### Common issues and solutions
**Embedding API rate limits**
If you encounter rate limits with your embedding provider:
- Adjust the processing batch size and concurrency as explained in the [processing reference](./api-reference.md#processing-configuration). In general, we recommend a low batch size (e.g. 1) and a high concurrency (e.g. 10) for documents, since parsing takes some time (see the sketch after this list).
- Consider upgrading API tiers or using a different provider
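A minimal sketch of such a configuration, as a variant of the vectorizer shown earlier and assuming `ai.processing_default` accepts `batch_size` and `concurrency` as described in the processing reference:
```sql
SELECT ai.create_vectorizer(
    'document'::regclass,
    loading => ai.loading_uri(column_name => 'uri'),
    embedding => ai.embedding_openai('text-embedding-3-small', 768),
    -- low batch size, high concurrency: each batch is parsed (slow), so many
    -- small batches in flight keep the embedding API below its rate limits
    processing => ai.processing_default(batch_size => 1, concurrency => 10),
    destination => ai.destination_table('document_embeddings')
);
```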
**Document limitations**
- The pgai document vectorizer is designed for small to medium-sized documents. Large documents take a long time to be parsed and embedded. The page limit for PDFs on Timescale Cloud is ~50 pages. For larger documents, consider splitting them into smaller chunks.
- Supported documents depend on the parser that you are using. Check the [parser reference](./api-reference.md#parsing-configuration) to see what types of documents are supported by the parser you are using.
## Appendix A: More example vectorizer configurations
### Document processing from S3 with OpenAI embeddings
```sql
-- Create document table
CREATE TABLE documentation (
id SERIAL PRIMARY KEY,
title TEXT NOT NULL,
file_uri TEXT NOT NULL,
created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP
);
-- Add documents
INSERT INTO documentation (title, file_uri) VALUES
('Product Manual', 's3://company-docs/manuals/product-v2.pdf'),
('API Reference', 's3://company-docs/api/reference.md');
-- Create vectorizer
SELECT ai.create_vectorizer(
'documentation'::regclass,
loading => ai.loading_uri(column_name => 'file_uri'),
parsing => ai.parsing_auto(), -- Auto-detects parser, this is the default and can also be omitted
chunking => ai.chunking_recursive_character_text_splitter(
chunk_size => 700,
separators => array[E'\n## ', E'\n### ', E'\n#### ', E'\n- ', E'\n1. ', E'\n\n', E'\n', '.', '?', '!', ' ', '', '|']
),
embedding => ai.embedding_openai('text-embedding-3-small', 768)
);
```
### Binary documents with ollama embeddings
```sql
-- Create document table with binary storage
CREATE TABLE internal_document (
id SERIAL PRIMARY KEY,
title TEXT NOT NULL,
content BYTEA NOT NULL,
created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP
);
-- Add documents
INSERT INTO internal_document (title, content) VALUES
('Internal Report', pg_read_binary_file('/path/to/report.pdf')::bytea),
('Internal Memo', pg_read_binary_file('/path/to/memo.docx')::bytea);
-- Create vectorizer
SELECT ai.create_vectorizer(
'internal_document'::regclass,
loading => ai.loading_column(column_name => 'content'),
chunking => ai.chunking_recursive_character_text_splitter(
chunk_size => 500,
chunk_overlap => 100,
separators => array[E'\n\n', E'\n', '.', ' ', '']
),
embedding => ai.embedding_ollama('nomic-embed-text', 768, base_url => 'http://ollama:11434')
);
```
|
docs/vectorizer/adding-embedding-integration.md
|
Adding a Vectorizer embedding integration
|
# Adding a Vectorizer embedding integration
We welcome contributions to add new vectorizer embedding integrations.
The vectorizer consists of two components: the configuration, and the
vectorizer worker.
## Configuration
The vectorizer configuration lives in the database, in the `ai.vectorizer`
table. The `ai.create_vectorizer` function creates and inserts this
configuration into the table. When adding a new integration, only the argument
passed to the `embedding` parameter of `ai.create_vectorizer` is relevant. This
value is `jsonb` generated by the `ai.embedding_*` family of functions.
To add a new integration, add a new integration-specific function to the pgai
extension. This function generates the jsonb configuration for the new
integration. Refer to the existing `ai.embedding_openai` and
`ai.embedding_ollama` functions for examples of what these look like.
The configuration function should minimise mandatory arguments, while allowing
as many optional arguments as needed. Avoid using non-null default values for
optional arguments, as leaving a value unconfigured in the vectorizer may be
preferable, allowing it to be set in the vectorizer worker instead.
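As a rough illustration, a configuration function for a hypothetical `acme` provider could look like the sketch below; the exact argument conventions and jsonb keys should be copied from the existing `ai.embedding_openai` and `ai.embedding_ollama` functions rather than from this example:
```sql
-- sketch only: 'acme' is a hypothetical provider, and the jsonb keys shown
-- here should mirror whatever the existing ai.embedding_* functions emit
create or replace function ai.embedding_acme
( model pg_catalog.text
, dimensions pg_catalog.int4
, api_key_name pg_catalog.text default null -- optional arguments default to null
) returns pg_catalog.jsonb
as $$
    select pg_catalog.jsonb_build_object
    ( 'implementation', 'acme'
    , 'config_type', 'embedding'
    , 'model', model
    , 'dimensions', dimensions
    , 'api_key_name', api_key_name
    )
$$ language sql immutable security invoker
set search_path to pg_catalog, pg_temp;
```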
Update the implementation of `ai._validate_embedding` to account for the new
integration. Update the tests to account for the new function.
## Vectorizer Worker
The vectorizer worker reads the database's vectorizer configuration at runtime
and turns it into a `pgai.vectorizer.Config`.
To add a new integration, add a new file containing the embedding class
with fields corresponding to the database's jsonb configuration into the
[embedders directory]. See
the existing implementations for examples of how to do this. Implement the
`Embedder` class' abstract methods. Use first-party python libraries for the
integration, if available. If no first-party python libraries are available,
use direct HTTP requests.
Remember to include the import line of your recently created class into the
[embedders \_\_init\_\_.py].
Add tests which perform end-to-end testing of the new integration. There are
two options for handling API calls to the integration API:
1. Use [vcr.py] to cache real requests to the API
2. Run against the real API
At minimum the integration should use option 1: vcr.py. Option 2 should be
used conservatively. We will determine on a case-by-case basis what level of
testing we would like.
[vcr.py]:https://vcrpy.readthedocs.io/en/latest/
[embedders directory]:/projects/pgai/pgai/vectorizer/embedders
[embedders \_\_init\_\_.py]:/projects/pgai/pgai/vectorizer/embedders/__init__.py
## pgai library
The pgai library exposes helpers to create a vectorizer via pure python.
The classes for this are autogenerated via code generation. To update the classes
with a new integration look into the code generator docs in [/projects/pgai/pgai/vectorizer/generate](/projects/pgai/pgai/vectorizer/generate/README.md).
## Documentation
Ensure that the new integration is documented:
- Document the new database function in [/docs/vectorizer/api-reference.md](/docs/vectorizer/api-reference.md).
- Document any changes to the vectorizer worker in [/docs/vectorizer/worker.md](/docs/vectorizer/worker.md).
- Add a new row in [Supported features in each model](/docs/README.md#pgai-model-calling) for your worker.
|
docs/vectorizer/api-reference.md
|
pgai Vectorizer API reference
|
# pgai Vectorizer API reference
This page provides an API reference for Vectorizer functions. For an overview
of Vectorizer and how it works, see the [Vectorizer Guide](/docs/vectorizer/overview.md).
A vectorizer provides you with a powerful and automated way to generate and
manage LLM embeddings for your PostgreSQL data. Here's a summary of what you
gain from Vectorizers:
- **Automated embedding generation**: you can create a vectorizer for a specified
table, which automatically generates embeddings for the data in that table and
keeps them in sync with the source data.
- **Automatic synchronization**: a vectorizer creates triggers on the source table,
ensuring that embeddings are automatically updated when the source data
changes.
- **Background processing**: the process to create embeddings runs
asynchronously in the background. This minimizes the impact on regular database
operations such as INSERT, UPDATE, and DELETE.
- **Scalability**: a vectorizer processes data in batches and can run concurrently.
This enables vectorizers to handle large datasets efficiently.
- **Configurable embedding process**: a vectorizer is highly configurable,
allowing you to specify:
- The embedding model and dimensions. For example, the `nomic-embed-text` model in Ollama.
- Chunking strategies for text data.
- Formatting templates for combining multiple fields.
- Indexing options for efficient similarity searches.
- Scheduling for background processing.
- **Integration with multiple AI providers**: a vectorizer supports different
embedding providers, initially including OpenAI, with more planned for the
future.
- **Efficient storage and retrieval**: embeddings are stored in a separate table
with appropriate indexing, optimizing for vector similarity searches.
- **View creation**: a view is automatically created to join the original data with
its embeddings, making it easy to query and use the embedded data.
- **Fine-grained access control**: you can specify the roles that have
access to a vectorizer and its related objects.
- **Monitoring and management**: monitor the vectorizer's queue, enable/disable scheduling, and manage the vectorizer
lifecycle.
Vectorizer significantly simplifies the process of incorporating AI-powered
semantic search and analysis capabilities into existing PostgreSQL databases, making it easier for you to leverage the power of LLMs in your data workflows.
Vectorizer offers the following APIs:
**Install or upgrade database dependencies**
- [Install or upgrade](#install-or-upgrade-the-database-objects-necessary-for-vectorizer) the database objects necessary for vectorizer.
**Create and configure vectorizers**
- [Create vectorizers](#create-vectorizers): automate the process of creating embeddings for table data.
- [Loading configuration](#loading-configuration): define the source of the data to embed. You can load data from a column in the source table, or from a file referenced in a column of the source table.
- [Parsing configuration](#parsing-configuration): for documents, define the way the data is parsed after it is loaded.
- [Chunking configuration](#chunking-configuration): define the way text data is split into smaller, manageable pieces
before being processed for embeddings.
- [Formatting configuration](#formatting-configuration): configure the way data from the source table is formatted
before it is sent for embedding.
- [Embedding configuration](#embedding-configuration): specify the LLM provider, model, and the parameters to be
used when generating the embeddings
- [Indexing configuration](#indexing-configuration): specify the way generated embeddings should be indexed for
efficient similarity searches.
- [Scheduling configuration](#scheduling-configuration): configure when and how often the vectorizer should run in order
to process new or updated data.
- [Processing configuration](#processing-configuration): specify the way the vectorizer should process data when
generating embeddings.
**Manage vectorizers**
- [Enable and disable vectorizer schedules](#enable-and-disable-vectorizer-schedules): temporarily pause or resume the
automatic processing of embeddings, without having to delete or recreate the vectorizer configuration.
- [Drop a vectorizer](#drop-a-vectorizer): remove a vectorizer that you created previously, and clean up the associated
resources.
**Monitor vectorizers**
- [View vectorizer status](#view-vectorizer-status): monitoring tools in pgai that provide insights into the state and
performance of vectorizers.
## Install or upgrade the database objects necessary for vectorizer
You can install or upgrade the database objects necessary for vectorizer by running the following CLI command:
```bash
pgai install -d DB_URL
```
or by running the following python code:
```python
import pgai
pgai.install(DB_URL)
```
This will create the necessary catalog tables and functions in your database. All of the
database objects will be installed in the `ai` schema.
The version of the database objects corresponds to the version of the `pgai` python package you have installed. To upgrade, first upgrade the python package with `pip install -U pgai` and then run `pgai.install(DB_URL)` again.
## Create vectorizers
You use the `ai.create_vectorizer` function in pgai to set up and configure an automated system
for generating and managing embeddings for a specific table in your database.
The purpose of `ai.create_vectorizer` is to:
- Automate the process of creating embeddings for table data.
- Set up necessary infrastructure such as tables, views, triggers, or columns for embedding management.
- Configure the embedding generation process according to user specifications.
- Integrate with AI providers for embedding creation.
- Set up scheduling for background processing of embeddings.
### Example usage
By using `ai.create_vectorizer`, you can quickly set up a sophisticated
embedding system tailored to your specific needs, without having to manually
create and manage all the necessary database objects and processes.
#### Example 1: Table destination (default)
This approach creates a separate table to store embeddings and a view that joins with the source table:
```sql
SELECT ai.create_vectorizer(
'website.blog'::regclass,
name => 'website_blog_vectorizer',
loading => ai.loading_column('contents'),
embedding => ai.embedding_ollama('nomic-embed-text', 768),
chunking => ai.chunking_character_text_splitter(128, 10),
formatting => ai.formatting_python_template('title: $title published: $published $chunk'),
grant_to => ai.grant_to('bob', 'alice'),
destination => ai.destination_table(
target_schema => 'website',
target_table => 'blog_embeddings_store',
view_name => 'blog_embeddings'
)
);
```
This function call:
1. Sets up a vectorizer named 'website_blog_vectorizer' for the `website.blog` table.
2. Creates a separate table `website.blog_embeddings_store` to store embeddings.
3. Creates a view `website.blog_embeddings` joining the source and embeddings.
4. Loads the `contents` column.
5. Uses the Ollama `nomic-embed-text` model to create 768 dimensional embeddings.
6. Chunks the content into 128-character pieces with a 10-character overlap.
7. Formats each chunk with a `title` and a `published` date.
8. Grants necessary permissions to the roles `bob` and `alice`.
#### Example 2: Column destination
Column destination places the embedding in a separate column on the source table. It can only be used when the vectorizer does not perform chunking, because it requires a one-to-one relationship between the source data and the embedding. This is useful in cases where you know the source text is short (as is common if the chunking has already been done upstream in your data pipeline).
The workflow is that your application inserts data into the table with a NULL in the embedding column. The vectorizer will then read the row, generate the embedding and update the row with the correct value in the embedding column.
```sql
SELECT ai.create_vectorizer(
'website.product_descriptions'::regclass,
name => 'product_descriptions_vectorizer',
loading => ai.loading_column('description'),
embedding => ai.embedding_openai('text-embedding-3-small', 768),
chunking => ai.chunking_none(), -- Required for column destination
grant_to => ai.grant_to('marketing_team'),
destination => ai.destination_column('description_embedding')
);
```
This function call:
1. Sets up a vectorizer named 'product_descriptions_vectorizer' for the `website.product_descriptions` table.
2. Adds a column called `description_embedding` directly to the source table.
3. Loads the `description` column.
4. Doesn't chunk the content (required for column destination).
5. Uses OpenAI's embedding model to create 768 dimensional embeddings.
6. Grants necessary permissions to the role `marketing_team`.
The function returns an integer identifier for the vectorizer created, but you can also reference it by name
in other management functions.
### Parameters
`ai.create_vectorizer` takes the following parameters:
| Name | Type | Default | Required | Description |
|------------------|--------------------------------------------------------|-----------------------------------|----------|----------------------------------------------------------------------------------------------------|
| source | regclass | - | ✔ | The source table that embeddings are generated for. |
| name | text | Auto-generated | ✖ | A unique name for the vectorizer. If not provided, it's auto-generated based on the destination type:<br>- For table destination: `[target_schema]_[target_table]`<br>- For column destination: `[source_schema]_[source_table]_[embedding_column]`<br>Must follow snake_case pattern `^[a-z][a-z_0-9]*$` |
| destination | [Destination configuration](#destination-configuration)| `ai.destination_table()` | ✖ | Configure how the embeddings will be stored. Two options available:<br>- `ai.destination_table()` (default): Creates a separate table to store embeddings<br>- `ai.destination_column()`: Adds an embedding column directly to the source table |
| embedding | [Embedding configuration](#embedding-configuration) | - | ✔ | Set how to embed the data. |
| loading | [Loading configuration](#loading-configuration) | - | ✔ | Set the way to load the data from the source table, using functions like `ai.loading_column()`. |
| parsing | [Parsing configuration](#parsing-configuration) | ai.parsing_auto() | ✖ | Set the way to parse the data, using functions like `ai.parsing_auto()`. |
| chunking | [Chunking configuration](#chunking-configuration) | `ai.chunking_recursive_character_text_splitter()` | ✖ | Set the way to split text data, using functions like `ai.chunking_character_text_splitter()`. |
| indexing | [Indexing configuration](#indexing-configuration) | `ai.indexing_default()` | ✖ | Specify how to index the embeddings. For example, `ai.indexing_diskann()` or `ai.indexing_hnsw()`. |
| formatting | [Formatting configuration](#formatting-configuration) | `ai.formatting_python_template()` | ✖ | Define the data format before embedding, using `ai.formatting_python_template()`. |
| scheduling | [Scheduling configuration](#scheduling-configuration) | `ai.scheduling_default()` | ✖ | Set how often to run the vectorizer. For example, `ai.scheduling_timescaledb()`. |
| processing       | [Processing configuration](#processing-configuration)  | `ai.processing_default()`         | ✖        | Configure the way to process the embeddings.                                                         |
| queue_schema | name | - | ✖ | Specify the schema where the work queue table is created. |
| queue_table | name | - | ✖ | Specify the name of the work queue table. |
| grant_to         | [Grant To configuration](#grant-to-configuration)       | `ai.grant_to_default()`           | ✖        | Specify which users should be able to use objects created by the vectorizer.                         |
| enqueue_existing | bool | `true` | ✖ | Set to `true` if existing rows should be immediately queued for embedding. |
| if_not_exists | bool | `false` | ✖ | Set to `true` to avoid an error if the vectorizer already exists. |
#### Returns
The `int` id of the vectorizer that you created. You can also reference the vectorizer by its name in management functions.
## Destination configuration
You use the destination configuration functions to define how and where the embeddings will be stored. There are two options available:
- [ai.destination_table](#aidestination_table): Creates a separate table to store embeddings (default behavior)
- [ai.destination_column](#aidestination_column): Adds an embedding column directly to the source table
### ai.destination_table
You use `ai.destination_table` to store embeddings in a separate table. This is the default behavior, where:
- A new table is created to store the embeddings
- A view is created that joins the source table with the embeddings
- Multiple chunks can be created per row (using chunking)
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
destination => ai.destination_table(
target_schema => 'public',
target_table => 'my_table_embeddings_store',
view_schema => 'public',
view_name => 'my_table_embeddings'
),
-- other parameters...
);
```
For simpler configuration with defaults:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
destination => ai.destination_table('my_table_embeddings'),
-- other parameters...
);
```
#### Parameters
`ai.destination_table` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| destination | name | - | ✖ | The base name for the view and table. The view is named `<destination>`, the embedding table is named `<destination>_store`. |
| target_schema | name | Source table schema | ✖ | The schema where the embeddings table will be created. |
| target_table | name | `<source_table>_embedding_store` or `<destination>_store` | ✖ | The name of the table where embeddings will be stored. |
| view_schema | name | Source table schema | ✖ | The schema where the view will be created. |
| view_name | name | `<source_table>_embedding` or `<destination>` | ✖ | The name of the view that joins source and embeddings tables. |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
### ai.destination_column
You use `ai.destination_column` to store embeddings directly in the source table as a new column. This approach can only be used when the vectorizer does not perform chunking because it requires a one-to-one relationship between the source data and the embedding. This is useful in cases where you know the source text is short (as is common if the chunking has already been done upstream in your data pipeline).
This approach:
- Adds a vector column directly to the source table
- Does not create a separate view
- Requires chunking to be set to `ai.chunking_none()` (no chunking)
- Stores exactly one embedding per row
The workflow is that your application inserts data into the table with a NULL in the embedding column. The vectorizer will then read the row, generate the embedding and update the row with the correct value in the embedding column.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
destination => ai.destination_column('content_embedding'),
chunking => ai.chunking_none(),
-- other parameters...
);
```
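Under the workflow described above, the application then inserts rows and simply leaves the new embedding column NULL. A hedged sketch against the hypothetical `my_table`, assuming it has a `content` column:
```sql
-- the vectorizer populates content_embedding asynchronously after the insert
INSERT INTO my_table (content) VALUES ('a short product description');

-- once the vectorizer worker has processed the row, the column is filled in
SELECT content, content_embedding IS NOT NULL AS embedded
FROM my_table;
```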
#### Parameters
`ai.destination_column` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| embedding_column | name | - | ✔ | The name of the column to be added to the source table for storing embeddings. |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
## Loading configuration
You use the loading configuration functions in `pgai` to define the way data is loaded from the source table.
The loading functions are:
- [ai.loading_column](#ailoading_column)
- [ai.loading_uri](#ailoading_uri)
### ai.loading_column
You use `ai.loading_column` to load the data to embed directly from a column in the source table.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
loading => ai.loading_column('contents'),
-- other parameters...
);
```
#### Parameters
`ai.loading_column` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| column_name | text | - | ✔ | The name of the column containing the data to load. |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
### ai.loading_uri
You use `ai.loading_uri` to load the data to embed from a file that is referenced in a column of the source table.
This file path is internally passed to [smart_open](https://github.com/piskvorky/smart_open), so it supports any protocol that smart_open supports, including:
- Local files
- Amazon S3
- Google Cloud Storage
- Azure Blob Storage
- HTTP/HTTPS
- SFTP
- and [many more](https://github.com/piskvorky/smart_open/blob/master/help.txt)
#### Environment configuration
You just need to ensure the vectorizer worker has the correct credentials to access the file, for example via environment variables. Here is an example for AWS S3:
```bash
export AWS_ACCESS_KEY_ID='your_access_key'
export AWS_SECRET_ACCESS_KEY='your_secret_key'
export AWS_REGION='your_region' # optional
```
Make sure these environment variables are properly set in the environment where the PGAI vectorizer worker runs.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
loading => ai.loading_uri('file_uri_column_name'),
-- other parameters...
);
```
#### Parameters
`ai.loading_uri` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| column_name | text | - | ✔ | The name of the column containing the file path. |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
## Parsing configuration
You use the parsing configuration functions in `pgai` to define how data is parsed after document loading. This is useful for non-textual formats such as PDF documents.
The parsing functions are:
- [ai.parsing_auto](#aiparsing_auto): Automatically selects the appropriate parser based on file type.
- [ai.parsing_none](#aiparsing_none): Skips the parsing step. Only appropriate for textual data.
- [ai.parsing_docling](#aiparsing_docling): More powerful alternative to PyMuPDF. See [Docling](https://docling-project.github.io/docling/usage/supported_formats/) for supported formats.
- [ai.parsing_pymupdf](#aiparsing_pymupdf): See [PyMuPDF](https://pymupdf.readthedocs.io/en/latest/) for supported formats.
### ai.parsing_auto
You use `ai.parsing_auto` to automatically select an appropriate parser based on detected file types.
Documents with unrecognizable formats won't be processed and will generate an error (recorded in the `ai.vectorizer_errors` table).
The parser selection works by examining file extensions and content types:
- For PDF files, images, Office documents (DOCX, XLSX, etc.): Uses docling
- For EPUB and MOBI (e-book formats): Uses pymupdf
- For text formats (TXT, MD, etc.): No parser is used (content is read directly)
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
parsing => ai.parsing_auto(),
-- other parameters...
);
```
#### Parameters
`ai.parsing_auto` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| None | - | - | - | - |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
### ai.parsing_none
You use `ai.parsing_none` to skip the parsing step. Only appropriate for textual data.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
parsing => ai.parsing_none(),
-- other parameters...
);
```
#### Parameters
`ai.parsing_none` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| None | - | - | - | - |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
### ai.parsing_docling
You use `ai.parsing_docling` to parse the data provided by the loader using [docling](https://docling-project.github.io/docling/).
Docling is a more robust and thorough document parsing library that:
- Uses OCR capabilities to extract text from images
- Can parse complex documents with tables and multi-column layouts
- Supports Office formats (DOCX, XLSX, etc.)
- Preserves document structure better than other parsers
- Converts documents to markdown format
Note that docling uses ML models for improved parsing, which makes it slower than simpler parsers like pymupdf.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
parsing => ai.parsing_docling(),
-- other parameters...
);
```
#### Parameters
`ai.parsing_docling` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| None | - | - | - | - |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
### ai.parsing_pymupdf
You use `ai.parsing_pymupdf` to parse the data provided by the loader using [pymupdf](https://pymupdf.readthedocs.io/en/latest/).
PyMuPDF is a faster, simpler document parser that:
- Processes PDF documents with basic structure preservation
- Supports e-book formats like EPUB and MOBI
- Is generally faster than docling for simpler documents
- Works well for documents with straightforward layouts
Choose pymupdf when processing speed is more important than perfect structure preservation.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
parsing => ai.parsing_pymupdf(),
-- other parameters...
);
```
#### Parameters
`ai.parsing_pymupdf` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| None | - | - | - | - |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
## Chunking configuration
You use the chunking configuration functions in `pgai` to define the way text data is split into smaller,
manageable pieces before being processed for embeddings. This is crucial because many embedding models have input size
limitations, and chunking allows for processing of larger text documents while maintaining context.
By using chunking functions, you can fine-tune how your text data is
prepared for embedding, ensuring that the chunks are appropriately sized and
maintain necessary context for their specific use case. This is particularly
important for maintaining the quality and relevance of the generated embeddings,
especially when dealing with long-form content or documents with specific
structural elements.
The chunking functions are:
- [ai.chunking_character_text_splitter](#aichunking_character_text_splitter)
- [ai.chunking_recursive_character_text_splitter](#aichunking_recursive_character_text_splitter)
The key difference between these functions is that `chunking_recursive_character_text_splitter`
allows for a more sophisticated splitting strategy, potentially preserving more
semantic meaning in the chunks.
### ai.chunking_character_text_splitter
You use `ai.chunking_character_text_splitter` to:
- Split text into chunks based on a specified separator.
- Control the chunk size and the amount of overlap between chunks.
#### Example usage
- Split the content into chunks of 128 characters, with 10
character overlap, using '\n;' as the separator:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
chunking => ai.chunking_character_text_splitter(128, 10, E'\n;'),
-- other parameters...
);
```
#### Parameters
`ai.chunking_character_text_splitter` takes the following parameters:
|Name| Type | Default | Required | Description |
|-|------|---------|-|--------------------------------------------------------|
|chunk_size| int | 800 |✖| The maximum number of characters in a chunk |
|chunk_overlap| int | 400 |✖| The number of characters to overlap between chunks |
|separator| text | E'\n\n' |✖| The string or character used to split the text |
|is_separator_regex| bool | false |✖| Set to `true` if `separator` is a regular expression. |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
### ai.chunking_recursive_character_text_splitter
`ai.chunking_recursive_character_text_splitter` provides more fine-grained control over the chunking process.
You use it to recursively split text into chunks using multiple separators.
#### Example usage
- Recursively split content into chunks of 256 characters, with a 20 character
overlap, first trying to split on '\n;', then on spaces:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
chunking => ai.chunking_recursive_character_text_splitter(
256,
20,
separators => array[E'\n;', ' ']
),
-- other parameters...
);
```
#### Parameters
`ai.chunking_recursive_character_text_splitter` takes the following parameters:
| Name | Type | Default | Required | Description |
|--------------------|------|---------|-|----------------------------------------------------------|
| chunk_size | int | 800 |✖| The maximum number of characters per chunk |
| chunk_overlap | int | 400 |✖| The number of characters to overlap between chunks |
| separators          | text[] | array[E'\n\n', E'\n', '.', '?', '!', ' ', ''] |✖| The strings or characters used to split the text, tried in order |
| is_separator_regex | bool | false |✖| Set to `true` if `separator` is a regular expression. |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
## Embedding configuration
You use the embedding configuration functions to specify how embeddings are
generated for your data.
The embedding functions are:
- [ai.embedding_litellm](#aiembedding_litellm)
- [ai.embedding_openai](#aiembedding_openai)
- [ai.embedding_ollama](#aiembedding_ollama)
- [ai.embedding_voyageai](#aiembedding_voyageai)
### ai.embedding_litellm
You call the `ai.embedding_litellm` function to use LiteLLM to generate embeddings for models from multiple providers.
The purpose of `ai.embedding_litellm` is to:
- Define the embedding model to use.
- Specify the dimensionality of the embeddings.
- Configure optional, provider-specific parameters.
- Set the name of the environment variable that holds the value of your API key.
#### Example usage
Use `ai.embedding_litellm` to create an embedding configuration object that is passed as an argument to [ai.create_vectorizer](#create-vectorizers):
1. Set the required API key for your provider.
The API key should be set as an environment variable which is available to either the Vectorizer worker, or the
Postgres process.
2. Create a vectorizer using LiteLLM to access the 'microsoft/codebert-base' embedding model on huggingface:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'huggingface/microsoft/codebert-base',
768,
api_key_name => 'HUGGINGFACE_API_KEY',
extra_options => '{"wait_for_model": true}'::jsonb
),
-- other parameters...
);
```
#### Parameters
The function takes several parameters to customize the LiteLLM embedding configuration:
| Name | Type | Default | Required | Description |
|---------------|-------|---------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------|
| model | text | - | ✔ | Specify the name of the embedding model to use. Refer to the [LiteLLM embedding documentation] for an overview of the available providers and models. |
| dimensions | int | - | ✔ | Define the number of dimensions for the embedding vectors. This should match the output dimensions of the chosen model. |
| api_key_name | text | - | ✖ | Set the name of the environment variable that contains the API key. This allows for flexible API key management without hardcoding keys in the database. |
| extra_options | jsonb | - | ✖ | Set provider-specific configuration options. |
[LiteLLM embedding documentation]: https://docs.litellm.ai/docs/embedding/supported_embedding
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
#### Provider-specific configuration examples
The following subsections show how to configure the vectorizer for all supported providers.
##### Cohere
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'cohere/embed-english-v3.0',
1024,
api_key_name => 'COHERE_API_KEY'
),
-- other parameters...
);
```
Note: The [Cohere documentation on input_type] specifies that the `input_type` parameter is required.
By default, LiteLLM sets this to `search_document`. The input type can be provided
via `extra_options`, i.e. `extra_options => '{"input_type": "search_document"}'::jsonb`.
[Cohere documentation on input_type]: https://docs.cohere.com/v2/docs/embeddings#the-input_type-parameter
##### Mistral
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'mistral/mistral-embed',
1024,
api_key_name => 'MISTRAL_API_KEY'
),
-- other parameters...
);
```
Note: Mistral limits the maximum input per batch to 16384 tokens.
##### Azure OpenAI
To set up a vectorizer with Azure OpenAI you require these values from the Azure AI Foundry console:
- deployment name
- base URL
- version
- API key
The deployment name is visible in the "Deployment info" section. The base URL and version are
extracted from the "Target URI" field in the "Endpoint section". The Target URI has the form:
`https://your-resource-name.openai.azure.com/openai/deployments/your-deployment-name/embeddings?api-version=2023-05-15`.
In this example, the base URL is: `https://your-resource-name.openai.azure.com` and the version is `2023-05-15`.
Configure the vectorizer, note that the base URL and version are configured through `extra_options`:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'azure/<deployment name here>',
1024,
api_key_name => 'AZURE_API_KEY',
extra_options => '{"api_base": "<base URL here>", "api_version": "<version here>"}'::jsonb
),
-- other parameters...
);
```
##### Huggingface inference models
You can use [Huggingface inference] to obtain vector embeddings. Note that
Huggingface has two categories of inference: "serverless inference", and
"inference endpoints". Serverless inference is free, but is limited to models
under 10GB in size, and the model may not be immediately available to serve
requests. Inference endpoints are a paid service and provide always-on APIs
for production use-cases.
Note: We recommend using the `wait_for_model` parameter when using vectorizer
with serverless inference to force the call to block until the model has been
loaded. If you do not use `wait_for_model`, it's likely that vectorization will
never succeed.
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'huggingface/BAAI/bge-small-en-v1.5'
, 384
, extra_options => '{"wait_for_model": true}'::jsonb
),
-- other parameters...
);
```
[Huggingface inference]: https://huggingface.co/docs/huggingface_hub/en/guides/inference
##### AWS Bedrock
To set up a vectorizer with AWS Bedrock, you must ensure that the vectorizer
is authenticated to make API calls to the AWS Bedrock endpoint. The vectorizer
worker uses boto3 under the hood, so there are multiple ways to achieve this.
The simplest method is to provide the `AWS_ACCESS_KEY_ID`,
`AWS_SECRET_ACCESS_KEY`, and `AWS_REGION_NAME` environment variables to the
vectorizer worker. Consult the [boto3 credentials documentation] for more
options.
[boto3 credentials documentation]: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'bedrock/amazon.titan-embed-text-v2:0',
1024,
api_key_name => 'AWS_SECRET_ACCESS_KEY', -- optional
extra_options => '{"aws_access_key_id": "<access key id>", "aws_region_name": "<region name>"}'::jsonb -- optional
),
-- other parameters...
);
```
Alternatively, you can configure the secret only in the database, and provide the `api_key_name` parameter to prompt the vectorizer worker to load the API key from the database. When you do this, you may need to pass `aws_access_key_id` and `aws_region_name` through the `extra_options` parameter:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'bedrock/amazon.titan-embed-text-v2:0',
1024,
api_key_name => 'AWS_SECRET_ACCESS_KEY', -- optional
extra_options => '{"aws_access_key_id": "<access key id>", "aws_region_name": "<region name>"}'::jsonb -- optional
),
-- other parameters...
);
```
##### Vertex AI
To set up a vectorizer with Vertex AI, you must ensure that the vectorizer
can make API calls to the Vertex AI endpoint. The vectorizer worker uses
GCP's authentication under the hood, so there are multiple ways to achieve
this.
The simplest method is to provide the `VERTEX_PROJECT`, and
`VERTEX_CREDENTIALS` environment variables to the vectorizer worker. These
correspond to the project id, and the path to a file containing credentials for
a service account. Consult the [Authentication methods at Google] for more
options.
[Authentication methods at Google]: https://cloud.google.com/docs/authentication
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'vertex_ai/text-embedding-005',
768
),
-- other parameters...
);
```
Alternatively, you can configure the secret only in the database, and provide the `api_key_name` parameter to prompt the vectorizer worker to load the API key from the database. When you do this, you may need to pass `vertex_project` and `vertex_location` through the `extra_options` parameter.
Note: `VERTEX_CREDENTIALS` should contain the path to a file containing the API key; the vectorizer worker needs access to this file in order to load the credentials.
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'vertex_ai/text-embedding-005',
768,
api_key_name => 'VERTEX_CREDENTIALS', -- optional
extra_options => '{"vertex_project": "<project id>", "vertex_location": "<vertex location>"}'::jsonb -- optional
),
-- other parameters...
);
```
### ai.embedding_openai
You call the `ai.embedding_openai` function to use an OpenAI model to generate embeddings.
The purpose of `ai.embedding_openai` is to:
- Define which OpenAI embedding model to use.
- Specify the dimensionality of the embeddings.
- Configure optional parameters like the user identifier for API calls.
- Set the name of the [environment variable that holds the value of your OpenAI API key][openai-use-env-var].
#### Example usage
Use `ai.embedding_openai` to create an embedding configuration object that is passed as an argument to [ai.create_vectorizer](#create-vectorizers):
1. Set the value of your OpenAI API key.
For example, [in an environment variable][openai-set-key] or in a [Docker configuration][docker configuration].
2. Create a vectorizer with OpenAI as the embedding provider:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_openai(
'text-embedding-3-small',
768,
chat_user => 'bob',
api_key_name => 'MY_OPENAI_API_KEY_NAME'
),
-- other parameters...
);
```
#### Parameters
The function takes several parameters to customize the OpenAI embedding configuration:
| Name | Type | Default | Required | Description |
|--------------|------|------------------|----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| model | text | - | ✔ | Specify the name of the OpenAI embedding model to use. For example, `text-embedding-3-small`. |
| dimensions | int | - | ✔ | Define the number of dimensions for the embedding vectors. This should match the output dimensions of the chosen model. |
| chat_user | text | - | ✖ | The identifier for the user making the API call. This can be useful for tracking API usage or for OpenAI's monitoring purposes. |
| api_key_name | text | `OPENAI_API_KEY` | ✖ | Set [the name of the environment variable that contains the OpenAI API key][openai-use-env-var]. This allows for flexible API key management without hardcoding keys in the database. On Timescale Cloud, you should set this to the name of the secret that contains the OpenAI API key. |
| base_url | text | - | ✖ | Set the base_url of the OpenAI API. Note: no default configured here to allow configuration of the vectorizer worker through `OPENAI_BASE_URL` env var. |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
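Because the function simply returns a configuration object, you can also call it on its own to preview the JSON it produces before passing it to a vectorizer. A minimal sketch (the exact keys in the returned object may vary between pgai versions):
```sql
-- Inspect the configuration object that will be passed to ai.create_vectorizer
SELECT ai.embedding_openai(
    'text-embedding-3-small',
    768,
    api_key_name => 'MY_OPENAI_API_KEY_NAME'
);
```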
### ai.embedding_ollama
You use the `ai.embedding_ollama` function to use an Ollama model to generate embeddings.
The purpose of `ai.embedding_ollama` is to:
- Define which Ollama model to use.
- Specify the dimensionality of the embeddings.
- Configure how the Ollama API is accessed.
- Configure the model's truncation behaviour and keep-alive duration.
- Configure optional, model-specific parameters, like the `temperature`.
#### Example usage
This function is used to create an embedding configuration object that is passed as an argument to [ai.create_vectorizer](#create-vectorizers):
```sql
SELECT ai.create_vectorizer(
    'my_table'::regclass,
    embedding => ai.embedding_ollama(
        'nomic-embed-text',
        768,
        base_url => 'http://my.ollama.server:443',
        options => '{ "num_ctx": 1024 }',
        keep_alive => '10m'
    ),
    -- other parameters...
);
```
#### Parameters
The function takes several parameters to customize the Ollama embedding configuration:
| Name | Type | Default | Required | Description |
|------------|---------|---------|----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| model | text | - | ✔ | Specify the name of the Ollama model to use. For example, `nomic-embed-text`. Note: the model must already be available (pulled) in your Ollama server. |
| dimensions | int | - | ✔ | Define the number of dimensions for the embedding vectors. This should match the output dimensions of the chosen model. |
| base_url | text | - | ✖ | Set the base_url of the Ollama API. Note: no default configured here to allow configuration of the vectorizer worker through `OLLAMA_HOST` env var. |
| options | jsonb | - | ✖ | Configures additional model parameters listed in the documentation for the Modelfile, such as `temperature`, or `num_ctx`. |
| keep_alive | text | - | ✖ | Controls how long the model will stay loaded in memory following the request. Note: no default configured here to allow configuration at Ollama-level. |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
### ai.embedding_voyageai
You use the `ai.embedding_voyageai` function to use a Voyage AI model to generate embeddings.
The purpose of `ai.embedding_voyageai` is to:
- Define which Voyage AI model to use.
- Specify the dimensionality of the embeddings.
- Configure the model's truncation behaviour and API key name.
- Configure the input type.
#### Example usage
This function is used to create an embedding configuration object that is passed as an argument to [ai.create_vectorizer](#create-vectorizers):
```sql
SELECT ai.create_vectorizer(
    'my_table'::regclass,
    embedding => ai.embedding_voyageai(
        'voyage-3.5-lite', -- or 'voyage-3.5', 'voyage-3-large', etc.
        1024, -- default dimensions for voyage-3.5 models
        api_key_name => 'TEST_API_KEY',
        output_dimension => 512 -- Optional: use 256, 512, 1024, or 2048
    ),
    -- other parameters...
);
```
**Example with flexible dimensions (Matryoshka embeddings):**
```sql
-- Use 256 dimensions for faster search and less storage
SELECT ai.create_vectorizer(
'articles'::regclass,
embedding => ai.embedding_voyageai(
'voyage-3-large',
1024, -- Schema dimensions
output_dimension => 256 -- Actual embedding dimensions
),
destination => ai.destination_table('articles_embeddings_256d')
);
```
#### Available Models
**Current Generation (Recommended):**
| Model | Purpose | Default Dimensions | Max Tokens/Request |
|-------|---------|-------------------|-------------------|
| `voyage-3.5-lite` | Cost & latency optimized | 1024 | 1M |
| `voyage-3.5` | General-purpose optimized | 1024 | 320K |
| `voyage-3-large` | Best for general-purpose & multilingual | 1024 | 120K |
| `voyage-code-3` | Code retrieval specialized | 1024 | 120K |
| `voyage-finance-2` | Finance domain | 1024 | 120K |
| `voyage-law-2` | Legal documents | 1024 | 120K |
**Older Models:**
| Model | Purpose | Default Dimensions | Max Tokens/Request |
|-------|---------|-------------------|-------------------|
| `voyage-3-lite` | General-purpose (older) | 512 | 120K |
| `voyage-2` | General-purpose (legacy) | - | 320K |
#### Parameters
The function takes several parameters to customize the Voyage AI embedding configuration:
| Name | Type | Default | Required | Description |
|------------------|---------|------------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| model | text | - | ✔ | Specify the name of the [Voyage AI model](https://docs.voyageai.com/docs/embeddings) to use. See table above for available models. |
| dimensions | int | - | ✔ | Define the number of dimensions for the embedding vectors. This should match the output dimensions of the chosen model (typically 1024 for voyage-3.x models). |
| input_type | text | 'document' | ✖ | Type of the input text: null, 'query', or 'document'. Setting this improves retrieval quality by allowing the model to optimize the embedding. |
| api_key_name | text | `VOYAGE_API_KEY` | ✖ | Set the name of the environment variable that contains the Voyage AI API key. This allows for flexible API key management without hardcoding keys in the database. On Timescale Cloud, you should set this to the name of the secret that contains the Voyage AI API key. |
| output_dimension | int | null | ✖ | Set the output dimension for embeddings. Supports 256, 512, 1024, or 2048 for voyage-3.x models. Lower dimensions reduce storage (up to 75%) and improve search speed with minimal accuracy loss. Uses Matryoshka embeddings technique. |
| output_dtype | text | 'float' | ✖ | Set the output data type for embeddings. Options: 'float' (default), 'int8', 'uint8', 'binary', 'ubinary'. Quantized types (int8, uint8) reduce network bandwidth and API costs. Binary types (binary, ubinary) provide maximum compression with 1/8 the dimensions. Embeddings are automatically converted to float for storage in PostgreSQL. |
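For example, to request quantized embeddings from the Voyage AI API, you can set `output_dtype` alongside the other parameters. This is a sketch based on the table above; as noted there, embeddings are converted back to float for storage in PostgreSQL regardless of the requested data type:
```sql
SELECT ai.create_vectorizer(
    'my_table'::regclass,
    embedding => ai.embedding_voyageai(
        'voyage-3-large',
        1024,
        output_dtype => 'int8' -- quantized transfer from the API; stored as float
    ),
    -- other parameters...
);
```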
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
## Formatting configuration
You use the `ai.formatting_python_template` function in `pgai` to
configure the way data from the source table is formatted before it is sent
for embedding.
`ai.formatting_python_template` provides a flexible way to structure the input
for embedding models. This enables you to incorporate relevant metadata and additional
text. This can significantly enhance the quality and usefulness of the generated
embeddings, especially in scenarios where context from multiple fields is
important for understanding or searching the content.
The purpose of `ai.formatting_python_template` is to:
- Define a template for formatting the data before embedding.
- Allow the combination of multiple fields from the source table.
- Add consistent context or structure to the text being embedded.
- Customize the input for the embedding model to improve relevance and searchability.
Formatting happens after chunking and the special `$chunk` variable contains the chunked text.
### Example usage
- Default formatting:
The default formatter uses the `$chunk` template, which outputs the chunk text as-is.
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
formatting => ai.formatting_python_template('$chunk'),
-- other parameters...
);
```
- Add context from other columns:
Add the title and publication date to each chunk, providing more context for the embedding.
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
formatting => ai.formatting_python_template('Title: $title\nDate: $published\nContent: $chunk'),
-- other parameters...
);
```
- Combine multiple fields:
Prepend author and category information to each chunk.
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
formatting => ai.formatting_python_template('Author: $author\nCategory: $category\n$chunk'),
-- other parameters...
);
```
- Add consistent structure:
Add start and end markers to each chunk, which could be useful for certain
types of embeddings or retrieval tasks.
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
formatting => ai.formatting_python_template('BEGIN DOCUMENT\n$chunk\nEND DOCUMENT'),
-- other parameters...
);
```
### Parameters
`ai.formatting_python_template` takes the following parameter:
|Name| Type | Default | Required | Description |
|-|--------|-|-|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|template| string |`$chunk`|✔| A string using [Python template strings](https://docs.python.org/3/library/string.html#template-strings) with $-prefixed variables that defines how the data should be formatted. |
- The `$chunk` placeholder is required and represents the text chunk that will be embedded.
- Other placeholders can be used to reference columns from the source table.
- The template allows for adding static text or structuring the input in a specific way.
### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
## Indexing configuration
You use indexing configuration functions in pgai to
specify the way generated embeddings should be indexed for efficient similarity
searches. These functions enable you to choose and configure the indexing
method that best suits your needs in terms of performance, accuracy, and
resource usage.
By providing these indexing options, pgai allows you to optimize your
embedding storage and retrieval based on their specific use case and performance
requirements. This flexibility is crucial for scaling AI-powered search and
analysis capabilities within a PostgreSQL database.
Key points about indexing:
- The choice of indexing method depends on your dataset size, query performance requirements, and available resources.
- [ai.indexing_none](#aiindexing_none) is better suited for small datasets, or when you want to perform index creation manually.
- [ai.indexing_diskann](#aiindexing_diskann) is generally recommended for larger datasets that require an index.
- The `min_rows` parameter enables you to delay index creation until you have enough data to justify the overhead.
- These indexing methods are designed for approximate nearest neighbor search, which trades a small amount of accuracy for significant speed improvements in similarity searches.
The available functions are:
- [ai.indexing_default](#aiindexing_default): when you want to use the platform-specific default indexing configuration.
- [ai.indexing_none](#aiindexing_none): when you do not want indexes created automatically.
- [ai.indexing_diskann](#aiindexing_diskann): configure indexing using the [DiskANN algorithm](https://github.com/timescale/pgvectorscale).
- [ai.indexing_hnsw](#aiindexing_hnsw): configure indexing using the [Hierarchical Navigable Small World (HNSW) algorithm](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world).
### ai.indexing_default
You use `ai.indexing_default` to use the platform-specific default value for indexing.
On Timescale Cloud, the default is `ai.indexing_diskann()`. On self-hosted, the default is `ai.indexing_none()`.
A timescaledb background job is used for automatic index creation. Since timescaledb may not be installed
in a self-hosted environment, we default to `ai.indexing_none()`.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
indexing => ai.indexing_default(),
-- other parameters...
);
```
#### Parameters
This function takes no parameters.
#### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
### ai.indexing_none
You use `ai.indexing_none` to specify that no special indexing should be used for the embeddings.
This is useful when you don't need fast similarity searches or when you're dealing with a small amount of data.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
indexing => ai.indexing_none(),
-- other parameters...
);
```
#### Parameters
This function takes no parameters.
#### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
### ai.indexing_diskann
You use `ai.indexing_diskann` to configure indexing using the DiskANN algorithm, which is designed for high-performance
approximate nearest neighbor search on large-scale datasets. This is suitable for very large datasets that need to be
stored on disk.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
indexing => ai.indexing_diskann(min_rows => 500000, storage_layout => 'memory_optimized'),
-- other parameters...
);
```
#### Parameters
`ai.indexing_diskann` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------|-|--------------------------------------------------|
|min_rows| int | 100000 |✖| The minimum number of rows before creating the index |
| storage_layout | text | - |✖| Set to either `memory_optimized` or `plain` |
| num_neighbors | int | - |✖| Advanced [DiskANN](https://github.com/microsoft/DiskANN/tree/main) parameter. |
| search_list_size | int | - |✖| Advanced [DiskANN](https://github.com/microsoft/DiskANN/tree/main) parameter.|
| max_alpha | float8 | - |✖| Advanced [DiskANN](https://github.com/microsoft/DiskANN/tree/main) parameter.|
| num_dimensions | int | - |✖|Advanced [DiskANN](https://github.com/microsoft/DiskANN/tree/main) parameter.|
| num_bits_per_dimension | int | - |✖| Advanced [DiskANN](https://github.com/microsoft/DiskANN/tree/main) parameter.|
| create_when_queue_empty | boolean | true |✖| Create the index only after all of the embeddings have been generated. |
#### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
### ai.indexing_hnsw
You use `ai.indexing_hnsw` to configure indexing using the [Hierarchical Navigable Small World (HNSW) algorithm](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world),
which is known for fast and accurate approximate nearest neighbor search.
HNSW is suitable for in-memory datasets and scenarios where query speed is crucial.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
indexing => ai.indexing_hnsw(min_rows => 50000, opclass => 'vector_l1_ops'),
-- other parameters...
);
```
#### Parameters
`ai.indexing_hnsw` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------------------|-|----------------------------------------------------------------------------------------------------------------|
|min_rows| int | 100000 |✖| The minimum number of rows before creating the index |
|opclass| text | `vector_cosine_ops` |✖| The operator class for the index. Possible values are:`vector_cosine_ops`, `vector_l1_ops`, or `vector_ip_ops` |
|m| int | - |✖| Advanced [HNSW parameters](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world) |
|ef_construction| int | - |✖| Advanced [HNSW parameters](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world) |
| create_when_queue_empty| boolean | true |✖| Create the index only after all of the embeddings have been generated. |
#### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
## Scheduling configuration
You use scheduling functions in pgai to configure when and how often the vectorizer should run to process new or
updated data. These functions allow you to set up automated, periodic execution of the embedding
generation process. These are advanced options and most users should use the default.
By providing these scheduling options, pgai enables you to automate the process
of keeping your embeddings up-to-date with minimal manual intervention. This is
crucial for maintaining the relevance and accuracy of AI-powered search and
analysis capabilities, especially in systems where data is frequently updated or
added. The flexibility in scheduling also allows users to balance the freshness
of embeddings against system resource usage and other operational
considerations.
The available functions are:
- [ai.scheduling_default](#aischeduling_default): uses the platform-specific default scheduling configuration. On Timescale Cloud this is equivalent to `ai.scheduling_timescaledb()`. On self-hosted deployments, this is equivalent to `ai.scheduling_none()`.
- [ai.scheduling_none](#aischeduling_none): when you want manual control over when the vectorizer runs. Use this when you're using an external scheduling system, as is the case with self-hosted deployments.
- [ai.scheduling_timescaledb](#aischeduling_timescaledb): leverages TimescaleDB's robust job scheduling system, which is designed for reliability and scalability. Use this when you're using Timescale Cloud.
### ai.scheduling_default
You use `ai.scheduling_default` to use the platform-specific default scheduling configuration.
On Timescale Cloud, the default is `ai.scheduling_timescaledb()`. On self-hosted, the default is `ai.scheduling_none()`.
A timescaledb background job is used to periodically trigger a cloud vectorizer on Timescale Cloud.
This is not available in a self-hosted environment.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
scheduling => ai.scheduling_default(),
-- other parameters...
);
```
#### Parameters
This function takes no parameters.
#### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
### ai.scheduling_none
You use `ai.scheduling_none` to:
- Specify that no automatic scheduling should be set up for the vectorizer.
- Manually control when the vectorizer runs, or defer to an external scheduling system.
You should use this for self-hosted deployments.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
scheduling => ai.scheduling_none(),
-- other parameters...
);
```
#### Parameters
This function takes no parameters.
#### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
### ai.scheduling_timescaledb
You use `ai.scheduling_timescaledb` to:
- Configure automated scheduling using TimescaleDB's job scheduling system.
- Allow periodic execution of the vectorizer to process new or updated data.
- Provide fine-grained control over when and how often the vectorizer runs.
#### Example usage
- Basic usage (run every 5 minutes). This is the default:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
scheduling => ai.scheduling_timescaledb(),
-- other parameters...
);
```
- Custom interval (run every hour):
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
scheduling => ai.scheduling_timescaledb(interval '1 hour'),
-- other parameters...
);
```
- Specific start time and timezone:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
scheduling => ai.scheduling_timescaledb(
interval '30 minutes',
initial_start => '2024-01-01 00:00:00'::timestamptz,
timezone => 'America/New_York'
),
-- other parameters...
);
```
- Fixed schedule:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
scheduling => ai.scheduling_timescaledb(
interval '1 day',
fixed_schedule => true,
timezone => 'UTC'
),
-- other parameters...
);
```
#### Parameters
`ai.scheduling_timescaledb` takes the following parameters:
|Name|Type| Default | Required | Description |
|-|-|---------|-|--------------------------------------------------------------------------------------------------------------------|
|schedule_interval|interval| '10m' |✖| Set how frequently the vectorizer checks for new or updated data to process. |
|initial_start|timestamptz| - |✖| Delay the start of scheduling. This is useful for coordinating with other system processes or maintenance windows. |
|fixed_schedule|bool| - |✖|Set to `true` to use a fixed schedule such as every day at midnight. Set to `false` for a sliding window such as every 24 hours from the last run|
|timezone|text| - |✖| Set the timezone this schedule operates in. This ensures that schedules are interpreted correctly, especially important for fixed schedules or when coordinating with business hours. |
#### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
## Processing configuration
You use the processing configuration functions in pgai to specify
the way the vectorizer should process data when generating embeddings,
such as the batch size and concurrency. These are advanced options and most
users should use the default.
### ai.processing_default
You use `ai.processing_default` to specify the concurrency and batch size for the vectorizer.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
processing => ai.processing_default(batch_size => 200, concurrency => 5),
-- other parameters...
);
```
#### Parameters
`ai.processing_default` takes the following parameters:
|Name| Type | Default | Required | Description |
|-|------|------------------------------|-|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|batch_size| int | Determined by the vectorizer |✖| The number of items to process in each batch. The optimal batch size depends on your data and cloud function configuration; larger batch sizes can improve efficiency but may increase memory usage. The default is 1 for vectorizers that use document loading (`ai.loading_uri`) and 50 otherwise. |
|concurrency| int | Determined by the vectorizer |✖| The number of concurrent processing tasks to run. The optimal concurrency depends on your cloud infrastructure and rate limits; higher concurrency can speed up processing but may increase costs and resource usage. |
#### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
## Grant To configuration
You use the grant to configuration function in pgai to specify which users should be able to use
objects created by the vectorizer.
### ai.grant_to
Grant permissions to a comma-separated list of users.
Includes the users specified in the `ai.grant_to_default` setting.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
grant_to => ai.grant_to('bob', 'alice'),
-- other parameters...
);
```
#### Parameters
This function takes a comma-separated list of usernames to grant permissions to.
#### Returns
An array of name values, that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
## Enable and disable vectorizer schedules
You use `ai.enable_vectorizer_schedule` and `ai.disable_vectorizer_schedule` to control
the execution of [scheduled vectorizer jobs](#scheduling-configuration). These functions
provide a way to temporarily pause or resume the automatic processing of embeddings, without
having to delete or recreate the vectorizer configuration.
These functions provide an important layer of operational control for managing
pgai vectorizers in production environments. They allow database administrators
and application developers to balance the need for up-to-date embeddings with
other system priorities and constraints, enhancing the overall flexibility and
manageability of pgai.
Key points about schedule enable and disable:
- These functions provide fine-grained control over individual vectorizer schedules without affecting other
vectorizers, or the overall system configuration.
- Disabling a schedule does not delete the vectorizer or its configuration; it simply stops scheduling future
executions of the job.
- These functions are particularly useful in scenarios such as:
- System maintenance windows where you want to reduce database load.
- Temporarily pausing processing during data migrations or large bulk updates.
- Debugging or troubleshooting issues related to the vectorizer.
- Implementing manual control over when embeddings are updated.
- When a schedule is disabled, new or updated data is not automatically processed. However, the data is still
queued, and will be processed when the schedule is re-enabled, or when the vectorizer is run manually.
- After re-enabling a schedule, for a vectorizer configured with
[ai.scheduling_timescaledb](#aischeduling_timescaledb), the next run is based
on the original scheduling configuration. For example, if the vectorizer was
set to run every hour, it will run at the next hour mark after being enabled.
- You can reference vectorizers either by their ID or their name.
Usage example in a maintenance scenario:
```sql
-- Before starting system maintenance using IDs
SELECT ai.disable_vectorizer_schedule(1);
SELECT ai.disable_vectorizer_schedule(2);
-- Or using names (more human-readable)
SELECT ai.disable_vectorizer_schedule('public_blog_embeddings');
SELECT ai.disable_vectorizer_schedule('public_products_embeddings');
-- Perform maintenance tasks...
-- After maintenance is complete
SELECT ai.enable_vectorizer_schedule('public_blog_embeddings');
SELECT ai.enable_vectorizer_schedule('public_products_embeddings');
```
The available functions are:
- [ai.enable_vectorizer_schedule](#aienable_vectorizer_schedule): activate, reactivate or resume a scheduled job.
- [ai.disable_vectorizer_schedule](#aidisable_vectorizer_schedule): deactivate or temporarily stop a scheduled job.
### ai.enable_vectorizer_schedule
You use `ai.enable_vectorizer_schedule` to:
- Activate or reactivate the scheduled job for a specific vectorizer.
- Allow the vectorizer to resume automatic processing of new or updated data.
#### Example usage
To resume the automatic scheduling for a vectorizer:
```sql
-- Using vectorizer name (recommended)
SELECT ai.enable_vectorizer_schedule('public_blog_embeddings');
-- Using ID
SELECT ai.enable_vectorizer_schedule(1);
```
#### Parameters
`ai.enable_vectorizer_schedule` can be called in two ways:
1. With a vectorizer name (recommended for better readability)
2. With a vectorizer ID
`ai.enable_vectorizer_schedule(name text)`:
|Name| Type | Default | Required | Description |
|-|------|---------|-|-----------------------------------------------------------|
|name| text | - |✔| The name of the vectorizer whose schedule you want to enable. |
`ai.enable_vectorizer_schedule(vectorizer_id int)`:
|Name| Type | Default | Required | Description |
|-|------|---------|-|-----------------------------------------------------------|
|vectorizer_id| int | - |✔| The identifier of the vectorizer whose schedule you want to enable. |
#### Returns
`ai.enable_vectorizer_schedule` does not return a value.
### ai.disable_vectorizer_schedule
You use `ai.disable_vectorizer_schedule` to:
- Deactivate the scheduled job for a specific vectorizer.
- Temporarily stop the automatic processing of new or updated data.
#### Example usage
To stop the automatic scheduling for a vectorizer:
```sql
-- Using name (recommended)
SELECT ai.disable_vectorizer_schedule('public_blog_embeddings');
-- Using ID
SELECT ai.disable_vectorizer_schedule(1);
```
#### Parameters
`ai.disable_vectorizer_schedule` can be called in two ways:
1. With a vectorizer name (recommended for better readability)
2. With a vectorizer ID
`ai.disable_vectorizer_schedule(name text)`:
|Name| Type | Default | Required | Description |
|-|------|---------|-|----------------------------------------------------------------------|
|name| text | - |✔| The name of the vectorizer whose schedule you want to disable. |
`ai.disable_vectorizer_schedule(vectorizer_id int)`:
|Name| Type | Default | Required | Description |
|-|------|---------|-|----------------------------------------------------------------------|
|vectorizer_id| int | - |✔| The identifier of the vectorizer whose schedule you want to disable. |
#### Returns
`ai.disable_vectorizer_schedule` does not return a value.
## Drop a vectorizer
`ai.drop_vectorizer` is a management tool that you use to remove a vectorizer that you
[created previously](#create-vectorizers), and clean up the associated
resources. Its primary purpose is to provide a controlled way to delete a
vectorizer when it's no longer needed, or when you want to reconfigure it from
scratch.
You use `ai.drop_vectorizer` to:
- Remove a specific vectorizer configuration from the system.
- Clean up associated database objects and scheduled jobs.
- Safely undo the creation of a vectorizer.
`ai.drop_vectorizer` performs the following on the vectorizer to drop:
- Deletes the scheduled job associated with the vectorizer if one exists.
- Drops the trigger from the source table used to queue changes.
- Drops the trigger function that backed the source table trigger.
- Drops the queue table used to manage the updates to be processed.
- Deletes the vectorizer row from the `ai.vectorizer` table.
By default, `ai.drop_vectorizer` does not:
- Drop the target table containing the embeddings.
- Drop the view joining the target and source tables.
There is an optional parameter named `drop_all` which is `false` by default. If you
explicitly pass `true`, the function WILL drop the target table and view.
This design allows you to keep the generated embeddings and the convenient view
even after dropping the vectorizer. This is useful if you want to stop
automatic updates but still use the existing embeddings.
#### Example usage
Best practices are:
- Before dropping a vectorizer, ensure that you will not need the automatic embedding updates it provides.
- After dropping a vectorizer, you may want to manually clean up the target table and view if they're no longer needed (see the sketch after the examples below).
- You can reference vectorizers either by their ID or their name (recommended).
Examples:
- Remove a vectorizer by name (recommended):
```sql
SELECT ai.drop_vectorizer('public_blog_embeddings');
```
- Remove a vectorizer by ID:
```sql
SELECT ai.drop_vectorizer(1);
```
- Remove a vectorizer and drop the target table and view as well:
```sql
SELECT ai.drop_vectorizer('public_blog_embeddings', drop_all=>true);
```
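If you keep the embeddings and later decide you no longer need them, you can clean them up manually. A sketch, assuming the default object naming shown elsewhere in this document (a `<table>_embedding_store` table and a matching `_embeddings` view); substitute the names reported by `ai.vectorizer_status` for your vectorizer:
```sql
-- Manual cleanup after dropping a vectorizer without drop_all=>true
DROP VIEW IF EXISTS public.blog_contents_embeddings;
DROP TABLE IF EXISTS public.blog_contents_embedding_store;
```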
#### Parameters
`ai.drop_vectorizer` can be called in two ways:
1. With a vectorizer name (recommended for better readability)
2. With a vectorizer ID
`ai.drop_vectorizer(name text, drop_all bool)`:
|Name| Type | Default | Required | Description |
|-|------|-|-|-|
|name| text | -|✔|The name of the vectorizer you want to drop|
|drop_all| bool | false |✖|true to drop the target table and view as well|
`ai.drop_vectorizer(vectorizer_id int, drop_all bool)`:
|Name| Type | Default | Required | Description |
|-|------|-|-|-|
|vectorizer_id| int | -|✔|The identifier of the vectorizer you want to drop|
|drop_all| bool | false |✖|true to drop the target table and view as well|
#### Returns
`ai.drop_vectorizer` does not return a value, but it performs several cleanup operations.
## View vectorizer status
[ai.vectorizer_status view](#aivectorizer_status-view) and
[ai.vectorizer_queue_pending function](#aivectorizer_queue_pending-function) are
monitoring tools in pgai that provide insights into the state and performance of vectorizers.
These monitoring tools are crucial for maintaining the health and performance of
your pgai-enhanced database. They allow you to proactively manage your
vectorizers, ensure timely processing of embeddings, and quickly identify and
address any issues that may arise in your AI-powered data pipelines.
For effective monitoring, you use `ai.vectorizer_status`.
For example:
```sql
-- Get an overview of all vectorizers
SELECT * FROM ai.vectorizer_status;
```
Sample output:
| id | source_table | target_table | view | pending_items |
|----|--------------|--------------|------|---------------|
| 1 | public.blog | public.blog_contents_embedding_store | public.blog_contents_embeddings | 1 |
The `pending_items` column indicates the number of items still awaiting embedding creation. The pending items count helps you to:
- Identify bottlenecks in processing.
- Determine if you need to adjust scheduling or processing configurations.
- Monitor the impact of large data imports or updates on your vectorizers.
Regular monitoring using these tools helps ensure that your vectorizers are keeping up with data changes, and that
embeddings remain up-to-date.
Available views are:
- [ai.vectorizer_status](#aivectorizer_status-view): view, monitor and display information about a vectorizer.
Available functions are:
- [ai.vectorizer_queue_pending](#aivectorizer_queue_pending-function): retrieve just the queue count for a vectorizer.
### ai.vectorizer_status view
You use `ai.vectorizer_status` to:
- Get a high-level overview of all vectorizers in the system.
- Regularly monitor and check the health of the entire system.
- Display key information about each vectorizer's configuration and current state.
- Use the `pending_items` column to get a quick indication of processing backlogs.
#### Example usage
- Retrieve all vectorizers that have items waiting to be processed:
```sql
SELECT * FROM ai.vectorizer_status WHERE pending_items > 0;
```
- System health monitoring:
```sql
-- Alert if any vectorizer has more than 1000 pending items
SELECT id, source_table, pending_items
FROM ai.vectorizer_status
WHERE pending_items > 1000;
```
#### Returns
`ai.vectorizer_status` returns the following:
| Column name | Description |
|---------------|-----------------------------------------------------------------------|
| id | The unique identifier of this vectorizer |
|source_table | The fully qualified name of the source table |
|target_table | The fully qualified name of the table storing the embeddings |
|view | The fully qualified name of the view joining source and target tables |
| pending_items | The number of items waiting to be processed by the vectorizer |
### ai.vectorizer_queue_pending function
`ai.vectorizer_queue_pending` enables you to retrieve the number of items in a vectorizer queue
when you need to focus on a particular vectorizer or troubleshoot issues.
You use `vectorizer_queue_pending` to:
- Retrieve the number of pending items for a specific vectorizer.
- Allow for more granular monitoring of individual vectorizer queues.
#### Example usage
Return the number of pending items for a vectorizer:
```sql
-- Using name (recommended)
SELECT ai.vectorizer_queue_pending('public_blog_embeddings');
-- Using ID
SELECT ai.vectorizer_queue_pending(1);
```
A queue with a very large number of items may be slow to count. The optional
`exact_count` parameter defaults to `false`. When it is `false`, the count is capped:
an exact count is returned if the queue has 10,000 or fewer items, and
9223372036854775807 (the maximum bigint value) is returned if there are more
than 10,000 items.
To get an exact count, regardless of queue size, set the optional parameter to
`true` like this:
```sql
-- Using name (recommended)
SELECT ai.vectorizer_queue_pending('public_blog_embeddings', exact_count=>true);
-- Using ID
SELECT ai.vectorizer_queue_pending(1, exact_count=>true);
```
#### Parameters
`ai.vectorizer_queue_pending` can be called in two ways:
1. With a vectorizer name (recommended for better readability)
2. With a vectorizer ID
`ai.vectorizer_queue_pending(name text, exact_count bool)`:
| Name | Type | Default | Required | Description |
|---------------|------|---------|----------|---------------------------------------------------------|
| name | text | - | ✔ | The name of the vectorizer you want to check |
| exact_count | bool | false | ✖ | If true, return exact count. If false, capped at 10,000 |
`ai.vectorizer_queue_pending(vectorizer_id int, exact_count bool)`:
| Name | Type | Default | Required | Description |
|---------------|------|---------|----------|---------------------------------------------------------|
| vectorizer_id | int | - | ✔ | The identifier of the vectorizer you want to check |
| exact_count | bool | false | ✖ | If true, return exact count. If false, capped at 10,000 |
#### Returns
The number of items in the queue for the specified vectorizer.
[timescale-cloud]: https://console.cloud.timescale.com/
[openai-use-env-var]: https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety#h_a1ab3ba7b2
[openai-set-key]: https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety#h_a1ab3ba7b2
[docker configuration]: /docs/vectorizer/worker.md#install-and-configure-vectorizer-worker
Previous versions of pgai vectorizer used an extension to provide the vectorizer
functionality. We have removed the need for the extension and put the
vectorizer code into the pgai python library. This change allows the vectorizer
to be used on more PostgreSQL cloud providers (AWS RDS, Supabase, etc.) and
simplifies the installation and upgrade process.
Versions that used the extension:
- `ai` extension version < 0.10.0
- `pgai` python library version < 0.10.0
# Migrating from the extension to the python library
We made this change in a way that will allow current users of the vectorizer to
continue using the feature without interruption, but they will have to modify how they
upgrade vectorizer functionality in the future.
The upgrade process is as follows:
1. **Upgrade the extension:** Run `ALTER EXTENSION ai UPDATE TO '0.10.1'` to detach the vectorizer catalog tables and functions from the extension. This leaves them in your database in the `ai` schema, and the vectorizer will continue to work.
2. **Upgrade (or install) the pgai python library:** Install pgai version `>0.10.0`. This can be done with `pip install -U pgai` or via your `requirements.txt` or similar dependency file.
3. **Manage the vectorizer with the python library:** You can then manage the vectorizer from the python library or CLI by using `pgai install -d DB_URL` as described in the new python-library-based [workflow](/docs/vectorizer/api-reference.md#install-or-upgrade-the-database-objects-necessary-for-vectorizer).
4. **(Optional) Remove the extension:** If you are not using Timescale Cloud and you don't use the model calling capabilities of pgai, you can then remove the pgai extension from your database.
> [!WARNING]
> If you are using Timescale Cloud, you will need to keep the extension installed to use the vectorizer cloud functions.
# Changes to the `create_vectorizer` API
During the transition to the python library, some APIs changed for the `ai.create_vectorizer` call. On a high level:
- The `ai.create_vectorizer` call now requires a top-level `loading` argument. This allows us more flexibility in how we load data into the vectorizer. For example, we can now load data from file using the [`loading => loading_uri()`](/docs/vectorizer/api-reference.md#ailoading_uri) function.
- The destination where embeddings are stored is now configured via the `destination` top-level argument. This was done to allow us to support more types of schema design for storing embeddings. For example, we can now store embeddings in a column of a table via the [`destination => ai.destination_column()`](/docs/vectorizer/api-reference.md#aidestination_column) function in addition to the previous behavior of using a separate table via the [`destination => ai.destination_table()`](/docs/vectorizer/api-reference.md#aidestination_table) function.
These changes are automatically applied to existing vectorizers. However, when creating new vectorizers, developers should be aware of the following changes (an illustrative example follows this list):
* `ai.create_vectorizer` now requires a [`loading =>`](/docs/vectorizer/api-reference.md#loading-configuration) argument. Previous behavior is provided via the [`loading => loading_column()`](/docs/vectorizer/api-reference.md#ailoading_column) function.
* `ai.create_vectorizer` no longer takes `destination`, `target_table`, `target_schema`, `view_schema`, or `view_name` as arguments; configure these options via the new [`destination => ai.destination_table()`](/docs/vectorizer/api-reference.md#destination-configuration) function instead.
* [ai.chunking_character_text_splitter](/docs/vectorizer/api-reference.md#aichunking_character_text_splitter) and [ai.chunking_recursive_character_text_splitter](/docs/vectorizer/api-reference.md#aichunking_recursive_character_text_splitter) no longer take a `chunk_column` argument; that column name is now provided via the [`loading => loading_column()`](/docs/vectorizer/api-reference.md#ailoading_column) function instead.
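To make the changes concrete, here is a sketch of a new-style call that uses the `loading` and `destination` arguments (table, column, and destination names are illustrative):
```sql
SELECT ai.create_vectorizer(
    'blog'::regclass,
    loading => ai.loading_column('content'),
    embedding => ai.embedding_openai('text-embedding-3-small', 768),
    destination => ai.destination_table('blog_embeddings')
);
```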
# Common issues
### Old extension still installed
If you see something like
```
psycopg.errors.DuplicateTable: relation "vectorizer" already exists
CONTEXT: SQL statement "create table ai.vectorizer
```
when trying to run `pgai install`, it likely means that you have an old version (`<0.10`) of the extension installed.
Make sure to run `ALTER EXTENSION ai UPDATE TO '0.10.1'` first!
# Pgai vectorizer S3 integration guide
Pgai vectorizers can be configured to create vector embeddings for documents stored in S3 buckets. We have a [general guide for embedding documents](./document-embeddings.md) that walks you through the steps to configure your vectorizer to load, parse, chunk and embed documents. This guide will focus on issues specific to documents stored in S3.
A simple vectorizer configuration for documents stored in S3 looks like this:
```sql
SELECT ai.create_vectorizer(
'document'::regclass,
loading => ai.loading_uri(column_name => 'uri'),
embedding => ai.embedding_openai('text-embedding-3-small', 768),
destination => ai.destination_table('document_embeddings')
);
```
Where the `document` table has a column `uri` that contains the S3 URI of the document. Learn more in our [guide for embedding documents](./document-embeddings.md).
If you do not have a documents table yet, we provide an example of how to sync your S3 buckets to such a table [further down in this document](#syncing-s3-to-a-documents-table).
But how do you configure the vectorizer to get access to your S3 buckets if they are not publicly accessible? This is the focus of the rest of this guide.
- [Setup for self-hosted pgai installations](#setup-for-self-hosted-pgai-installations)
- [Setup for Timescale Cloud](#setup-for-timescale-cloud)
- [Common issues and solutions](#common-issues-and-solutions)
## Setup for self-hosted pgai installations
To integrate with your AWS S3 buckets, pgai needs to authenticate. There are two main methods to authenticate with S3:
**1. Default AWS credentials**
pgai uses the default AWS credential sources; see the [boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials) for details. For example, you can set the following environment variables where the vectorizer runs:
```
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
```
The user must have appropriate S3 read permissions for the buckets containing your documents.
**2. Assume Role-based Authentication**
You can also use the `aws_role_arn` parameter to assume an IAM role. This is what Timescale Cloud uses, but it is usually not necessary if the worker runs in the same AWS account as your S3 buckets:
```sql
SELECT ai.create_vectorizer(
'document'::regclass,
loading => ai.loading_uri(
column_name => 'uri',
aws_role_arn => 'arn:aws:iam::123456789012:role/S3AccessRole'
),
-- other configuration...
);
```
The role must have appropriate S3 read permissions for the buckets containing your documents.
## Setup for Timescale Cloud
For Timescale Cloud installations, only role-based authentication via `aws_role_arn` is supported.
### Create a role for S3 access
First you need to create a role that Timescale can assume:
```bash
aws iam create-role \
--role-name timescale-vectorizer-s3-access \
--assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::142548018081:role/timescale-pgai-vectorizer"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringLike": {
"sts:ExternalId": "projectId/serviceId"
}
}
}
]
}'
```
Note that you need to replace the `projectId/serviceId` in the trust policy with the actual project and service id of your Timescale Cloud installation. You can find this in the Timescale Cloud console. This is a security measure that prevents the [confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html), which would otherwise allow other Timescale Cloud users to access your buckets if they guessed your role name and accountId.
### Grant permissions to your bucket to the role
```bash
aws iam put-role-policy \
--role-name timescale-vectorizer-s3-access \
--policy-name S3AccessPolicy \
--policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetObject"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::test",
"arn:aws:s3:::test/*"
]
}
]
}'
```
### Get the role ARN
```bash
aws iam get-role --role-name timescale-vectorizer-s3-access --query 'Role.Arn' --output text
```
### Configure it in your ai.loading_uri:
```sql
ai.loading_uri(
column_name => 'uri',
aws_role_arn => 'arn:aws:iam::123456789012:role/timescale-vectorizer-s3-access'
)
```
## Syncing S3 to a Documents Table
If your application treats S3 as the source of truth for documents and therefore doesn't keep track of files in Postgres, you can configure [S3 event notifications](https://docs.aws.amazon.com/AmazonS3/latest/userguide/EventNotifications.html) to keep your document table synchronized with S3 when documents are uploaded, modified, or deleted.
The easiest way to handle S3 changes is to create a new AWS Lambda function that listens to S3 notifications. AWS takes care of calling your function whenever the bucket content changes. The Lambda function can then update the document table in your Postgres instance accordingly.
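For reference, here is a sketch of a minimal documents table that the Lambda example below assumes, together with a vectorizer pointed at it. The table and destination names are illustrative; note that `file_uri` needs a primary key or unique constraint for the `ON CONFLICT` upsert in the Lambda function to work:
```sql
CREATE TABLE s3_documents (
    file_uri TEXT PRIMARY KEY,
    updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

SELECT ai.create_vectorizer(
    's3_documents'::regclass,
    loading => ai.loading_uri(column_name => 'file_uri'),
    embedding => ai.embedding_openai('text-embedding-3-small', 768),
    destination => ai.destination_table('s3_document_embeddings')
);
```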
### 1. Create a new lambda function
Create a new AWS Lambda function. In this example, we are using Python 3.13 as our runtime. You can use the AWS console or the AWS CLI to create the function.

### 2. Configure the trigger
Next, add a trigger to your Lambda function: choose S3 as the trigger type and select the bucket you want to listen to.
For event types, make sure you include all object create and delete events; S3 does not differentiate between creates and updates.

### 3. Implement the lambda function
Your Lambda function then needs to [handle S3 events](https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html) and update the document table accordingly. Here is a simple example of a Lambda function that does this for an `s3_documents` table that has `file_uri` and `updated_at` columns:
```python
import json
import psycopg2
# You might want to load this from env vars or your secret manager instead
CONN_STRING = "postgresql://tsdbadmin:<password>@my-host-name:5432/postgres"
def lambda_handler(event, context):
conn = psycopg2.connect(CONN_STRING)
# Process each record in the event
for record in event['Records']:
# Extract S3 event details
event_name = record['eventName']
bucket_name = record['s3']['bucket']['name']
object_key = record['s3']['object']['key']
s3_uri = f"s3://{bucket_name}/{object_key}"
cur = conn.cursor()
# Determine if this is a create/update or delete event
if event_name.startswith('ObjectCreated'): # An update is also an ObjectCreated event
print(f"Creating or updating document for {s3_uri}")
cur.execute(
"INSERT INTO s3_documents (file_uri) VALUES (%s) ON CONFLICT (file_uri) DO UPDATE SET updated_at = CURRENT_TIMESTAMP",
[s3_uri]
)
elif event_name.startswith('ObjectRemoved'):
print(f"Deleting document for {s3_uri}")
cur.execute("Delete from s3_documents where file_uri=%s;", [s3_uri])
conn.commit()
cur.close()
conn.close()
return {
'statusCode': 200,
'body': json.dumps('S3 event processing completed successfully')
}
```
This lambda function requires the `psycopg2` library to connect to postgres. You can either include it in your [deployment package](https://docs.aws.amazon.com/lambda/latest/dg/python-package.html#python-package-create-dependencies) or use a [custom docker image](https://docs.aws.amazon.com/lambda/latest/dg/images-create.html) to include it.
> [!NOTE]
> If you are working on an ARM Mac installing the right binary version of psycopg2 needs some fiddling with pip. [This guide](https://aws.plainenglish.io/installing-psycopg2-on-aws-lambda-when-developing-on-arm-macs-f1453199f516) might help.
That's it: Save and deploy the function. The lambda function will be triggered whenever a file is created, updated or deleted in the S3 bucket. It will then insert or delete the corresponding document in the `s3_documents` table or update the `updated_at` timestamp if the document already exists. This will in turn inform any configured vectorizer to reprocess the document.
## Common issues and solutions
**1. S3 Access Issues**
If documents from S3 fail to load:
- Verify AWS credentials are correctly configured
- Check that IAM roles have appropriate permissions
- Ensure S3 bucket names and object keys are correct
# Overview
This document describes how to create and run vectorizers from Python.
# Installation
First, install the pgai library:
```bash
pip install pgai
```
Then, you need to install the necessary database tables and functions. All database objects will be created in the `ai` schema. This is done by running the following Python code:
```python
import pgai
pgai.install(DB_URL)
```
# Creating vectorizers from python
To help you create vectorizers from Python, pgai provides the `CreateVectorizer` helper class. This class makes it easy to generate the `create_vectorizer` SQL statement by giving you a Pythonic interface.
It accepts all the options listed in the [SQL API](/docs/vectorizer/api-reference.md) and exposes the `to_sql`
method to generate a SQL query which you can then run through the SQL library of your choice:
```python
from pgai.vectorizer import CreateVectorizer
from pgai.vectorizer.configuration import EmbeddingOpenaiConfig, ChunkingCharacterTextSplitterConfig, FormattingPythonTemplateConfig, LoadingColumnConfig, DestinationTableConfig
vectorizer_statement = CreateVectorizer(
source="blog",
name="blog_content_embedder", # Optional custom name for easier reference
destination=DestinationTableConfig(
destination='blog_embeddings'
),
loading=LoadingColumnConfig(column_name='content'),
embedding=EmbeddingOpenaiConfig(
model='text-embedding-3-small',
dimensions=768
),
chunking=ChunkingCharacterTextSplitterConfig(
chunk_size=800,
chunk_overlap=400,
separator='.',
is_separator_regex=False
),
formatting=FormattingPythonTemplateConfig(template='$title - $chunk')
).to_sql()
```
Then, you can run this statement using the PostgreSQL library of your choice. For example, using the [`psycopg`](https://www.psycopg.org/psycopg3/docs/) library:
```python
import psycopg
with psycopg.connect(conn_string) as conn:
with conn.cursor() as cursor:
cursor.execute(vectorizer_statement)
```
# Running the vectorizer worker
You can then run the vectorizer worker using the CLI tool or the `Worker` class discussed in the [vectorizer worker documentation](/docs/vectorizer/worker.md).
Note that you will need to install pgai with the `vectorizer-worker` extra to run the worker yourself:
```bash
pip install pgai[vectorizer-worker]
```
# Related integrations
- [SQLAlchemy integration](/docs/vectorizer/sqlalchemy-integration.md)
- [Alembic integration](/docs/vectorizer/alembic-integration.md)
# Vectorizer quick start
This page shows you how to create an Ollama-based vectorizer in a self-hosted Postgres instance. We also show how simple it is to do semantic search on the automatically embedded data!
If you prefer working with the OpenAI API instead of self-hosting models, you can jump over to the [openai quick start](/docs/vectorizer/quick-start-openai.md).
## Setup a local development environment
To set up a development environment, use a docker compose file that includes a:
- Postgres deployment image with the TimescaleDB and pgai extensions installed
- pgai vectorizer worker image
- ollama image to host embedding and large language models
On your local machine:
1. **Create the Docker configuration for a local developer environment**
Create the following `compose.yaml` in a new directory:
```yaml
name: pgai
services:
db:
image: timescale/timescaledb-ha:pg17
environment:
POSTGRES_PASSWORD: postgres
ports:
- "5432:5432"
volumes:
- data:/home/postgres/pgdata/data
vectorizer-worker:
image: timescale/pgai-vectorizer-worker:latest
environment:
PGAI_VECTORIZER_WORKER_DB_URL: postgres://postgres:postgres@db:5432/postgres
OLLAMA_HOST: http://ollama:11434
command: [ "--poll-interval", "5s" ]
ollama:
image: ollama/ollama
volumes:
data:
```
1. **Start the database**
```shell
docker compose up -d db
```
1. **Install pgai in your database**
```shell
docker compose run --rm --entrypoint "python -m pgai install -d postgres://postgres:postgres@db:5432/postgres" vectorizer-worker
```
1. **Start the vectorizer worker and ollama**
```shell
docker compose up -d
```
## Create and run a vectorizer
Now we can create and run a vectorizer. A vectorizer is a pgai concept: it processes data in a table and automatically creates embeddings for it.
1. **Connect to the database in your local developer environment**
- Docker: `docker compose exec -it db psql`
- psql: `psql postgres://postgres:postgres@localhost:5432/postgres`
1. **Enable pgai on your database**
```sql
CREATE EXTENSION IF NOT EXISTS ai CASCADE;
```
1. **Create the `blog` table with the following schema**
```sql
CREATE TABLE blog (
id SERIAL PRIMARY KEY,
title TEXT,
authors TEXT,
contents TEXT,
metadata JSONB
);
```
1. **Insert some data into `blog`**
```sql
INSERT INTO blog (title, authors, contents, metadata)
VALUES
('Getting Started with PostgreSQL', 'John Doe', 'PostgreSQL is a powerful, open source object-relational database system...', '{"tags": ["database", "postgresql", "beginner"], "read_time": 5, "published_date": "2024-03-15"}'),
('10 Tips for Effective Blogging', 'Jane Smith, Mike Johnson', 'Blogging can be a great way to share your thoughts and expertise...', '{"tags": ["blogging", "writing", "tips"], "read_time": 8, "published_date": "2024-03-20"}'),
('The Future of Artificial Intelligence', 'Dr. Alan Turing', 'As we look towards the future, artificial intelligence continues to evolve...', '{"tags": ["AI", "technology", "future"], "read_time": 12, "published_date": "2024-04-01"}'),
('Healthy Eating Habits for Busy Professionals', 'Samantha Lee', 'Maintaining a healthy diet can be challenging for busy professionals...', '{"tags": ["health", "nutrition", "lifestyle"], "read_time": 6, "published_date": "2024-04-05"}'),
('Introduction to Cloud Computing', 'Chris Anderson', 'Cloud computing has revolutionized the way businesses operate...', '{"tags": ["cloud", "technology", "business"], "read_time": 10, "published_date": "2024-04-10"}');
```
1. **Create a vectorizer for `blog`**
```sql
SELECT ai.create_vectorizer(
'blog'::regclass,
loading => ai.loading_column('contents'),
embedding => ai.embedding_ollama('nomic-embed-text', 768),
destination => ai.destination_table('blog_contents_embeddings')
);
```
1. **Check the vectorizer worker logs**
```shell
docker compose logs -f vectorizer-worker
```
You see the vectorizer worker pick up the table and process it.
```shell
vectorizer-worker-1 | 2024-10-23 12:56:36 [info ] running vectorizer vectorizer_id=1
```
1. **See the embeddings in action**
Run the following search query to retrieve the embeddings:
```sql
SELECT
chunk,
embedding <=> ai.ollama_embed('nomic-embed-text', 'good food', host => 'http://ollama:11434') as distance
FROM blog_contents_embeddings
ORDER BY distance
LIMIT 10;
```
The results look like:
| chunk | distance |
|-------------------------------------------------------------------------------|--------------------|
| Maintaining a healthy diet can be challenging for busy professionals... | 0.5030059372474176 |
| PostgreSQL is a powerful, open source object-relational database system... | 0.5868937074856113 |
| Blogging can be a great way to share your thoughts and expertise... | 0.5928412342761966 |
| As we look towards the future, artificial intelligence continues to evolve... | 0.6161160890734267 |
| Cloud computing has revolutionized the way businesses operate... | 0.6664001441252841 |
That's it, you're done. You now have a table in Postgres that pgai vectorizer automatically creates
and syncs embeddings for. You can use this vectorizer for semantic search, RAG or any other AI
app you can think of! If you have any questions, reach out to us on [Discord](https://discord.gg/KRdHVXAmkp).
|
docs/vectorizer/quick-start-ollama.md
|
Vectorizer quick start with Ollama
|
# Vectorizer quick start with Ollama
## Go to the [vectorizer quick start](/docs/vectorizer/quick-start.md) to get started with pgai and Ollama.
|
docs/vectorizer/sqlalchemy-integration.md
|
SQLAlchemy Integration with pgai Vectorizer
|
# SQLAlchemy Integration with pgai Vectorizer
When creating vectorizers that use the `ai.destination_table` option, the vectorizer creates a new table in the database to store the vector embeddings. This guide describes how to integrate this new table, and its relationship to your other tables, into your SQLAlchemy models.
The heart of this integration is the `vectorizer_relationship` helper. Think of it as a normal SQLAlchemy [relationship](https://docs.sqlalchemy.org/en/20/orm/basic_relationships.html), but with a preconfigured model instance under the hood.
This allows you to easily query vector embeddings created by pgai using familiar SQLAlchemy patterns.
## Installation
To use the SQLAlchemy integration, install pgai with the SQLAlchemy extras:
```bash
pip install "pgai[sqlalchemy]"
```
## Basic Usage
Here's a basic example of how to use the `vectorizer_relationship`:
```python
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
from pgai.sqlalchemy import vectorizer_relationship


class Base(DeclarativeBase):
    pass


class BlogPost(Base):
    __tablename__ = "blog_posts"
    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str]
    content: Mapped[str]

    # Add vector embeddings for the content field
    content_embeddings = vectorizer_relationship(
        dimensions=768
    )
```
Note: if you work with Alembic's autogenerate functionality for migrations, also check the [Alembic integration guide](alembic-integration.md).
### Semantic Search
You can then perform semantic similarity search on the field using [pgvector-python's](https://github.com/pgvector/pgvector-python) distance functions:
```python
from sqlalchemy import func, text
similar_posts = (
    session.query(BlogPost.content_embeddings)
    .order_by(
        BlogPost.content_embeddings.embedding.cosine_distance(
            func.ai.openai_embed(
                "text-embedding-3-small",
                "search query",
                text("dimensions => 768")
            )
        )
    )
    .limit(5)
    .all()
)
```
Or if you already have the embeddings in your application:
```python
similar_posts = (
    session.query(BlogPost.content_embeddings)
    .order_by(
        BlogPost.content_embeddings.embedding.cosine_distance(
            [3, 1, 2]
        )
    )
    .limit(5)
    .all()
)
```
## Configuration
The `vectorizer_relationship` accepts the following parameters:
- `dimensions` (int): The size of the embedding vector (required)
- `target_schema` (str, optional): Override the schema for the embeddings table. If not provided, inherits from the parent model's schema
- `target_table` (str, optional): Override the table name for embeddings. Default is `{table_name}_embedding_store`
Additional parameters are simply forwarded to the underlying [SQLAlchemy relationship](https://docs.sqlalchemy.org/en/20/orm/relationships.html) so you can configure it as you desire.
The relationship in the other direction is also set automatically. If you want to change its configuration, set the `parent_kwargs` parameter, e.g. `parent_kwargs={"lazy": "joined"}` to configure eager loading.
## Setting up the Vectorizer
After defining your model, you need to create the vectorizer using pgai's SQL functions:
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
loading => ai.loading_column('content'),
embedding => ai.embedding_openai('text-embedding-3-small', 768),
chunking => ai.chunking_recursive_character_text_splitter(
50, -- chunk_size
10 -- chunk_overlap
)
);
```
We recommend adding this to a migration script and running it via Alembic (see our [Alembic integration docs](alembic-integration.md) for more details).
## Querying Embeddings
The `vectorizer_relationship` provides several ways to work with embeddings:
### 1. Direct Access to Embeddings
If you access the class property of your model, the `vectorizer_relationship` provides a SQLAlchemy model that you can query directly:
```python
# Get all embeddings
embeddings = session.query(BlogPost.content_embeddings).all()

# Access embedding properties
for embedding in embeddings:
    print(embedding.embedding)  # The vector embedding
    print(embedding.chunk)      # The text chunk
```
The model will have the primary key fields of the parent model as well as the following fields:
- `chunk` (str): The text chunk that was embedded
- `embedding` (Vector): The vector embedding
- `chunk_seq` (int): The sequence number of the chunk
- `embedding_uuid` (str): The UUID of the embedding
- `parent` (ParentModel): The parent model instance
### 2. Relationship Access
```python
blog_post = session.query(BlogPost).first()
for embedding in blog_post.content_embeddings:
    print(embedding.chunk)
```
Access the original posts through the parent relationship:
```python
for embedding in similar_posts:
    print(embedding.parent.title)
```
### 3. Join Queries
You can combine embedding queries with regular SQL queries using the relationship:
```python
results = (
    session.query(BlogPost, BlogPost.content_embeddings)
    .join(BlogPost.content_embeddings)
    .filter(BlogPost.title.ilike("%search term%"))
    .all()
)

for post, embedding in results:
    print(f"Title: {post.title}")
    print(f"Chunk: {embedding.chunk}")
```
|
docs/vectorizer/quick-start-openai.md
|
Vectorizer quick start with OpenAI
|
# Vectorizer quick start with OpenAI
This page shows you how to create a vectorizer in a self-hosted Postgres instance, then use
the pgai vectorizer worker to create embeddings from data in your database. To finish off, we show how simple it is to do semantic search on the embedded data in one query!
## Setup a local developer environment
The local developer environment is a Docker configuration you use to develop and test pgai, vectorizers, and the vectorizer worker locally. It includes:
- A Postgres image with the TimescaleDB and pgai extensions installed
- The pgai vectorizer worker image
On your local machine:
1. **Create the Docker configuration for a local developer environment**
Copy the following configuration into a file named `compose.yaml`:
```yaml
name: pgai
services:
  db:
    image: timescale/timescaledb-ha:pg16
    environment:
      POSTGRES_PASSWORD: postgres
      OPENAI_API_KEY: <your-api-key>
    ports:
      - "5432:5432"
    volumes:
      - data:/home/postgres/pgdata/data
  vectorizer-worker:
    image: timescale/pgai-vectorizer-worker:latest
    environment:
      PGAI_VECTORIZER_WORKER_DB_URL: postgres://postgres:postgres@db:5432/postgres
      OPENAI_API_KEY: <your-api-key>
volumes:
  data:
```
1. **Tune the developer image for your AI provider**
Replace `<your-api-key>` with your OpenAI API key.
1. **Start the database**
```shell
docker compose up -d db
```
1. **Install pgai in your database**
```shell
docker compose run --rm --entrypoint "python -m pgai install -d postgres://postgres:postgres@db:5432/postgres" vectorizer-worker
```
## Create and run a vectorizer
To create and run a vectorizer, then query the auto-generated embeddings created by the vectorizer:
1. **Connect to the database in your local developer environment**
- Docker: `docker compose exec -it db psql`
- psql: `psql postgres://postgres:postgres@localhost:5432/postgres`
1. **Enable pgai on your database**
```sql
CREATE EXTENSION IF NOT EXISTS ai CASCADE;
```
1. **Create the `blog` table with the following schema**
```sql
CREATE TABLE blog (
id SERIAL PRIMARY KEY,
title TEXT,
authors TEXT,
contents TEXT,
metadata JSONB
);
```
1. **Insert some data into `blog`**
```sql
INSERT INTO blog (title, authors, contents, metadata)
VALUES
('Getting Started with PostgreSQL', 'John Doe', 'PostgreSQL is a powerful, open source object-relational database system...', '{"tags": ["database", "postgresql", "beginner"], "read_time": 5, "published_date": "2024-03-15"}'),
('10 Tips for Effective Blogging', 'Jane Smith, Mike Johnson', 'Blogging can be a great way to share your thoughts and expertise...', '{"tags": ["blogging", "writing", "tips"], "read_time": 8, "published_date": "2024-03-20"}'),
('The Future of Artificial Intelligence', 'Dr. Alan Turing', 'As we look towards the future, artificial intelligence continues to evolve...', '{"tags": ["AI", "technology", "future"], "read_time": 12, "published_date": "2024-04-01"}'),
('Healthy Eating Habits for Busy Professionals', 'Samantha Lee', 'Maintaining a healthy diet can be challenging for busy professionals...', '{"tags": ["health", "nutrition", "lifestyle"], "read_time": 6, "published_date": "2024-04-05"}'),
('Introduction to Cloud Computing', 'Chris Anderson', 'Cloud computing has revolutionized the way businesses operate...', '{"tags": ["cloud", "technology", "business"], "read_time": 10, "published_date": "2024-04-10"}');
```
4. **Create a vectorizer for `blog`**
```sql
SELECT ai.create_vectorizer(
'blog'::regclass,
loading => ai.loading_column('contents'),
embedding => ai.embedding_openai('text-embedding-3-small', 768),
destination => ai.destination_table('blog_contents_embeddings')
);
```
5. **Run the vectorizer worker**
When you install pgai on Timescale Cloud, vectorizers are run automatically using TimescaleDB scheduling.
For self-hosted, you run a pgai vectorizer worker so the vectorizer can process the data in `blog`.
In a new terminal, start the vectorizer worker:
```shell
docker compose up -d vectorizer-worker
```
1. **Check the vectorizer worker logs**
```shell
docker compose logs -f vectorizer-worker
```
You see the vectorizer worker pick up the table and process it.
```shell
vectorizer-worker-1 | 2024-10-23 12:56:36 [info ] running vectorizer vectorizer_id=1
```
1. **See the embeddings in action**
Run the following search query to retrieve the embeddings:
```sql
SELECT
chunk,
embedding <=> ai.openai_embed('text-embedding-3-small', 'good food', dimensions=>768) as distance
FROM blog_contents_embeddings
ORDER BY distance;
```
The results look like:
| chunk | distance |
|------|--------|
| Maintaining a healthy diet can be challenging for busy professionals... | 0.6720892190933228 |
| Blogging can be a great way to share your thoughts and expertise... | 0.7744888961315155 |
| PostgreSQL is a powerful, open source object-relational database system... | 0.815629243850708 |
| Cloud computing has revolutionized the way businesses operate... | 0.8913049921393394 |
| As we look towards the future, artificial intelligence continues to evolve... | 0.9215681301612775 |
That's it, you're done. You now have a table in Postgres that pgai vectorizer automatically creates
and syncs embeddings for. You can use this vectorizer for semantic search, RAG or any other AI
app you can think of! If you have any questions, reach out to us on [Discord](https://discord.gg/KRdHVXAmkp).
|
docs/vectorizer/quick-start-voyage.md
|
Vectorizer quick start with VoyageAI
|
# Vectorizer quick start with VoyageAI
This page shows you how to create a vectorizer and run a semantic search on the automatically embedded data on a self-hosted Postgres instance.
To follow this tutorial you need a Voyage AI API key. You can get one [here](https://www.voyageai.com/).
## Setup a local development environment
To set up a development environment for Voyage AI, create a Docker Compose file that includes:
- The official TimescaleDB Docker image with the pgai, pgvectorscale, and timescaledb extensions included
- The pgai vectorizer worker image
On your local machine:
1. **Create the Docker configuration for a local developer environment**
Create the following `docker-compose.yml` in a new directory:
```yaml
name: pgai
services:
  db:
    image: timescale/timescaledb-ha:pg17
    environment:
      POSTGRES_PASSWORD: postgres
      VOYAGE_API_KEY: your-api-key
    ports:
      - "5432:5432"
    volumes:
      - data:/home/postgres/pgdata/data
  vectorizer-worker:
    image: timescale/pgai-vectorizer-worker:latest
    environment:
      PGAI_VECTORIZER_WORKER_DB_URL: postgres://postgres:postgres@db:5432/postgres
      VOYAGE_API_KEY: your-api-key
    command: [ "--poll-interval", "5s" ]
volumes:
  data:
```
1. **Start the services**
```shell
docker compose up -d
```
1. **Install pgai in your database**
```shell
docker compose run --rm --entrypoint "python -m pgai install -d postgres://postgres:postgres@db:5432/postgres" vectorizer-worker
```
## Create and run a vectorizer
Now you can create and run a vectorizer. A vectorizer is a pgai concept: it processes data in a table and automatically creates embeddings for it.
1. **Connect to the database in your local developer environment**
- Docker: `docker compose exec -it db psql`
- psql: `psql postgres://postgres:postgres@localhost:5432/postgres`
1. **Enable pgai on the database**
```sql
CREATE EXTENSION IF NOT EXISTS ai CASCADE;
```
1. **Create the `blog` table with the following schema**
```sql
CREATE TABLE blog (
id SERIAL PRIMARY KEY,
title TEXT,
authors TEXT,
contents TEXT,
metadata JSONB
);
```
1. **Insert some data into `blog`**
```sql
INSERT INTO blog (title, authors, contents, metadata)
VALUES
('Getting Started with PostgreSQL', 'John Doe', 'PostgreSQL is a powerful, open source object-relational database system...', '{"tags": ["database", "postgresql", "beginner"], "read_time": 5, "published_date": "2024-03-15"}'),
('10 Tips for Effective Blogging', 'Jane Smith, Mike Johnson', 'Blogging can be a great way to share your thoughts and expertise...', '{"tags": ["blogging", "writing", "tips"], "read_time": 8, "published_date": "2024-03-20"}'),
('The Future of Artificial Intelligence', 'Dr. Alan Turing', 'As we look towards the future, artificial intelligence continues to evolve...', '{"tags": ["AI", "technology", "future"], "read_time": 12, "published_date": "2024-04-01"}'),
('Healthy Eating Habits for Busy Professionals', 'Samantha Lee', 'Maintaining a healthy diet can be challenging for busy professionals...', '{"tags": ["health", "nutrition", "lifestyle"], "read_time": 6, "published_date": "2024-04-05"}'),
('Introduction to Cloud Computing', 'Chris Anderson', 'Cloud computing has revolutionized the way businesses operate...', '{"tags": ["cloud", "technology", "business"], "read_time": 10, "published_date": "2024-04-10"}');
```
4. **Create a vectorizer for `blog`**
```sql
SELECT ai.create_vectorizer(
'blog'::regclass,
loading => ai.loading_column('contents'),
embedding => ai.embedding_voyageai(
'voyage-3.5-lite', -- or 'voyage-3.5', 'voyage-3-large', 'voyage-code-3', etc.
1024 -- default dimensions for voyage-3.5-lite
),
destination => ai.destination_table('blog_contents_embeddings')
);
```
**Available Voyage AI Models:**
- `voyage-3.5-lite`: Cost & latency optimized, 1024 dims (1M tokens/request) - **Recommended**
- `voyage-3.5`: General-purpose optimized, 1024 dims (320K tokens/request)
- `voyage-3-large`: Best for general-purpose & multilingual, 1024 dims (120K tokens/request)
- `voyage-code-3`: Specialized for code retrieval, 1024 dims (120K tokens/request)
- `voyage-finance-2`: Finance domain optimized, 1024 dims
- `voyage-law-2`: Legal document optimized, 1024 dims
- `voyage-3-lite`: Older model, 512 dims (120K tokens/request)
**Flexible Dimensions (New!):**
For voyage-3.x models, you can specify `output_dimension` to reduce storage and improve performance:
```sql
-- Use 256 dimensions for 75% storage reduction
SELECT ai.create_vectorizer(
'blog'::regclass,
loading => ai.loading_column('contents'),
embedding => ai.embedding_voyageai(
'voyage-3.5-lite',
1024, -- Schema dimensions
output_dimension => 256 -- Actual embedding dimensions
),
destination => ai.destination_table('blog_embeddings_compact')
);
```
**Dimension Trade-offs:**
- **256 dims**: Fastest search, 75% less storage, minimal accuracy loss
- **512 dims**: Balanced performance and accuracy
- **1024 dims**: Default, best accuracy (recommended for most use cases)
- **2048 dims**: Maximum accuracy for complex tasks
**Quantization (New!):**
Use `output_dtype` to reduce network bandwidth and API costs:
```sql
-- Use int8 quantization for 4x bandwidth reduction
SELECT ai.create_vectorizer(
'blog'::regclass,
loading => ai.loading_column('contents'),
embedding => ai.embedding_voyageai(
'voyage-3.5-lite',
1024,
output_dtype => 'int8' -- Options: float, int8, uint8, binary, ubinary
),
destination => ai.destination_table('blog_embeddings_quantized')
);
```
**Quantization Options:**
- **float**: Default, no compression (4 bytes per dimension)
- **int8**: Integer quantization, 4x smaller transfer (~1 byte per dim)
- **uint8**: Unsigned integer quantization, 4x smaller
- **binary**: Maximum compression, 32x smaller (1 bit per dim)
- **ubinary**: Unsigned binary, 32x smaller
Note: Quantized embeddings are automatically converted to float for storage in PostgreSQL, so you get bandwidth savings but not storage savings.
1. **Check the vectorizer worker logs**
```shell
docker compose logs -f vectorizer-worker
```
You see the vectorizer worker pick up the table and process it.
```shell
vectorizer-worker-1 | 2024-10-23 12:56:36 [info ] running vectorizer vectorizer_id=1
```
1. **See the embeddings in action**
Run the following search query to retrieve the embeddings:
```sql
SELECT
chunk,
embedding <=> ai.voyageai_embed('voyage-3.5-lite', 'good food') as distance
FROM blog_contents_embeddings
ORDER BY distance;
```
The results look like:
| Chunk | Distance |
|--------|-----------|
| Maintaining a healthy diet can be challenging for busy professionals... | 0.6102883386268212 |
| Blogging can be a great way to share your thoughts and expertise... | 0.7245166465928164 |
| PostgreSQL is a powerful, open source object-relational database system... | 0.7789760644464416 |
| As we look towards the future, artificial intelligence continues to evolve... | 0.9036547272308249 |
| Cloud computing has revolutionized the way businesses operate... | 0.9131323552491029 |
## Reranking with Voyage AI
Voyage AI also provides reranking capabilities to improve search result relevance. Reranking takes your initial search results and reorders them based on relevance to your query.
### Using the Reranker
**Basic reranking:**
```sql
SELECT *
FROM ai.voyageai_rerank_simple(
'rerank-2.5',
'What are best practices for healthy eating?',
ARRAY[
'Maintaining a healthy diet can be challenging for busy professionals...',
'Blogging can be a great way to share your thoughts and expertise...',
'PostgreSQL is a powerful, open source object-relational database system...',
'As we look towards the future, artificial intelligence continues to evolve...',
'Cloud computing has revolutionized the way businesses operate...'
],
api_key => 'your-api-key'
)
ORDER BY relevance_score DESC;
```
**Results:**
| index | document | relevance_score |
|-------|----------|-----------------|
| 0 | Maintaining a healthy diet can be challenging... | 0.9156 |
| 1 | Blogging can be a great way to share... | 0.2341 |
| 4 | Cloud computing has revolutionized... | 0.1023 |
| ... | ... | ... |
**Limit results with top_k:**
```sql
SELECT *
FROM ai.voyageai_rerank_simple(
'rerank-2.5-lite',
'healthy eating',
ARRAY['...'],
api_key => 'your-api-key',
top_k => 3
)
ORDER BY relevance_score DESC;
```
### Available Reranker Models
**Current Generation (Recommended):**
| Model | Context Length | Best For |
|-------|---------------|----------|
| `rerank-2.5` | 32K tokens | Quality with multilingual/instruction support |
| `rerank-2.5-lite` | 32K tokens | Latency & quality balance |
**Older Models:**
| Model | Context Length | Notes |
|-------|---------------|-------|
| `rerank-2` | 16K tokens | Legacy |
| `rerank-2-lite` | 8K tokens | Legacy |
| `rerank-1` | 8K tokens | Legacy |
| `rerank-lite-1` | 4K tokens | Legacy |
### Reranker vs Semantic Search
- **Semantic Search** (embeddings): Fast initial retrieval from large datasets
- **Reranking**: Precise relevance scoring for top-k results from semantic search
**Typical workflow:**
1. Use semantic search to get top 100 candidates
2. Use reranker to get the most relevant 5-10 results
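As a rough sketch, the two steps can be combined in a single query. This assumes the `blog_contents_embeddings` view and API key setup from this quick start; check the exact `ai.voyageai_rerank_simple` arguments in your pgai version:
```sql
-- Step 1: semantic search for candidate chunks; Step 2: rerank the candidates
WITH candidates AS (
    SELECT chunk
    FROM blog_contents_embeddings
    ORDER BY embedding <=> ai.voyageai_embed('voyage-3.5-lite', 'healthy eating')
    LIMIT 100
)
SELECT r.document, r.relevance_score
FROM ai.voyageai_rerank_simple(
    'rerank-2.5-lite',
    'healthy eating',
    (SELECT array_agg(chunk) FROM candidates),
    api_key => 'your-api-key',
    top_k => 5
) AS r
ORDER BY r.relevance_score DESC;
```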
---
That's it, you're done. You now have a table in Postgres that pgai vectorizer automatically creates
and syncs embeddings for. You can use this vectorizer for semantic search, RAG or any other AI
app you can think of! If you have any questions, reach out to us on [Discord](https://discord.gg/KRdHVXAmkp).
|
docs/vectorizer/worker.md
|
Running on Timescale Cloud
|
# Running on Timescale Cloud
When you install pgai on **Timescale Cloud**, the vectorizer worker is automatically activated and run on a schedule
so you don't need to do anything; everything just works out of the box.
**How it works**: When you deploy a pgai vectorizer on Timescale Cloud, a scheduled job detects
whether work is to be done for the vectorizers. If there is, the job triggers a cloud function to embed the data.
**Disable the cloud function**: There are some instances in which you might want
to run the vectorizer worker manually and disable the cloud function from
running. You can do this by setting [scheduling => ai.scheduling_none()](/docs/vectorizer/api-reference.md#scheduling-configuration)
in the configuration for your vectorizer. Then you can run the vectorizer
worker manually using the `pgai vectorizer worker` command or any other method
discussed below.
**Manually trigger an execution**: You can also always manually trigger an execution of the vectorizer cloud function by running `select ai.execute_vectorizer(<vectorizer_id>)`.
> [!NOTE]
> Timescale Cloud currently does not support Ollama. To use Ollama on the data in your Timescale Cloud
> service you need to disable the cloud function and run the vectorizer worker yourself.
# Running on self-hosted Postgres or other platforms
When you use pgai vectorizers on a self-hosted Postgres installation or a cloud provider other than Timescale Cloud, you have to run the vectorizer worker yourself. The vectorizer worker connects to your database and processes the vectorizers you have defined. You can run the vectorizer worker:
- Through the pgai CLI tool as `pgai vectorizer worker` (see [instructions below](#running-a-vectorizer-worker-as-a-cli-tool))
- Integrating the vectorizer worker as a background process into your own Python application (see [instructions below](#running-a-vectorizer-worker-in-your-own-application))
- Using the vectorizer worker Docker image (see [instructions below](#running-a-vectorizer-worker-with-docker))
- In a Docker Compose configuration (see [instructions below](#running-a-vectorizer-worker-with-docker-compose))
## Running a vectorizer worker as a CLI tool
**Prerequisites**: [Python (>= 3.10)][python3] and [pip][pip]
1. **Install [pgai](https://pypi.org/project/pgai/) from PyPI**
```shell
pip install "pgai[vectorizer-worker]"
```
The `pgai` command line tool should now be in your `$PATH`.
1. **Create a .env file**
[Configure](#setting-api-keys-through-environment-variables-or-env-file) the API keys for your embedding providers by adding them to a `.env` file. For example, if you are using OpenAI, you can add the following:
```
OPENAI_API_KEY=<your-openai-api-key>
```
Alternatively, you can set the API key through an environment variable directly.
1. **Run the vectorizer worker**
After you [define a vectorizer in your database](/docs/vectorizer/overview.md#define-a-vectorizer), you run
a vectorizer worker to generate and update your embeddings:
```shell
pgai vectorizer worker -d "postgres://user:password@host:5432/database"
```
For more configuration options, see [Advanced configuration options](#advanced-configuration-options) below.
## Running a vectorizer worker in your own application
**Prerequisites**: [Python (>= 3.10)][python3] and [pip][pip]
1. Add the pgai package dependency to your project
```shell
pip install "pgai[vectorizer-worker]"
```
or add `pgai` to the dependencies in your `requirements.txt` file, `pyproject.toml`, or similar configuration file.
2. Add the vectorizer worker to run in the background of your application
```python
import asyncio

from pgai import Worker

worker = Worker(db_url="postgres://user:password@host:5432/database")
task = asyncio.create_task(worker.run())
```
We recommend shutting down the worker gracefully when your application shuts down. It won't cause problems if you skip this step and force kill the worker, but a graceful shutdown ensures the last batch of work is finished, if possible, before stopping the process. You can do this by calling `request_graceful_shutdown` on the worker. Example:
```python
await worker.request_graceful_shutdown()
try:
    result = await asyncio.wait_for(task, timeout=20)
    if result is not None:
        print("Worker shutdown with exception:", result)
    else:
        print("Worker shutdown successfully")
except asyncio.TimeoutError:
    print("Worker did not shutdown in time, it was killed")
```
3. Make sure you add the API keys for your embedding providers to the environment variables when you run
your application.
We recommend using a `.env` file to set the API key and then having your
application load the `.env` file using the `load_dotenv` function from the
`python-dotenv` package.
Alternatively, you can set the API key through an environment variable directly.
4. Run your application
For more configuration options, see [Advanced configuration options](#advanced-configuration-options) below.
## Running a vectorizer worker with Docker
**Prerequisites**: [Docker][docker]
1. **Create a .env file**
[Configure](#setting-api-keys-through-environment-variables-or-env-file) the API keys for your embedding providers by adding them to a `.env` file. For example, if you are using OpenAI, you can add the following:
```
OPENAI_API_KEY=<your-openai-api-key>
```
Alternatively, you can set the API key by passing it as an environment variable in the `docker run` command below.
1. **Run the vectorizer worker**
After you [define a vectorizer in your database](/docs/vectorizer/overview.md#define-a-vectorizer), you run a vectorizer worker to generate and update your embeddings.
```
docker run --env-file=.env timescale/pgai-vectorizer-worker:{tag version} --db-url "postgres://user:password@host:5432/database"
```
For more configuration options, see [Advanced configuration options](#advanced-configuration-options) below.
## Running a vectorizer worker with Docker Compose
Below is an end-to-end, batteries-included Docker Compose configuration which you can use to test pgai vectorizers and the vectorizer worker locally. It includes:
- A local Postgres instance
- An Ollama embedding API service
- A pgai vectorizer worker
On your local machine:
1. **Copy the following configuration into a file named `compose.yaml`**
```yaml
name: pgai
services:
  db:
    image: timescale/timescaledb-ha:pg17
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"
    volumes:
      - data:/var/lib/postgresql/data
  vectorizer-worker:
    image: timescale/pgai-vectorizer-worker:latest
    environment:
      PGAI_VECTORIZER_WORKER_DB_URL: postgres://postgres:postgres@db:5432/postgres
      OLLAMA_HOST: http://ollama:11434
    command: [ "--poll-interval", "5s" ]
  ollama:
    image: ollama/ollama
volumes:
  data:
```
1. **Start the services locally**
```shell
docker compose up -d
```
1. **Connect to your self-hosted database**
- Docker: `docker compose exec -it db psql`
- psql: `psql postgres://postgres:postgres@localhost:5432/postgres`
For more configuration options, see [Advanced configuration options](#advanced-configuration-options) below.
# Configure the vectorizer worker
Most users of the vectorizer worker will need to set the following configuration options:
- [Setting the database connection string](#setting-the-database-connection-string)
- [Setting API keys for embedding providers through environment variables (or .env file)](#setting-api-keys-through-environment-variables-or-env-file)
Other, advanced configuration options are available, see [Advanced configuration options](#advanced-configuration-options) below.
## Setting the database connection string
The vectorizer worker needs to know how to connect to your database. You can do this by setting the `-d` command line argument or the `PGAI_VECTORIZER_WORKER_DB_URL` environment variable.
For example, if you are using a local Postgres database, you can set the database connection string as follows:
```
pgai vectorizer worker -d "postgres://user:password@host:5432/database"
```
## Setting API keys through environment variables (or .env file)
If you are using an embedding provider that requires an API key (which most do),
you can set the API key through an environment variable or a .env file.
For example, if you are using OpenAI, you can set the API key in a .env file as follows:
```
OPENAI_API_KEY=<your-openai-api-key>
```
Or you can set the API key through an environment variable:
```
export OPENAI_API_KEY=<your-openai-api-key>
```
## Advanced configuration options
Most users of the vectorizer worker will be happy with the default configuration for all other options.
But, if you need to, you can control the following:
- The vectorizer ids that are processed by the vectorizer worker ([section below](#control-which-vectorizers-are-processed))
- The time between vectorizer worker runs ([section below](#set-the-time-between-vectorizer-worker-runs))
- The number of asynchronous tasks running in a vectorizer worker ([section below](#set-the-number-of-asynchronous-tasks-running-in-a-vectorizer-worker))
- Whether to run the vectorizer worker once and then exit ([section below](#run-the-vectorizer-worker-once-and-then-exit))
All of these options can be set through the command line arguments, environment variables, or through an argument to the `Worker`
class constructor in the `pgai` Python package.
| Option | Command line argument | Environment variable | `Worker` class constructor argument |
|--------|-----------------------|--------------------|-------------------------------------|
| Control which vectorizers are processed | `-i` / `--vectorizer-id` | `PGAI_VECTORIZER_WORKER_VECTORIZER_IDS` | `vectorizer_ids` |
| Set the time between vectorizer worker runs | `--poll-interval` | `PGAI_VECTORIZER_WORKER_POLL_INTERVAL` | `poll_interval` |
| Set the number of asynchronous tasks running in a vectorizer worker | `-c` / `--concurrency` | `PGAI_VECTORIZER_WORKER_CONCURRENCY` | `concurrency` |
| Run the vectorizer worker once and then exit | `--once` | `PGAI_VECTORIZER_WORKER_ONCE` | `once` |
### Control which vectorizers are processed
If you want to run a vectorizer worker that only processes a subset of the vectorizers in your database,
you can do so by specifying the vectorizer ids you want to process. You can do this by using the
`-i` / `--vectorizer-id` command line argument.
A vectorizer worker can:
- Run all vectorizers:
To run all current and future vectorizers:
- cli: `pgai vectorizer worker`
- python: `worker = Worker(db_url="postgres://user:password@host:5432/database")`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version}`
- Docker Compose: `command: []`
- Run a single vectorizer:
To run the vectorizer with id 42:
- cli: `pgai vectorizer worker -i 42`
- python: `worker = Worker(db_url="postgres://user:password@host:5432/database", vectorizer_ids=[42])`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} -i 42`
- Docker Compose: `command: ["-i", "42"]`
- Run multiple specific vectorizers:
To run the vectorizers with ids `42`, `64`, and `8`:
- cli: `pgai vectorizer worker -i 42 -i 64 -i 8`
- python: `worker = Worker(db_url="postgres://user:password@host:5432/database", vectorizer_ids=[42, 64, 8])`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} -i 42 -i 64 -i 8`
- Docker Compose: `command: ["-i", "42", "-i", "64", "-i", "8"]`
- Run multiple vectorizers in concurrent vectorizer workers:
To run the vectorizers with id `42` and `64` in different vectorizer workers:
1. In a first shell, run:
- cli: `pgai vectorizer worker -i 42`
- python: `worker = Worker(db_url="postgres://user:password@host:5432/database", vectorizer_ids=[42])`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} -i 42`
- Docker Compose: `command: ["-i", "42"]`
1. In another shell, run:
- cli: `pgai vectorizer worker -i 64`
- python: `worker = Worker(db_url="postgres://user:password@host:5432/database", vectorizer_ids=[64])`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} -i 64`
- Docker Compose: `command: ["-i", "64"]`
- Run concurrent vectorizer workers on a single vectorizer
More than one vectorizer worker can efficiently process the same vectorizer id
at the same time. To run the vectorizer with id `42` in different vectorizer workers:
1. In a first shell, run:
- cli: `pgai vectorizer worker -i 42`
- python: `worker = Worker(db_url="postgres://user:password@host:5432/database", vectorizer_ids=[42])`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} -i 42`
- Docker Compose: `command: ["-i", "42"]`
1. In another shell, run:
- cli: `pgai vectorizer worker -i 42`
- python: `worker = Worker(db_url="postgres://user:password@host:5432/database", vectorizer_ids=[42])`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} -i 42`
- Docker Compose: `command: ["-i", "42"]`
You find the vectorizer ids in the `ai.vectorizer` table.
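For example, a query like the following lists the vectorizer ids (a minimal sketch; the exact column set of `ai.vectorizer` may vary slightly between pgai versions):
```sql
-- List the vectorizers defined in this database and their ids
SELECT id, source_schema, source_table
FROM ai.vectorizer
ORDER BY id;
```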
### Set the time between vectorizer worker runs
When you run a vectorizer worker, it loops over the vectorizers defined in your database.
Each vectorizer worker processes its vectorizer queue until the queue is empty. By default, the vectorizer worker sleeps for five minutes, then starts over.
To control the time between vectorizer worker iterations, set an integer number of seconds or a duration string in the `--poll-interval` parameter:
- Run every hour:
- cli: `pgai vectorizer worker --poll-interval=1h`
- python: `worker = Worker(db_url="postgres://user:password@host:5432/database", poll_interval=timedelta(hours=1))`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} --poll-interval=1h`
- Docker Compose: `command: ["--poll-interval", "1h"]`
- Run every 45 minutes:
- cli: `pgai vectorizer worker --poll-interval=45m`
- python: `worker = Worker(db_url="postgres://user:password@host:5432/database", poll_interval=timedelta(minutes=45))`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} --poll-interval=45m`
- Docker Compose: `command: ["--poll-interval", "45m"]`
- Run every 900 seconds:
- cli: `pgai vectorizer worker --poll-interval=900`
- python: `worker = Worker(db_url="postgres://user:password@host:5432/database", poll_interval=timedelta(seconds=900))`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} --poll-interval=900`
- Docker Compose: `command: ["--poll-interval", "900"]`
You can also make the vectorizer worker run only once by setting the `--once` flag. See [Run the vectorizer worker once and then exit](#run-the-vectorizer-worker-once-and-then-exit) for more details.
### Set the number of asynchronous tasks running in a vectorizer worker
Use the `-c` / `--concurrency` option to cause the vectorizer worker to use
multiple asynchronous tasks to process a queue:
- cli: `pgai vectorizer worker -c 3`
- python: `worker = Worker(db_url="postgres://user:password@host:5432/database", concurrency=3)`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} -c 3`
- Docker Compose: `command: ["-c", "3"]`
### Run the vectorizer worker once and then exit
You can run the vectorizer worker once and then exit by using the `--once` flag. This is useful for debugging or if you want to run the vectorizer worker in a cron job.
- cli: `pgai vectorizer worker --once`
- python: `worker = Worker(db_url="postgres://user:password@host:5432/database", once=True)`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} --once`
- Docker Compose: `command: ["--once"]`
[python3]: https://www.python.org/downloads/
[pip]: https://pip.pypa.io/en/stable/installation/#supported-methods
[docker]: https://docs.docker.com/get-docker/
[psql]: https://www.timescale.com/blog/how-to-install-psql-on-mac-ubuntu-debian-windows/
[openai-key]: https://platform.openai.com/api-keys
[voyage-key]: https://docs.voyageai.com/docs/faq#how-do-i-get-the-voyage-api-key
|
docs/vectorizer/overview.md
|
Automate AI embedding with pgai Vectorizer
|
# Automate AI embedding with pgai Vectorizer
Vector embeddings have emerged as a powerful tool for transforming text into
compact, semantically rich representations. This approach unlocks the potential
for more nuanced and context-aware searches, surpassing traditional
keyword-based methods. By leveraging vector embeddings, users can search through
things that have similar meanings but use completely different words.
While modern vector databases like PostgreSQL excel at storing and querying
these embeddings efficiently, the challenge of maintaining synchronization
between embeddings and their source data has typically fallen to developers,
requiring manual workflows and custom solutions.
Enter our innovative SQL-level interface for embedding services. This guide
introduces a groundbreaking approach that automates the embedding process within
the database management system itself. By treating embeddings as a declarative,
DDL-like feature, akin to an index but with the added flexibility of representing only a part of a row's data, we've simplified the entire workflow.
Our system empowers you to:
- Designate any text column for embedding using customizable rules (or, if you are embedding binary documents such as PDFs, you can see our guide for [embedding documents](document-embeddings.md))
- Automatically generate and maintain searchable embedding tables
- Keep embeddings continuously synchronized with source data (asynchronously)
- Utilize a convenient view that seamlessly joins base tables with their embeddings
This page offers a comprehensive overview of Vectorizer features,
demonstrating how it streamlines the process of working with vector embeddings
in your database. To quickly try out embeddings using a pre-built Docker developer environment, see the
[Vectorizer quick start](/docs/vectorizer/quick-start.md). For a more detailed technical specification, see the
[Vectorizer API reference](/docs/vectorizer/api-reference.md).
To make embedding generation performant, and resilient to intermittent LLM
endpoint failures, we use a background worker to perform the embedding
generation. When you create Vectorizers in a [Timescale Cloud](https://tsdb.co/gh-pgai-signup) database, the
worker runs automatically and creates and synchronizes the embeddings in the
background. When using a database on another cloud provider (AWS RDS, Supabase,
etc.) or self-hosted Postgres, you can use the [vectorizer worker](/docs/vectorizer/worker.md) to
process your vectorizers.
This guide walks you through the steps to configure your vectorizer to embed data stored in text columns. If you are embedding binary documents such as PDFs, see our guide for [embedding documents](document-embeddings.md).
Let's explore how the Vectorizer can transform your approach to unstructured, textual data analysis and semantic search:
- [Select an embedding provider and set up your API Keys](#select-an-embedding-provider-and-set-up-your-api-keys)
- [Define a vectorizer](#define-a-vectorizer)
- [Query an embedding](#query-an-embedding)
- [Inject context into vectorizer chunks](#inject-context-into-vectorizer-chunks)
- [Improve query performance on your Vectorizer](#improve-query-performance-on-your-vectorizer)
- [Control vectorizer run time](#control-the-vectorizer-run-time)
- [The embedding storage table](#the-embedding-storage-table)
- [Monitor a vectorizer](#monitor-a-vectorizer)
## Select an embedding provider and set up your API Keys
Vectorizer supports the following vector embedding providers as first-party integrations:
- [Ollama](https://ollama.com/)
- [Voyage AI](https://www.voyageai.com/)
- [OpenAI](https://openai.com/)
Additionally, through the [LiteLLM](https://litellm.ai) provider we support:
- [Cohere](https://cohere.com/)
- [HuggingFace Inference Endpoints](https://endpoints.huggingface.co/)
- [Mistral](https://mistral.ai/)
- [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service)
- [AWS Bedrock](https://aws.amazon.com/bedrock/)
- [Vertex AI](https://cloud.google.com/vertex-ai)
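For example, a LiteLLM-backed provider such as Cohere is configured in the vectorizer's `embedding` parameter with the `ai.embedding_litellm` function. This is a minimal sketch using the `blog` table defined below; the model name and dimensions shown here are assumptions, so check the [embedding configuration](/docs/vectorizer/api-reference.md#embedding-configuration) reference for the exact parameters:
```sql
-- Sketch: embed the blog contents through LiteLLM with a Cohere model
SELECT ai.create_vectorizer(
    'blog'::regclass,
    loading => ai.loading_column('contents'),
    embedding => ai.embedding_litellm('cohere/embed-english-v3.0', 1024),
    destination => ai.destination_table('blog_contents_embeddings')
);
```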
When using an external embedding service, you need to set up your API keys to access
the service. To store several API keys, you give each key a name and reference them
in the `embedding` section of the Vectorizer configuration. The default API key
names match the embedding provider's default name.
The default key names are:
| Provider | Key name |
|-----------|----------------|
| OpenAI | OPENAI_API_KEY |
| Voyage AI | VOYAGE_API_KEY |
Setting up your API keys is done differently depending on whether you are using Vectorizer in
Timescale Cloud or on a self-hosted Postgres server.
- Timescale Cloud
1. In [Timescale Console > Project Settings](https://console.cloud.timescale.com/dashboard/settings), click `AI Model API Keys`.
1. Click `Add AI Model API Keys`, add your key, then click `Add API key`.
Your API key is stored securely in Timescale Cloud, not your database.
- Self-hosted Postgres
Set an environment variable that is the [same as your API key name](/docs/vectorizer/worker.md#install-and-configure-vectorizer-worker).
For example:
```bash
export OPENAI_API_KEY="Your OpenAI API key"
```
## Define a vectorizer
You can configure the system to automatically generate and update embeddings
for a table's data. Let's consider the following example table:
```sql
CREATE TABLE blog(
id SERIAL PRIMARY KEY,
title TEXT,
authors TEXT,
contents TEXT,
metadata JSONB
);
```
To configure the system to embed this data automatically, you can use a SQL
query like this:
```sql
SELECT ai.create_vectorizer(
'blog'::regclass,
name => 'blog_embeddings', -- Optional custom name for easier reference
loading => ai.loading_column('contents'),
embedding => ai.embedding_ollama('nomic-embed-text', 768),
destination => ai.destination_table('blog_contents_embeddings')
);
```
This example uses the `nomic-embed-text` embedding model hosted on a local
Ollama instance. Vectorizer supports other embedding providers, for more details
consult the [embedding configuration](/docs/vectorizer/api-reference.md#embedding-configuration)
section of the vectorizer API reference.
The `loading` parameter specifies the source of the data to generate embeddings from, for example the `contents` column.
Vectorizer supports other loaders, such as `ai.loading_uri`, which loads external documents from local storage or remote buckets such as S3.
For more details, check the [loading configuration](/docs/vectorizer/api-reference.md#loading-configuration) section
of the vectorizer API reference or our [guide for embedding documents](document-embeddings.md).
Additionally, if the `contents` field is lengthy, it is split into multiple chunks,
resulting in several embeddings for a single blog post. Chunking helps
ensure that each embedding is semantically coherent, typically representing a
single thought or concept. A useful mental model is to think of embedding one
paragraph at a time.
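Chunking behaviour is configurable. As a sketch, the recursive character text splitter used elsewhere in these docs can be configured like this (the chunk sizes here are illustrative; see the [Vectorizer API reference](/docs/vectorizer/api-reference.md) for all chunking options):
```sql
-- Sketch: explicit chunking configuration for the blog vectorizer
SELECT ai.create_vectorizer(
    'blog'::regclass,
    loading => ai.loading_column('contents'),
    embedding => ai.embedding_ollama('nomic-embed-text', 768),
    chunking => ai.chunking_recursive_character_text_splitter(
        800,  -- chunk_size
        400   -- chunk_overlap
    ),
    destination => ai.destination_table('blog_contents_embeddings')
);
```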
However, splitting text into chunks can sometimes lead to losing context. To
mitigate this, you can reintroduce context into each chunk. For instance, you
might want to repeat the blog post's title in every chunk. This is easily
achieved using the `formatting` parameter, which allows you to inject row data
into each chunk:
```sql
SELECT ai.create_vectorizer(
'blog'::regclass,
loading => ai.loading_column('contents'),
embedding => ai.embedding_ollama('nomic-embed-text', 768),
formatting => ai.formatting_python_template('$title: $chunk'),
destination => ai.destination_table('blog_contents_embeddings')
);
```
This approach ensures that each chunk retains important contextual information,
improving the quality and relevance of the embeddings.
On Timescale Cloud, vectorizers are run automatically on a schedule using TimescaleDB background jobs running every five minutes. If you are self-hosting, you need to [run the vectorizer worker](/docs/vectorizer/worker.md) yourself so the vectorizer can process the data.
## Query an embedding
The `create_vectorizer` command generates a view with the same name as the
specified destination. This view contains all the embeddings for the blog table.
Note that you'll typically have multiple rows in the view for each blog entry,
as multiple embeddings are usually generated for each source document.
The view includes all columns from the blog table plus the following additional columns:
| Column | Type | Description |
|----------------|--------|-----------------------------------------------------------------|
| embedding_uuid | UUID | Unique identifier for the embedding |
| chunk | TEXT | The text segment that was embedded |
| embedding | VECTOR | The vector representation of the chunk |
| chunk_seq | INT | Sequence number of the chunk within the document, starting at 0 |
To find the closest embeddings to a query, use this canonical SQL query:
```sql
SELECT
chunk,
embedding <=> <query embedding> as distance
FROM blog_contents_embeddings
ORDER BY distance
LIMIT 10;
```
The `<=>` operator calculates the distance between the query embedding and each
row's embedding vector. This is a simple way to do semantic search.
**Tip**: You can use the `ai.ollama_embed` function in our [PostgreSQL extension](/projects/extension/README.md) to generate an embedding for a user-provided query right inside the database.
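For example, a search against the vectorizer defined above might look like this (a sketch that assumes an Ollama instance reachable from your database):
```sql
SELECT
    chunk,
    embedding <=> ai.ollama_embed('nomic-embed-text', 'good food') as distance
FROM blog_contents_embeddings
ORDER BY distance
LIMIT 10;
```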
You can combine this with metadata filters by adding a WHERE clause:
```sql
SELECT
chunk,
embedding <=> <query embedding> as distance
FROM blog_contents_embeddings
WHERE
metadata->>'department' = 'finance'
ORDER BY
distance
LIMIT 10;
```
This approach works with any column from the blog table. For example, to search by author:
```sql
SELECT
chunk,
embedding <=> <query embedding> as distance,
authors
FROM blog_contents_embeddings
WHERE
authors = 'Bulgakov'
ORDER BY
distance
LIMIT 10;
```
<details>
<summary>Click to see SQLAlchemy examples for querying the embeddings</summary>
Given an example SQLAlchemy model:
```python
class Wiki(Base):
    __tablename__ = "wiki"
    id: Mapped[int] = mapped_column(primary_key=True)
    url: Mapped[str]
    title: Mapped[str]
    text: Mapped[str]

    # Add vector embeddings for the text field
    text_embeddings = vectorizer_relationship(
        target_table='wiki_embeddings',
        dimensions=384
    )
```
You can use the text_embeddings relationship to perform semantic search on the embeddings by ordering the results by distance.
```python
async def _find_relevant_chunks(client: ollama.AsyncClient, query: str, limit: int = 2) -> WikiSearchResult:
    response = await client.embed(model="all-minilm", input=query)
    embedding = response.embeddings[0]
    with Session(engine) as session:
        # Query both the Wiki model and its embeddings
        result = session.query(
            Wiki,
            Wiki.text_embeddings.embedding.cosine_distance(embedding).label('distance')
        ).join(Wiki.text_embeddings).order_by(
            'distance'
        ).limit(limit).all()
    return result
```
You can, of course, add any other filters to the query.
</details>
## Inject context into vectorizer chunks
Formatting allows you to inject additional information into each chunk. This is
needed because splitting the text into chunks can lead to losing important
context. For instance, you might want to include the authors and title with each
chunk. This is achieved using Python template strings, which have access to all
columns in the row and a special `$chunk` variable containing the chunk's text.
You may need to reduce the chunk size to ensure the formatted text fits within
token limits. Adjust the `chunk_size` parameter of the text_splitter
accordingly:
```sql
SELECT ai.create_vectorizer(
'blog'::regclass,
loading => ai.loading_column('contents'),
embedding => ai.embedding_ollama('nomic-embed-text', 768),
formatting => ai.formatting_python_template('$title - by $author - $chunk'),
destination => ai.destination_table('blog_contents_embeddings')
);
```
The default format string is simply `$chunk`.
## Improve query performance on your Vectorizer
A vector index on the embedding column improves query performance. On Timescale Cloud, a vectorscale
index is automatically created after 100,000 rows of vector data are present.
This behaviour is configurable; you can also specify other vector index types. The following example uses an HNSW index:
```sql
SELECT ai.create_vectorizer(
'blog'::regclass,
loading => ai.loading_column('contents'),
embedding => ai.embedding_ollama('nomic-embed-text', 768),
formatting => ai.formatting_python_template('$title - by $author - $chunk'),
indexing => ai.indexing_hnsw(min_rows => 100000, opclass => 'vector_l2_ops'),
destination => ai.destination_table('blog_contents_embeddings')
);
```
Note: Indexing relies on a background job that runs periodically, so this
feature will not work if scheduling is disabled (which is the default for self-hosted installations).
## Control the vectorizer run time
When you use Vectorizer on Timescale Cloud, you use scheduling to control the time when vectorizers run.
A scheduled job checks for work to be done and, if so, runs the cloud function to embed the data.
By default, scheduling uses TimescaleDB background jobs running every five minutes.
Once the table is large enough, scheduling also handles index creation on the embedding column.
When you self-host vectorizer, the vectorizer worker uses a polling mechanism to check whether
there is work to be done. Thus, scheduling is not needed and is deactivated by default.
Note: when scheduling is disabled, the index is not created automatically. You need to create it manually.
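As a sketch, a manual HNSW index on the embedding storage table described below might look like this (the table name is taken from the examples on this page, and the operator class is chosen to match the cosine-distance queries above; adjust both to your setup):
```sql
-- Manually create a pgvector HNSW index for cosine-distance search
CREATE INDEX ON blog_contents_embeddings_store
    USING hnsw (embedding vector_cosine_ops);
```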
## The embedding storage table
The view is based on a table storing blog embeddings, named
`blog_contents_embeddings_store`. You can query this table directly for
potentially more efficient queries. The table structure is as follows:
```sql
CREATE TABLE blog_contents_embeddings_store(
embedding_uuid UUID NOT NULL PRIMARY KEY DEFAULT gen_random_uuid(),
id INT, -- primary key referencing the blog table
chunk_seq INT NOT NULL,
chunk TEXT NOT NULL,
embedding VECTOR(768) NOT NULL,
UNIQUE (id, chunk_seq),
FOREIGN KEY (id) REFERENCES public.blog(id) ON DELETE CASCADE
);
```
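For example, you can join the storage table back to the source table yourself instead of going through the view:
```sql
-- Query the storage table directly and join back to the source rows
SELECT b.title, e.chunk, e.embedding
FROM blog_contents_embeddings_store e
JOIN blog b ON b.id = e.id;
```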
## Destination Options for Embeddings
Vectorizer supports two different ways to store your embeddings. You should choose the option to use based on whether:
- You need **multiple embeddings per source row** because of chunking. This is the common case. You should choose table destination.
- You need a **single embedding per source row**. This happens if you are either embedding small text fragments (e.g. a single sentence) or if you have already chunked the document and the source table contains the chunks. In this case, you should choose a column destination.
### 1. Table Destination (Default)
The default approach creates a separate table to store embeddings and a view that joins with the source table:
```sql
SELECT ai.create_vectorizer(
'blog'::regclass,
name => 'blog_vectorizer', -- Optional custom name for easier reference
loading => ai.loading_column('contents'),
embedding => ai.embedding_ollama('nomic-embed-text', 768),
destination => ai.destination_table(
target_schema => 'public',
target_table => 'blog_embeddings_store',
view_name => 'blog_embeddings'
)
);
```
**When to use table destination:**
- When you need multiple embeddings per row (chunking)
- For large text fields that need to be split
- You are vectorizing documents (which typically require chunking)
### 2. Column Destination
For simpler cases, you can add an embedding column directly to the source table. This can only be used when the vectorizer does not perform chunking because it requires a one-to-one relationship between the source data and the embedding. This is useful in cases where you know the source text is short (as is common if the chunking has already been done upstream in your data pipeline).
The workflow is that your application inserts data into the table with a NULL in the embedding column. The vectorizer will then read the row, generate the embedding and update the row with the correct value in the embedding column.
```sql
SELECT ai.create_vectorizer(
'product_descriptions'::regclass,
name => 'product_descriptions_vectorizer',
loading => ai.loading_column('description'),
embedding => ai.embedding_openai('text-embedding-3-small', 768),
chunking => ai.chunking_none(), -- Required for column destination
destination => ai.destination_column('description_embedding')
);
```
**When to use column destination:**
- When you need exactly one embedding per row
- For shorter text that doesn't require chunking
- When your application already takes care of the chunking before inserting into the database
- When you want to avoid creating additional database objects
**Note:** Column destination requires chunking to be set to `ai.chunking_none()` since it can only store one embedding per row.
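With a column destination, semantic search runs directly against the source table. A minimal sketch using the `product_descriptions` example above:
```sql
SELECT
    description,
    description_embedding <=> <query embedding> as distance
FROM product_descriptions
WHERE description_embedding IS NOT NULL
ORDER BY distance
LIMIT 10;
```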
## Monitor a vectorizer
Since embeddings are created asynchronously, a delay may occur before they
become available. Use the `vectorizer_status` view to monitor the vectorizer's
status:
```sql
SELECT * FROM ai.vectorizer_status;
```
Sample output:
| id | source_table | target_table | view | pending_items |
|----|--------------|--------------------------------------|---------------------------------|---------------|
| 1 | public.blog | public.blog_contents_embeddings_store | public.blog_contents_embeddings | 1 |
The `pending_items` column indicates the number of items still awaiting embedding creation.
If the number of pending items exceeds 10,000, we return the maximum value of a bigint (`9223372036854775807`)
instead of exhaustively counting the items. This is done for performance.
Alternatively, you can call the `ai.vectorizer_queue_pending` function to get the count of pending items for a single vectorizer. The `exact_count` parameter defaults to `false`; passing `true` exhaustively counts the exact number of pending items.
```sql
select ai.vectorizer_queue_pending(1, exact_count=>true);
```
|
docs/vectorizer/alembic-integration.md
|
Alembic integration
|
# Alembic integration
Alembic is a database migration tool that allows you to manage your database schema. This document describes how to use Alembic to manage your vectorizer definitions, since those should be considered part of your database schema.
We first cover how to create vectorizers using the Alembic operations. Then, we cover how to exclude the tables created and managed by pgai Vectorizer from the autogenerate process.
## Installation
To make use of the alembic operations you need to install pgai with the sqlalchemy extras:
```bash
pip install pgai[sqlalchemy]
```
## Creating vectorizers
pgai provides native Alembic operations for managing vectorizers. For them to work, you need to run `register_operations` in your `env.py` file, which registers the pgai operations under the global op context:
```python
from pgai.alembic import register_operations
register_operations()
```
Then you can use the `create_vectorizer` operation to create a vectorizer for your model, as well as the `drop_vectorizer` operation to remove it.
```python
from alembic import op
from pgai.vectorizer.configuration import (
    EmbeddingOpenaiConfig,
    ChunkingCharacterTextSplitterConfig,
    FormattingPythonTemplateConfig,
    LoadingColumnConfig,
    DestinationTableConfig
)


def upgrade() -> None:
    op.create_vectorizer(
        source="blog",
        name="blog_content_embedder",  # Optional custom name for easier reference
        destination=DestinationTableConfig(
            destination='blog_embeddings'
        ),
        loading=LoadingColumnConfig(column_name='content'),
        embedding=EmbeddingOpenaiConfig(
            model='text-embedding-3-small',
            dimensions=768
        ),
        chunking=ChunkingCharacterTextSplitterConfig(
            chunk_size=800,
            chunk_overlap=400,
            separator='.',
            is_separator_regex=False
        ),
        formatting=FormattingPythonTemplateConfig(template='$title - $chunk')
    )


def downgrade() -> None:
    op.drop_vectorizer(name="blog_content_embedder", drop_all=True)
```
The `create_vectorizer` operation supports all configuration options available in the [SQL API](/docs/vectorizer/api-reference.md).
## Excluding tables created by pgai Vectorizer from Alembic autogeneration
When you create a vectorizer, pgai automatically creates a table to store the vector embeddings. This table is managed by pgai and should not be created by Alembic migrations. So, if you are using Alembic's autogenerate functionality to generate migrations, you will need to exclude these tables from the autogenerate process.
If you are using SQLAlchemy, the `vectorizer_relationship` generates a new SQLAlchemy model that is available under the attribute you specify, and those models need to be excluded from the autogenerate process. When those models are created, they are added to a set in your metadata called `pgai_managed_tables`, and you can exclude them by adding the following to your `env.py`:
```python
def include_object(object, name, type_, reflected, compare_to):
    if type_ == "table" and name in target_metadata.info.get("pgai_managed_tables", set()):
        return False
    return True


context.configure(
    connection=connection,
    target_metadata=target_metadata,
    include_object=include_object
)
```
This should now prevent alembic from generating tables for these models when you run `alembic revision --autogenerate`.
|