# Privileges
The ai.grant_ai_usage function is an important security and access control tool
in the pgai extension. Its primary purpose is to grant the necessary permissions
for a specified user or role to use the pgai functionality effectively and
safely. This function simplifies the process of setting up appropriate access
rights, ensuring that users can interact with the AI features without
compromising database security.
Purpose:
1. Grant appropriate permissions to a specified user or role for using pgai features.
2. Provide a streamlined way to manage access control for the AI functionality.
3. Allow for different levels of access (regular usage vs. administrative access).
Usage:
```sql
SELECT ai.grant_ai_usage(to_user name, admin bool DEFAULT false)
```
Parameters:
1. to_user: The name of the user or role to whom permissions will be granted.
2. admin: A boolean flag indicating whether to grant administrative privileges (default is false).
The function doesn't return a value, but it performs several grant operations.
Key actions performed by ai.grant_ai_usage:
1. Grants permissions on the 'ai' schema.
2. Grants permissions on tables, sequences, and views within the 'ai' schema.
3. Grants execute permissions on functions and procedures in the 'ai' schema.
4. If admin is true, grants more extensive permissions, including the ability to grant permissions to others.
Examples:
1. Granting regular usage permissions:
```sql
SELECT ai.grant_ai_usage('analyst_role');
```
This grants basic usage permissions to the 'analyst_role'.
2. Granting administrative permissions:
```sql
SELECT ai.grant_ai_usage('ai_admin_role', admin => true);
```
This grants administrative permissions to the 'ai_admin_role'.
Key points about ai.grant_ai_usage:
1. Regular usage (admin = false):
- Grants USAGE and CREATE on the 'ai' schema.
- Grants SELECT, INSERT, UPDATE, DELETE on tables.
- Grants USAGE, SELECT, UPDATE on sequences.
- Grants SELECT on views.
- Grants EXECUTE on functions and procedures.
2. Administrative usage (admin = true):
- Grants ALL PRIVILEGES on the 'ai' schema, tables, sequences, views, functions, and procedures.
- Includes WITH GRANT OPTION, allowing the admin to grant permissions to others.
3. The function is designed to be idempotent, meaning it can be run multiple times without causing issues.
4. It automatically handles the different types of database objects (tables, views, functions, etc.) without requiring separate grant statements for each.
5. The function is security definer, meaning it runs with the privileges of its owner (typically a superuser), allowing it to grant permissions that the calling user might not directly have.
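Because these are ordinary PostgreSQL privileges, you can verify the result with the built-in privilege-inspection functions. A minimal sketch, assuming a role named `analyst_role` and using the `ai.vectorizer_errors` table (mentioned elsewhere in these docs) as the object to check:
```sql
-- Confirm what analyst_role can do after SELECT ai.grant_ai_usage('analyst_role');
SELECT has_schema_privilege('analyst_role', 'ai', 'USAGE') AS can_use_schema,
       has_table_privilege('analyst_role', 'ai.vectorizer_errors', 'SELECT') AS can_read_errors;
```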
Use cases:
1. Setting up a new analyst:
```sql
CREATE ROLE new_analyst;
SELECT ai.grant_ai_usage('new_analyst');
```
2. Promoting a user to an AI administrator:
```sql
SELECT ai.grant_ai_usage('experienced_user', admin => true);
```
3. Ensuring all members of a role have appropriate access:
```sql
SELECT ai.grant_ai_usage('data_science_team');
```
4. Granting temporary admin access for maintenance:
```sql
CREATE ROLE temp_admin;
SELECT ai.grant_ai_usage('temp_admin', admin => true);
-- After maintenance
DROP ROLE temp_admin;
```
Best practices:
1. Use this function instead of manually granting permissions to ensure consistency and completeness of access rights.
2. Be cautious with granting admin privileges, as this gives extensive control over the pgai functionality.
3. Regularly review who has been granted access, especially admin access, as part of security audits.
4. Consider creating roles for different levels of pgai usage and granting permissions to these roles rather than individual users.
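Following the last point above, one possible role-based setup looks like this (the role and user names are illustrative):
```sql
-- Create roles for the two access levels and grant pgai usage to the roles, not to individuals
CREATE ROLE pgai_users;
CREATE ROLE pgai_admins;
SELECT ai.grant_ai_usage('pgai_users');
SELECT ai.grant_ai_usage('pgai_admins', admin => true);
-- Individual users then inherit access through role membership
GRANT pgai_users TO alice;
GRANT pgai_admins TO bob;
```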
The ai.grant_ai_usage function is a crucial tool for managing access to pgai
features. It ensures that users have the permissions they need to work with AI
functionality in the database while maintaining security and control over these
powerful features. By providing a simple interface for granting permissions, it
helps database administrators manage access effectively and reduce the risk of
misconfiguration.
# Handling API keys
A number of pgai functions call out to third-party APIs which require an API
key for authentication. There are a few ways to pass your API key to these
functions. This document lays out the different options and provides
recommendations for which option to use.
API keys are sensitive values, so we provide several ways to specify them so
they can be provided securely:
**Recommended ways (most secure)**
1. If you are using Timescale Cloud, we recommend that you [configure an API key in Timescale Cloud](#configure-an-api-key-in-timescale-cloud).
2. If you are self-hosting, you can [configure an API key through an environment variable available to the PostgreSQL process](#configure-an-api-key-through-an-environment-variable-available-to-the-postgres-process-self-hosted).
**Other ways**
1. You can [configure the API key for an interactive psql session](#configure-an-api-key-for-an-interactive-psql-session).
2. You can [provide the API key directly with the `api_key` function parameter](#provide-the-api-key-directly-with-the-api_key-function-parameter).
When you call a pgai function without setting `api_key` or `api_key_name`, pgai
attempts to resolve the secret by using a default value for `api_key_name`. The
default is provider-dependent:
| Provider | Default `api_key_name` |
|-----------|------------------------|
| Anthropic | ANTHROPIC_API_KEY |
| Cohere | COHERE_API_KEY |
| OpenAI | OPENAI_API_KEY |
| VoyageAI | VOYAGE_API_KEY |
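For example, if your OpenAI key is available under its default name (as an environment variable or a Timescale Cloud secret), you can call a pgai function with no key-related arguments at all:
```sql
-- pgai resolves the key via the default api_key_name, OPENAI_API_KEY
SELECT * FROM ai.openai_list_models();
```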
## Configure an API key in Timescale Cloud
1. Navigate to the "AI Model API Keys" tab under "Project settings"

1. Add a new AI Model API key, providing the name and API key

1. Use this API key name in calls to pgai functions, like so:
```sql
SELECT * FROM ai.openai_list_models(api_key_name => 'MY_API_KEY');
```
## Configure an API key through an environment variable available to the Postgres process (self-hosted)
If you're running PostgreSQL yourself, or have the ability to configure the PostgreSQL runtime, you can set an environment variable for the PostgreSQL process.
### Configure the environment variable
How you configure the environment variable depends on how you are running your database. Common examples are Systemd, Docker, and Docker Compose.
#### Configure the environment variable in Systemd unit
In the `[Service]` stanza of the Systemd unit, you add:
```
Environment=MY_API_KEY=<api key here>
```
#### Configure the environment variable with Docker
You set the environment variable with the `-e` parameter to `docker run`:
```sh
docker run -e MY_API_KEY=<api key here> ... timescale/timescaledb-ha:pg17
```
#### Configure the environment variable with Docker Compose
You set the environment variable in the `environment` parameter of your
database:
```yaml
name: pgai
services:
  db:
    image: timescale/timescaledb-ha:pg17
    environment:
      MY_API_KEY: <api key here>
    ...
```
### Use the name of the environment variable in calls to pgai
```sql
SELECT * FROM ai.openai_list_models(api_key_name => 'MY_API_KEY');
```
## Configure an API key for an interactive psql session
To use a [session level parameter when connecting to your database with psql](https://www.postgresql.org/docs/current/config-setting.html#CONFIG-SETTING-SHELL)
to run your AI queries:
1. Set the api key as an environment variable in your shell:
```bash
export MY_API_KEY="this-is-my-super-secret-api-key-dont-tell"
```
1. Use the session-level parameter when you connect to your database:
```bash
PGOPTIONS="-c ai.my_api_key=$MY_API_KEY" psql -d "postgres://<username>:<password>@<host>:<port>/<database-name>"
```
1. Run your AI query:
```sql
SELECT * FROM ai.voyageai_embed('voyage-3-lite', 'sample text to embed', api_key_name => 'my_api_key');
```
## Provide the API key directly with the `api_key` function parameter
Note: passing the `api_key` parameter to a pgai function as text results in the
value being printed into the PostgreSQL logs. This could expose your API key.
Instead, we recommend passing the `api_key` parameter as a bind variable:
1. Set the API key as an environment variable in your shell:
```bash
export MY_API_KEY="this-is-my-super-secret-api-key-dont-tell"
```
2. Connect to your database and set your api key as a [psql variable](https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-VARIABLES):
```bash
psql -d "postgres://<username>:<password>@<host>:<port>/<database-name>" -v my_api_key=$MY_API_KEY
```
Your API key is now available as a psql variable named `my_api_key` in your psql session.
You can also log into the database, then set `my_api_key` using the `\getenv` [metacommand](https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-META-COMMAND-GETENV):
```sql
\getenv my_api_key MY_API_KEY
```
3. Pass your API key to your parameterized query:
```sql
SELECT *
FROM ai.openai_list_models(api_key=>$1)
ORDER BY created DESC
\bind :my_api_key
\g
```
# Chunk text with SQL functions
The `ai.chunk_text` and `ai.chunk_text_recursively` functions allow you to split text into smaller chunks.
## Example usage
Given a table like this
```sql
create table blog
( id int not null primary key
, title text
, body text
);
```
You can chunk the text in the `body` column like this
```sql
select
b.id
, b.title
, c.seq
, c.chunk
from blog b
cross join lateral ai.chunk_text(b.body) c
order by b.id, c.seq
;
```
## chunk_text
Splits text into chunks using a separator.
This uses the [CharacterTextSplitter](https://python.langchain.com/api_reference/text_splitters/character/langchain_text_splitters.character.CharacterTextSplitter.html) from the `langchain_text_splitters` Python package.
| Name | Type | Default | Required | Description |
|---------------------|-------|----------|----------|-----------------------------------------------------------|
| input | text | - | ✔ | The text to split into chunks |
| chunk_size | int | *4000 | ✖ | The target size of a chunk in characters |
| chunk_overlap | int | *200 | ✖ | The target amount of overlapping characters in each chunk |
| separator | text | *E'\n\n' | ✖ | The text to split on |
| is_separator_regex | text | false | ✖ | `true` if the separator represents a regular expression |
*defaulted by the underlying Python implementation rather than in SQL
```sql
select *
from ai.chunk_text
($$if two witches watch two watches, which witch watches which watch?$$
, separator=>' '
, chunk_size=>10
, chunk_overlap=>0
);
```
The query above will return the results below:
```
seq | chunk
-----+-----------
0 | if two
1 | witches
2 | watch two
3 | watches,
4 | which
5 | witch
6 | watches
7 | which
8 | watch?
(9 rows)
```
## chunk_text_recursively
Recursively splits text into chunks using multiple separators in sequence.
This uses the [RecursiveCharacterTextSplitter](https://python.langchain.com/api_reference/text_splitters/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) from the `langchain_text_splitters` Python package.
| Name | Type | Default | Required | Description |
|--------------------|--------|---------------------------------|----------|-----------------------------------------------------------|
| input | text | - | ✔ | The text to split into chunks |
| chunk_size | int | *4000 | ✖ | The target size of a chunk in characters |
| chunk_overlap | int | *200 | ✖ | The target amount of overlapping characters in each chunk |
| separators | text[] | *array[E'\n\n', E'\n', ' ', ''] | ✖ | An array of texts to split on |
| is_separator_regex | text | false | ✖ | `true` if the separators represent regular expressions |
*defaulted by the underlying Python implementation rather than in SQL
```sql
select *
from ai.chunk_text_recursively
($$if two witches watch two watches, which witch watches which watch?$$
, separators=>array[' ', '.', '?']
, chunk_size=>2
, chunk_overlap=>0
);
```
The query above will return the results below:
```
seq | chunk
-----+-----------
0 | if
1 | two
2 | witches
3 | watch
4 | two
5 | watches,
6 | which
7 | witch
8 | watches
9 | which
10 | watch
11 | ?
(12 rows)
```
# Text to SQL
> **Disclaimer:** this feature is in early preview and not yet supported in any way for production.
## Overview
pgai Text to SQL makes it simple for developers to use data from PostgreSQL tables as context for LLM applications.
Whether you’re building a customer-facing chatbot or an internal tool, text to SQL can help with the following use cases:
* **Structured Retrieval** – Answer questions using only SQL tables. Example: “What is the median MRR of our customers?”
* **Structured + Unstructured Retrieval** – Combine SQL with vector search for RAG apps. Example: “Show me accounts that mentioned *security* in sales calls (unstructured), but don’t have 2FA enabled (structured).”
* **AI Agents** – Let AI agents retrieve structured and unstructured data. Example: A stock analyst agent that looks up time-series data (structured) and searches earnings reports (unstructured).
## API keys
Add your API key for your [LLM/embedding providers](#supported-providers):
1. Visit [Timescale Console](https://console.cloud.timescale.com/)
2. Click on your service
3. Click on AI in the top menu
4. Click on Vectorizers in the left menu
5. Click `Add an API key`
6. For a list of all the providers we support and their API keys, see [Supported providers](#supported-providers).
## Fork your database
We recommend creating a fork of your database so you can safely experiment with text to SQL without impacting your production databases.
1. Visit Timescale Console
2. Click on your service
3. In the top right of your screen, click the three dot menu
4. Click `Fork service`
5. Email [support@timescale.com](mailto:support@timescale.com) your service ID and we will enable text to SQL functionality for your service
## Install or update the extension
Next, install the pgai extension. If you already have it installed, update it to the latest version. Note: you’ll need a specific version that has the text to SQL capabilities.
```sql
/*
* If it's your first time installing pgai
*/
select set_config('ai.enable_feature_flag_text_to_sql', 'true', false);
create extension if not exists ai cascade;
/*
* If you already have pgai, update it
*/
select set_config('ai.enable_feature_flag_text_to_sql', 'true', false);
alter extension ai update;
```
## Create the semantic catalog
This function creates [vectorizers](/docs/vectorizer/overview.md) for the tables that pgai uses to store descriptions for tables, columns, and queries. These vectorizers will automatically generate embeddings for the descriptions and update them if you edit the descriptions.
For example:
```sql
-- OpenAI embeddings and OpenAI o1 completions
select ai.create_semantic_catalog(
embedding=>ai.embedding_openai('text-embedding-3-small', 768)
, text_to_sql=>ai.text_to_sql_openai(model=>'o1')
);
-- OpenAI embeddings + Claude 3.5 Sonnet completions
select ai.create_semantic_catalog(
embedding=>ai.embedding_openai('text-embedding-3-small', 768)
, text_to_sql=>ai.text_to_sql_anthropic(model=>'claude-3-5-sonnet-latest')
);
-- Voyage embeddings + Claude 3.5 Sonnet completions
select ai.create_semantic_catalog(
embedding=>ai.embedding_voyageai('voyage-3-lite', 512)
, text_to_sql=>ai.text_to_sql_anthropic(model=>'claude-3-5-sonnet-latest')
);
-- Ollama embeddings + OpenAI o1 completions
select ai.create_semantic_catalog
( embedding=>ai.embedding_ollama
( 'nomic-embed-text'
, 768
, base_url=>'http://host.docker.internal:11434'
)
, text_to_sql=>ai.text_to_sql_openai(model=>'o1')
);
```
The embedding parameter is the only required parameter. It defines how we will embed the descriptions of database objects and SQL examples. The semantic catalog supports the same embedding configurations supported by Vectorizer, so you may use OpenAI, Ollama, VoyageAI, etc. The optional arguments differ between LLM providers, but you must specify a model and dimensions regardless.
The `text_to_sql` parameter establishes a default configuration for the `text_to_sql` function. The main purpose here is to define the model to be used for generating SQL. Optional arguments differ between the LLM providers. For a list of the providers we support, see [Supported providers](#supported-providers).
You can mix and match. For example, you can use Ollama for embeddings and Anthropic for SQL generation.
## Write descriptions
Write descriptions for tables, columns, and queries. pgai will give these descriptions to the LLM to help it create a more accurate query.
```sql
/*
* Tables or views
*/
/* Upsert a description */
select ai.set_description('orders', 'The orders table stores details about individual orders....');
/* Delete a description */
select ai.delete_description('orders');
/*
* Columns
*/
/* Upsert a description */
select ai.set_column_description('orders', 'dt', 'The timestamp at which the order was submitted. This column cannot be null.');
/* Delete a description */
select ai.delete_column_description('orders', 'dt');
/* Delete all column descriptions from a table */
select ai.delete_column_descriptions('orders');
/*
* Functions
*/
/* Upsert a description */
select ai.set_function_description('myfunc'::regproc, 'This function returns all the orders with a "pending" status');
/* Delete a description */
select ai.delete_function_description('myfunc'::regproc);
/*
* SQL queries
*/
/* Adding a description */
select ai.add_sql_example
( $$select date_trunc('day', o.dt) as day, avg(o.cost) as avg_cost from orders o where '2024-11-01'::date <= o.dt and o.dt < '2024-12-01'::date group by 1 order by 1$$
, 'This query calculates the daily average cost of orders in November. The orders table is filtered by the dt column to....'
);
/* Updating a description */
update ai.semantic_catalog_sql
set sql = 'new example'
where sql = 'old example';
/* Deleting a description */
delete from ai.semantic_catalog_sql
where sql = 'old example';
```
Wait for the descriptions to be embedded. You can monitor the queue with this query. These numbers should go to zero.
```sql
select
ai.vectorizer_queue_pending(k.obj_vectorizer_id) as obj_pending,
ai.vectorizer_queue_pending(k.sql_vectorizer_id) as sql_pending
from ai.semantic_catalog k
where k.catalog_name = 'default';
```
Want to generate your descriptions using LLMs so you don’t have to write them manually?
```sql
-- Generate a description for the `orders` table and print it
select ai.generate_description('orders');
-- Generate a description for the `orders` table, print it, and save it to the semantic catalog
-- If a description already exists, it will not be overwritten
select ai.generate_description('orders', save => true);
-- Will save and overwrite the existing description
select ai.generate_description('orders', save => true, overwrite => true);
-- Generate and save descriptions for each column in the `orders` table
select ai.generate_column_descriptions('orders', save => true);
-- Generate and save a description for your `myfunc` function
select ai.generate_function_description('myfunc'::regproc, save => true);
```
## Use the `text_to_sql` function
Now you’re ready to use the `text_to_sql` function.
```sql
select ai.text_to_sql('show me the average order cost by day in November');
/*
query
-------------------------------
select
date_trunc('day', o.dt) as day
, avg(o.cost) as avg_cost
from orders o
where '2024-11-01'::date <= o.dt
and o.dt < '2024-12-01'::date
group by 1
order by 1
(1 row)
*/
```
Turn on debug messages to see what is happening behind the scenes (the prompts sent and the LLM’s responses).
```sql
set client_min_messages to 'DEBUG1';
```
## Supported providers
The `text_to_sql` function uses the Completions API, and searching the semantic catalog uses the Embeddings API. You can use different providers for each component.
| Provider | Completions | Embeddings | API key |
| :---- | :---- | :---- | :---- |
| OpenAI | ✅ | ✅ | `OPENAI_API_KEY` |
| Anthropic | ✅ | | `ANTHROPIC_API_KEY` |
| Ollama | ✅ | ✅ | n/a |
| VoyageAI | | ✅ | `VOYAGE_API_KEY` |
| Cohere | ✅ | | `COHERE_API_KEY` |
## Check out the demo
Check out [the demo](/examples/text_to_sql/README.md) of this feature.
## FAQ
* As a developer, how would I track/log what questions my end users are asking?
* This is something you’d have to handle at the application level.
* As a developer, how can end users give thumbs up/down feedback to my application, and how would that feedback go back to pgai for improvement?
* Coming soon.
* Can pgai handle questions that require joining across multiple databases?
* You could do this with [foreign data wrappers](https://www.postgresql.org/docs/current/postgres-fdw.html).
* How does security and privacy work? Who can see what data?
* Timescale is trusted by thousands of customers to run demanding production-level software with security at the heart of everything. See our [Security page](https://www.timescale.com/security) for full details on how we keep your data secure.
* Schema information and descriptions will go to the LLM provider, and just descriptions will go to the embedding provider. Depending on what provider you’re using, please refer to their privacy policies.
* As a developer, how do I make sure that my end users can only access data they’re authorized to see?
* You could use Postgres [Row Level Security (RLS)](https://www.postgresql.org/docs/current/ddl-rowsecurity.html) to accomplish this. See the example code [here](https://gist.github.com/rahilsondhi/eaafa12a1543d75c0993c094e286beb8)
* How does pricing work?
* pgai is a free Postgres extension.
* If you want to use Timescale Cloud to host your Postgres database, please see [timescale.com/pricing](http://timescale.com/pricing).
* You will incur charges by the LLM/embedding providers. Please see their Pricing pages.
* When pgai generates queries, do they automatically get executed?
* Not at the moment. It’s the developer’s responsibility to take the SQL query generated by pgai and execute it.
* What if my end user asks for 10m rows?
* You could run an `explain` on the generated query to see if it would be performant or not.
* What is the latency like?
* At minimum, it’s the latency of the LLM provider, plus pgai’s SQL query to search the schema for relevant tables.
* If the LLM needs additional information, that will be more LLM round trips and schema searches.
* If the LLM hasn’t figured out a solution within 10 interactions, it will abort.
* How many tables/columns can this support?
* There isn’t a known, hard limit. We search your database for tables relevant to the end user’s question and forward those schemas to the LLM.
* Can I self-host this? Or is it available on Timescale Cloud only?
* Yes, you can self-host it, but you’d have to run the [vectorizer workers](https://github.com/timescale/pgai/blob/main/docs/vectorizer/worker.md) as well.
* If you’re using Timescale Cloud, we take care of the vectorizer workers for you.
# Vectorizer quick start with VoyageAI
This page shows you how to create a vectorizer and run a semantic search on the automatically embedded data on a self-hosted Postgres instance.
To follow this tutorial you need to have a Voyage AI account API key. You can get one [here](https://www.voyageai.com/).
## Setup a local development environment
To set up a development environment for Voyage AI, create a docker-compose file that includes:
- The official TimescaleDB docker image with pgai, pgvectorscale and timescaledb included
- The pgai vectorizer worker image
On your local machine:
1. **Create the Docker configuration for a local developer environment**
Create the following `docker-compose.yml` in a new directory:
```yaml
name: pgai
services:
  db:
    image: timescale/timescaledb-ha:pg17
    environment:
      POSTGRES_PASSWORD: postgres
      VOYAGE_API_KEY: your-api-key
    ports:
      - "5432:5432"
    volumes:
      - data:/home/postgres/pgdata/data
  vectorizer-worker:
    image: timescale/pgai-vectorizer-worker:latest
    environment:
      PGAI_VECTORIZER_WORKER_DB_URL: postgres://postgres:postgres@db:5432/postgres
      VOYAGE_API_KEY: your-api-key
    command: [ "--poll-interval", "5s" ]
volumes:
  data:
```
1. **Start the services**
```shell
docker compose up -d
```
## Create and run a vectorizer
Now you can create and run a vectorizer. A vectorizer is a pgai concept: it processes data in a table and automatically creates embeddings for it.
1. **Connect to the database in your local developer environment**
- Docker: `docker compose exec -it db psql`
- psql: `psql postgres://postgres:postgres@localhost:5432/postgres`
1. **Enable pgai on the database**
```sql
CREATE EXTENSION IF NOT EXISTS ai CASCADE;
```
1. **Create the `blog` table with the following schema**
```sql
CREATE TABLE blog (
id SERIAL PRIMARY KEY,
title TEXT,
authors TEXT,
contents TEXT,
metadata JSONB
);
```
1. **Insert some data into `blog`**
```sql
INSERT INTO blog (title, authors, contents, metadata)
VALUES
('Getting Started with PostgreSQL', 'John Doe', 'PostgreSQL is a powerful, open source object-relational database system...', '{"tags": ["database", "postgresql", "beginner"], "read_time": 5, "published_date": "2024-03-15"}'),
('10 Tips for Effective Blogging', 'Jane Smith, Mike Johnson', 'Blogging can be a great way to share your thoughts and expertise...', '{"tags": ["blogging", "writing", "tips"], "read_time": 8, "published_date": "2024-03-20"}'),
('The Future of Artificial Intelligence', 'Dr. Alan Turing', 'As we look towards the future, artificial intelligence continues to evolve...', '{"tags": ["AI", "technology", "future"], "read_time": 12, "published_date": "2024-04-01"}'),
('Healthy Eating Habits for Busy Professionals', 'Samantha Lee', 'Maintaining a healthy diet can be challenging for busy professionals...', '{"tags": ["health", "nutrition", "lifestyle"], "read_time": 6, "published_date": "2024-04-05"}'),
('Introduction to Cloud Computing', 'Chris Anderson', 'Cloud computing has revolutionized the way businesses operate...', '{"tags": ["cloud", "technology", "business"], "read_time": 10, "published_date": "2024-04-10"}');
```
4. **Create a vectorizer for `blog`**
```sql
SELECT ai.create_vectorizer(
'blog'::regclass,
loading => ai.loading_column('contents'),
destination => 'blog_contents_embeddings',
embedding => ai.embedding_voyageai(
'voyage-3-lite',
512
)
);
```
1. **Check the vectorizer worker logs**
```shell
docker compose logs -f vectorizer-worker
```
You see the vectorizer worker pick up the table and process it.
```shell
vectorizer-worker-1 | 2024-10-23 12:56:36 [info ] running vectorizer vectorizer_id=1
```
1. **See the embeddings in action**
Run the following search query to retrieve the embeddings:
```sql
SELECT
chunk,
embedding <=> ai.voyageai_embed('voyage-3-lite', 'good food') as distance
FROM blog_contents_embeddings
ORDER BY distance;
```
The results look like:
| Chunk | Distance |
|--------|-----------|
| Maintaining a healthy diet can be challenging for busy professionals... | 0.6102883386268212 |
| Blogging can be a great way to share your thoughts and expertise... | 0.7245166465928164 |
| PostgreSQL is a powerful, open source object-relational database system... | 0.7789760644464416 |
| As we look towards the future, artificial intelligence continues to evolve... | 0.9036547272308249 |
| Cloud computing has revolutionized the way businesses operate... | 0.9131323552491029 |
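Because the `blog_contents_embeddings` view joins each chunk back to its source row, you can also return the original columns, such as the blog title, next to each match. For example:
```sql
SELECT title, chunk
FROM blog_contents_embeddings
ORDER BY embedding <=> ai.voyageai_embed('voyage-3-lite', 'postgres databases')
LIMIT 3;
```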
That's it, you're done. You now have a table in Postgres that pgai vectorizer automatically creates
and syncs embeddings for. You can use this vectorizer for semantic search, RAG or any other AI
app you can think of! If you have any questions, reach out to us on [Discord](https://discord.gg/KRdHVXAmkp).
# pgai Vectorizer API reference
This page provides an API reference for Vectorizer functions. For an overview
of Vectorizer and how it works, see the [Vectorizer Guide](/docs/vectorizer/overview.md).
A vectorizer provides you with a powerful and automated way to generate and
manage LLM embeddings for your PostgreSQL data. Here's a summary of what you
gain from Vectorizers:
- **Automated embedding generation**: you can create a vectorizer for a specified
table, which automatically generates embeddings for the data in that table and
keeps them in sync with the source data.
- **Automatic synchronization**: a vectorizer creates triggers on the source table,
ensuring that embeddings are automatically updated when the source data
changes.
- **Background processing**: the process to create embeddings runs
asynchronously in the background. This minimizes the impact on regular database
operations such as INSERT, UPDATE, and DELETE.
- **Scalability**: a vectorizer processes data in batches and can run concurrently.
This enables vectorizers to handle large datasets efficiently.
- **Configurable embedding process**: a vectorizer is highly configurable,
allowing you to specify:
- The embedding model and dimensions. For example, the `nomic-embed-text` model in Ollama.
- Chunking strategies for text data.
- Formatting templates for combining multiple fields.
- Indexing options for efficient similarity searches.
- Scheduling for background processing.
- **Integration with multiple AI providers**: a vectorizer supports multiple embedding providers, including OpenAI, Ollama, VoyageAI, and any provider available through LiteLLM (such as Cohere, Mistral, Azure OpenAI, AWS Bedrock, and Vertex AI).
- **Efficient storage and retrieval**: embeddings are stored in a separate table
with appropriate indexing, optimizing for vector similarity searches.
- **View creation**: a view is automatically created to join the original data with
its embeddings, making it easy to query and use the embedded data.
- **Fine-grained access control**: you can specify the roles that have
access to a vectorizer and its related objects.
- **Monitoring and management**: monitor the vectorizer's queue, enable/disable scheduling, and manage the vectorizer
lifecycle.
Vectorizer significantly simplifies the process of incorporating AI-powered semantic search and analysis capabilities into existing PostgreSQL databases, making it easier for you to leverage the power of LLMs in your data workflows.
Vectorizer offers the following APIs:
**Create and configure vectorizers**
- [Create vectorizers](#create-vectorizers): automate the process of creating embeddings for table data.
- [Loading configuration](#loading-configuration): define the source of the data to embed. You can load data from a column in the source table, or from a file referenced in a column of the source table.
- [Parsing configuration](#parsing-configuration): for documents, define the way the data is parsed after it is loaded.
- [Chunking configuration](#chunking-configuration): define the way text data is split into smaller, manageable pieces
before being processed for embeddings.
- [Formatting configuration](#formatting-configuration): configure the way data from the source table is formatted
before it is sent for embedding.
- [Embedding configuration](#embedding-configuration): specify the LLM provider, model, and the parameters to be
used when generating the embeddings
- [Indexing configuration](#indexing-configuration): specify the way generated embeddings should be indexed for
efficient similarity searches.
- [Scheduling configuration](#scheduling-configuration): configure when and how often the vectorizer should run in order
to process new or updated data.
- [Processing configuration](#processing-configuration): specify the way the vectorizer should process data when
generating embeddings.
**Manage vectorizers**
- [Enable and disable vectorizer schedules](#enable-and-disable-vectorizer-schedules): temporarily pause or resume the
automatic processing of embeddings, without having to delete or recreate the vectorizer configuration.
- [Drop a vectorizer](#drop-a-vectorizer): remove a vectorizer that you created previously, and clean up the associated
resources.
**Monitor vectorizers**
- [View vectorizer status](#view-vectorizer-status): monitoring tools in pgai that provide insights into the state and
performance of vectorizers.
## Create vectorizers
You use the `ai.create_vectorizer` function in pgai to set up and configure an automated system
for generating and managing embeddings for a specific table in your database.
The purpose of `ai.create_vectorizer` is to:
- Automate the process of creating embeddings for table data.
- Set up necessary infrastructure such as tables, views, and triggers for embedding management.
- Configure the embedding generation process according to user specifications.
- Integrate with AI providers for embedding creation.
- Set up scheduling for background processing of embeddings.
### Example usage
By using `ai.create_vectorizer`, you can quickly set up a sophisticated
embedding system tailored to your specific needs, without having to manually
create and manage all the necessary database objects and processes. For example:
```sql
SELECT ai.create_vectorizer(
'website.blog'::regclass,
loading => ai.loading_column('contents'),
embedding => ai.embedding_ollama('nomic-embed-text', 768),
chunking => ai.chunking_character_text_splitter(128, 10),
formatting => ai.formatting_python_template('title: $title published: $published $chunk'),
grant_to => ai.grant_to('bob', 'alice')
);
```
This function call:
1. Sets up a vectorizer for the `website.blog` table.
2. Loads the `contents` column.
3. Uses the Ollama `nomic-embed-text` model to create 768 dimensional embeddings.
4. Chunks the content into 128-character pieces with a 10-character overlap.
5. Formats each chunk with a `title` and a `published` date.
6. Grants necessary permissions to the roles `bob` and `alice`.
The function returns an integer identifier for the vectorizer created, which you can use
in other management functions.
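For instance, you can capture the id and pass it to monitoring helpers such as `ai.vectorizer_queue_pending` (the id value below is illustrative):
```sql
-- Assuming ai.create_vectorizer(...) returned 1, check how many rows are still queued
SELECT ai.vectorizer_queue_pending(1);
```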
### Parameters
`ai.create_vectorizer` takes the following parameters:
| Name | Type | Default | Required | Description |
|------------------|--------------------------------------------------------|-----------------------------------|----------|----------------------------------------------------------------------------------------------------|
| source | regclass | - | ✔ | The source table that embeddings are generated for. |
| destination | name | - | ✖ | Set the name of the table embeddings are stored in, and the view with both the original data and the embeddings.<br>The view is named `<destination>`, the embedding table is named `<destination>_store`.<br>You set destination to avoid naming conflicts when you configure additional vectorizers for a source table. |
| embedding | [Embedding configuration](#embedding-configuration) | - | ✔ | Set how to embed the data. |
| loading | [Loading configuration](#loading-configuration) | - | ✔ | Set the way to load the data from the source table, using functions like `ai.loading_column()`. |
| parsing | [Parsing configuration](#parsing-configuration) | ai.parsing_auto() | ✖ | Set the way to parse the data, using functions like `ai.parsing_auto()`. |
| chunking | [Chunking configuration](#chunking-configuration) | `ai.chunking_recursive_character_text_splitter()` | ✖ | Set the way to split text data, using functions like `ai.chunking_character_text_splitter()`. |
| indexing | [Indexing configuration](#indexing-configuration) | `ai.indexing_default()` | ✖ | Specify how to index the embeddings. For example, `ai.indexing_diskann()` or `ai.indexing_hnsw()`. |
| formatting | [Formatting configuration](#formatting-configuration) | `ai.formatting_python_template()` | ✖ | Define the data format before embedding, using `ai.formatting_python_template()`. |
| scheduling | [Scheduling configuration](#scheduling-configuration) | `ai.scheduling_default()` | ✖ | Set how often to run the vectorizer. For example, `ai.scheduling_timescaledb()`. |
| processing | [Processing configuration](#processing-configuration) | `ai.processing_default()` | ✖ | Configure the way to process the embeddings. |
| target_schema | name | - | ✖ | Specify the schema where the embeddings will be stored. This argument takes precedence over `destination`. |
| target_table | name | - | ✖ | Specify the name of the table where the embeddings will be stored. |
| view_schema | name | - | ✖ | Specify the schema where the view is created. |
| view_name | name | - | ✖ | Specify the name of the view to be created. This argument takes precedence over `destination`. |
| queue_schema | name | - | ✖ | Specify the schema where the work queue table is created. |
| queue_table | name | - | ✖ | Specify the name of the work queue table. |
| grant_to | [Grant To configuration](#grant-to-configuration) | `ai.grant_to_default()` | ✖ | Specify which users should be able to use objects created by the vectorizer. |
| enqueue_existing | bool | `true` | ✖ | Set to `true` if existing rows should be immediately queued for embedding. |
#### Returns
The `int` id of the vectorizer that you created.
## Loading configuration
You use the loading configuration functions in `pgai` to define the way data is loaded from the source table.
The loading functions are:
- [ai.loading_column](#ailoading_column)
- [ai.loading_uri](#ailoading_uri)
### ai.loading_column
You use `ai.loading_column` to load the data to embed directly from a column in the source table.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
loading => ai.loading_column('contents'),
-- other parameters...
);
```
#### Parameters
`ai.loading_column` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| column_name | text | - | ✔ | The name of the column containing the data to load. |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
### ai.loading_uri
You use `ai.loading_uri` to load the data to embed from a file that is referenced in a column of the source table.
This file path is internally passed to [smart_open](https://github.com/piskvorky/smart_open), so it supports any protocol that smart_open supports, including:
- Local files
- Amazon S3
- Google Cloud Storage
- Azure Blob Storage
- HTTP/HTTPS
- SFTP
- and [many more](https://github.com/piskvorky/smart_open/blob/master/help.txt)
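For example, the source table might simply store one URI per row; the table, column, and bucket names below are illustrative:
```sql
CREATE TABLE document (
    id INT PRIMARY KEY,
    uri TEXT  -- e.g. 's3://my-bucket/reports/q1.pdf'
);
```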
#### Environment configuration
Ensure that the vectorizer worker has the correct credentials to access the file, for example via environment variables. Here is an example for AWS S3:
```bash
export AWS_ACCESS_KEY_ID='your_access_key'
export AWS_SECRET_ACCESS_KEY='your_secret_key'
export AWS_REGION='your_region' # optional
```
Make sure these environment variables are properly set in the environment where the PGAI vectorizer worker runs.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
loading => ai.loading_uri('file_uri_column_name'),
-- other parameters...
);
```
#### Parameters
`ai.loading_uri` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| column_name | text | - | ✔ | The name of the column containing the file path. |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
## Parsing configuration
You use the parsing configuration functions in `pgai` to define how data is parsed after document loading. This is useful for non-textual formats such as PDF documents.
The parsing functions are:
- [ai.parsing_auto](#aiparsing_auto): Automatically selects the appropriate parser based on file type.
- [ai.parsing_none](#aiparsing_none): Skips the parsing step; appropriate only for textual data.
- [ai.parsing_docling](#aiparsing_docling): A more powerful alternative to PyMuPDF that converts documents to Markdown. See [Docling](https://docling-project.github.io/docling/usage/supported_formats/) for supported formats.
- [ai.parsing_pymupdf](#aiparsing_pymupdf): See [PyMuPDF](https://pymupdf.readthedocs.io/en/latest/) for supported formats.
### ai.parsing_auto
You use `ai.parsing_auto` to automatically select an appropriate parser based on detected file types.
Documents with unrecognizable formats won't be processed and will generate an error (logged in the `ai.vectorizer_errors` table).
The parser selection works by examining file extensions and content types:
- For PDF files, images, Office documents (DOCX, XLSX, etc.): Uses docling
- For EPUB and MOBI (e-book formats): Uses pymupdf
- For text formats (TXT, MD, etc.): No parser is used (content is read directly)
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
parsing => ai.parsing_auto(),
-- other parameters...
);
```
#### Parameters
`ai.parsing_auto` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| None | - | - | - | - |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
### ai.parsing_none
You use `ai.parsing_none` to skip the parsing step. Only appropriate for textual data.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
parsing => ai.parsing_none(),
-- other parameters...
);
```
#### Parameters
`ai.parsing_none` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| None | - | - | - | - |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
### ai.parsing_docling
You use `ai.parsing_docling` to parse the data provided by the loader using [docling](https://docling-project.github.io/docling/).
Docling is a more robust and thorough document parsing library that:
- Uses OCR capabilities to extract text from images
- Can parse complex documents with tables and multi-column layouts
- Supports Office formats (DOCX, XLSX, etc.)
- Preserves document structure better than other parsers
- Converts documents to markdown format
Note that docling uses ML models for improved parsing, which makes it slower than simpler parsers like pymupdf.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
parsing => ai.parsing_docling(),
-- other parameters...
);
```
#### Parameters
`ai.parsing_docling` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| None | - | - | - | - |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
### ai.parsing_pymupdf
You use `ai.parsing_pymupdf` to parse the data provided by the loader using [pymupdf](https://pymupdf.readthedocs.io/en/latest/).
PyMuPDF is a faster, simpler document parser that:
- Processes PDF documents with basic structure preservation
- Supports e-book formats like EPUB and MOBI
- Is generally faster than docling for simpler documents
- Works well for documents with straightforward layouts
Choose pymupdf when processing speed is more important than perfect structure preservation.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
parsing => ai.parsing_pymupdf(),
-- other parameters...
);
```
#### Parameters
`ai.parsing_pymupdf` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| None | - | - | - | - |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
## Chunking configuration
You use the chunking configuration functions in `pgai` to define the way text data is split into smaller,
manageable pieces before being processed for embeddings. This is crucial because many embedding models have input size
limitations, and chunking allows for processing of larger text documents while maintaining context.
By using chunking functions, you can fine-tune how your text data is
prepared for embedding, ensuring that the chunks are appropriately sized and
maintain necessary context for their specific use case. This is particularly
important for maintaining the quality and relevance of the generated embeddings,
especially when dealing with long-form content or documents with specific
structural elements.
The chunking functions are:
- [ai.chunking_character_text_splitter](#aichunking_character_text_splitter)
- [ai.chunking_recursive_character_text_splitter](#aichunking_recursive_character_text_splitter)
The key difference between these functions is that `chunking_recursive_character_text_splitter`
allows for a more sophisticated splitting strategy, potentially preserving more
semantic meaning in the chunks.
### ai.chunking_character_text_splitter
You use `ai.chunking_character_text_splitter` to:
- Split text into chunks based on a specified separator.
- Control the chunk size and the amount of overlap between chunks.
#### Example usage
- Split the content into chunks of 128 characters, with a 10-character overlap, using E'\n' as the separator:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
chunking => ai.chunking_character_text_splitter(128, 10, E'\n'),
-- other parameters...
);
```
#### Parameters
`ai.chunking_character_text_splitter` takes the following parameters:
|Name| Type | Default | Required | Description |
|-|------|---------|-|--------------------------------------------------------|
|chunk_size| int | 800 |✖| The maximum number of characters in a chunk |
|chunk_overlap| int | 400 |✖| The number of characters to overlap between chunks |
|separator| text | E'\n\n' |✖| The string or character used to split the text |
|is_separator_regex| bool | false |✖| Set to `true` if `separator` is a regular expression. |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
### ai.chunking_recursive_character_text_splitter
`ai.chunking_recursive_character_text_splitter` provides more fine-grained control over the chunking process.
You use it to recursively split text into chunks using multiple separators.
#### Example usage
- Recursively split content into chunks of 256 characters, with a 20 character
overlap, first trying to split on '\n;', then on spaces:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
chunking => ai.chunking_recursive_character_text_splitter(
256,
20,
separators => array[E'\n;', ' ']
),
-- other parameters...
);
```
#### Parameters
`ai.chunking_recursive_character_text_splitter` takes the following parameters:
| Name | Type | Default | Required | Description |
|--------------------|------|---------|-|----------------------------------------------------------|
| chunk_size | int | 800 |✖| The maximum number of characters per chunk |
| chunk_overlap | int | 400 |✖| The number of characters to overlap between chunks |
| separators | text[] | array[E'\n\n', E'\n', '.', '?', '!', ' ', ''] |✖| The strings or characters used to split the text, tried in order |
| is_separator_regex | bool | false |✖| Set to `true` if the separators are regular expressions. |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
## Embedding configuration
You use the embedding configuration functions to specify how embeddings are
generated for your data.
The embedding functions are:
- [ai.embedding_litellm](#aiembedding_litellm)
- [ai.embedding_openai](#aiembedding_openai)
- [ai.embedding_ollama](#aiembedding_ollama)
- [ai.embedding_voyageai](#aiembedding_voyageai)
### ai.embedding_litellm
You call the `ai.embedding_litellm` function to use LiteLLM to generate embeddings for models from multiple providers.
The purpose of `ai.embedding_litellm` is to:
- Define the embedding model to use.
- Specify the dimensionality of the embeddings.
- Configure optional, provider-specific parameters.
- Set the name of the environment variable that holds the value of your API key.
#### Example usage
Use `ai.embedding_litellm` to create an embedding configuration object that is passed as an argument to [ai.create_vectorizer](#create-vectorizers):
1. Set the required API key for your provider.
The API key should be set as an environment variable which is available to either the Vectorizer worker, or the
Postgres process.
2. Create a vectorizer using LiteLLM to access the 'microsoft/codebert-base' embedding model on huggingface:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'huggingface/microsoft/codebert-base',
768,
api_key_name => 'HUGGINGFACE_API_KEY',
extra_options => '{"wait_for_model": true}'::jsonb
),
-- other parameters...
);
```
#### Parameters
The function takes several parameters to customize the LiteLLM embedding configuration:
| Name | Type | Default | Required | Description |
|---------------|-------|---------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------|
| model | text | - | ✔ | Specify the name of the embedding model to use. Refer to the [LiteLLM embedding documentation] for an overview of the available providers and models. |
| dimensions | int | - | ✔ | Define the number of dimensions for the embedding vectors. This should match the output dimensions of the chosen model. |
| api_key_name | text | - | ✖ | Set the name of the environment variable that contains the API key. This allows for flexible API key management without hardcoding keys in the database. |
| extra_options | jsonb | - | ✖ | Set provider-specific configuration options. |
[LiteLLM embedding documentation]: https://docs.litellm.ai/docs/embedding/supported_embedding
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
#### Provider-specific configuration examples
The following subsections show how to configure the vectorizer for all supported providers.
##### Cohere
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'cohere/embed-english-v3.0',
1024,
api_key_name => 'COHERE_API_KEY'
),
-- other parameters...
);
```
Note: The [Cohere documentation on input_type] specifies that the `input_type` parameter is required.
By default, LiteLLM sets this to `search_document`. The input type can be provided
via `extra_options`, i.e. `extra_options => '{"input_type": "search_document"}'::jsonb`.
[Cohere documentation on input_type]: https://docs.cohere.com/v2/docs/embeddings#the-input_type-parameter
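For instance, to set the input type explicitly, the Cohere configuration above can be extended with `extra_options` (a sketch combining the example with this note):
```sql
SELECT ai.create_vectorizer(
    'my_table'::regclass,
    embedding => ai.embedding_litellm(
        'cohere/embed-english-v3.0',
        1024,
        api_key_name => 'COHERE_API_KEY',
        extra_options => '{"input_type": "search_document"}'::jsonb
    )
    -- other parameters...
);
```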
##### Mistral
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'mistral/mistral-embed',
1024,
api_key_name => 'MISTRAL_API_KEY'
),
-- other parameters...
);
```
Note: Mistral limits the maximum input per batch to 16384 tokens.
##### Azure OpenAI
To set up a vectorizer with Azure OpenAI you require these values from the Azure AI Foundry console:
- deployment name
- base URL
- version
- API key
The deployment name is visible in the "Deployment info" section. The base URL and version are
extracted from the "Target URI" field in the "Endpoint section". The Target URI has the form:
`https://your-resource-name.openai.azure.com/openai/deployments/your-deployment-name/embeddings?api-version=2023-05-15`.
In this example, the base URL is: `https://your-resource-name.openai.azure.com` and the version is `2023-05-15`.

Configure the vectorizer; note that the base URL and version are configured through `extra_options`:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'azure/<deployment name here>',
1024,
api_key_name => 'AZURE_API_KEY',
extra_options => '{"api_base": "<base URL here>", "api_version": "<version here>"}'::jsonb
),
-- other parameters...
);
```
##### Huggingface inference models
You can use [Huggingface inference] to obtain vector embeddings. Note that
Huggingface has two categories of inference: "serverless inference", and
"inference endpoints". Serverless inference is free, but is limited to models
under 10GB in size, and the model may not be immediately available to serve
requests. Inference endpoints are a paid service and provide always-on APIs
for production use-cases.
Note: We recommend using the `wait_for_model` parameter when using vectorizer
with serverless inference to force the call to block until the model has been
loaded. If you do not use `wait_for_model`, it's likely that vectorization will
never succeed.
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'huggingface/BAAI/bge-small-en-v1.5'
, 384
, extra_options => '{"wait_for_model": true}'::jsonb
)
-- other parameters...
);
```
[Huggingface inference]: https://huggingface.co/docs/huggingface_hub/en/guides/inference
##### AWS Bedrock
To set up a vectorizer with AWS Bedrock, you must ensure that the vectorizer
is authenticated to make API calls to the AWS Bedrock endpoint. The vectorizer
worker uses boto3 under the hood, so there are multiple ways to achieve this.
The simplest method is to provide the `AWS_ACCESS_KEY_ID`,
`AWS_SECRET_ACCESS_KEY`, and `AWS_REGION_NAME` environment variables to the
vectorizer worker. Consult the [boto3 credentials documentation] for more
options.
[boto3 credentials documentation]: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'bedrock/amazon.titan-embed-text-v2:0',
1024
),
-- other parameters...
);
```
Alternatively, you can configure the secret only in the database, and provide the
`api_key_name` parameter to prompt the vectorizer worker to load the API key
from the database. When you do this, you may need to pass `aws_access_key_id`
and `aws_region_name` through the `extra_options` parameter:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'bedrock/amazon.titan-embed-text-v2:0',
1024,
api_key_name => 'AWS_SECRET_ACCESS_KEY', -- optional
extra_options => '{"aws_access_key_id": "<access key id>", "aws_region_name": "<region name>"}'::jsonb -- optional
),
-- other parameters...
);
```
##### Vertex AI
To set up a vectorizer with Vertex AI, you must ensure that the vectorizer
can make API calls to the Vertex AI endpoint. The vectorizer worker uses
GCP's authentication under the hood, so there are multiple ways to achieve
this.
The simplest method is to provide the `VERTEX_PROJECT`, and
`VERTEX_CREDENTIALS` environment variables to the vectorizer worker. These
correspond to the project id, and the path to a file containing credentials for
a service account. Consult the [Authentication methods at Google] for more
options.
[Authentication methods at Google]: https://cloud.google.com/docs/authentication
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'vertex_ai/text-embedding-005',
768
),
-- other parameters...
);
```
Alternatively, you can configure the secret only in the database, and provide the
`api_key_name` parameter to prompt the vectorizer worker to load the API key
from the database. When you do this, you may need to pass `vertex_project` and
`vertex_location` through the `extra_options` parameter.
Note: `VERTEX_CREDENTIALS` should contain the path to a file containing the API key. The vectorizer worker must have access to this file in order to load the credentials.
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_litellm(
'vertex_ai/text-embedding-005',
768,
api_key_name => 'VERTEX_CREDENTIALS', -- optional
extra_options => '{"vertex_project": "<project id>", "vertex_location": "<vertex location>"}'::jsonb -- optional
),
-- other parameters...
);
```
### ai.embedding_openai
You call the `ai.embedding_openai` function to use an OpenAI model to generate embeddings.
The purpose of `ai.embedding_openai` is to:
- Define which OpenAI embedding model to use.
- Specify the dimensionality of the embeddings.
- Configure optional parameters like the user identifier for API calls.
- Set the name of the [environment variable that holds the value of your OpenAI API key][openai-use-env-var].
#### Example usage
Use `ai.embedding_openai` to create an embedding configuration object that is passed as an argument to [ai.create_vectorizer](#create-vectorizers):
1. Set the value of your OpenAI API key.
For example, [in an environment variable][openai-set-key] or in a [Docker configuration][docker configuration].
2. Create a vectorizer with OpenAI as the embedding provider:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
embedding => ai.embedding_openai(
'text-embedding-3-small',
768,
chat_user => 'bob',
api_key_name => 'MY_OPENAI_API_KEY_NAME'
),
-- other parameters...
);
```
#### Parameters
The function takes several parameters to customize the OpenAI embedding configuration:
| Name | Type | Default | Required | Description |
|--------------|------|------------------|----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| model | text | - | ✔ | Specify the name of the OpenAI embedding model to use. For example, `text-embedding-3-small`. |
| dimensions | int | - | ✔ | Define the number of dimensions for the embedding vectors. This should match the output dimensions of the chosen model. |
| chat_user | text | - | ✖ | The identifier for the user making the API call. This can be useful for tracking API usage or for OpenAI's monitoring purposes. |
| api_key_name | text | `OPENAI_API_KEY` | ✖ | Set [the name of the environment variable that contains the OpenAI API key][openai-use-env-var]. This allows for flexible API key management without hardcoding keys in the database. On Timescale Cloud, you should set this to the name of the secret that contains the OpenAI API key. |
| base_url | text | - | ✖ | Set the base_url of the OpenAI API. Note: no default configured here to allow configuration of the vectorizer worker through `OPENAI_BASE_URL` env var. |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
### ai.embedding_ollama
You use the `ai.embedding_ollama` function to use an Ollama model to generate embeddings.
The purpose of `ai.embedding_ollama` is to:
- Define which Ollama model to use.
- Specify the dimensionality of the embeddings.
- Configure how the Ollama API is accessed.
- Configure the model's truncation behaviour, and keep alive.
- Configure optional, model-specific parameters, like the `temperature`.
#### Example usage
This function is used to create an embedding configuration object that is passed as an argument to [ai.create_vectorizer](#create-vectorizers):
```sql
SELECT ai.create_vectorizer(
    'my_table'::regclass,
    embedding => ai.embedding_ollama(
        'nomic-embed-text',
        768,
        base_url => 'http://my.ollama.server:443',
        options => '{ "num_ctx": 1024 }',
        keep_alive => '10m'
    ),
    -- other parameters...
);
```
#### Parameters
The function takes several parameters to customize the Ollama embedding configuration:
| Name | Type | Default | Required | Description |
|------------|---------|---------|----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| model | text | - | ✔ | Specify the name of the Ollama model to use. For example, `nomic-embed-text`. Note: the model must already be available (pulled) in your Ollama server. |
| dimensions | int | - | ✔ | Define the number of dimensions for the embedding vectors. This should match the output dimensions of the chosen model. |
| base_url | text | - | ✖ | Set the base_url of the Ollama API. Note: no default configured here to allow configuration of the vectorizer worker through `OLLAMA_HOST` env var. |
| options | jsonb | - | ✖ | Configures additional model parameters listed in the documentation for the Modelfile, such as `temperature`, or `num_ctx`. |
| keep_alive | text | - | ✖ | Controls how long the model will stay loaded in memory following the request. Note: no default configured here to allow configuration at Ollama-level. |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
### ai.embedding_voyageai
You use the `ai.embedding_voyageai` function to use a Voyage AI model to generate embeddings.
The purpose of `ai.embedding_voyageai` is to:
- Define which Voyage AI model to use.
- Specify the dimensionality of the embeddings.
- Configure the model's truncation behaviour and API key name.
- Configure the input type.
#### Example usage
This function is used to create an embedding configuration object that is passed as an argument to [ai.create_vectorizer](#create-vectorizers):
```sql
SELECT ai.create_vectorizer(
    'my_table'::regclass,
    embedding => ai.embedding_voyageai(
        'voyage-3-lite',
        512,
        api_key_name => 'TEST_API_KEY'
    ),
    -- other parameters...
);
```
#### Parameters
The function takes several parameters to customize the Voyage AI embedding configuration:
| Name | Type | Default | Required | Description |
|--------------|---------|------------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| model | text | - | ✔ | Specify the name of the [Voyage AI model](https://docs.voyageai.com/docs/embeddings#model-choices) to use. |
| dimensions | int | - | ✔ | Define the number of dimensions for the embedding vectors. This should match the output dimensions of the chosen model. |
| input_type | text | 'document' | ✖ | The type of the input text: null, 'query', or 'document'. |
| api_key_name | text | `VOYAGE_API_KEY` | ✖ | Set the name of the environment variable that contains the Voyage AI API key. This allows for flexible API key management without hardcoding keys in the database. On Timescale Cloud, you should set this to the name of the secret that contains the Voyage AI API key. |
#### Returns
A JSON configuration object that you can use in [ai.create_vectorizer](#create-vectorizers).
## Formatting configuration
You use the `ai.formatting_python_template` function in `pgai` to
configure the way data from the source table is formatted before it is sent
for embedding.
`ai.formatting_python_template` provides a flexible way to structure the input
for embedding models. This enables you to incorporate relevant metadata and additional
text. This can significantly enhance the quality and usefulness of the generated
embeddings, especially in scenarios where context from multiple fields is
important for understanding or searching the content.
The purpose of `ai.formatting_python_template` is to:
- Define a template for formatting the data before embedding.
- Allow the combination of multiple fields from the source table.
- Add consistent context or structure to the text being embedded.
- Customize the input for the embedding model to improve relevance and searchability.
Formatting happens after chunking and the special `$chunk` variable contains the chunked text.
### Example usage
- Default formatting:
The default formatter uses the `$chunk` template, which outputs the chunk text as-is.
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
formatting => ai.formatting_python_template('$chunk'),
-- other parameters...
);
```
- Add context from other columns:
Add the title and publication date to each chunk, providing more context for the embedding.
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
formatting => ai.formatting_python_template('Title: $title\nDate: $published\nContent: $chunk'),
-- other parameters...
);
```
- Combine multiple fields:
Prepend author and category information to each chunk.
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
formatting => ai.formatting_python_template('Author: $author\nCategory: $category\n$chunk'),
-- other parameters...
);
```
- Add consistent structure:
Add start and end markers to each chunk, which could be useful for certain
types of embeddings or retrieval tasks.
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
formatting => ai.formatting_python_template('BEGIN DOCUMENT\n$chunk\nEND DOCUMENT'),
-- other parameters...
);
```
### Parameters
`ai.formatting_python_template` takes the following parameter:
|Name| Type | Default | Required | Description |
|-|--------|-|-|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|template| string |`$chunk`|✔| A string using [Python template strings](https://docs.python.org/3/library/string.html#template-strings) with $-prefixed variables that defines how the data should be formatted. |
- The $chunk placeholder is required and represents the text chunk that will be embedded.
- Other placeholders can be used to reference columns from the source table.
- The template allows for adding static text or structuring the input in a specific way.
### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
## Indexing configuration
You use indexing configuration functions in pgai to
specify the way generated embeddings should be indexed for efficient similarity
searches. These functions enable you to choose and configure the indexing
method that best suits your needs in terms of performance, accuracy, and
resource usage.
By providing these indexing options, pgai allows you to optimize your
embedding storage and retrieval based on their specific use case and performance
requirements. This flexibility is crucial for scaling AI-powered search and
analysis capabilities within a PostgreSQL database.
Key points about indexing:
- The choice of indexing method depends on your dataset size, query performance requirements, and available resources.
- [ai.indexing_none](#aiindexing_none) is better suited for small datasets, or when you want to perform index creation manually.
- [ai.indexing_diskann](#aiindexing_diskann) is generally recommended for larger datasets that require an index.
- The `min_rows` parameter enables you to delay index creation until you have enough data to justify the overhead.
- These indexing methods are designed for approximate nearest neighbor search, which trades a small amount of accuracy for significant speed improvements in similarity searches.
The available functions are:
- [ai.indexing_default](#aiindexing_default): use the platform-specific default indexing configuration.
- [ai.indexing_none](#aiindexing_none): when you do not want indexes created automatically.
- [ai.indexing_diskann](#aiindexing_diskann): configure indexing using the [DiskANN algorithm](https://github.com/timescale/pgvectorscale).
- [ai.indexing_hnsw](#aiindexing_hnsw): configure indexing using the [Hierarchical Navigable Small World (HNSW) algorithm](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world).
### ai.indexing_default
You use `ai.indexing_default` to use the platform-specific default value for indexing.
On Timescale Cloud, the default is `ai.indexing_diskann()`. On self-hosted, the default is `ai.indexing_none()`.
A timescaledb background job is used for automatic index creation. Since timescaledb may not be installed
in a self-hosted environment, we default to `ai.indexing_none()`.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
indexing => ai.indexing_default(),
-- other parameters...
);
```
#### Parameters
This function takes no parameters.
#### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
### ai.indexing_none
You use `ai.indexing_none` to specify that no special indexing should be used for the embeddings.
This is useful when you don't need fast similarity searches or when you're dealing with a small amount of data.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
indexing => ai.indexing_none(),
-- other parameters...
);
```
#### Parameters
This function takes no parameters.
#### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
### ai.indexing_diskann
You use `ai.indexing_diskann` to configure indexing using the DiskANN algorithm, which is designed for high-performance
approximate nearest neighbor search on large-scale datasets. This is suitable for very large datasets that need to be
stored on disk.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
indexing => ai.indexing_diskann(min_rows => 500000, storage_layout => 'memory_optimized'),
-- other parameters...
);
```
#### Parameters
`ai.indexing_diskann` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------|-|--------------------------------------------------|
|min_rows| int | 100000 |✖| The minimum number of rows before creating the index |
| storage_layout | text | - |✖| Set to either `memory_optimized` or `plain` |
| num_neighbors | int | - |✖| Advanced [DiskANN](https://github.com/microsoft/DiskANN/tree/main) parameter. |
| search_list_size | int | - |✖| Advanced [DiskANN](https://github.com/microsoft/DiskANN/tree/main) parameter.|
| max_alpha | float8 | - |✖| Advanced [DiskANN](https://github.com/microsoft/DiskANN/tree/main) parameter.|
| num_dimensions | int | - |✖|Advanced [DiskANN](https://github.com/microsoft/DiskANN/tree/main) parameter.|
| num_bits_per_dimension | int | - |✖| Advanced [DiskANN](https://github.com/microsoft/DiskANN/tree/main) parameter.|
| create_when_queue_empty | boolean | true |✖| Create the index only after all of the embeddings have been generated. |
#### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
### ai.indexing_hnsw
You use `ai.indexing_hnsw` to configure indexing using the [Hierarchical Navigable Small World (HNSW) algorithm](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world),
which is known for fast and accurate approximate nearest neighbor search.
HNSW is suitable for in-memory datasets and scenarios where query speed is crucial.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
indexing => ai.indexing_hnsw(min_rows => 50000, opclass => 'vector_l1_ops'),
-- other parameters...
);
```
#### Parameters
`ai.indexing_hnsw` takes the following parameters:
| Name | Type | Default | Required | Description |
|------|------|---------------------|-|----------------------------------------------------------------------------------------------------------------|
|min_rows| int | 100000 |✖| The minimum number of rows before creating the index |
|opclass| text | `vector_cosine_ops` |✖| The operator class for the index. Possible values are:`vector_cosine_ops`, `vector_l1_ops`, or `vector_ip_ops` |
|m| int | - |✖| Advanced [HNSW parameters](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world) |
|ef_construction| int | - |✖| Advanced [HNSW parameters](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world) |
| create_when_queue_empty| boolean | true |✖| Create the index only after all of the embeddings have been generated. |
#### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
## Scheduling configuration
You use scheduling functions in pgai to configure when and how often the vectorizer should run to process new or
updated data. These functions allow you to set up automated, periodic execution of the embedding
generation process. These are advanced options and most users should use the default.
By providing these scheduling options, pgai enables you to automate the process
of keeping your embeddings up-to-date with minimal manual intervention. This is
crucial for maintaining the relevance and accuracy of AI-powered search and
analysis capabilities, especially in systems where data is frequently updated or
added. The flexibility in scheduling also allows users to balance the freshness
of embeddings against system resource usage and other operational
considerations.
The available functions are:
- [ai.scheduling_default](#aischeduling_default): uses the platform-specific default scheduling configuration. On Timescale Cloud this is equivalent to `ai.scheduling_timescaledb()`. On self-hosted deployments, this is equivalent to `ai.scheduling_none()`.
- [ai.scheduling_none](#aischeduling_none): when you want manual control over when the vectorizer runs. Use this when you're using an external scheduling system, as is the case with self-hosted deployments.
- [ai.scheduling_timescaledb](#aischeduling_timescaledb): leverages TimescaleDB's robust job scheduling system, which is designed for reliability and scalability. Use this when you're using Timescale Cloud.
### ai.scheduling_default
You use `ai.scheduling_default` to use the platform-specific default scheduling configuration.
On Timescale Cloud, the default is `ai.scheduling_timescaledb()`. On self-hosted, the default is `ai.scheduling_none()`.
A timescaledb background job is used to periodically trigger a cloud vectorizer on Timescale Cloud.
This is not available in a self-hosted environment.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
scheduling => ai.scheduling_default(),
-- other parameters...
);
```
#### Parameters
This function takes no parameters.
#### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
### ai.scheduling_none
You use `ai.scheduling_none` to
- Specify that no automatic scheduling should be set up for the vectorizer.
- Manually control when the vectorizer runs or when you're using an external scheduling system.
You should use this for self-hosted deployments.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
scheduling => ai.scheduling_none(),
-- other parameters...
);
```
#### Parameters
This function takes no parameters.
#### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
### ai.scheduling_timescaledb
You use `ai.scheduling_timescaledb` to:
- Configure automated scheduling using TimescaleDB's job scheduling system.
- Allow periodic execution of the vectorizer to process new or updated data.
- Provide fine-grained control over when and how often the vectorizer runs.
#### Example usage
- Basic usage (run every 5 minutes). This is the default:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
scheduling => ai.scheduling_timescaledb(),
-- other parameters...
);
```
- Custom interval (run every hour):
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
scheduling => ai.scheduling_timescaledb(interval '1 hour'),
-- other parameters...
);
```
- Specific start time and timezone:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
scheduling => ai.scheduling_timescaledb(
interval '30 minutes',
initial_start => '2024-01-01 00:00:00'::timestamptz,
timezone => 'America/New_York'
),
-- other parameters...
);
```
- Fixed schedule:
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
scheduling => ai.scheduling_timescaledb(
interval '1 day',
fixed_schedule => true,
timezone => 'UTC'
),
-- other parameters...
);
```
#### Parameters
`ai.scheduling_timescaledb` takes the following parameters:
|Name|Type| Default | Required | Description |
|-|-|---------|-|--------------------------------------------------------------------------------------------------------------------|
|schedule_interval|interval| '10m' |✔| Set how frequently the vectorizer checks for new or updated data to process. |
|initial_start|timestamptz| - |✖| Delay the start of scheduling. This is useful for coordinating with other system processes or maintenance windows. |
|fixed_schedule|bool| - |✖|Set to `true` to use a fixed schedule such as every day at midnight. Set to `false` for a sliding window such as every 24 hours from the last run|
|timezone|text| - |✖| Set the timezone this schedule operates in. This ensures that schedules are interpreted correctly, especially important for fixed schedules or when coordinating with business hours. |
#### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
## Processing configuration
You use the processing configuration functions in pgai to specify
the way the vectorizer should process data when generating embeddings,
such as the batch size and concurrency. These are advanced options and most
users should use the default.
### ai.processing_default
You use `ai.processing_default` to specify the concurrency and batch size for the vectorizer.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
processing => ai.processing_default(batch_size => 200, concurrency => 5),
-- other parameters...
);
```
#### Parameters
`ai.processing_default` takes the following parameters:
|Name| Type | Default | Required | Description |
|-|------|------------------------------|-|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|batch_size| int | Determined by the vectorizer |✖| The number of items to process in each batch. The optimal batch size depends on your data and cloud function configuration, larger batch sizes can improve efficiency but may increase memory usage. |
|concurrency| int | Determined by the vectorizer |✖| The number of concurrent processing tasks to run. The optimal concurrency depends on your cloud infrastructure and rate limits, higher concurrency can speed up processing but may increase costs and resource usage. |
#### Returns
A JSON configuration object that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
## Grant To configuration
You use the grant to configuration function in pgai to specify which users should be able to use
objects created by the vectorizer.
### ai.grant_to
Grant permissions to a comma-separated list of users.
Includes the users specified in the `ai.grant_to_default` setting.
#### Example usage
```sql
SELECT ai.create_vectorizer(
'my_table'::regclass,
grant_to => ai.grant_to('bob', 'alice'),
-- other parameters...
);
```
#### Parameters
This function takes a comma-separated list of usernames to grant permissions to.
#### Returns
An array of name values that you can use as an argument for [ai.create_vectorizer](#create-vectorizers).
## Enable and disable vectorizer schedules
You use `ai.enable_vectorizer_schedule` and `ai.disable_vectorizer_schedule` to control
the execution of [scheduled vectorizer jobs](#scheduling-configuration). These functions
provide a way to temporarily pause or resume the automatic processing of embeddings, without
having to delete or recreate the vectorizer configuration.
These functions provide an important layer of operational control for managing
pgai vectorizers in production environments. They allow database administrators
and application developers to balance the need for up-to-date embeddings with
other system priorities and constraints, enhancing the overall flexibility and
manageability of pgai.
Key points about schedule enable and disable:
- These functions provide fine-grained control over individual vectorizer schedules without affecting other
vectorizers, or the overall system configuration.
- Disabling a schedule does not delete the vectorizer or its configuration; it simply stops scheduling future
executions of the job.
- These functions are particularly useful in scenarios such as:
- System maintenance windows where you want to reduce database load.
- Temporarily pausing processing during data migrations or large bulk updates.
- Debugging or troubleshooting issues related to the vectorizer.
- Implementing manual control over when embeddings are updated.
- When a schedule is disabled, new or updated data is not automatically processed. However, the data is still
queued, and will be processed when the schedule is re-enabled, or when the vectorizer is run manually.
- After re-enabling a schedule, for a vectorizer configured with
[ai.scheduling_timescaledb](#aischeduling_timescaledb), the next run is based
on the original scheduling configuration. For example, if the vectorizer was
set to run every hour, it will run at the next hour mark after being enabled.
Usage example in a maintenance scenario:
```sql
-- Before starting system maintenance
SELECT ai.disable_vectorizer_schedule(1);
SELECT ai.disable_vectorizer_schedule(2);
-- Perform maintenance tasks...
-- After maintenance is complete
SELECT ai.enable_vectorizer_schedule(1);
SELECT ai.enable_vectorizer_schedule(2);
```
The available functions are:
- [ai.enable_vectorizer_schedule](#aienable_vectorizer_schedule): activate, reactivate or resume a scheduled job.
- [ai.disable_vectorizer_schedule](#aidisable_vectorizer_schedule): deactivate or temporarily stop a scheduled job.
### ai.enable_vectorizer_schedule
You use `ai.enable_vectorizer_schedule` to:
- Activate or reactivate the scheduled job for a specific vectorizer.
- Allow the vectorizer to resume automatic processing of new or updated data.
#### Example usage
To resume the automatic scheduling for the vectorizer with ID 1.
```sql
SELECT ai.enable_vectorizer_schedule(1);
```
#### Parameters
`ai.enable_vectorizer_schedule` takes the following parameters:
|Name| Type | Default | Required | Description |
|-|------|---------|-|-----------------------------------------------------------|
|vectorizer_id| int | - |✔| The identifier of the vectorizer whose schedule you want to enable. |
#### Returns
`ai.enable_vectorizer_schedule` does not return a value.
### ai.disable_vectorizer_schedule
You use `ai.disable_vectorizer_schedule` to:
- Deactivate the scheduled job for a specific vectorizer.
- Temporarily stop the automatic processing of new or updated data.
#### Example usage
To stop the automatic scheduling for the vectorizer with ID 1.
```sql
SELECT ai.disable_vectorizer_schedule(1);
```
#### Parameters
`ai.disable_vectorizer_schedule` takes the following parameters:
|Name| Type | Default | Required | Description |
|-|------|---------|-|----------------------------------------------------------------------|
|vectorizer_id| int | - |✔| The identifier of the vectorizer whose schedule you want to disable. |
#### Returns
`ai.disable_vectorizer_schedule` does not return a value.
## Drop a vectorizer
`ai.drop_vectorizer` is a management tool that you use to remove a vectorizer that you
[created previously](#create-vectorizers), and clean up the associated
resources. Its primary purpose is to provide a controlled way to delete a
vectorizer when it's no longer needed, or when you want to reconfigure it from
scratch.
You use `ai.drop_vectorizer` to:
- Remove a specific vectorizer configuration from the system.
- Clean up associated database objects and scheduled jobs.
- Safely undo the creation of a vectorizer.
`ai.drop_vectorizer` performs the following on the vectorizer to drop:
- Deletes the scheduled job associated with the vectorizer if one exists.
- Drops the trigger from the source table used to queue changes.
- Drops the trigger function that backed the source table trigger.
- Drops the queue table used to manage the updates to be processed.
- Deletes the vectorizer row from the `ai.vectorizer` table.
By default, `ai.drop_vectorizer` does not:
- Drop the target table containing the embeddings.
- Drop the view joining the target and source tables.
There is an optional parameter named `drop_all` which is `false` by default. If you
explicitly pass `true`, the function WILL drop the target table and view.
This design allows you to keep the generated embeddings and the convenient view
even after dropping the vectorizer. This is useful if you want to stop
automatic updates but still use the existing embeddings.
#### Example usage
Best practices are:
- Before dropping a vectorizer, ensure that you will not need the automatic embedding updates it provides.
- After dropping a vectorizer, you may want to manually clean up the target table and view if they're no longer needed.
- To ensure that you are dropping the correct vectorizer, keep track of your vectorizer IDs. You can do this by querying
  the `ai.vectorizer` table, as shown in the query sketch below.
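For example, a minimal lookup query; the `id` column is the identifier used throughout this document, and you can inspect the remaining columns to identify each vectorizer:
```sql
-- List vectorizers to find the ID you want to drop
SELECT * FROM ai.vectorizer;
```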
Examples:
- Remove the vectorizer with ID 1:
```sql
-- Assuming we have a vectorizer with ID 1
SELECT ai.drop_vectorizer(1);
```
- Remove the vectorizer with ID 1 and drop the target table and view as well:
```sql
SELECT ai.drop_vectorizer(1, drop_all=>true);
```
#### Parameters
`ai.drop_vectorizer` takes the following parameters:
|Name| Type | Default | Required | Description |
|-|------|-|-|-|
|vectorizer_id| int | -|✔|The identifier of the vectorizer you want to drop|
|drop_all| bool | false |✖|true to drop the target table and view as well|
#### Returns
`ai.drop_vectorizer` does not return a value, but it performs several cleanup operations.
## View vectorizer status
[ai.vectorizer_status view](#aivectorizer_status-view) and
[ai.vectorizer_queue_pending function](#aivectorizer_queue_pending-function) are
monitoring tools in pgai that provide insights into the state and performance of vectorizers.
These monitoring tools are crucial for maintaining the health and performance of
your pgai-enhanced database. They allow you to proactively manage your
vectorizers, ensure timely processing of embeddings, and quickly identify and
address any issues that may arise in your AI-powered data pipelines.
For effective monitoring, you use `ai.vectorizer_status`.
For example:
```sql
-- Get an overview of all vectorizers
SELECT * FROM ai.vectorizer_status;
```
Sample output:
| id | source_table | target_table | view | pending_items |
|----|--------------|--------------|------|---------------|
| 1 | public.blog | public.blog_contents_embedding_store | public.blog_contents_embeddings | 1 |
The `pending_items` column indicates the number of items still awaiting embedding creation. The pending items count helps you to:
- Identify bottlenecks in processing.
- Determine if you need to adjust scheduling or processing configurations.
- Monitor the impact of large data imports or updates on your vectorizers.
Regular monitoring using these tools helps ensure that your vectorizers are keeping up with data changes, and that
embeddings remain up-to-date.
Available views are:
- [ai.vectorizer_status](#aivectorizer_status-view): view, monitor and display information about a vectorizer.
Available functions are:
- [ai.vectorizer_queue_pending](#aivectorizer_queue_pending-function): retrieve just the queue count for a vectorizer.
### ai.vectorizer_status view
You use `ai.vectorizer_status` to:
- Get a high-level overview of all vectorizers in the system.
- Regularly monitor and check the health of the entire system.
- Display key information about each vectorizer's configuration and current state.
- Use the `pending_items` column to get a quick indication of processing backlogs.
#### Example usage
- Retrieve all vectorizers that have items waiting to be processed:
```sql
SELECT * FROM ai.vectorizer_status WHERE pending_items > 0;
```
- System health monitoring:
```sql
-- Alert if any vectorizer has more than 1000 pending items
SELECT id, source_table, pending_items
FROM ai.vectorizer_status
WHERE pending_items > 1000;
```
#### Returns
`ai.vectorizer_status` returns the following:
| Column name | Description |
|---------------|-----------------------------------------------------------------------|
| id | The unique identifier of this vectorizer |
|source_table | The fully qualified name of the source table |
|target_table | The fully qualified name of the table storing the embeddings |
|view | The fully qualified name of the view joining source and target tables |
| pending_items | The number of items waiting to be processed by the vectorizer |
### ai.vectorizer_queue_pending function
`ai.vectorizer_queue_pending` enables you to retrieve the number of items in a vectorizer queue
when you need to focus on a particular vectorizer or troubleshoot issues.
You use `vectorizer_queue_pending` to:
- Retrieve the number of pending items for a specific vectorizer.
- Allow for more granular monitoring of individual vectorizer queues.
#### Example usage
Return the number of pending items for the vectorizer with ID 1:
```sql
SELECT ai.vectorizer_queue_pending(1);
```
A queue with a very large number of items may be slow to count. The optional
`exact_count` parameter defaults to `false`. When it is `false`, the count is capped:
an exact count is returned if the queue has 10,000 or fewer items, and
9223372036854775807 (the maximum bigint value) is returned if there are more than
10,000 items.
To get an exact count, regardless of queue size, set the optional parameter to
`true` like this:
```sql
SELECT ai.vectorizer_queue_pending(1, exact_count=>true);
```
#### Parameters
`ai.vectorizer_queue_pending` takes the following parameters:
| Name | Type | Default | Required | Description |
|---------------|------|---------|----------|---------------------------------------------------------|
| vectorizer_id | int | - | ✔ | The identifier of the vectorizer you want to check |
| exact_count | bool | false | ✖ | If true, return exact count. If false, capped at 10,000 |
#### Returns
The number of items in the queue for the specified vectorizer.
[timescale-cloud]: https://console.cloud.timescale.com/
[openai-use-env-var]: https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety#h_a1ab3ba7b2
[openai-set-key]: https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety#h_a1ab3ba7b2
[docker configuration]: https://github.com/timescale/pgai/blob/main/docs/vectorizer/worker.md#install-and-configure-vectorizer-worker
|
docs/vectorizer/quick-start-openai.md | Vectorizer quick start with OpenAI | # Vectorizer quick start with OpenAI
This page shows you how to create a vectorizer in a self-hosted Postgres instance, then use
the pgai vectorizer worker to create embeddings from data in your database. To finish off we show how simple it
is to do semantic search on the embedded data in one query!
## Setup a local developer environment
The local developer environment is a docker configuration you use to develop and test pgai, vectorizers and vectorizer
worker locally. It includes a:
- Postgres deployment image with the TimescaleDB and pgai extensions installed
- pgai vectorizer worker image
On your local machine:
1. **Create the Docker configuration for a local developer environment**
Copy the following configuration into a file named `compose.yaml`:
```yaml
name: pgai
services:
db:
image: timescale/timescaledb-ha:pg16
environment:
POSTGRES_PASSWORD: postgres
OPENAI_API_KEY: <your-api-key>
ports:
- "5432:5432"
volumes:
- data:/home/postgres/pgdata/data
vectorizer-worker:
image: timescale/pgai-vectorizer-worker:latest
environment:
PGAI_VECTORIZER_WORKER_DB_URL: postgres://postgres:postgres@db:5432/postgres
volumes:
data:
```
1. **Tune the developer image for your AI provider**
Replace `OPENAI_API_KEY` with a key from your AI provider.
1. **Start the database**
```shell
docker compose up -d db
```
## Create and run a vectorizer
To create and run a vectorizer, then query the auto-generated embeddings created by the vectorizer:
1. **Connect to the database in your local developer environment**
- Docker: `docker compose exec -it db psql`
- psql: `psql postgres://postgres:postgres@localhost:5432/postgres`
1. **Enable pgai on your database**
```sql
CREATE EXTENSION IF NOT EXISTS ai CASCADE;
```
1. **Create the `blog` table with the following schema**
```sql
CREATE TABLE blog (
id SERIAL PRIMARY KEY,
title TEXT,
authors TEXT,
contents TEXT,
metadata JSONB
);
```
1. **Insert some data into `blog`**
```sql
INSERT INTO blog (title, authors, contents, metadata)
VALUES
('Getting Started with PostgreSQL', 'John Doe', 'PostgreSQL is a powerful, open source object-relational database system...', '{"tags": ["database", "postgresql", "beginner"], "read_time": 5, "published_date": "2024-03-15"}'),
('10 Tips for Effective Blogging', 'Jane Smith, Mike Johnson', 'Blogging can be a great way to share your thoughts and expertise...', '{"tags": ["blogging", "writing", "tips"], "read_time": 8, "published_date": "2024-03-20"}'),
('The Future of Artificial Intelligence', 'Dr. Alan Turing', 'As we look towards the future, artificial intelligence continues to evolve...', '{"tags": ["AI", "technology", "future"], "read_time": 12, "published_date": "2024-04-01"}'),
('Healthy Eating Habits for Busy Professionals', 'Samantha Lee', 'Maintaining a healthy diet can be challenging for busy professionals...', '{"tags": ["health", "nutrition", "lifestyle"], "read_time": 6, "published_date": "2024-04-05"}'),
('Introduction to Cloud Computing', 'Chris Anderson', 'Cloud computing has revolutionized the way businesses operate...', '{"tags": ["cloud", "technology", "business"], "read_time": 10, "published_date": "2024-04-10"}');
```
4. **Create a vectorizer for `blog`**
```sql
SELECT ai.create_vectorizer(
'blog'::regclass,
loading => ai.loading_column('contents'),
destination => 'blog_contents_embeddings',
embedding => ai.embedding_openai('text-embedding-3-small', 768)
);
```
5. **Run the vectorizer worker**
When you install pgai on Timescale Cloud, vectorizers are run automatically using TimescaleDB scheduling.
For self-hosted, you run a pgai vectorizer worker so the vectorizer can process the data in `blog`.
In a new terminal, start the vectorizer worker:
```shell
docker compose up -d vectorizer-worker
```
1. **Check the vectorizer worker logs**
```shell
docker compose logs -f vectorizer-worker
```
You see the vectorizer worker pick up the table and process it.
```shell
vectorizer-worker-1 | 2024-10-23 12:56:36 [info ] running vectorizer vectorizer_id=1
```
1. **See the embeddings in action**
Run the following search query to retrieve the embeddings:
```sql
SELECT
chunk,
embedding <=> ai.openai_embed('text-embedding-3-small', 'good food', dimensions=>768) as distance
FROM blog_contents_embeddings
ORDER BY distance;
```
The results look like:
| chunk | distance |
|------|--------|
| Maintaining a healthy diet can be challenging for busy professionals... | 0.6720892190933228 |
| Blogging can be a great way to share your thoughts and expertise... | 0.7744888961315155 |
| PostgreSQL is a powerful, open source object-relational database system... | 0.815629243850708 |
| Cloud computing has revolutionized the way businesses operate... | 0.8913049921393394 |
| As we look towards the future, artificial intelligence continues to evolve... | 0.9215681301612775 |
That's it, you're done. You now have a table in Postgres that pgai vectorizer automatically creates
and syncs embeddings for. You can use this vectorizer for semantic search, RAG or any other AI
app you can think of! If you have any questions, reach out to us on [Discord](https://discord.gg/KRdHVXAmkp).
|
docs/vectorizer/worker.md | Run vectorizers using pgai vectorizer worker |
# Run vectorizers using pgai vectorizer worker
When you install pgai on Timescale Cloud or another cloud installation, you use
scheduling to control the times when vectorizers are run. A scheduled job detects whether work is to be done for the
vectorizers. If there is, the job runs the cloud function to embed the data.
Note: Timescale Cloud currently supports embedding natively with OpenAI. To use Ollama on the data in your Timescale Cloud service, set [scheduling => ai.scheduling_none()](/docs/vectorizer/api-reference.md#scheduling-configuration) in the configuration for your service, then [install the vectorizer worker locally](/docs/vectorizer/worker.md#install-and-configure-vectorizer-worker) and configure it to connect to your Timescale Cloud service.
When you have [defined vectorizers](/docs/vectorizer/overview.md#define-a-vectorizer) on a self-hosted Postgres installation, or on a cloud provider other than Timescale Cloud, you
use the vectorizer worker to process them asynchronously. By default, when you run `pgai vectorizer worker`, it
loops over the vectorizers defined in your database and processes each vectorizer in turn.
This page shows you how to install, run, and manage the workers that run vectorizers in your database:
- [Install and configure vectorizer worker](#install-and-configure-vectorizer-worker): set up the environment
  to securely run vectorizers defined in a self-hosted Postgres environment
- [Run vectorizers with vectorizer worker](#run-vectorizers-with-vectorizer-worker): run specific
vectorizers in your database as either single, parallel or concurrent tasks
- [Set the time between vectorizer worker runs](#set-the-time-between-vectorizer-worker-runs): manage run scheduling
- [Additional configuration via environment variables](#additional-configuration-via-environment-variables): an overview of the environment variables and their purpose
## Prerequisites
To run vectorizer workers, you need to:
* Install:
* Container environment: [Docker][docker]
* Local environment: [Python3][python3] and [pip][pip]
* All environments: A Postgres client like [psql][psql]
* Create a key for your AI provider:
* [OpenAI][openai-key]
* [Voyage AI][voyage-key]
## Install and configure vectorizer worker
To be able to run vectorizers in your self-hosted database, use one of the following setups:
- [End-to-end vectorizer worker with Docker Compose](#end-to-end-vectorizer-worker-with-docker-compose): a Docker Compose configuration with a database instance and pgai vectorizer worker
- [Standalone vectorizer worker with Docker](#standalone-vectorizer-worker-with-docker): a Docker image you use to run vectorizers on any self-hosted Postgres database with the pgai
extension activated
- [Install vectorizer worker as a python package](#install-vectorizer-worker-as-a-python-package): install pgai as a python package so you can run vectorizers on any self-hosted
Postgres database with the pgai extension activated
### End-to-end vectorizer worker with Docker Compose
The end-to-end vectorizer worker is a batteries-included Docker Compose
configuration which you use to test pgai, vectorizers and vectorizer worker
locally. It includes a:
- local Postgres instance with pgai installed,
- Ollama embedding API service
- pgai vectorizer worker
On your local machine:
1. **Copy the following configuration into a file named `compose.yaml`**
```yaml
name: pgai
services:
db:
image: timescale/timescaledb-ha:pg17
environment:
POSTGRES_PASSWORD: postgres
ports:
- "5432:5432"
volumes:
- data:/var/lib/postgresql/data
vectorizer-worker:
image: timescale/pgai-vectorizer-worker:latest
environment:
PGAI_VECTORIZER_WORKER_DB_URL: postgres://postgres:postgres@db:5432/postgres
OLLAMA_HOST: http://ollama:11434
command: [ "--poll-interval", "5s" ]
ollama:
image: ollama/ollama
volumes:
data:
```
1. **Start the services locally**
```shell
docker compose up -d
```
1. **Connect to your self-hosted database**
- Docker: `docker compose exec -it db psql`
- psql: `psql postgres://postgres:postgres@localhost:5432/postgres`
### Standalone vectorizer worker with Docker
The `timescale/pgai-vectorizer-worker` docker image supplies the pgai vectorizer worker.
You use this image to run vectorizers on any self-hosted Postgres database that has the
pgai extension activated.
On your local machine:
1. **Run the vectorizer worker**
For self-hosted, you run a pgai vectorizer worker to automatically create embeddings from the data in your
database using [vectorizers you defined previously](/docs/vectorizer.md#define-a-vectorizer).
Start the vectorizer worker:
```
docker run timescale/pgai-vectorizer-worker:{tag version} --db-url <DB URL>
```
### Install vectorizer worker as a python package
On your local machine:
1. **Install [pgai](https://pypi.org/project/pgai/) from PyPI**
```shell
pip install pgai
```
The vectorizer worker, `pgai vectorizer worker` is now in your `$PATH`.
1. **Run the vectorizer worker**
After you [define a vectorizer in your database](/docs/vectorizer.md#define-a-vectorizer), you run
a vectorizer worker to generate and update your embeddings:
1. Configure environment variables if necessary (see [Additional configuration via environment variables](#additional-configuration-via-environment-variables))
for a list of the available environment variables.
1. Run the vectorizer worker:
```shell
pgai vectorizer worker -d <db-connection-string>
```
## Run vectorizers with vectorizer worker
By default, when you run a vectorizer worker, it loops over the vectorizers defined in
your database and processes each vectorizer in turn. Five minutes after completing each
vectorizer run, the vectorizer worker loops over the vectorizers again.
For a [local installation](#install-vectorizer-worker-as-a-python-package), you use the
`-i` / `--vectorizer-id` command line argument to manage which vectorizers are run by that
worker instance. For `docker compose`, you add arguments using either the `command` or `environment`
flags in `compose.yaml`.
A vectorizer worker can:
- Run all vectorizers:
To run all current and future vectorizers:
- local: `pgai vectorizer worker`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version}`
- Docker Compose: `command: []`
- Run a single vectorizer:
To run the vectorizer with id 42:
- local: `pgai vectorizer worker -i 42`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} -i 42`
- Docker Compose: `command: ["-i", "42"]`
- Run multiple specific vectorizers:
To run the vectorizers with ids `42`, `64`, and `8`:
- local: `pgai vectorizer worker -i 42 -i 64 -i 8`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} -i 42 -i 64 -i 8`
- Docker Compose: `command: ["-i", "42", "-i", "64", "-i", "8"]`
- Run multiple vectorizers in concurrent vectorizer workers:
To run the vectorizers with id `42` and `64` in different vectorizer workers:
1. In a first shell, run:
- local: `pgai vectorizer worker -i 42`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} -i 42`
- Docker Compose: `command: ["-i", "42"]`
1. In another shell, run:
- local: `pgai vectorizer worker -i 64`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} -i 64`
- Docker Compose: `command: ["-i", "64"]`
- Run concurrent vectorizer workers on a single vectorizer
More than one vectorizer worker can efficiently process the same vectorizer id
at the same time. To run the vectorizer with id `42` in different vectorizer workers:
1. In a first shell, run:
- local: `pgai vectorizer worker -i 42`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} -i 42`
- Docker Compose: `command: ["-i", "42"]`
1. In another shell, run:
- local: `pgai vectorizer worker -i 42`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} -i 42`
- Docker Compose: `command: ["-i", "42"]`
You find the vectorizer ids in the `ai.vectorizer` table.
## Set the time between vectorizer worker runs
When you run a vectorizer worker, it loops over the vectorizers defined in your database.
Each vectorizer worker processes its vectorizer queue until it is empty. By
default, the vectorizer worker then sleeps for five minutes and starts over.
To control the time between vectorizer worker iterations, set the integer seconds or a duration string
in the `--poll-interval` parameter:
- Run every hour:
- local: `pgai vectorizer worker --poll-interval=1h`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} --poll-interval=1h`
- Docker Compose: `command: ["--poll-interval", "1h"]`
- Run every 45 minutes:
- local: `pgai vectorizer worker --poll-interval=45m`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} --poll-interval=45m`
- Docker Compose: `command: ["--poll-interval", "45m"]`
- Run every 900 seconds:
- local: `pgai vectorizer worker --poll-interval=900`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} --poll-interval=900`
- Docker Compose: `command: ["--poll-interval", "900"]`
- Run once and then exit:
- local: `pgai vectorizer worker --once`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} --once`
- Docker Compose: `command: ["--once"]`
This is useful if you want to run the vectorizer worker on a cron job.
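For example, a crontab entry along these lines runs the worker every 15 minutes; the schedule, the path to the `pgai` executable, and the connection string are illustrative and should be adapted to your environment:
```shell
# Illustrative crontab entry: process all vectorizer queues once, every 15 minutes
*/15 * * * * /usr/local/bin/pgai vectorizer worker --once -d postgres://postgres:postgres@localhost:5432/postgres
```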
### Set the number of asynchronous tasks running in a vectorizer worker
Use the `-c` / `--concurrency` option to cause the vectorizer worker to use
multiple asynchronous tasks to process a queue:
- local: `pgai vectorizer worker -c 3`
- Docker: `docker run timescale/pgai-vectorizer-worker:{tag version} -c 3`
- Docker Compose: `command: ["-c", "3"]`
## Additional configuration via environment variables
Some important internals of the vectorizer worker are configured through
the following environment variables.
| Environment Variable | Default | Purpose |
|---------------------------------------------|------------------------|---------------------------------------------------------------------------------------------------------------------------------|
| PGAI_VECTORIZER_WORKER_DB_URL               | -                      | Configures the database URL that the vectorizer worker uses to process vectorizers.                                              |
| OPENAI_API_KEY | - | The API key that the vectorizer worker uses to authenticate against the OpenAI API. Can also be provided on the db instance. |
| VOYAGE_API_KEY | - | The API key that the vectorizer worker uses to authenticate against the Voyage AI API. Can also be provided on the db instance. |
| OLLAMA_HOST | http://localhost:11434 | The host to use when communicating with the Ollama API. |
| PGAI_VECTORIZER_OLLAMA_MAX_CHUNKS_PER_BATCH | 2048 | Configures the number of chunks of data embedded in one Ollama API call, defaults to 2048 |
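For example, a minimal shell sketch that configures the worker through two of these variables before starting it; the connection string and API key are illustrative placeholders:
```shell
# Illustrative values: point these at your own database and OpenAI API key
export PGAI_VECTORIZER_WORKER_DB_URL="postgres://postgres:postgres@localhost:5432/postgres"
export OPENAI_API_KEY="<your-api-key>"
pgai vectorizer worker --poll-interval=5m
```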
[python3]: https://www.python.org/downloads/
[pip]: https://pip.pypa.io/en/stable/installation/#supported-methods
[docker]: https://docs.docker.com/get-docker/
[psql]: https://www.timescale.com/blog/how-to-install-psql-on-mac-ubuntu-debian-windows/
[openai-key]: https://platform.openai.com/api-keys
[voyage-key]: https://docs.voyageai.com/docs/faq#how-do-i-get-the-voyage-api-key
|
docs/vectorizer/quick-start-ollama.md | Vectorizer quick start with Ollama | # Vectorizer quick start with Ollama
## Go to our vectorizer-quickstart [here](/docs/vectorizer/quick-start.md) to start with pgai and ollama. |
docs/vectorizer/python-integration.md | Creating vectorizers from python | # Creating vectorizers from python
To create a vectorizer from python you use the `CreateVectorizer` helper class from the `pgai.vectorizer` module.
It accepts all the options listed in the [SQL API](/docs/vectorizer/api-reference.md) and exposes the `to_sql`
method to generate a SQL query which you can then run through the SQL library of your choice.
First install the pgai library:
```bash
pip install pgai
```
Then you can create a vectorizer from python:
```python
from pgai.vectorizer import CreateVectorizer
from pgai.vectorizer.configuration import EmbeddingOpenaiConfig, ChunkingCharacterTextSplitterConfig, FormattingPythonTemplateConfig, LoadingColumnConfig
vectorizer_statement = CreateVectorizer(
source="blog",
target_table='blog_embeddings',
loading=LoadingColumnConfig(column_name='content'),
embedding=EmbeddingOpenaiConfig(
model='text-embedding-3-small',
dimensions=768
),
chunking=ChunkingCharacterTextSplitterConfig(
chunk_size=800,
chunk_overlap=400,
separator='.',
is_separator_regex=False
),
formatting=FormattingPythonTemplateConfig(template='$title - $chunk')
).to_sql()
```
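You can then execute the generated statement with any SQL client. For example, a minimal sketch using [psycopg](https://www.psycopg.org/); the connection string is illustrative:
```python
import psycopg

# Illustrative connection string; replace with your own database URL.
# The `with` block commits the transaction on successful exit.
with psycopg.connect("postgres://postgres:postgres@localhost:5432/postgres") as conn:
    conn.execute(vectorizer_statement)  # vectorizer_statement from the snippet above
```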
# SQLAlchemy Integration with pgai Vectorizer
The `vectorizer_relationship` is a SQLAlchemy helper that integrates pgai's vectorization capabilities directly into your SQLAlchemy models.
Think of it as a normal SQLAlchemy [relationship](https://docs.sqlalchemy.org/en/20/orm/basic_relationships.html), but with a preconfigured model instance under the hood.
This allows you to easily query vector embeddings created by pgai using familiar SQLAlchemy patterns.
## Installation
To use the SQLAlchemy integration, install pgai with the SQLAlchemy extras:
```bash
pip install "pgai[sqlalchemy]"
```
## Basic Usage
Here's a basic example of how to use the `vectorizer_relationship`:
```python
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
from pgai.sqlalchemy import vectorizer_relationship
class Base(DeclarativeBase):
pass
class BlogPost(Base):
__tablename__ = "blog_posts"
id: Mapped[int] = mapped_column(primary_key=True)
title: Mapped[str]
content: Mapped[str]
# Add vector embeddings for the content field
content_embeddings = vectorizer_relationship(
dimensions=768
)
```
Note: if you work with Alembic's autogenerate functionality for migrations, also check [Working with alembic](#working-with-alembic).
### Semantic Search
You can then perform semantic similarity search on the field using [pgvector-python's](https://github.com/pgvector/pgvector-python) distance functions:
```python
from sqlalchemy import func, text
similar_posts = (
session.query(BlogPost.content_embeddings)
.order_by(
BlogPost.content_embeddings.embedding.cosine_distance(
func.ai.openai_embed(
"text-embedding-3-small",
"search query",
text("dimensions => 768")
)
)
)
.limit(5)
.all()
)
```
Or if you already have the embeddings in your application:
```python
similar_posts = (
session.query(BlogPost.content_embeddings)
.order_by(
BlogPost.content_embeddings.embedding.cosine_distance(
[3, 1, 2]
)
)
.limit(5)
.all()
)
```
## Configuration
The `vectorizer_relationship` accepts the following parameters:
- `dimensions` (int): The size of the embedding vector (required)
- `target_schema` (str, optional): Override the schema for the embeddings table. If not provided, inherits from the parent model's schema
- `target_table` (str, optional): Override the table name for embeddings. Default is `{table_name}_embedding_store`
Additional parameters are simply forwarded to the underlying [SQLAlchemy relationship](https://docs.sqlalchemy.org/en/20/orm/relationships.html) so you can configure it as you desire.
Think of the `vectorizer_relationship` as a normal SQLAlchemy relationship, but with a preconfigured model instance under the hood.
The relationship in the other direction is also set automatically. If you want to change its configuration, you can set the
`parent_kwargs` parameter, for example `parent_kwargs={"lazy": "joined"}` to configure eager loading, as sketched below.
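A minimal sketch of that configuration, reusing the `BlogPost` model from the basic usage example above; only the `parent_kwargs` argument is new:
```python
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
from pgai.sqlalchemy import vectorizer_relationship

class Base(DeclarativeBase):
    pass

class BlogPost(Base):
    __tablename__ = "blog_posts"
    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str]
    content: Mapped[str]

    # parent_kwargs is forwarded to the reverse relationship:
    # eager-load the parent blog post whenever an embedding row is loaded.
    content_embeddings = vectorizer_relationship(
        dimensions=768,
        parent_kwargs={"lazy": "joined"},
    )
```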
## Setting up the Vectorizer
After defining your model, you need to create the vectorizer using pgai's SQL functions:
```sql
SELECT ai.create_vectorizer(
'blog_posts'::regclass,
loading => ai.loading_column('content'),
embedding => ai.embedding_openai('text-embedding-3-small', 768),
chunking => ai.chunking_recursive_character_text_splitter(
50, -- chunk_size
10 -- chunk_overlap
)
);
```
We recommend adding this to a migration script and running it via Alembic.
## Querying Embeddings
The `vectorizer_relationship` provides several ways to work with embeddings:
### 1. Direct Access to Embeddings
If you access the class property of your model, the `vectorizer_relationship` provides a SQLAlchemy model that you can query directly:
```python
# Get all embeddings
embeddings = session.query(BlogPost.content_embeddings).all()
# Access embedding properties
for embedding in embeddings:
print(embedding.embedding) # The vector embedding
print(embedding.chunk) # The text chunk
```
The model will have the primary key fields of the parent model as well as the following fields:
- `chunk` (str): The text chunk that was embedded
- `embedding` (Vector): The vector embedding
- `chunk_seq` (int): The sequence number of the chunk
- `embedding_uuid` (str): The UUID of the embedding
- `parent` (ParentModel): The parent model instance
### 2. Relationship Access
```python
blog_post = session.query(BlogPost).first()
for embedding in blog_post.content_embeddings:
print(embedding.chunk)
```
Access the original posts through the parent relationship
```python
for embedding in similar_posts:
print(embedding.parent.title)
```
### 3. Join Queries
You can combine embedding queries with regular SQL queries using the relationship:
```python
results = (
session.query(BlogPost, BlogPost.content_embeddings)
.join(BlogPost.content_embeddings)
.filter(BlogPost.title.ilike("%search term%"))
.all()
)
for post, embedding in results:
print(f"Title: {post.title}")
print(f"Chunk: {embedding.chunk}")
```
## Working with alembic
### Excluding managed tables
The `vectorizer_relationship` generates a new SQLAlchemy model, that is available under the attribute that you specify. If you are using alembic's autogenerate functionality to generate migrations, you will need to exclude these models from the autogenerate process.
These are added to a list in your metadata called `pgai_managed_tables` and you can exclude them by adding the following to your `env.py`:
```python
def include_object(object, name, type_, reflected, compare_to):
if type_ == "table" and name in target_metadata.info.get("pgai_managed_tables", set()):
return False
return True
context.configure(
connection=connection,
target_metadata=target_metadata,
include_object=include_object
)
```
This should now prevent alembic from generating tables for these models when you run `alembic revision --autogenerate`.
### Creating vectorizers
pgai provides native Alembic operations for managing vectorizers. For them to work, you need to call `register_operations` in your `env.py` file, which registers the pgai operations under the global `op` context:
```python
from pgai.alembic import register_operations
register_operations()
```
Then you can use the `create_vectorizer` operation to create a vectorizer for your model, and the `drop_vectorizer` operation to remove it.
```python
from alembic import op
from pgai.vectorizer.configuration import (
EmbeddingOpenaiConfig,
ChunkingCharacterTextSplitterConfig,
FormattingPythonTemplateConfig,
LoadingColumnConfig
)
def upgrade() -> None:
op.create_vectorizer(
source="blog",
target_table='blog_embeddings',
loading=LoadingColumnConfig(column_name='content'),
embedding=EmbeddingOpenaiConfig(
model='text-embedding-3-small',
dimensions=768
),
chunking=ChunkingCharacterTextSplitterConfig(
chunk_size=800,
chunk_overlap=400,
separator='.',
is_separator_regex=False
),
formatting=FormattingPythonTemplateConfig(template='$title - $chunk')
)
def downgrade() -> None:
op.drop_vectorizer(target_table="blog_embeddings", drop_all=True)
```
The `create_vectorizer` operation supports all configuration options available in the [SQL API](/docs/vectorizer/api-reference.md).
|
docs/vectorizer/overview.md | Automate AI embedding with pgai Vectorizer | # Automate AI embedding with pgai Vectorizer
Vector embeddings have emerged as a powerful tool for transforming text into
compact, semantically rich representations. This approach unlocks the potential
for more nuanced and context-aware searches, surpassing traditional
keyword-based methods. By leveraging vector embeddings, users can search through
things that have similar meanings but use completely different words.
While modern vector databases like PostgreSQL excel at storing and querying
these embeddings efficiently, the challenge of maintaining synchronization
between embeddings and their source data has typically fallen to developers,
requiring manual workflows and custom solutions.
Enter our innovative SQL-level interface for embedding services. This guide
introduces a groundbreaking approach that automates the embedding process within
the database management system itself. By treating embeddings as a declarative,
DDL-like feature (akin to an index, but with the added flexibility of
representing only a part of a row's data), we've simplified the entire workflow.
Our system empowers you to:
- Designate any text column for embedding using customizable rules
- Automatically generate and maintain searchable embedding tables
- Keep embeddings continuously synchronized with source data (asynchronously)
- Utilize a convenient view that seamlessly joins base tables with their embeddings
This page offers a comprehensive overview of Vectorizer features,
demonstrating how it streamlines the process of working with vector embeddings
in your database. To quickly try out embeddings using a pre-built Docker developer environment, see the
[Vectorizer quick start](/docs/vectorizer/quick-start.md). For a more detailed technical specification, see the
[Vectorizer API reference](/docs/vectorizer/api-reference.md).
To make embedding generation performant, and resilient to intermittent LLM
endpoint failures, we use a background worker to perform the embedding
generation. When you create Vectorizers in a [Timescale Cloud](https://tsdb.co/gh-pgai-signup) database, the
worker runs automatically and creates and synchronizes the embeddings in the
background. When using a database on another cloud provider (AWS RDS, Supabase,
etc.) or self-hosted Postgres, you can use the [vectorizer worker](/docs/vectorizer/worker.md) to
process your vectorizers.
Let's explore how the Vectorizer can transform your approach to unstructured
textual data analysis and semantic search:
- [Select an embedding provider and set up your API Keys](#select-an-embedding-provider-and-set-up-your-api-keys)
- [Define a vectorizer](#define-a-vectorizer)
- [Query an embedding](#query-an-embedding)
- [Inject context into vectorizer chunks](#inject-context-into-vectorizer-chunks)
- [Improve query performance on your Vectorizer](#improve-query-performance-on-your-vectorizer)
- [Control vectorizer run time](#control-the-vectorizer-run-time)
- [The embedding storage table](#the-embedding-storage-table)
- [Monitor a vectorizer](#monitor-a-vectorizer)
## Select an embedding provider and set up your API Keys
Vectorizer supports the following vector embedding providers as first-party integrations:
- [Ollama](https://ollama.com/)
- [Voyage AI](https://www.voyageai.com/)
- [OpenAI](https://openai.com/)
Additionally, through the [LiteLLM](https://litellm.ai) provider we support:
- [Cohere](https://cohere.com/)
- [HuggingFace Inference Endpoints](https://endpoints.huggingface.co/)
- [Mistral](https://mistral.ai/)
- [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service)
- [AWS Bedrock](https://aws.amazon.com/bedrock/)
- [Vertex AI](https://cloud.google.com/vertex-ai)
When using an external embedding service, you need to set up your API keys to access
the service. To store several API keys, you give each key a name and reference them
in the `embedding` section of the Vectorizer configuration. The default API key
names match the embedding provider's default name.
The default key names are:
| Provider | Key name |
|-----------|----------------|
| OpenAI | OPENAI_API_KEY |
| Voyage AI | VOYAGE_API_KEY |
Setting up your API keys is done differently depending on whether you are using Vectorizer in
Timescale Cloud or on a self-hosted Postgres server.
- Timescale Cloud
1. In [Timescale Console > Project Settings](https://console.cloud.timescale.com/dashboard/settings), click `AI Model API Keys`.
1. Click `Add AI Model API Keys`, add your key, then click `Add API key`.
Your API key is stored securely in Timescale Cloud, not your database.
- Self-hosted Postgres
Set an environment variable that is the [same as your API key name](/docs/vectorizer/worker.md#install-and-configure-vectorizer-worker).
For example:
```bash
export OPENAI_API_KEY="Your OpenAI API key"
```
## Define a vectorizer
You can configure the system to automatically generate and update embeddings
for a table's data. Let's consider the following example table:
```sql
CREATE TABLE blog(
id SERIAL PRIMARY KEY,
title TEXT,
authors TEXT,
contents TEXT,
metadata JSONB
);
```
To configure the system to embed this data automatically, you can use a SQL
query like this:
```sql
SELECT ai.create_vectorizer(
'blog'::regclass,
loading => ai.loading_column('contents'),
destination => 'blog_contents_embeddings',
embedding => ai.embedding_ollama('nomic-embed-text', 768)
);
```
This example uses the `nomic-embed-text` embedding model hosted on a local
Ollama instance. Vectorizer supports other embedding providers, for more details
consult the [embedding configuration](/docs/vectorizer/api-reference.md#embedding-configuration)
section of the vectorizer API reference.
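For example, assuming you have configured an OpenAI API key as described above, the same vectorizer could instead use an OpenAI embedding model. This sketch simply mirrors the Ollama example and is illustrative only:
```sql
SELECT ai.create_vectorizer(
    'blog'::regclass,
    loading => ai.loading_column('contents'),
    destination => 'blog_contents_embeddings',
    embedding => ai.embedding_openai('text-embedding-3-small', 768)
);
```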
The `loading` parameter specifies the source of the data to generate embeddings from, for example the `contents` column.
Vectorizer supports other loaders, such as
`ai.loading_uri`, which loads external documents from local or remote buckets like S3.
For more details, check the [loading configuration](/docs/vectorizer/api-reference.md#loading-configuration) section
of the vectorizer API reference.
Additionally, if the `contents` field is lengthy, it is split into multiple chunks,
resulting in several embeddings for a single blog post. Chunking helps
ensure that each embedding is semantically coherent, typically representing a
single thought or concept. A useful mental model is to think of embedding one
paragraph at a time.
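The chunking strategy itself can be configured through the `chunking` parameter. A minimal sketch, assuming the `ai.chunking_recursive_character_text_splitter` function; the chunk size and overlap values are illustrative only (see the [Vectorizer API reference](/docs/vectorizer/api-reference.md) for all options):
```sql
SELECT ai.create_vectorizer(
    'blog'::regclass,
    loading => ai.loading_column('contents'),
    destination => 'blog_contents_embeddings',
    embedding => ai.embedding_ollama('nomic-embed-text', 768),
    chunking => ai.chunking_recursive_character_text_splitter(
        800, -- chunk_size
        400  -- chunk_overlap
    )
);
```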
However, splitting text into chunks can sometimes lead to losing context. To
mitigate this, you can reintroduce context into each chunk. For instance, you
might want to repeat the blog post's title in every chunk. This is easily
achieved using the `formatting` parameter, which allows you to inject row data
into each chunk:
```sql
SELECT ai.create_vectorizer(
'blog'::regclass,
loading => ai.loading_column('contents'),
destination => 'blog_contents_embeddings',
embedding => ai.embedding_ollama('nomic-embed-text', 768),
formatting => ai.formatting_python_template('$title: $chunk')
);
```
This approach ensures that each chunk retains important contextual information,
improving the quality and relevance of the embeddings.
On Timescale Cloud, vectorizers run automatically on a schedule, using TimescaleDB background jobs that execute
every five minutes. If you are self-hosting, you need to [run the vectorizer-worker](/docs/vectorizer/worker.md)
manually to create and run the vectorizer.
## Query an embedding
The `create_vectorizer` command generates a view with the same name as the
specified destination. This view contains all the embeddings for the blog table.
Note that you'll typically have multiple rows in the view for each blog entry,
as multiple embeddings are usually generated for each source document.
The view includes all columns from the blog table plus the following additional columns:
| Column | Type | Description |
|----------------|--------|-----------------------------------------------------------------|
| embedding_uuid | UUID | Unique identifier for the embedding |
| chunk | TEXT | The text segment that was embedded |
| embedding | VECTOR | The vector representation of the chunk |
| chunk_seq | INT | Sequence number of the chunk within the document, starting at 0 |
To find the closest embeddings to a query, use this canonical SQL query:
```sql
SELECT
chunk,
embedding <=> <query embedding> as distance
FROM blog_contents_embeddings
ORDER BY distance
LIMIT 10;
```
The `<=>` operator calculates the distance between the query embedding and each
row's embedding vector. This is a simple way to do semantic search.
**Tip**: You can use the `ai.ollama_embed` function in our [PostgreSQL extension](/projects/extension/README.md) to generate an embedding for a user-provided query right inside the database.
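For example, assuming the same `nomic-embed-text` model is available to the database through Ollama (pass a `host` argument if Ollama is not running at the default address, as shown in the [Vectorizer quick start](/docs/vectorizer/quick-start.md)), the query embedding can be generated inline:
```sql
SELECT
    chunk,
    embedding <=> ai.ollama_embed('nomic-embed-text', 'good food') as distance
FROM blog_contents_embeddings
ORDER BY distance
LIMIT 10;
```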
You can combine this with metadata filters by adding a WHERE clause:
```sql
SELECT
chunk,
embedding <=> <query embedding> as distance
FROM blog_contents_embeddings
WHERE
metadata->>'department' = 'finance'
ORDER BY
distance
LIMIT 10;
```
This approach works with any column from the blog table. For example, to search by author:
```sql
SELECT
chunk,
embedding <=> <query embedding> as distance,
    authors
FROM blog_contents_embeddings
WHERE
    authors = 'Bulgakov'
ORDER BY
distance
LIMIT 10;
```
<details>
<summary>Click to see SQLAlchemy examples for querying the embeddings</summary>
Given an example SQLAlchemy model:
```python
class Wiki(Base):
__tablename__ = "wiki"
id: Mapped[int] = mapped_column(primary_key=True)
url: Mapped[str]
title: Mapped[str]
text: Mapped[str]
# Add vector embeddings for the text field
text_embeddings = vectorizer_relationship(
target_table='wiki_embeddings',
dimensions=384
)
```
You can use the text_embeddings relationship to perform semantic search on the embeddings by ordering the results by distance.
```python
async def _find_relevant_chunks(client: ollama.AsyncClient, query: str, limit: int = 2) -> WikiSearchResult:
response = await client.embed(model="all-minilm", input=query)
embedding = response.embeddings[0]
with Session(engine) as session:
# Query both the Wiki model and its embeddings
result = session.query(
Wiki,
Wiki.text_embeddings.embedding.cosine_distance(embedding).label('distance')
).join(Wiki.text_embeddings).order_by(
'distance'
).limit(limit).all()
return result
```
You can, of course, add any other filters to the query.
</details>
## Inject context into vectorizer chunks
Formatting allows you to inject additional information into each chunk. This is
needed because splitting the text into chunks can lead to losing important
context. For instance, you might want to include the authors and title with each
chunk. This is achieved using Python template strings, which have access to all
columns in the row and a special `$chunk` variable containing the chunk's text.
You may need to reduce the chunk size to ensure the formatted text fits within
token limits. Adjust the `chunk_size` parameter of the text_splitter
accordingly:
```sql
SELECT ai.create_vectorizer(
'blog'::regclass,
loading => ai.loading_column('contents'),
destination => 'blog_contents_embeddings',
embedding => ai.embedding_ollama('nomic-embed-text', 768),
    formatting => ai.formatting_python_template('$title - by $authors - $chunk')
);
```
The default format string is simply `$chunk`.
## Improve query performance on your Vectorizer
A vector index on the embedding column improves query performance. On Timescale Cloud, a vectorscale
index is automatically created after 100,000 rows of vector data are present.
This behaviour is configurable; you can also specify other vector index types. The following
example uses an HNSW index:
```sql
SELECT ai.create_vectorizer(
'blog'::regclass,
loading => ai.loading_column('contents'),
destination => 'blog_contents_embeddings',
embedding => ai.embedding_ollama('nomic-embed-text', 768),
    formatting => ai.formatting_python_template('$title - by $authors - $chunk'),
indexing => ai.indexing_hnsw(min_rows => 100000, opclass => 'vector_l2_ops')
);
```
Note: Indexing relies on a background job that runs periodically, so this
feature will not work if scheduling is disabled (which is the default for self-hosted installations).
## Control the vectorizer run time
When you use Vectorizer on Timescale Cloud, you use scheduling to control the time when vectorizers run.
A scheduled job checks for work to be done and, if so, runs the cloud function to embed the data.
By default, scheduling uses TimescaleDB background jobs running every five minutes.
Once the table is large enough, scheduling also handles index creation on the embedding column.
When you self-host vectorizer, the vectorizer worker uses a polling mechanism to check whether
there is work to be done. Thus, scheduling is not needed and is deactivated by default.
Note: when scheduling is disabled, the index is not created automatically. You need to create it manually.
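A minimal sketch of creating such an index manually with pgvector on the embedding storage table (described in the next section); the index name is illustrative, and the operator class should match the distance operator you query with (`vector_cosine_ops` for `<=>`):
```sql
CREATE INDEX blog_contents_embeddings_store_embedding_idx
    ON blog_contents_embeddings_store
    USING hnsw (embedding vector_cosine_ops);
```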
## The embedding storage table
The view is based on a table storing blog embeddings, named
`blog_contents_embeddings_store`. You can query this table directly for
potentially more efficient queries. The table structure is as follows:
```sql
CREATE TABLE blog_contents_embeddings_store(
embedding_uuid UUID NOT NULL PRIMARY KEY DEFAULT gen_random_uuid(),
id INT, -- primary key referencing the blog table
chunk_seq INT NOT NULL,
chunk TEXT NOT NULL,
embedding VECTOR(768) NOT NULL,
UNIQUE (id, chunk_seq),
FOREIGN KEY (id) REFERENCES public.blog(id) ON DELETE CASCADE
);
```
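For example, a query against the storage table that joins back to the source rows might look like the following sketch, using the same `<query embedding>` placeholder as the earlier examples:
```sql
SELECT
    b.title,
    e.chunk,
    e.embedding <=> <query embedding> as distance
FROM blog_contents_embeddings_store e
JOIN blog b ON b.id = e.id
ORDER BY distance
LIMIT 10;
```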
## Monitor a vectorizer
Since embeddings are created asynchronously, a delay may occur before they
become available. Use the `vectorizer_status` view to monitor the vectorizer's
status:
```sql
SELECT * FROM ai.vectorizer_status;
```
Sample output:
| id | source_table | target_table | view | pending_items |
|----|--------------|--------------------------------------|---------------------------------|---------------|
| 1 | public.blog | public.blog_contents_embeddings_store | public.blog_contents_embeddings | 1 |
The `pending_items` column indicates the number of items still awaiting embedding creation.
If the number of pending items exceeds 10,000, we return the maximum value of a bigint (`9223372036854775807`)
instead of exhaustively counting the items. This is done for performance.
Alternatively, you can call the `ai.vectorizer_queue_pending` function to get the count of pending items
for a single vectorizer. The `exact_count` parameter defaults to `false`; passing `true`
exhaustively counts the exact number of pending items.
```sql
SELECT ai.vectorizer_queue_pending(1, exact_count=>true);
```
|
docs/vectorizer/adding-embedding-integration.md | Adding a Vectorizer embedding integration | # Adding a Vectorizer embedding integration
We welcome contributions to add new vectorizer embedding integrations.
The vectorizer consists of two components: the configuration, and the
vectorizer worker.
## Configuration
The vectorizer configuration lives in the database, in the `ai.vectorizer`
table. The `ai.create_vectorizer` function creates and inserts this
configuration into the table. When adding a new integration, only the argument
passed to the `embedding` parameter of `ai.create_vectorizer` is relevant. This
value is `jsonb` generated by the `ai.embedding_*` family of functions.
To add a new integration, add a new integration-specific function to the pgai
extension. This function generates the jsonb configuration for the new
integration. Refer to the existing `ai.embedding_openai` and
`ai.embedding_ollama` functions for examples of what these look like.
The configuration function should minimise mandatory arguments, while allowing
as many optional arguments as needed. Avoid using non-null default values for
optional arguments, as leaving a value unconfigured in the vectorizer may be
preferable, allowing it to be set in the vectorizer worker instead.
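For illustration only, a hypothetical configuration function for a made-up provider called `acme` might look roughly like the following sketch. The function name, arguments, and jsonb keys are placeholders; the existing `ai.embedding_openai` and `ai.embedding_ollama` functions are the authoritative reference for the exact shape the vectorizer worker expects:
```sql
-- hypothetical sketch; not a real pgai function
CREATE FUNCTION ai.embedding_acme(
    model text,
    dimensions int,
    api_key_name text DEFAULT null
) RETURNS jsonb AS
$$
    SELECT jsonb_build_object(
        'implementation', 'acme',
        'model', model,
        'dimensions', dimensions,
        'api_key_name', api_key_name
    )
$$ LANGUAGE sql IMMUTABLE;
```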
Update the implementation of `ai._validate_embedding` to account for the new
integration. Update the tests to account for the new function.
## Vectorizer Worker
The vectorizer worker reads the database's vectorizer configuration at runtime
and turns it into a `pgai.vectorizer.Config`.
To add a new integration, add a new file containing the embedding class
with fields corresponding to the database's jsonb configuration into the
[embedders directory]. See
the existing implementations for examples of how to do this. Implement the
`Embedder` class' abstract methods. Use first-party python libraries for the
integration, if available. If no first-party python libraries are available,
use direct HTTP requests.
Remember to add an import for your newly created class to the
[embedders \_\_init\_\_.py].
Add tests which perform end-to-end testing of the new integration. There are
two options for handling API calls to the integration API:
1. Use [vcr.py] to cache real requests to the API
2. Run against the real API
At minimum the integration should use option 1: vcr.py. Option 2 should be
used conservatively. We will determine on a case-by-case basis what level of
testing we would like.
[vcr.py]:https://vcrpy.readthedocs.io/en/latest/
[embedders directory]:/projects/pgai/pgai/vectorizer/embedders
[embedders \_\_init\_\_.py]:/projects/pgai/pgai/vectorizer/embedders/__init__.py
## pgai library
The pgai library exposes helpers to create a vectorizer via pure python.
The classes for this are autogenerated via code generation. To update the classes
with a new integration, see the code generator docs in [/projects/pgai/pgai/vectorizer/generate](/projects/pgai/pgai/vectorizer/generate/README.md).
## Documentation
Ensure that the new integration is documented:
- Document the new database function in [/docs/vectorizer/api-reference.md](/docs/vectorizer/api-reference.md).
- Document any changes to the vectorizer worker in [/docs/vectorizer/worker.md](/docs/vectorizer/worker.md).
- Add a new row in [Supported features in each model](/docs/README.md#pgai-model-calling) for your worker.
|
docs/vectorizer/quick-start.md | Vectorizer quick start | # Vectorizer quick start
This page shows you how to create an Ollama-based vectorizer in a self-hosted Postgres instance. We also show how simple it is to do semantic search on the automatically embedded data!
If you prefer working with the OpenAI API instead of self-hosting models, you can jump over to the [openai quick start](/docs/vectorizer/quick-start-openai.md).
## Setup a local development environment
To set up a development environment, use a Docker Compose file that includes:
- Postgres deployment image with the TimescaleDB and pgai extensions installed
- pgai vectorizer worker image
- ollama image to host embedding and large language models
On your local machine:
1. **Create the Docker configuration for a local developer environment**
Create the following `compose.yaml` in a new directory:
```yaml
name: pgai
services:
db:
image: timescale/timescaledb-ha:pg17
environment:
POSTGRES_PASSWORD: postgres
ports:
- "5432:5432"
volumes:
- data:/home/postgres/pgdata/data
vectorizer-worker:
image: timescale/pgai-vectorizer-worker:latest
environment:
PGAI_VECTORIZER_WORKER_DB_URL: postgres://postgres:postgres@db:5432/postgres
OLLAMA_HOST: http://ollama:11434
command: [ "--poll-interval", "5s" ]
ollama:
image: ollama/ollama
volumes:
data:
```
1. **Start the services**
```shell
docker compose up -d
```
## Create and run a vectorizer
Now we can create and run a vectorizer. A vectorizer is a pgai concept: it processes data in a table and automatically creates embeddings for it.
1. **Connect to the database in your local developer environment**
- Docker: `docker compose exec -it db psql`
- psql: `psql postgres://postgres:postgres@localhost:5432/postgres`
1. **Enable pgai on your database**
```sql
CREATE EXTENSION IF NOT EXISTS ai CASCADE;
```
1. **Create the `blog` table with the following schema**
```sql
CREATE TABLE blog (
id SERIAL PRIMARY KEY,
title TEXT,
authors TEXT,
contents TEXT,
metadata JSONB
);
```
1. **Insert some data into `blog`**
```sql
INSERT INTO blog (title, authors, contents, metadata)
VALUES
('Getting Started with PostgreSQL', 'John Doe', 'PostgreSQL is a powerful, open source object-relational database system...', '{"tags": ["database", "postgresql", "beginner"], "read_time": 5, "published_date": "2024-03-15"}'),
('10 Tips for Effective Blogging', 'Jane Smith, Mike Johnson', 'Blogging can be a great way to share your thoughts and expertise...', '{"tags": ["blogging", "writing", "tips"], "read_time": 8, "published_date": "2024-03-20"}'),
('The Future of Artificial Intelligence', 'Dr. Alan Turing', 'As we look towards the future, artificial intelligence continues to evolve...', '{"tags": ["AI", "technology", "future"], "read_time": 12, "published_date": "2024-04-01"}'),
('Healthy Eating Habits for Busy Professionals', 'Samantha Lee', 'Maintaining a healthy diet can be challenging for busy professionals...', '{"tags": ["health", "nutrition", "lifestyle"], "read_time": 6, "published_date": "2024-04-05"}'),
('Introduction to Cloud Computing', 'Chris Anderson', 'Cloud computing has revolutionized the way businesses operate...', '{"tags": ["cloud", "technology", "business"], "read_time": 10, "published_date": "2024-04-10"}');
```
1. **Create a vectorizer for `blog`**
```sql
SELECT ai.create_vectorizer(
'blog'::regclass,
loading => ai.loading_column('contents'),
destination => 'blog_contents_embeddings',
embedding => ai.embedding_ollama('nomic-embed-text', 768)
);
```
1. **Check the vectorizer worker logs**
```shell
docker compose logs -f vectorizer-worker
```
You see the vectorizer worker pick up the table and process it.
```shell
vectorizer-worker-1 | 2024-10-23 12:56:36 [info ] running vectorizer vectorizer_id=1
```
1. **See the embeddings in action**
Run the following search query to retrieve the embeddings:
```sql
SELECT
chunk,
embedding <=> ai.ollama_embed('nomic-embed-text', 'good food', host => 'http://ollama:11434') as distance
FROM blog_contents_embeddings
ORDER BY distance
LIMIT 10;
```
The results look like:
| chunk | distance |
|-------------------------------------------------------------------------------|--------------------|
| Maintaining a healthy diet can be challenging for busy professionals... | 0.5030059372474176 |
| PostgreSQL is a powerful, open source object-relational database system... | 0.5868937074856113 |
| Blogging can be a great way to share your thoughts and expertise... | 0.5928412342761966 |
| As we look towards the future, artificial intelligence continues to evolve... | 0.6161160890734267 |
| Cloud computing has revolutionized the way businesses operate... | 0.6664001441252841 |
That's it, you're done. You now have a table in Postgres that pgai vectorizer automatically creates
and syncs embeddings for. You can use this vectorizer for semantic search, RAG or any other AI
app you can think of! If you have any questions, reach out to us on [Discord](https://discord.gg/KRdHVXAmkp).
|