Page Title: Overview
Paragraphs: An introduction to the Pinecone vector database. Pinecone makes it easy to provide long-term memory for high-performance AI applications. It's a managed, cloud-native vector database with a simple API and no infrastructure hassles. Pinecone serves fresh, filtered query results with low latency at the scale of billions of vectors. Applications that involve large language models, generative AI, and semantic search rely on vector embeddings, a type of data that represents semantic information. This information allows AI applications to gain understanding and maintain a long-term memory that they can draw upon when executing complex tasks. Vector databases like Pinecone offer optimized storage and querying capabilities for embeddings. Traditional scalar-based databases can't keep up with the complexity and scale of such data, making it difficult to extract insights and perform real-time analysis. Vector indexes like FAISS lack useful features that are present in any database. Vector databases combine the familiar features of traditional databases with the optimized performance of vector indexes. Each record in a Pinecone index contains a unique ID and an array of floats representing a dense vector embedding. Each record may also contain a sparse vector embedding for hybrid search and metadata key-value pairs for filtered queries. Pinecone returns low-latency, accurate results for indexes with billions of vectors.
High-performance pods return up to 200 queries per second per replica. Queries reflect up-to-the-second updates such as upserts and deletes. Filter by namespaces and metadata or add resources to improve performance. Perform CRUD operations and query your vectors using HTTP, Python, or Node.js. Specify the distance metric your index uses to evaluate vector similarity, along with dimensions and replicas. Find the top k most similar vectors, or query by ID. Go to the quickstart guide to get a production-ready vector search service up and running in minutes.
Page Title: Quickstart
Paragraphs: How to get started with the Pinecone vector database. This guide explains how to set up a Pinecone vector database in minutes. This step is optional. Do this step only if you want to use the Python client. Use the following shell command to install Pinecone: For other clients, see Libraries. To use Pinecone, you must have an API key. To find your API key, open the Pinecone console and click API Keys. This view also displays the environment for your project. Note both your API key and your environment. To verify that your Pinecone API key works, use the following commands: If you don't receive an error message, then your API key is valid. You can complete the remaining steps in three ways: 1. Initialize Pinecone. 2. Create an index. The commands below create an index named "quickstart" that performs approximate nearest-neighbor search using the Euclidean distance metric for 8-dimensional vectors. Index creation takes roughly a minute. Warning: In general, indexes on the Starter (free) plan are archived as collections and deleted after 7 days of inactivity; for indexes created by certain open source projects such as AutoGPT, indexes are archived and deleted after 1 day of inactivity. To prevent this, you can send any API request to Pinecone and the counter will reset. 3. Retrieve a list of your indexes. Once your index is created, its name appears in the index list. Use the following commands to return a list of your indexes.
a6154f9f69ad-3 | Connect to the index (Client only). Before you can query your index using a client, you must connect to the index. Use the following commands to connect to your index. 5. Insert the data. To ingest vectors into your index, use the upsert operation. The upsert operation inserts a new vector in the index or updates the vector if a vector with the same ID is already present. The following commands upsert 5 8-dimensional vectors into your index. The cURL command above uses the endpoint for your Pinecone index. ℹ️Note Note When upserting larger amounts of data, upsert data in batches of 100 vectors or fewer over multiple upsert requests. 6. Get statistics about your index. The following commands return statistics about the contents of your index. 7. Query the index and get similar vectors. The following example queries the index for the three (3) vectors that are most similar to an example 8-dimensional vector using the Euclidean distance metric specified in step 2 ("Create an index.") above. 8. Delete the index. Once you no longer need the index, use the delete_index operation to delete it. The following commands delete the index. After you delete an index, you cannot use it again. Now that you’re successfully making indexes with your API key, you can start inserting data or view more examples. Updated about 22 hours ago
Page Title: Choosing index type and size
Paragraphs: When planning your Pinecone deployment, it is important to understand the approximate storage requirements of your vectors to choose the appropriate pod type and number. This page gives guidance on sizing to help you plan accordingly. As with all guidelines, these considerations are general and may not apply to your specific use case. We caution you to always test your deployment and ensure that the index configuration you are using is appropriate to your requirements. Collections make it easy to create new versions of your index with different pod types and sizes, and we encourage you to take advantage of that feature to test different configurations. This guide is merely an overview of sizing considerations and should not be taken as a definitive guide. Users on the Standard, Enterprise, and Enterprise Dedicated plans can contact support for further help with sizing and testing. There are five main considerations when deciding how to configure your Pinecone index: Each of these considerations comes with requirements for index size, pod type, and replication strategy. The most important consideration in sizing is the number of vectors you plan on working with. As a rule of thumb, a single p1 pod can store approximately 1M vectors, while an s1 pod can store 5M vectors. However, this can be affected by other factors, such as dimensionality and metadata, which are explained below.
The rules of thumb above for how many vectors can be stored in a given pod assume a typical configuration of 768 dimensions per vector. Because your individual use case dictates the dimensionality of your vectors, the amount of space required to store them may be larger or smaller. Each dimension on a single vector consumes 4 bytes of memory and storage, so if you expect to have 1M vectors with 768 dimensions each, that's about 3GB of storage without factoring in metadata or other overhead. Using that reference, we can estimate the typical pod size and number needed for a given index. Table 1 below gives some examples of this. Table 1: Estimated number of pods per 1M vectors by dimensionality Pinecone does not support fractional pod deployments, so always round up to the next nearest whole number when choosing your pods. QPS speeds are governed by a combination of the pod type of the index, the number of replicas, and the top_k value of queries. The pod type is the primary factor driving QPS, as the different pod types are optimized for different approaches. The p1 pods are performance-optimized pods which provide very low query latencies, but hold fewer vectors per pod than s1 pods. They are ideal for applications with low latency requirements (<100ms). The s1 pods are optimized for storage and provide large storage capacity and lower overall costs with slightly higher query latencies than p1 pods.
They are ideal for very large indexes with moderate or relaxed latency requirements. The p2 pod type provides greater query throughput with lower latency. p2 pods support 200 QPS per replica and return queries in less than 10ms. This means that query throughput and latency are better than s1 and p1, especially for low dimension vectors (<512D). As a rule, a single p1 pod with 1M vectors of 768 dimensions each and no replicas can handle about 20 QPS. It's possible to get greater or lesser speeds, depending on the size of your metadata, number of vectors, the dimensionality of your vectors, and the top_k value for your search. See Table 2 below for more examples. Table 2: QPS by pod type and top_k value* *The QPS values in Table 2 represent baseline QPS with 1M vectors and 768 dimensions. Adding replicas is the simplest way to increase your QPS. Each replica increases the throughput potential by roughly the same QPS, so aiming for 150 QPS using p1 pods means using the primary pod and 5 replicas. Using threading or multiprocessing in your application is also important, as issuing single queries sequentially still subjects you to delays from any underlying latency. The Pinecone gRPC client can also be used to increase throughput of upserts. The last consideration when planning your indexes is the cardinality and size of your metadata.
While the increases are small when talking about a few million vectors, they can have a real impact as you grow to hundreds of millions or billions of vectors. Indexes with very high cardinality, like those storing a unique user ID on each vector, can have significant memory requirements, resulting in fewer vectors fitting per pod. Also, if the size of the metadata per vector is larger, the index requires more storage. Limiting which metadata fields are indexed using selective metadata indexing can help lower memory usage. You can also start with one of the larger pod sizes, like p1.x2. Each step up in pod size doubles the space available for your vectors. We recommend starting with x1 pods and scaling as you grow. This way, you don't start with too large a pod size and have nowhere else to go up, meaning you have to migrate to a new index before you're ready. Projects on the gcp-starter environment do not use pods. The following examples showcase how to use the sizing guidelines above to choose the appropriate type, size, and number of pods for your index. In our first example, we'll use the demo app for semantic search from our documentation. In this case, we're only working with 204,135 vectors. The vectors use 300 dimensions each, well under the general measure of 768 dimensions. Using the rule of thumb above of up to 1M vectors per p1 pod, we can run this app comfortably with a single p1.x1 pod.
For this example, suppose you're building an application to identify customers using facial recognition for a secure banking app. Facial recognition can work with as few as 128 dimensions, but in this case, because the app will be used for access to finances, we want to make sure we're certain that the person using it is the right one. We plan for 100M customers and use 2048 dimensions per vector. We know from our rules of thumb above that 1M vectors with 768 dimensions fit nicely in a p1.x1 pod. We can just divide those numbers into the new targets to get the ratios we'll need for our pod estimate: (100M / 1M) * (2048 / 768) ≈ 267. So we need 267 p1.x1 pods. We can reduce that by switching to s1 pods instead, sacrificing latency by increasing storage availability. They hold five times the storage of p1.x1, so the math is simple: 267 / 5 = 53.4, which rounds up to 54. So we estimate that we need 54 s1.x1 pods to store very high dimensional data for the face of each of the bank's customers.
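A quick back-of-the-envelope helper for the sizing rules above; the 1M-vectors-per-p1 and 5M-per-s1 figures and the 4-bytes-per-dimension rule come from this page, while the function itself is only an illustrative sketch that ignores metadata overhead.

```python
import math

BYTES_PER_DIMENSION = 4  # each dimension consumes 4 bytes of memory and storage

def estimate_pods(num_vectors: int, dimensions: int, pod_type: str = "p1") -> dict:
    """Rough estimate based on the rule of thumb of 1M 768-d vectors per p1.x1
    (5M per s1.x1), scaled linearly by dimensionality."""
    base_capacity = {"p1": 1_000_000, "s1": 5_000_000}[pod_type]
    storage_gb = num_vectors * dimensions * BYTES_PER_DIMENSION / 1e9
    pods = math.ceil((num_vectors / base_capacity) * (dimensions / 768))
    return {"storage_gb": round(storage_gb, 1), "pods": pods}

# 1M vectors at 768 dimensions: roughly 3 GB and a single p1.x1 pod.
print(estimate_pods(1_000_000, 768, "p1"))     # {'storage_gb': 3.1, 'pods': 1}
# The facial-recognition example: 100M vectors at 2048 dimensions.
print(estimate_pods(100_000_000, 2048, "p1"))  # {'storage_gb': 819.2, 'pods': 267}
print(estimate_pods(100_000_000, 2048, "s1"))  # {'storage_gb': 819.2, 'pods': 54}
```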
Page Title: Understanding organizations
Paragraphs: A Pinecone organization is a set of projects that use the same billing. Organizations allow one or more users to control billing and project permissions for all of the projects belonging to the organization. Each project belongs to an organization. For a guide to adding users to an organization, see Add users to a project or organization. Each organization contains one or more projects that share the same organization owners and billing settings. Each project belongs to exactly one organization. If you need to move a project from one organization to another, contact Pinecone support. All of the projects in an organization share the same billing method and settings. The billing settings for the organization are controlled by the organization owners. There are two organization roles: organization owner and organization user. Organization owners manage organization billing, users, and projects. Organization owners are also project owners for every project belonging to the organization. This means that organization owners have all permissions to manage project members, API keys, and quotas for these projects. Unlike organization owners, organization users cannot edit billing settings or invite new users to the organization. Organization users can create new projects, and project owners can add organization members to a project. New users have whatever role the organization owners and project owners grant them.
Project owners can add users to a project if those users belong to the same organization as the project. Table 1: Organization roles and permissions SSO allows organizations to manage their teams' access to Pinecone through their identity management solution. Once your integration is configured, you can require that users from your domain sign in through SSO, and you can specify a default role for teammates when they sign up. Only organizations in the enterprise tier can set up SSO. To set up your SSO integration, contact Pinecone support.
Page Title: Managing cost
Paragraphs: This topic provides guidance on managing the cost of Pinecone. For the latest pricing details, see our pricing page. For help estimating total cost, see Understanding total cost. To see a calculation of your current usage and costs, see the usage dashboard in the Pinecone console. The total cost of Pinecone usage derives from pod type, the number of pods in use, pod size, the total time each pod is running, and the billing plan. This topic describes several ways you can manage your overall Pinecone cost by adjusting these variables. The Starter Plan incurs no costs and supports roughly 100,000 vectors with 1536 dimensions. If this meets the needs of your project, you can use Pinecone for free; if you decide to scale your index or move it to production, you can upgrade your billing plan later. Different Pinecone pod sizes are designed for different applications, and some are more expensive than others. By choosing the appropriate pod type and size, you can pay for the resources you need. For example, the s1 pod type provides large storage capacity and lower overall costs with slightly higher query latencies than p1 pods. By switching to a different pod type, you may be able to reduce costs while still getting the performance your application needs. When a specific index is not in use, back it up using collections and delete the inactive index. When you're ready to use these vectors again, you can create a new index from the collection.
This new index can also use a different index type or size. Because it's relatively cheap to store collections, you can reduce costs by only running an index when it's in use. If your application requires you to separate users into groups, consider using namespaces to isolate segments of vectors within a single index. Depending on your application requirements, this may allow you to reduce the total number of active indexes. Users who commit to an annual contract may qualify for discounted rates. To learn more, contact Pinecone sales. Users on the Standard and Enterprise plans can contact support for help in optimizing costs.
Page Title: Understanding cost
Paragraphs: This topic describes the calculation of total cost for Pinecone, including an example. All prices are examples; for the latest pricing details, please see our pricing page. While our pricing page lists rates on an hourly basis for ease of comparison, this topic lists prices per minute, as this is how Pinecone calculates billing. For each index, billing is determined by the per-minute price per pod and the number of pods the index uses, regardless of index activity. The per-minute price varies by pod type, pod size, account plan, and cloud region. Total cost depends on a combination of these factors. The following equation calculates the total costs accrued over time: (Number of pods) * (pod size) * (number of replicas) * (minutes pod exists) * (pod price per minute) To see a calculation of your current usage and costs, see the usage dashboard in the Pinecone console. Consider an example application: based on its requirements, the organization chooses to configure the project to use the Standard billing plan to host one p1.x2 pod with two replicas and a collection containing 1 GB of data. This project runs continuously for the month of January on the Standard plan. The components of the total cost for this example are given in Table 1 below: Table 1: Example billing components The invoice for this example is given in Table 2 below: Table 2: Example invoice (amount due: $514.54). Pinecone offers tools to help you understand and control your costs.
Monitoring usage. Using the usage dashboard in the Pinecone console, you can monitor your Pinecone usage and costs as these accrue. Pod limits. Pinecone project owners can set limits for the total number of pods across all indexes in the project. The default pod limit is 5.
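To make the billing equation concrete, here is a small sketch; the per-minute rate below is a made-up placeholder rather than a real Pinecone price, so substitute the rate for your pod type, plan, and region from the pricing page (the example also omits collection storage).

```python
def pod_cost(num_pods: int, pod_size: int, replicas: int,
             minutes: int, price_per_minute: float) -> float:
    """Total cost = pods * pod size * replicas * minutes * per-minute pod price."""
    return num_pods * pod_size * replicas * minutes * price_per_minute

# Shape of the January example: one p1.x2 pod (size 2) with two replicas running
# continuously for 31 days, at a hypothetical per-pod-minute rate of $0.002.
january_minutes = 31 * 24 * 60
print(pod_cost(num_pods=1, pod_size=2, replicas=2,
               minutes=january_minutes, price_per_minute=0.002))
```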
Page Title: Monitoring your usage
Paragraphs: This document describes how to monitor the usage and costs for your Pinecone organization through the Pinecone console. To view your Pinecone usage, you must be the organization owner for your organization. This feature is only available to organizations on the Standard or Enterprise plans. To view your usage through the Pinecone console, follow these steps: All dates are given in UTC to match billing invoices.
Page Title: Manage billing
Paragraphs: This category contains guides for tasks related to Pinecone billing.
Page Title: Understanding subscription status
Paragraphs: This document describes the different subscription statuses for your Pinecone organization. Users on the Standard and Enterprise Plans pay regular payments based on usage. When a payment is past due, Pinecone may restrict your account. Past due accounts have one of the following subscription statuses: If your organization is deactivated, follow these steps to reinstate your subscription and reactivate your indexes:
Page Title: Changing your billing plan
Paragraphs: This document describes how to change the billing plan for your Pinecone organization through the Pinecone console. Accounts created by signing up through GCP Marketplace must change billing plans through the Pinecone console using this workflow. To change your billing plan, you must be the organization owner for your organization. To change your billing plan through the Pinecone console, follow these steps:
Page Title: Setting up billing through AWS Marketplace
Paragraphs: This document describes how to configure pay-as-you-go billing for your Pinecone organization through Amazon Web Services (AWS) Marketplace. To commit to annual spending, contact Pinecone. This workflow creates a new Pinecone organization. If you already have an organization, signing up through AWS Marketplace creates an additional organization. To configure Pinecone billing through the AWS Marketplace, follow these steps: When you sign in, Pinecone creates a new organization linked to your AWS billing. If you already have a Pinecone organization, you can select the new "AWS Linked" organization in the top-left drop-down menu in the console.
Page Title: Setting up billing through GCP Marketplace
Paragraphs: This document describes how to configure pay-as-you-go billing for your Pinecone organization through Google Cloud Platform (GCP) Marketplace. To commit to annual spending, contact Pinecone. This workflow creates a new Pinecone organization. If you already have an organization, signing up through GCP Marketplace creates an additional organization. To configure Pinecone billing through the GCP Marketplace, follow these steps: When you sign in, Pinecone creates a new organization linked to your GCP billing. If you already have a Pinecone organization, you can select the new "GCP Linked" organization in the top-left drop-down menu in the console.
Page Title: Understanding projects
Paragraphs: This document explains the concepts related to Pinecone projects. Each Pinecone project contains a number of indexes and users. Only a user who belongs to the project can access the indexes in that project. Each project also has at least one project owner. All of the pods in a single project are located in a single environment. When you create a new project, you can choose the name, deployment environment, and pod limit. When creating a project, you must choose a cloud environment for the indexes in that project. Your project environment can affect your pricing. The following table lists the available cloud regions, the corresponding values of the environment parameter for the init() operation, and which billing tier has access to each environment: * This environment has unique features and limitations. See gcp-starter environment for more information. Contact us if you need a dedicated deployment in other regions. The environment cannot be changed after the project is created. You can set the maximum number of pods that can be used in total across all indexes in a project. Use this to control costs. The pod limit can be changed only by the project owner. There are two project roles: project owner and project member. Table 1 below summarizes the permissions for each role. Table 1: Project roles and permissions Each Pinecone project has one or more API keys.
In order to make calls to the Pinecone API, a user must provide a valid API key for the relevant Pinecone project. To view the API key for your project, open the Pinecone console, select the project, and click API Keys. Each Pinecone project has a project ID. This hexadecimal string appears as part of the URL for API calls. To find a project's ID, follow these steps: Go to the Pinecone console. In the upper-left corner, select your project. Click Indexes. Under the name of your indexes, find the index URL. For example: example-index-1e3g52e.svc.us-east1-gcp.pinecone.io The portion of the index URL after the index name and before the dot is the project ID. For example, in the index URL test-index-3e2f43f.svc.us-east1-gcp.pinecone.io, the project ID is 3e2f43f.
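If you want to pull the project ID out of an index URL programmatically, a small sketch like the following works; the URL is the example from this page, and the split logic simply mirrors the rule stated above (the segment after the last hyphen of the host's first label).

```python
# Extract the project ID from an index URL, per the rule described above.
index_url = "test-index-3e2f43f.svc.us-east1-gcp.pinecone.io"
first_label = index_url.split(".")[0]          # "test-index-3e2f43f"
project_id = first_label.rsplit("-", 1)[-1]    # "3e2f43f"
print(project_id)
```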
Page Title: Create a project
Paragraphs: Info: Starter (free) users can only have 1 owned project. To create a new project, Starter users must upgrade to the Standard or Enterprise plan or delete their default project. Follow these steps to create a new project: Access the Pinecone Console. Click Organizations in the left menu. In the Organizations view, click the PROJECTS tab. Click the +CREATE PROJECT button. Enter the Project Name. Select a cloud provider and region. Enter the project pod limit. Click CREATE PROJECT.
Page Title: Add users to projects and organizations
Paragraphs: If you are a project or organization owner, follow these steps to add users to organizations and projects. Click Settings in the left menu. In the Settings view, click the USERS tab. Click +INVITE USER. (Organization owner only) Select an organization role. Select one or more projects. Select a project role. Enter the user's email address. When you invite another user to join your organization or project, Pinecone sends them an email containing a link that enables them to gain access to the organization or project. If they already have a Pinecone account, they still receive an email, but they can also immediately view the project.
Page Title: Change project pod limit
Paragraphs: If you are a project owner, follow these steps to change the maximum total number of pods in your project.
Page Title: Rename a project
Paragraphs: If you are a project owner, follow these steps to change the name of your project. In the Settings view, click the PROJECTS tab. Next to the project you want to update, click . Under Project Name, enter the new project name. Click SAVE CHANGES.
Page Title: gcp-starter environment
Paragraphs: This document describes concepts related to the gcp-starter environment. To learn about indexes and other environments, see Understanding indexes. Users on the Starter Plan can choose to deploy their project on one of multiple environments. One option is the gcp-starter region. Unlike other Starter Plan regions, the gcp-starter region has unique features and limitations. Like other Starter Plan environments, projects on the gcp-starter environment support one pod with enough resources to support approximately 100,000 vectors with 1536-dimensional embeddings and metadata; the capacity is proportional for other dimensions. Indexes on the gcp-starter environment do not specify pod types; create_index calls ignore the pod_type parameter. Unlike other Starter Plan environments, projects in the gcp-starter region have no retention limits; data is retained indefinitely. Indexes in these projects are not deleted after inactivity as in other environments. After upgrading from the Starter Plan, you keep your free gcp-starter project: you are not charged for use on this project. To use your data on a Standard or Enterprise environment, insert the data into a new project in a supported environment. Like other Starter Plan environments, projects in the gcp-starter environment do not support replicas. After upserting records to indexes in gcp-starter, the query and describe_index_stats operations may not return updated records for up to 10 seconds.
The gcp-starter environment does not support the following features: Because projects on the gcp-starter region do not support the above features, you may need to use different features when developing your project. Because projects in the gcp-starter environment do not support namespaces, you may wish to use metadata filtering to support multitenancy in your project. Another alternative is to upgrade to the Standard or Enterprise plans and use multiple indexes to support multitenancy. Projects in the gcp-starter environment do not support the collections feature. However, collections may not be necessary or appropriate for projects in the gcp-starter environment. Collections serve two primary purposes: decreasing usage by archiving inactive indexes, and experimenting with different index configurations. However, projects on the gcp-starter environment neither incur usage costs nor specify pod types or sizes. Projects in the gcp-starter environment do not support deleting records by metadata. In some cases, you may be able to delete records by ID instead. This may require you to first query your index with metadata filters, then extract the record IDs, as in the sketch below.
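A minimal sketch of that workaround, assuming the v2 Python client; the index name, dimensionality, and the genre filter are illustrative, and the zero query vector is only a placeholder used to retrieve some matching records.

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="gcp-starter")
index = pinecone.Index("my-index")  # hypothetical index name

# gcp-starter does not support delete-by-metadata, so first find matching IDs...
matches = index.query(
    vector=[0.0] * 8,  # placeholder query; dimensionality must match your index
    top_k=100,
    filter={"genre": {"$eq": "documentary"}},
)
ids = [m.id for m in matches.matches]

# ...then delete those records by ID.
if ids:
    index.delete(ids=ids)
```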
Page Title: Understanding indexes
Paragraphs: This document describes concepts related to Pinecone indexes. To learn how to create or modify an index, see Manage indexes. An index is the highest-level organizational unit of vector data in Pinecone. It accepts and stores vectors, serves queries over the vectors it contains, and does other vector operations over its contents. Each index runs on at least one pod. Pods are pre-configured units of hardware for running a Pinecone service. Each index runs on one or more pods. Generally, more pods mean more storage capacity, lower latency, and higher throughput. You can also create pods of different sizes. Once an index is created using a particular pod type, you cannot change the pod type for that index. However, you can create a collection from the index and then create a new index from that collection with a different pod type. Different pod types are priced differently. See pricing for more details. When using the starter plan, you can create one pod with enough resources to support approximately 100,000 vectors with 1536-dimensional embeddings and metadata; the capacity is proportional for other dimensions. When using a starter plan, all create_index calls ignore the pod_type parameter. The s1 storage-optimized pods provide large storage capacity and lower overall costs with slightly higher query latencies than p1 pods. They are ideal for very large indexes with moderate or relaxed latency requirements. Each s1 pod has enough capacity for around 5M vectors of 768 dimensions.
The p1 performance-optimized pods provide very low query latencies, but hold fewer vectors per pod than s1 pods. They are ideal for applications with low latency requirements (<100ms). Each p1 pod has enough capacity for around 1M vectors of 768 dimensions. The p2 pod type provides greater query throughput with lower latency. For vectors with fewer than 128 dimensions and queries where topK is less than 50, p2 pods support up to 200 QPS per replica and return queries in less than 10ms. This means that query throughput and latency are better than s1 and p1. Each p2 pod has enough capacity for around 1M vectors of 768 dimensions. However, capacity may vary with dimensionality. The data ingestion rate for p2 pods is significantly slower than for p1 pods; this rate decreases as the number of dimensions increases. For example, a p2 pod containing vectors with 128 dimensions can upsert up to 300 updates per second; a p2 pod containing vectors with 768 dimensions or more supports upsert of 50 updates per second. Because query latency and throughput for p2 pods vary from p1 pods, test p2 pod performance with your dataset. The p2 pod type does not support sparse vector values. Pod performance varies depending on a variety of factors. To observe how your workloads perform on a given pod type, experiment with your own data set. Each pod type supports four pod sizes: x1, x2, x4, and x8. Your index storage and compute capacity doubles for each size step. The default pod size is x1.
You can increase the size of a pod after index creation. To learn about changing the pod size of an index, see Manage indexes. You can choose from different metrics when creating a vector index, such as euclidean, cosine, and dotproduct. For the full list of parameters available to customize an index, see the create_index API reference. Depending on your application, some metrics have better recall and precision performance than others. For more information, see: What is Vector Similarity Search?
Page Title: Manage indexes
Paragraphs: In this section, we explain how you can get a list of your indexes, create an index, delete an index, and describe an index. To learn about the concepts related to indexes, see Indexes. Indexes on the Starter (free) plan are deleted after 7 days of inactivity. To prevent this, send any API request or log into the console. This will count as activity. List all your Pinecone indexes: Get the configuration and current status of an index named "pinecone-index": The simplest way to create an index is as follows. This gives you an index with a single pod that will perform approximate nearest neighbor (ANN) search using cosine similarity: A more complex index can be created as follows. This creates an index that measures similarity by Euclidean distance and runs on 4 s1 (storage-optimized) pods of size x1: To create an index from a collection, use the create_index operation and provide a source_collection parameter containing the name of the collection from which you wish to create an index. The new index is queryable and writable. Creating an index from a collection generally takes about 10 minutes. Creating a p2 index from a collection can take several hours when the number of vectors is on the order of 1M. Example: The following example creates an index named example-index with 128 dimensions from a collection named example-collection. For more information about each pod type and size, see Indexes. The default pod size is x1.
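The listing and create_index calls described above might look like the following with the v2 Python client; treat this as a sketch rather than the exact snippets from the original page (index names and dimensions are illustrative).

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")

# List indexes and describe one by name.
print(pinecone.list_indexes())
print(pinecone.describe_index("pinecone-index"))

# Simplest case: a single pod using cosine similarity (the default metric).
pinecone.create_index("example-index", dimension=128)

# More complex: Euclidean distance on 4 storage-optimized s1.x1 pods.
pinecone.create_index(
    "example-index-2",
    dimension=128,
    metric="euclidean",
    pods=4,
    pod_type="s1.x1",
)

# Create a 128-dimensional index from an existing collection.
pinecone.create_index(
    "example-index-3",
    dimension=128,
    source_collection="example-collection",
)
```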
After index creation, you can increase the pod size for an index. Increasing the pod size of your index does not result in downtime. Reads and writes continue uninterrupted during the scaling process. Currently, you cannot reduce the pod size of your indexes. Your number of replicas and your total number of pods remain the same, but each pod changes size. Resizing completes in about 10 minutes. To learn more about pod sizes, see Indexes. To change the pod size of an existing index, use the configure_index operation and append the new size to the pod_type parameter, separated by a period (.). Projects in the gcp-starter environment do not use pods. The following example assumes that my_index has size x1 and changes the size to x2. To check the status of a pod size change, use the describe_index operation. The status field in the results contains the key-value pair "state":"ScalingUp" or "state":"ScalingDown" during the resizing process and the key-value pair "state":"Ready" after the process is complete. The index fullness metric provided by describe_index_stats may be inaccurate until the resizing process is complete. The following example uses describe_index to get the index status of the index example-index. The status field contains the key-value pair "state":"ScalingUp", indicating that the resizing process is still ongoing. Results: You can increase the number of replicas for your index to increase throughput (QPS). All indexes start with replicas=1.
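The pod-size change and status check described here might look like this sketch (v2 Python client, illustrative index name):

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")

# Resize my_index from p1.x1 to p1.x2 by appending the new size to pod_type.
pinecone.configure_index("my_index", pod_type="p1.x2")

# Check the status: "ScalingUp" while resizing, "Ready" once complete.
description = pinecone.describe_index("my_index")
print(description.status)
```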
Indexes in the gcp-starter environment do not support replicas. The following example uses the configure_index operation to set the number of replicas for the index example-index to 4. See the configure_index API reference for more details. By default, Pinecone indexes all metadata. When you index metadata fields, you can filter vector search queries using those fields. When you store metadata fields without indexing them, you keep memory utilization low, especially when you have many unique metadata values, and therefore can fit more vectors per pod. Searches without metadata filters do not consider metadata. To combine keywords with semantic search, see sparse-dense embeddings. When you create a new index, you can specify which metadata fields to index using the metadata_config parameter. Projects on the gcp-starter environment do not support the metadata_config parameter. The value for the metadata_config parameter is a JSON object containing the names of the metadata fields to index. When you provide a metadata_config object, Pinecone only indexes the metadata fields present in that object: any metadata fields absent from the metadata_config object are not indexed. When a metadata field is indexed, you can filter your queries using that metadata field; if a metadata field is not indexed, metadata filtering ignores that field. The following example creates an index that only indexes the genre metadata field.
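A sketch of the two examples just described, assuming the v2 Python client; the index names are placeholders.

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")

# Scale example-index to 4 replicas to increase query throughput.
pinecone.configure_index("example-index", replicas=4)

# Create an index that indexes only the "genre" metadata field;
# other metadata fields are stored but cannot be used in filters.
pinecone.create_index(
    "example-index-selective",
    dimension=128,
    metadata_config={"indexed": ["genre"]},
)
```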
Queries against this index that filter for the genre metadata field may return results; queries that filter for other metadata fields behave as though those fields do not exist. Deleting an index deletes all of the data and the computing resources associated with the index. When you create an index, it runs as a service until you delete it. Users are billed for running indexes, so we recommend you delete any indexes you're not using. This will minimize your costs. Delete a Pinecone index named "pinecone-index":
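The delete call referenced above, sketched with the v2 Python client:

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")

# Permanently delete the index and all of its data.
pinecone.delete_index("pinecone-index")
```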
Page Title: Scale indexes
Paragraphs: In this topic, we explain how you can scale your indexes horizontally and vertically. Projects in the gcp-starter environment do not support the features referred to here, including pods, replicas, and collections. If you need to scale your environment to accommodate more vectors, you can modify your existing index to scale it vertically or create a new index and scale horizontally. This article describes both methods and how to scale your index effectively. Scaling vertically is fast and involves no downtime. This is a good choice when you can't pause upserts and must continue serving traffic. It also allows you to double your capacity instantly. However, there are some factors to consider. By changing the pod size, you can scale to x2, x4, and x8 pod sizes, which means you are doubling your capacity at each step. Moving up to a new capacity will effectively double the number of pods used at each step. If you need to scale by smaller increments, then consider horizontal scaling. The number of base pods you specify when you initially create the index is static and cannot be changed. For example, if you start with 10 pods of p1.x1 and vertically scale to p1.x2, this equates to 20 pods' worth of usage. Neither can you change pod types with vertical scaling. If you want to change your pod type while scaling, then horizontal scaling is the better option. You can only scale index sizes up and cannot scale them back down.
See our learning center for more information on vertical scaling. There are two approaches to horizontal scaling in Pinecone: adding pods and adding replicas. Adding pods increases all resources but requires a pause in upserts; adding replicas only increases throughput and requires no pause in upserts. Adding pods to an index increases all resources, including available capacity. Adding pods to an existing index is possible using our collections feature. A collection is an immutable snapshot of your index in time: a collection stores the data but not the original index definition. When you create an index from a collection, you define the new index configuration. This allows you to scale the base pod count horizontally without scaling vertically. The main advantage of this approach is that you can scale incrementally instead of doubling capacity as with vertical scaling. Also, you can redefine pod types if you are experimenting or if you need to use a different pod type, such as performance-optimized pods or storage-optimized pods. Another advantage of this method is that you can change your metadata configuration to redefine metadata fields as indexed or stored-only. This is important when tuning your index for the best throughput.
The general steps are to create a collection from your existing index, then create a new index from that collection while changing the pod type, pod count, metadata configuration, replicas, and other parameters as needed (see the sketch below). Each replica duplicates the resources and data in an index. This means that adding additional replicas increases the throughput of the index but not its capacity. However, adding replicas does not require downtime. Throughput in terms of queries per second (QPS) scales linearly with the number of replicas per index. To add replicas, use the configure_index operation to increase the number of replicas for your index.
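A sketch of that horizontal-scaling recipe with the v2 Python client; collection and index names, pod counts, and dimensions are placeholders.

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")

# 1. Snapshot the existing index into a collection.
pinecone.create_collection("my-collection", "my-old-index")

# 2. Create a new index from the collection with a different pod count/type.
pinecone.create_index(
    "my-new-index",
    dimension=768,
    pods=6,
    pod_type="s1.x1",
    source_collection="my-collection",
)

# 3. Optionally add replicas to the new index to raise QPS.
pinecone.configure_index("my-new-index", replicas=2)
```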
Page Title: Understanding collections
Paragraphs: This document explains the concepts related to collections in Pinecone. This is a public preview ("Beta") feature. Test thoroughly before using this feature for production workloads. No SLAs or technical support commitments are provided for this feature. A collection is a static copy of an index. It is a non-queryable representation of a set of vectors and metadata. You can create a collection from an index, and you can create a new index from a collection. This new index can differ from the original source index: the new index can have a different number of pods, a different pod type, or a different similarity metric. Indexes in the gcp-starter environment do not support collections. Creating a collection from your index is useful when performing tasks like the following: To learn about creating backups with collections, see Back up indexes. To learn about creating indexes from collections, see Manage indexes. Collections operations perform differently with different pod types. You cannot query or write to a collection after its creation. For this reason, a collection only incurs storage costs. You can only perform operations on collections in the current Pinecone project.
Page Title: Back up indexes
Paragraphs: This document describes how to make backup copies of your indexes using collections. To learn how to create an index from a collection, see Manage indexes. This document uses collections. This is a public preview feature. Test thoroughly before using this feature with production workloads. To create a backup of your index, use the create_collection operation. A collection is a static copy of your index that only consumes storage. The following example creates a collection named example-collection from an index named example-index. To retrieve the status of the process creating a collection and the size of the collection, use the describe_collection operation. Specify the name of the collection to check. You can only call describe_collection on a collection in the current project. The describe_collection operation returns an object containing key-value pairs representing the name of the collection, the size in bytes, and the creation status of the collection. The following example gets the creation status and size of a collection named example-collection. To get a list of the collections in the current project, use the list_collections operation. The following example gets a list of all collections in the current project. Results To delete a collection, use the delete_collection operation. Specify the name of the collection to delete. Deleting the collection takes several minutes. During this time, the describe_collection operation returns the status "deleting".
The following example deletes the collection example-collection.
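Sketched with the v2 Python client, the collection operations described in this section look roughly like this (the collection and index names are the examples from the text):

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")

# Back up example-index into a static, storage-only collection.
pinecone.create_collection("example-collection", "example-index")

# Check the creation status and size (in bytes) of the collection.
print(pinecone.describe_collection("example-collection"))

# List all collections in the current project.
print(pinecone.list_collections())

# Delete the collection when it is no longer needed (takes several minutes).
pinecone.delete_collection("example-collection")
```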
Page Title: Using namespaces
Paragraphs: Pinecone allows you to partition the vectors in an index into namespaces. Queries and other operations are then limited to one namespace, so different requests can search different subsets of your index. For example, you might want to define a namespace for indexing articles by content, and another for indexing articles by title. For a complete example, see: Semantic Text Search (Example). Every index is made up of one or more namespaces. Every vector exists in exactly one namespace. Namespaces are uniquely identified by a namespace name, which almost all operations accept as a parameter to limit their work to the specified namespace. When you don't specify a namespace name for an operation, Pinecone uses the default namespace name of "" (the empty string). Projects in the gcp-starter environment do not support namespaces. A destination namespace can be specified when vectors are upserted. If the namespace doesn't exist, it is created implicitly. The example below will create a "my-first-namespace" namespace if it doesn’t already exist: Then you can submit queries and other operations specifying that namespace as a parameter. For example, to query the vectors in namespace "my-first-namespace": You can create more than one namespace.
For example, insert data into separate namespaces: All vector operations apply to a single namespace, with one exception: the DescribeIndexStatistics operation returns per-namespace statistics about the contents of all namespaces in an index.
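A sketch of upserting into and querying a namespace, assuming the v2 Python client; the namespace name follows the example in the text, while the index name and vectors are placeholders.

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.Index("example-index")  # hypothetical index name

# Upserting into a namespace creates it implicitly if it doesn't exist.
index.upsert(
    vectors=[("item-1", [0.1] * 8), ("item-2", [0.2] * 8)],
    namespace="my-first-namespace",
)

# Queries (and most other operations) are limited to one namespace at a time.
results = index.query(vector=[0.1] * 8, top_k=3, namespace="my-first-namespace")
print(results)
```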
Page Title: Insert data
Paragraphs: After creating a Pinecone index, you can start inserting vector embeddings and metadata into the index. Immediately after the upsert response is received, vectors may not be visible to queries yet. This is because Pinecone is eventually consistent. In most situations, you can check whether the vectors have been received by checking whether the vector counts returned by describe_index_stats() have been updated. This technique may not work if the index has multiple replicas. For clients upserting larger amounts of data, you should insert data into an index in batches of 100 vectors or fewer over multiple upsert requests. By default, all vector operations block until the response has been received. But using our client, they can be made asynchronous. For the Batching Upserts example, this can be done as follows: Pinecone is thread-safe, so you can launch multiple read requests and multiple write requests in parallel. Launching multiple requests can help with improving your throughput. However, reads and writes can't be performed in parallel, therefore writing in large batches might affect query latency and vice versa. If you experience slow uploads, see Performance tuning for advice. You can organize the vectors added to an index into partitions, or "namespaces," to limit queries and other vector operations to only one such namespace at a time. For more information, see: Namespaces. You can insert vectors that contain metadata as key-value pairs.
You can then use the metadata to filter for those criteria when sending the query. Pinecone will search for similar vector embeddings only among those items that match the filter. For more information, see: Metadata Filtering. Sparse vector values can be upserted alongside dense vector values. The following limitations apply to upserting sparse vectors: When upserting data, you may receive the following error: New upserts may fail as the capacity becomes exhausted. While your index can still serve queries, you need to scale your environment to accommodate more vectors. To resolve this issue, you can scale your index.
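A minimal sketch of batched upserts with the v2 Python client, following the 100-vectors-per-request guidance above; the index name and the random vector generator are placeholders.

```python
import random
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.Index("example-index")  # hypothetical index name

# Placeholder data: 1,000 random 8-dimensional vectors with string IDs.
vectors = [(f"vec-{i}", [random.random() for _ in range(8)]) for i in range(1000)]

def chunks(items, batch_size=100):
    """Yield successive batches of at most batch_size vectors."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Upsert in batches of 100 or fewer, as recommended above.
for batch in chunks(vectors, batch_size=100):
    index.upsert(vectors=batch)

# Eventually consistent: the count may take a moment to reflect the upserts.
print(index.describe_index_stats())
```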
Page Title: Manage data
Paragraphs: In addition to inserting and querying data, there are other ways you can interact with vector data in a Pinecone index. This section walks through the various vector operations available. If you're using a Pinecone client library to access an index, you'll need to open a session with the index: Pinecone indexes each have their own DNS endpoint. For cURL and other direct API calls to a Pinecone index, you'll need to know the dedicated endpoint for your index. Index endpoints take the following form: https://{index-name}-{project-name}.svc.YOUR_ENVIRONMENT.pinecone.io The following command retrieves your Pinecone project name. Get statistics about an index, such as vector count per namespace: The Fetch operation looks up and returns vectors, by id, from an index. The returned vectors include the vector data and/or metadata. Typical fetch latency is under 5ms. Fetch items by their ids: There are two methods for updating vectors and metadata, using full or partial updates. Full updates modify the entire item, that is, vectors and metadata. Updating an item by id is done the same way as inserting items. (Write operations in Pinecone are idempotent.) The Upsert operation writes vectors into an index. If a new value is upserted for an existing vector id, it will overwrite the previous value. The Update operation performs partial updates that allow changes to part of an item.
Given an id, we can update the vector value with the values argument or update metadata with the set_metadata argument. The Update operation does not validate the existence of ids within an index. If a non-existent id is given, then no changes are made and a 200 OK will be returned. To update the value of item ("id-3", [3., 3.], {"type": "doc", "genre": "drama"}): The updated item would now be ("id-3", [4., 2.], {"type": "doc", "genre": "drama"}). When updating metadata, only specified fields will be modified. If a specified field does not exist, it is added. Metadata updates apply only to fields passed to the set_metadata argument. Any other fields will remain unchanged. To update the metadata of item ("id-3", [4., 2.], {"type": "doc", "genre": "drama"}): The updated item would now be ("id-3", [4., 2.], {"type": "web", "genre": "drama", "new": "true"}). Both vector and metadata can be updated at once by including both values and set_metadata arguments. To update the "id-3" item we write: The updated item would now be ("id-3", [1., 2.], {"type": "webdoc", "genre": "drama", "new": "true"}). The Delete operation deletes vectors, by ID, from an index. Alternatively, it can also delete all vectors from an index or namespace. When deleting large numbers of vectors, limit the scope of delete operations to hundreds of vectors per operation. Instead of deleting all vectors in an index, delete the index and recreate it.
To delete vectors by their IDs, specify an ids parameter to delete. The ids parameter is an array of strings containing vector IDs. To delete all vectors from a namespace, specify the appropriate parameter for your client and provide a namespace parameter. If you delete all vectors from a single namespace, it will also delete the namespace. Projects on the gcp-starter environment do not support deleting vectors by namespace. Example: To delete vectors by metadata, pass a metadata filter expression to the delete operation.
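Sketched with the v2 Python client, the fetch, update, and delete operations described above look roughly like this; the index name is a placeholder and the values mirror the illustrative items from the text.

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.Index("example-index")  # hypothetical index name

# Fetch vectors by ID.
print(index.fetch(ids=["id-3"]))

# Partial updates: change the values, the metadata, or both.
index.update(id="id-3", values=[4.0, 2.0])
index.update(id="id-3", set_metadata={"type": "web", "new": "true"})
index.update(id="id-3", values=[1.0, 2.0], set_metadata={"type": "webdoc"})

# Delete by ID, delete an entire namespace, or delete by metadata filter.
index.delete(ids=["id-1", "id-2"])
index.delete(delete_all=True, namespace="example-namespace")
index.delete(filter={"genre": {"$eq": "documentary"}})
```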
Page Title: Hybrid search with sparse-dense embeddings
Paragraphs: Pinecone supports vectors with sparse and dense values, which allows you to perform semantic and keyword search over your data in one query and combine the results for more relevant results. This topic describes how sparse-dense vectors work in Pinecone. To see sparse-dense embeddings in action, see the Ecommerce hybrid search example. Pinecone sparse-dense vectors allow you to perform hybrid search. Semantic search results for out-of-domain queries can be less relevant; combining these with keyword search results can improve relevance. Because Pinecone allows you to create your own sparse vectors, you can use sparse-dense queries to solve the Maximum Inner Product Search (MIPS) problem for sparse-dense vectors of any real values. This includes emerging use cases such as retrieval over learnt sparse representations for text data using SPLADE. Using sparse-dense vectors involves the following general steps: Pinecone supports dense and sparse embeddings as a single vector. These types of embeddings represent different types of information and enable distinct kinds of search. Dense vectors enable semantic search. Semantic search returns the most similar results according to a specific distance metric even if no exact matches are present. This is possible because dense vectors generated by embedding models such as SBERT are numerical representations of semantic meaning.
Sparse vectors have a very large number of dimensions, where only a small proportion of values are non-zero. When used for keyword search, each sparse vector represents a document; the dimensions represent words from a dictionary, and the values represent the importance of these words in the document. Keyword search algorithms like the BM25 algorithm compute the relevance of text documents based on the number of keyword matches, their frequency, and other factors. Keyword-aware semantic search requires vector representations of documents. Because Pinecone indexes accept sparse vectors rather than documents, you can control the generation of sparse vectors to represent documents. For examples of sparse vector generation, see SPLADE for Sparse Vector Search Explained, our SPLADE generation notebook, and our BM25 generation notebook. Pinecone supports sparse vector values of sizes up to 1000 non-zero values. In Pinecone, each vector consists of dense vector values and, optionally, sparse vector values as well. Pinecone does not support vectors with only sparse values. Pinecone stores sparse-dense vectors in p1 and s1 indexes. In order to query an index using sparse values, the index must use the dotproduct metric. Attempting to query any other index with sparse values returns an error. Indexes created before February 22, 2023 do not support sparse values. To query your sparse-dense vectors, you provide a query vector containing both sparse and dense values.
Pinecone ranks vectors in your index by considering the full dot product over the entire vector; the score of a vector is the sum of the dot product of its dense values with the dense part of the query, together with the dot product of its sparse values with the sparse part of the query. Pinecone represents sparse values as a dictionary of two arrays: indices and values. You can upsert these values inside a vector parameter to upsert a sparse-dense vector. The following example upserts two vectors with sparse and dense values. The following example queries an index using a sparse-dense vector. Because Pinecone's index views your sparse-dense vector as a single vector, it does not offer a built-in parameter to adjust the weight of a query's dense part against its sparse part; the index is agnostic to density or sparsity of coordinates in your vectors. You may, however, incorporate a linear weighting scheme by customizing your query vector, as we demonstrate in the function below. Examples: The following example transforms vector values using an alpha parameter. The following example transforms a vector using the above function, then queries a Pinecone index.
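A sketch of those examples, assuming the v2 Python client and an index that uses the dotproduct metric; the vector values, sparse indices, and the alpha-weighting helper are illustrative rather than the exact snippets from the original page.

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.Index("example-index")  # must use the dotproduct metric

# Upsert two vectors with dense values plus sparse {indices, values}.
index.upsert(vectors=[
    {"id": "vec1", "values": [0.1, 0.2, 0.3],
     "sparse_values": {"indices": [10, 45], "values": [0.5, 0.5]}},
    {"id": "vec2", "values": [0.2, 0.3, 0.4],
     "sparse_values": {"indices": [15, 40], "values": [0.11, 0.7]}},
])

def hybrid_scale(dense, sparse, alpha):
    """Linear weighting: alpha=1.0 is pure dense, alpha=0.0 is pure sparse."""
    scaled_sparse = {"indices": sparse["indices"],
                     "values": [v * (1 - alpha) for v in sparse["values"]]}
    scaled_dense = [v * alpha for v in dense]
    return scaled_dense, scaled_sparse

# Weight the query 80% dense / 20% sparse, then query the index.
dense_q, sparse_q = hybrid_scale([0.1, 0.2, 0.3],
                                 {"indices": [10, 45], "values": [0.5, 0.5]},
                                 alpha=0.8)
print(index.query(vector=dense_q, sparse_vector=sparse_q, top_k=3))
```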
Page Title: Query data
Paragraphs: After your data is indexed, you can start sending queries to Pinecone. The Query operation searches the index using a query vector. It retrieves the IDs of the most similar vectors in the index, along with their similarity scores. This operation can optionally return the result vectors' values and metadata, too. You specify the number of vectors to retrieve each time you send a query. Result vectors are always ordered by similarity from most similar to least similar. The similarity score for a vector represents its distance to the query vector, calculated according to the distance metric for the index. The significance of the score depends on the similarity metric: for example, for indexes using the euclidean distance metric, scores with lower values are more similar, while for indexes using the dotproduct metric, higher scores are more similar. When you send a query, you provide a vector and retrieve the top-k most similar vectors for each query. For example, this example sends a query vector and retrieves three matching vectors: Depending on your data and your query, you may not get top_k results. This happens when top_k is larger than the number of possible matching vectors for your query. You can add metadata to document embeddings within Pinecone, and then filter for those criteria when sending the query. Pinecone will search for similar vector embeddings only among those items that match the filter. For more information, see: Metadata Filtering.
When querying an index containing sparse and dense vectors, use the query() operation with the sparse_vector parameter present. The following example queries the index example-index with a sparse-dense vector. Avoid returning vector data and metadata when top_k is greater than 1000. This means queries with top_k over 1000 should not contain include_metadata=True or include_data=True. For more limitations, see: Limits. Pinecone is eventually consistent, so queries may not reflect very recent upserts.
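A basic query like the one described in this section, sketched with the v2 Python client; the vector values and index name are placeholders.

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.Index("example-index")  # hypothetical index name

# Retrieve the 3 most similar vectors, including their values and metadata.
results = index.query(
    vector=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8],
    top_k=3,
    include_values=True,
    include_metadata=True,
)
for match in results.matches:
    print(match.id, match.score)
```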
Page Title: Filtering with metadata
Paragraphs: You can limit your vector search based on metadata. Pinecone lets you attach metadata key-value pairs to vectors in an index, and specify filter expressions when you query the index. Searches with metadata filters retrieve exactly the number of nearest-neighbor results that match the filters. For most cases, the search latency will be even lower than unfiltered searches. For more background information on metadata filtering, see: The Missing WHERE Clause in Vector Search. You can associate a metadata payload with each vector in an index, as key-value pairs in a JSON object where keys are strings and values are one of the supported metadata types: strings, numbers, booleans, or lists of strings. High cardinality consumes more memory: Pinecone indexes metadata to allow for filtering. If the metadata contains many unique values, such as a unique identifier for each vector, the index will consume significantly more memory. Consider using selective metadata indexing to avoid indexing high-cardinality metadata that is not needed for filtering. Null metadata values are not supported. Instead of setting a key to hold a null value, we recommend you remove that key from the metadata payload. For example, the following would be valid metadata payloads: Pinecone supports 40kb of metadata per vector. Pinecone's filtering query language is based on MongoDB's query and projection operators. We currently support a subset of those selectors.
a6154f9f69ad-39 | The metadata filters can be combined with AND and OR: A vector with metadata payload... ...means the "genre" takes on both values. For example, queries with the following filters will match the vector: Queries with the following filter will not match the vector: And queries with the following filters will not match the vector because they are invalid. They will result in a query compilation error: Metadata can be included in upsert requests as you insert your vectors. For example, here's how to insert vectors with metadata representing movies into an index: Projects on the gcp-starter environment do not support metadata strings containing the character Δ. Metadata filter expressions can be included with queries to limit the search to only vectors matching the filter expression. For example, we can search the previous movies index for documentaries from the year 2019. This also uses the include_metadata flag so that vector metadata is included in the response. For performance reasons, do not return vector data and metadata when top_k>1000. Queries with top_k over 1000 should not contain include_metadata=True or include_data=True. A comedy, documentary, or drama: A drama from 2020: A drama from 2020 (equivalent to the previous example): A drama or a movie from 2020: To specify vectors to be deleted by metadata values, pass a metadata filter expression to the delete operation. This deletes all vectors matching the metadata filter expression.
Projects in the gcp-starter environment do not support deleting by metadata. The example sketched below deletes all vectors with genre "documentary" and year 2019 from an index.
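A sketch of both the filtered query and the filtered delete with the classic Python client (the index handle, query vector, and field names are placeholders):

```python
# Query for documentaries from 2019, returning metadata with each match.
index.query(
    vector=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8],
    top_k=10,
    include_metadata=True,
    filter={"genre": {"$eq": "documentary"}, "year": 2019},
)

# Delete all vectors with genre "documentary" and year 2019.
index.delete(filter={"genre": {"$eq": "documentary"}, "year": 2019})
```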
Page Title: Manage datasets
Paragraphs: This category contains concept topics and guides for tasks related to Pinecone datasets.
a6154f9f69ad-40 | Page Title: Using public Pinecone datasets
Paragraphs: This document explains how to use existing Pinecone datasets. To learn about creating and listing datasets, see Creating datasets. Pinecone datasets contain rows of dense and sparse vector values and metadata. Pinecone's Python client supports upserting vectors from a dataset. You can also use datasets to iterate over vectors to automate queries. To list available public Pinecone datasets, use the list_datasets() method. The following example retrieves an object containing information about public Pinecone datasets. The example above returns an object like the following: To load a dataset into memory, use the load_dataset method. You can load a Pinecone public dataset or your own dataset. The following example loads the quora_all-MiniLM-L6-bm25 Pinecone public dataset. The example above prints the following output: You can iterate over vector data in a dataset using the iter_documents method. You can use this method to upsert or update vectors, to automate benchmarking, or to perform other tasks. The following example loads the quora_all-MiniLM-L6-bm25 dataset, then iterates over the documents in the dataset in batches of 100 and upserts the vector data to a Pinecone index named my-index. The following example upserts the dataset as a dataframe.
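A condensed sketch of the methods named above, using the pinecone-datasets package (treat exact signatures as assumptions):

```python
import pinecone
from pinecone_datasets import list_datasets, load_dataset

# List the public Pinecone datasets.
print(list_datasets())

# Load a public dataset into memory.
dataset = load_dataset("quora_all-MiniLM-L6-bm25")
print(dataset.head())

# Iterate over documents in batches of 100 and upsert them into an existing index.
pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.Index("my-index")
for batch in dataset.iter_documents(batch_size=100):
    index.upsert(vectors=batch)
```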
Page Title: Pinecone public datasets
Paragraphs: This document explains and describes Pinecone datasets. To learn about using public Pinecone datasets, see Using public datasets. The following table lists information about public Pinecone datasets that are currently available:
a6154f9f69ad-41 | Page Title: Creating and loading private datasets
Paragraphs: This document explains how to create, upload, and list your dataset for use by other Pinecone users. This guide shows how to create your own dataset using your own storage; you cannot upload your own dataset to the Pinecone dataset directory. To learn about using existing Pinecone datasets, see Using public Pinecone datasets. The Pinecone datasets project uses poetry for dependency management and supports Python versions 3.8+. To install poetry, run the following command from the project root directory: To create a public dataset, you may need to generate dataset metadata. The following example creates a metadata object meta containing metadata for a dataset test_dataset. If you intend to list your dataset, you can save the dataset metadata using the following command. Write permission to the location is needed. To see the complete schema, run the following command: To run tests locally, run the following command: Pinecone datasets can load a dataset from any storage bucket where it has access using the default access controls for s3, gcs or local permissions.
a6154f9f69ad-42 | Pinecone datasets expects data to be uploaded with the following directory structure:
Figure 1: Expected directory structure for Pinecone datasets
├── base_path                 # path to where all datasets
│   ├── dataset_id            # name of dataset
│   │   ├── metadata.json     # dataset metadata (optional, only for listed)
│   │   ├── documents         # dataset documents
│   │   │   ├── file1.parquet
│   │   │   └── file2.parquet
│   │   ├── queries           # dataset queries
│   │   │   ├── file1.parquet
│   │   │   └── file2.parquet
└── ...
Pinecone datasets scans storage and lists every dataset with a metadata file. The following shows the format of an example s3 bucket address for a dataset's metadata file: s3://my-bucket/my-dataset/metadata.json By default, the Pinecone client uses Pinecone's public datasets bucket on GCS. You can use your own bucket by setting the PINECONE_DATASETS_ENDPOINT environment variable. The following export command changes the default dataset storage endpoint to gs://my-bucket. Calling list_datasets or load_dataset now scans that bucket and lists all datasets. You can also use s3:// as a prefix to your bucket to access an s3 bucket. Pinecone Datasets supports GCS and S3 storage buckets, using default authentication as provided by the fsspec implementation: gcsfs for GCS and s3fs for AWS.
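As a sketch, the same endpoint override can also be set from Python before calling list_datasets or load_dataset (the bucket name is a placeholder):

```python
import os

# Point pinecone_datasets at your own bucket instead of the default public GCS bucket.
os.environ["PINECONE_DATASETS_ENDPOINT"] = "gs://my-bucket"   # or "s3://my-bucket"

from pinecone_datasets import list_datasets

print(list_datasets())   # now scans gs://my-bucket for datasets with a metadata.json file
```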
To authenticate to an AWS s3 bucket using the key/secret method, follow these steps: To access a non-listed dataset, load it directly using the Dataset constructor. The following loads the dataset non-listed-dataset.
a6154f9f69ad-43 | Page Title: Understanding multitenancy
Paragraphs: This document describes concepts related to multitenancy in Pinecone indexes. This includes information on different approaches to keeping sets of vectors separate within a Pinecone index. To learn how to create or modify an index, see Manage indexes. You may need to segment vectors, for example by customer, either physically or logically. This document describes different techniques to accomplish this and the advantages and disadvantages of each approach. One approach to multitenancy is to use namespaces to isolate segments of vectors within a single index. This is a 'pool' model that shares most resources between tenants while keeping them logically separate. A second approach stores all segments of vectors in a single index and filters by metadata at query time. This is another 'pool' model; here, you separate tenants on the query level. Another approach to multitenancy is to create a separate index for each segment. This is a 'silo' model that provides dedicated resources to each tenant. For example, if you need to separate vectors for each customer, you can create a separate index for each customer.
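A brief sketch of the namespace ('pool') approach with the classic Python client (index name, dimensionality, metadata, and namespace names are placeholders):

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.Index("example-index")

# Keep each tenant's vectors in its own namespace within a single index.
index.upsert(
    vectors=[("vec-1", [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8], {"genre": "comedy"})],
    namespace="customer-a",
)

# Queries scoped to a namespace only search that tenant's vectors.
index.query(
    vector=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8],
    top_k=5,
    namespace="customer-a",
)
```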
a6154f9f69ad-44 | Page Title: Monitoring
Paragraphs: This document describes how to configure monitoring for your Pinecone index using Prometheus or compatible tools. You can ingest performance metrics from Pinecone indexes into your own Prometheus instances, or into Prometheus- and OpenMetrics-compatible monitoring tools. The Prometheus metric endpoint is for users who want to monitor and store system health metrics using their own Prometheus metrics logger. This feature is in public preview and is only available to Enterprise or Enterprise Dedicated users. Metrics are available at a URL like the following: https://metrics.YOUR_ENVIRONMENT.pinecone.io/metrics Your API key must be passed via the Authorization header as a bearer token like the following: Authorization: Bearer <api-key> Only the metrics for the project associated with the API key are available at this URL. For Prometheus, configure prometheus.yml as follows: See Prometheus docs for more configuration details. The metrics available are as follows: The following Prometheus queries gather information about your Pinecone index. The following query returns the average latency in seconds for all requests against the Pinecone index example-index. The following query returns the vector count for the Pinecone index example-index. The following query returns the total number of requests against the Pinecone index example-index over one minute.
The following query returns the total number of upsert requests against the Pinecone index example-index over one minute. The following query returns the total errors returned by the Pinecone index example-index over one minute. The following query returns the index fullness metric for the Pinecone index example-index.
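To verify access to the metrics endpoint, here is a quick sketch with the requests library (the environment and API key are placeholders; the response body is in the Prometheus/OpenMetrics text exposition format):

```python
import requests

resp = requests.get(
    "https://metrics.YOUR_ENVIRONMENT.pinecone.io/metrics",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
)
resp.raise_for_status()
print(resp.text[:500])   # the first few exposed metrics
```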
a6154f9f69ad-45 | Page Title: Performance tuning
Paragraphs: This section provides some tips for getting the best performance out of Pinecone. To increase throughput (QPS), increase the number of replicas for your index. The following example increases the number of replicas for example-index to 4. Pinecone has a gRPC flavor of the standard client that can provide higher upsert speeds for multi-pod indexes. To connect to an index via the gRPC client: The syntax for upsert, query, fetch, and delete with the gRPC client remains the same as the standard client. We recommend you use parallel upserts to get the best performance. We recommend you use the gRPC client for multi-pod indexes only. The performance of the standard and gRPC clients is similar in a single-pod index. It's possible to get write-throttled sooner when upserting using the gRPC index. If you see this often, we recommend you use a backoff algorithm while upserting.
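A sketch of both tips described above, assuming the classic Python client with the gRPC extra installed (index name and vector values are placeholders):

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")

# Increase throughput (QPS) by scaling replicas.
pinecone.configure_index("example-index", replicas=4)

# Connect via the gRPC flavor of the client for faster multi-pod upserts.
index = pinecone.GRPCIndex("example-index")
index.upsert(vectors=[("id-1", [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])])
```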
a6154f9f69ad-46 | Page Title: Troubleshooting
Paragraphs: This section describes common issues and how to solve them. Need help? Ask your question in our support forum. Standard, Enterprise, and Dedicated customers can also contact support for help. Version 3 of Python uses pip3. Use the following commands at the command line (the terminal): To minimize latency when accessing Pinecone: If you're batching queries, try reducing the number of queries per call to 1 query vector. You can make these calls in parallel and expect roughly the same performance as with batching. It's possible to get write-throttled sooner when upserting using the gRPC index. If you see this often, then we recommend using a backoff algorithm while upserting. There is a limit to how much vector data a single pod can hold. Create an index with more pods to hold more data. Estimate the right index configuration and scale your index to increase capacity. If your metadata has high cardinality, such as having a unique value for every vector in a large index, the index will take up more memory than estimated. This could result in the pods being full sooner than you expected. Consider only indexing metadata to be used for filtering, and storing the rest in a separate key-value store. See the Manage Indexes documentation for information on how to specify the number of pods for your index. We work hard to earn and maintain trust by treating security and reliability as a cornerstone of our company and product. |
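To illustrate the backoff recommendation above, here is a minimal sketch of retrying upserts with exponential backoff and jitter (the exception handling is deliberately generic; in practice you would catch the client's rate-limit error):

```python
import random
import time

def upsert_with_backoff(index, vectors, max_retries=5):
    """Retry an upsert with exponential backoff and jitter."""
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return index.upsert(vectors=vectors)
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(delay + random.random())
            delay *= 2
```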
a6154f9f69ad-47 | Pinecone is SOC 2 Type II compliant and GDPR-ready. See the Trust & Security page for more information. Contact us to report any security concerns. When sending requests to Pinecone, you may receive the following error: This error occurs in response to cross-origin requests. Most commonly, it occurs when a user is running a local web server with the hostname 'localhost', which Pinecone's Same Origin Policy (SOP) treats as distinct from the IP address of the local machine. To resolve this issue, host your web server on an external server with a public IP address and DNS name entry.
Page Title: Moving to production
Paragraphs: The goal of this document is to prepare users to begin using their Pinecone indexes in production by anticipating production issues and identifying best practices for production indexes. Because these issues are highly workload-specific, the recommendations here are general. Once you have become familiar with Pinecone and experimented with creating indexes and queries that reflect your intended workload, you may be planning to use your indexes to serve production queries. Before you do, there are several steps you can take that can prepare your project for production workloads, anticipate production issues, and enable reliability and growth. Consider the following areas before moving your indexes to production: One of the first steps towards a production-ready Pinecone index is configuring your project correctly. Consider creating a separate project for your development and production indexes, to allow for testing changes to your index before deploying them to production. Ensure that you have properly configured user access to your production environment so that only those users who need to access the production index can do so. Consider how best to manage the API key associated with your production project. Before you move your index to production, make sure that your index is returning accurate results in the context of your application. Consider identifying the appropriate metrics for evaluating your results. |
a6154f9f69ad-48 | Depending on your data and the types of workloads you intend to run, your project may require a different number and size of pods and replicas. Factors to consider include the number of vectors, the dimensions per vector, the amount and cardinality of metadata, and the acceptable queries per second (QPS). Use the index fullness metric to identify how much of your current resources your indexes are using. You can use collections to create indexes with different pod types and sizes to experiment. Before moving your project to production, consider determining whether your index configuration can serve the load of queries you anticipate from your application. You can write load tests in Python from scratch or using a load testing framework like Locust. In order to enable long-term retention, compliance archiving, and deployment of new indexes, consider backing up your production indexes by creating collections. Before serving production workloads, identify ways to improve latency by making changes to your deployment, project configuration, or client. Prepare to observe production performance and availability by configuring monitoring with Prometheus or OpenMetrics on your production indexes. Before going to production, consider planning ahead for how you might scale your indexes when the need arises. Identify metrics that may indicate the need to scale, such as index fullness and average request latency.
Plan for increasing the number of pods, changing to a more performant pod type, vertically scaling the size of your pods, increasing the number of replicas, or increasing storage capacity with a storage-optimized pod type. If you need help, visit support.pinecone.io, or talk to the Pinecone community. Ensure that your plan tier matches the support and availability SLAs you need. This may require you to upgrade to Enterprise.
a6154f9f69ad-49 | Page Title: OpenAI
Paragraphs: This guide covers the integration of OpenAI's Large Language Models (LLMs) with Pinecone (referred to as the OP stack), enhancing semantic search or 'long-term memory' for LLMs. This combo utilizes LLMs' embedding and completion (or generation) endpoints, alongside Pinecone's vector search capabilities, for nuanced information retrieval. LLMs like OpenAI's text-embedding-ada-002 generate vector embeddings, numerical representations of text semantics. These embeddings facilitate semantic-based rather than literal textual matches. Additionally, LLMs like gpt-4 or gpt-3.5-turbo predict text completions based on previous context. Pinecone is a vector database designed for storing and querying high-dimensional vectors. It provides fast, efficient semantic search over these vector embeddings. By integrating OpenAI's LLMs with Pinecone, we combine deep learning capabilities for embedding generation with efficient vector storage and retrieval. This approach surpasses traditional keyword-based search, offering contextually-aware, precise results. There are many ways of integrating these two tools and we have several guides focusing on specific use-cases. If you already know what you'd like to do you can jump to these specific materials: At the core of the OP stack we have embeddings which are supported via the OpenAI Embedding API. |
a6154f9f69ad-50 | We index those embeddings in the Pinecone vector database for fast and scalable retrieval augmentation of our LLMs or other information retrieval use-cases. This example demonstrates the core OP stack. It is the simplest workflow and is present in each of the other workflows, but is not the only way to use the stack. Please refer to the links above for more advanced usage. The OP stack is built for semantic search, question-answering, threat-detection, and other applications that rely on language models and a large corpus of text data. The basic workflow looks like this: Let's get started... We start by installing the OpenAI and Pinecone clients; we will also need HuggingFace Datasets for downloading the TREC dataset that we will use in this guide. To create embeddings we must first initialize our connection to OpenAI Embeddings; you can sign up for an API key at OpenAI. The openai.Engine.list() function should return a list of models that we can use. We will use OpenAI's Ada 002 model. In res we should find a JSON-like object containing two 1536-dimensional embeddings; these are the vector representations of the two inputs provided above. To access the embeddings directly we can write: We will use this logic when creating our embeddings for the Text REtrieval Conference (TREC) question classification dataset later. Next, we initialize an index to store the vector embeddings. For this we need a Pinecone API key; sign up for one here.
a6154f9f69ad-51 | With both OpenAI and Pinecone connections initialized, we can move on to populating the index. For this, we need the TREC dataset. Then we create a vector embedding for each question using OpenAI (as demonstrated earlier), and upsert the ID, vector embedding, and original text for each phrase to Pinecone. High-cardinality metadata values (like the unique text values we use here) can reduce the number of vectors that fit on a single pod. See Limits for more. With our data indexed, we're now ready to move on to performing searches. This follows a similar process to indexing. We start with a text query that we would like to use to find similar sentences. As before, we encode this with the same text-embedding-ada-002 model to create a query vector xq. We then use xq to query the Pinecone index. Now we query. The response from Pinecone includes our original text in the metadata field; let's print out the top_k most similar questions and their respective similarity scores. Looks good. Let's make it harder and replace "depression" with the incorrect term "recession". Let's perform one final search using the definition of depression rather than the word or related words. This example shows that the semantic search pipeline is clearly able to identify the meaning behind each of our queries. Using these embeddings with Pinecone allows us to return the most semantically similar questions from the already indexed TREC dataset.
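A condensed sketch of that end-to-end flow, using the OpenAI and Pinecone client APIs of the period (the index name "openai-trec" and the keys are placeholders, and the TREC indexing loop is omitted):

```python
import openai
import pinecone

openai.api_key = "OPENAI_API_KEY"
pinecone.init(api_key="PINECONE_API_KEY", environment="YOUR_ENVIRONMENT")

# 1. Embed the query with text-embedding-ada-002 (1536 dimensions).
res = openai.Embedding.create(
    input=["What caused the 1929 Great Depression?"],
    engine="text-embedding-ada-002",
)
xq = res["data"][0]["embedding"]

# 2. Query an index that was created with dimension=1536.
index = pinecone.Index("openai-trec")
results = index.query(vector=xq, top_k=5, include_metadata=True)
for match in results["matches"]:
    print(round(match["score"], 2), match["metadata"].get("text"))
```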
a6154f9f69ad-52 | Page Title: Cohere
Paragraphs: In this guide you will learn how to use the Cohere Embed API endpoint to generate language embeddings, and then index those embeddings in the Pinecone vector database for fast and scalable vector search. This is a powerful and common combination for building semantic search, question-answering, threat-detection, and other applications that rely on NLP and search over a large corpus of text data. We start by installing the Cohere and Pinecone clients; we will also need HuggingFace Datasets for downloading the TREC dataset that we will use in this guide. To create embeddings we must first initialize our connection to Cohere; you can sign up for an API key at Cohere. We will load the Text REtrieval Conference (TREC) question classification dataset which contains 5.5K labeled questions. We will take the first 1K samples for this walkthrough, but this can be scaled to millions or even billions of samples. Each sample in trec contains two label features and the text feature, which we will be using. We can pass the questions from the text feature to Cohere to create embeddings. We can check the dimensionality of the returned vectors; for this we will convert them from a list of lists to a NumPy array. We will need to save the embedding dimensionality from this to be used when initializing our Pinecone index later.
a6154f9f69ad-53 | Here we can see the 1024 embedding dimensionality produced by Cohere's small model, and the 1000 samples we built embeddings for. Now that we have our embeddings we can move on to indexing them in the Pinecone vector database. For this we need a Pinecone API key, sign up for one here. We first initialize our connection to Pinecone, and then create a new index for storing the embeddings (we will call it "cohere-pinecone-trec"). When creating the index we specify that we would like to use the cosine similarity metric to align with Cohere's embeddings, and also pass the embedding dimensionality of 1024. Now we can begin populating the index with our embeddings. Pinecone expects us to provide a list of tuples in the format (id, vector, metadata), where the metadata field is an optional extra field where we can store anything we want in a dictionary format. For this example, we will store the original text of the embeddings. While uploading our data, we will batch everything to avoid pushing too much data in one go. We can see from index.describe_index_stats that we have a 1024-dimensionality index populated with 1000 embeddings. The indexFullness metric tells us how full our index is, at the moment it is empty. Using the default value of one p1 pod we can fit around 750K embeddings before the indexFullness reaches capacity. The Usage Estimator can be used to identify the number of pods required for a given number of n-dimensional embeddings.
Now that we have our indexed vectors we can perform a few search queries. When searching we will first embed our query using Cohere, and then search using the returned vector in Pinecone.
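A sketch of that query step (the "cohere-pinecone-trec" index and the small model come from the walkthrough above; treat exact client arguments and the sample query as assumptions):

```python
import cohere
import pinecone

co = cohere.Client("COHERE_API_KEY")
pinecone.init(api_key="PINECONE_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.Index("cohere-pinecone-trec")

# Embed the query with the same Cohere model used at indexing time, then search Pinecone.
query = "What is the oldest university in the world?"
xq = co.embed(texts=[query], model="small", truncate="LEFT").embeddings[0]

res = index.query(vector=xq, top_k=5, include_metadata=True)
for match in res["matches"]:
    print(round(match["score"], 2), match["metadata"]["text"])
```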
a6154f9f69ad-54 | Page Title: Haystack
Paragraphs: In this guide we will see how to integrate Pinecone and the popular Haystack library for Question-Answering. We start by installing the latest version of Haystack with all dependencies required for the PineconeDocumentStore. We initialize a PineconeDocumentStore by providing an API key and environment name. Create an account to get your free API key. Before adding data to the document store, we must download and convert data into the Document format that Haystack uses. We will use the SQuAD dataset available from Hugging Face Datasets. Next, we remove duplicates and unnecessary columns. Then we convert these records into the Document format. This Document format contains two fields: 'content' for the text content or paragraphs, and 'meta' where we can place any additional information that can later be used to apply metadata filtering in our search. Now we upsert the documents to Pinecone. The next step is to create embeddings from these documents. We will use Haystack's EmbeddingRetriever with a SentenceTransformer model (multi-qa-MiniLM-L6-cos-v1) which has been designed for question-answering. Then we run the PineconeDocumentStore.update_embeddings method with the retriever provided as an argument. GPU acceleration can greatly reduce the time required for this step. We can get documents by their ID with the PineconeDocumentStore.get_documents_by_id method. From here we can view document content with d.content and the document embedding with d.embedding.
An ExtractiveQAPipeline contains three key components by default: We use the deepset/electra-base-squad2 model from the HuggingFace model hub as our reader model. We are now ready to initialize the ExtractiveQAPipeline. Using our QA pipeline we can begin querying with pipe.run. We can return multiple answers by setting the top_k parameter.
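A condensed sketch of the pipeline described above using Haystack 1.x-style imports (the model names come from the guide; other parameters, such as the index name, dimensionality, and sample query, are assumptions):

```python
from haystack.document_stores import PineconeDocumentStore
from haystack.nodes import EmbeddingRetriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

document_store = PineconeDocumentStore(
    api_key="PINECONE_API_KEY",
    environment="YOUR_ENVIRONMENT",
    index="haystack-extractive-qa",
    similarity="cosine",
    embedding_dim=384,
)

retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="multi-qa-MiniLM-L6-cos-v1",
)
document_store.update_embeddings(retriever)

reader = FARMReader(model_name_or_path="deepset/electra-base-squad2")
pipe = ExtractiveQAPipeline(reader=reader, retriever=retriever)

prediction = pipe.run(
    query="Who was the first president of the USA?",
    params={"Retriever": {"top_k": 5}, "Reader": {"top_k": 3}},
)
```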
a6154f9f69ad-55 | Page Title: Hugging Face Inference Endpoints
Paragraphs: Hugging Face Inference Endpoints allows access to straightforward model inference. Coupled with Pinecone, we can generate and index high-quality vector embeddings with ease. Let's get started by initializing an Inference Endpoint for generating vector embeddings. We start by heading over to the Hugging Face Inference Endpoints homepage and signing up for an account if needed. After, we should find ourselves on this page: We click on Create new endpoint, choose a model repository (e.g. the name of the model), an endpoint name (this can be anything), and select a cloud environment. Before moving on it is very important that we set the Task to Sentence Embeddings (found within the Advanced configuration settings). Other important options include the Instance Type; by default this uses CPU, which is cheaper but also slower. For faster processing we need a GPU instance. And finally, we set our privacy setting near the end of the page. After setting our options we can click Create Endpoint at the bottom of the page. This action should take us to the next page, where we will see the current status of our endpoint. Once the status has moved from Building to Running (this can take some time), we're ready to begin creating embeddings with it. Each endpoint is given an Endpoint URL; it can be found on the endpoint Overview page. We need to assign this endpoint URL to the endpoint_url variable.
a6154f9f69ad-56 | We will also need the organization API token; we find this via the organization settings on Hugging Face (https://huggingface.co/organizations/<ORG_NAME>/settings/profile). This is assigned to the api_org variable. Now we're ready to create embeddings via Inference Endpoints. Let's start with a toy example. We should see a 200 response. Inside the response we should find two embeddings... We can also see the dimensionality of our embeddings like so: We will need more than two items to search through, so let's download a larger dataset. For this we will use Hugging Face datasets. SNLI contains 550K sentence pairs, many of which are duplicates, so we will take just one set of these (the hypothesis) and deduplicate them. We will drop to 50K sentences so that the example is quick to run; if you have time, feel free to keep the full 480K. With our endpoint and dataset ready, all that we're missing is a vector database. For this, we need to initialize our connection to Pinecone; this requires a free API key. Now we create a new index called 'hf-endpoints'. The name isn't important, but the dimension must align to our endpoint model output dimensionality (we found this in dim above) and the model metric (typically cosine is okay, but not for all models). Now we have all of our components ready: endpoints, dataset, and Pinecone. Let's go ahead and create our dataset embeddings and index them within Pinecone. With everything indexed we can begin querying.
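A sketch of the toy embedding request described above, using the requests library (the endpoint URL and organization token are placeholders for the endpoint_url and api_org variables; the exact response shape depends on the endpoint configuration):

```python
import requests

endpoint_url = "https://YOUR-ENDPOINT.endpoints.huggingface.cloud"
api_org = "YOUR_ORG_API_TOKEN"

res = requests.post(
    endpoint_url,
    headers={"Authorization": f"Bearer {api_org}"},
    json={"inputs": ["this is a test sentence", "and here is another one"]},
)
print(res.status_code)   # expect 200
embeddings = res.json()  # one embedding per input; inspect this to find the dimensionality
```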
a6154f9f69ad-57 | We will take a few examples from the premise column of the dataset. These look good, let's try a couple more examples. And one more... All of these results look excellent. If you are not planning on running your endpoint and vector DB beyond this tutorial, you can shut down both. Once the index is deleted, you cannot use it again. Shut down the endpoint by navigating to the Inference Endpoints Overview page and selecting Delete endpoint. Delete the Pinecone index with:
a6154f9f69ad-58 | Page Title: Elasticsearch
Paragraphs: Elasticsearch is a powerful open-source search engine and analytics platform that is widely used as a document store for keyword-based text search. Pinecone is a vector database widely used for production applications — such as semantic search, recommenders, and threat detection — that require fast and fresh vector search at the scale of tens or hundreds of millions (or even billions) of embeddings. Although Pinecone offers hybrid search for keyword-aware semantic search, Pinecone is not a document store and does not replace Elasticsearch for keyword-only retrieval. If you already use Elasticsearch and want to add Pinecone’s low-latency and large-scale vector search to your applications, this guide will show you how. You will see how to: We first need to upload the embedding model to our Elastic instance. To do so, we’ll use the [eland](https://github.com/elastic/eland) Elastic client. We’ll have to clone the "eland" repository and build the docker image before running it: In this example, we’ll use the [sentence-transformers/msmarco-MiniLM-L-12-v3](https://huggingface.co/sentence-transformers/msmarco-MiniLM-L-12-v3) model from Hugging Face — although you could use any model you’d like. To upload the model to your Elasticsearch deployment, run the following command: Note that you’ll have to replace the placeholders with your Elasticsearch instance user, password, host, and port. |
a6154f9f69ad-59 | If you set up your own Elasticsearch instance, you would have already set the username and password when initially setting up the instance. If you’re using the hosted Elastic Stack, you can find the username and password in the "Security" section of the Elastic Stack console. We can quickly test the uploaded model by running the following command in the Elasticsearch developer console: We should get the following result: This is the vector embedding for our query. We’re now ready to upload our dataset and apply the model to produce the vector embeddings. Next, upload a dataset of documents to Elasticsearch. In this example, we’ll use a subset of the MS MARCO dataset. You can download the file or run the following command: In this example, we’ll be using the hosted Elastic Stack, which makes it easier to use various integrations. We’ll use the "Upload" integration to load the data into an Elasticsearch index. We’ll drag the unzipped TSV file. The Upload integration will sample the data for us and show the following: We’ll click the "Import" button and continue to name the index: Once the import is complete, you’ll see the following: Clicking "View index in Discover" will reveal the index view where we can look at the uploaded data: We’ve now created an index for our data. Next, we’ll create a pipeline to produce a vector embedding for each document.
a6154f9f69ad-60 | We’ll head to the Elasticsearch developer console and issue the following command to create the pipeline: The "processor" definition tells Elasticsearch which model to use and which field to read from. The "on_failure" definition defines the failure behavior that Elasticsearch will apply — specifically, which error message to write and which file to write them into. Once the embedding pipeline is created, we’ll re-index our "msmacro-raw" index, applying the embedding pipeline to produce the new embeddings. In the developer console, execute the following command: This will kick off the embedding pipeline. We’ll get a task id which we can track with the following command: Looking at the index, we can see that the embeddings have been created in an object called "text_embeddings" under the field "predicted_value". To make the loading process a bit easier, we’re going to pluck the "predicted_value" field and add it as its own column: Next, we’ll load the embeddings into Pinecone. Since the index size is considerable, we’ll use Apache Spark to parallelize the process. In this example, we’ll be using Databricks to handle the process of loading Elasticsearch index to Pinecone. |
a6154f9f69ad-61 | We’ll add the Elasticsearch Spark from Maven by navigating to the “Libraries” tab in the cluster settings view, and clicking “Install new”: Use the following Maven coordinates: org.elasticsearch:elasticsearch-spark-30_2.12:8.5.2 We’ll add the Pinecone Databricks connectors from S3: s3://pinecone-jars/spark-pinecone-uberjar.jar Restart the cluster if needed. Next, we’ll create a new notebook, attach it to the cluster and import the required dependencies: We’ll initialize the Spark context: Next, we’ll read the index from Elasticsearch: Note that to ensure the index is read correctly into the dataframe, we must specify that the “predicted_value” field is an array with a depth of 1, as shown below: Next, we’ll use the Pinecone Spark connector to load this dataframe into a Pinecone index. We’ll start by creating an index in the Pinecone console. Log in to the console and click “Create Index”. Then, name your index, and configure it to use 384 dimensions. When you’re done configuring the index, click “Create Index”. We have to do some prep work to get the dataframe ready for indexing. In order to index the original document with the embeddings we’ve created, we’ll create the following UDF which will encode the original document as a Base64 string. This will ensure the metadata object will remain a valid JSON object regardless of the content of the document. |
a6154f9f69ad-62 | We’ll apply the UDF and get rid of some unnecessary columns: Next, we’ll use the Pinecone Spark connector: Our vectors have been added to our Pinecone index! To query the index, we’ll need to generate a vector embedding for our query first, using the sentence-transformers/msmarco-MiniLM-L-12-v3 model. Then, we’ll use the Pinecone client to issue the query. We'll do this in a Python notebook. We’ll start by installing the required dependencies: Next, we’ll set up the client: We’ll set up the index: We’ll create a helper function that will decode the encoded documents we get: Next, we’ll create a function that will encode our query, query the index and convert the display the data using Pandas: Finally, we’ll test our index: Should yield the results: In conclusion, by following the steps outlined in this post, you can easily upload an embedding model to Elasticsearch, ingest raw textual data, create the embeddings, and load them into Pinecone. With this approach, you can take advantage of the benefits of integrating Elasticsearch and Pinecone. As mentioned, while Elasticsearch is optimized for indexing documents, Pinecone provides vector storage and search capabilities that can handle hundreds of millions and even billions of vectors.
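A sketch of that query notebook (the index name, the metadata field holding the Base64-encoded document, and the sample query are assumptions):

```python
import base64

import pinecone
from sentence_transformers import SentenceTransformer

pinecone.init(api_key="PINECONE_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.Index("msmarco-demo")

model = SentenceTransformer("sentence-transformers/msmarco-MiniLM-L-12-v3")

def decode_doc(encoded: str) -> str:
    """Decode a Base64-encoded document stored in the vector's metadata."""
    return base64.b64decode(encoded).decode("utf-8")

def search(query: str, top_k: int = 5):
    xq = model.encode(query).tolist()
    res = index.query(vector=xq, top_k=top_k, include_metadata=True)
    return [(m["score"], decode_doc(m["metadata"]["doc"])) for m in res["matches"]]

print(search("what is a vector database?"))
```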
a6154f9f69ad-63 | Page Title: Databricks
Paragraphs: Using Databricks and Pinecone to create and index vector embeddings at scale Databricks, built on top of Apache Spark, is a powerful platform for data processing and analytics, known for its ability to efficiently handle large datasets. In this guide, we will show you how to use Spark (with Databricks) to create vector embeddings and load them into Pinecone. First, let’s discuss why using Databricks and Pinecone is necessary in this context. When you process less than a million records, using a single machine might be sufficient. But when you work with hundreds of millions of records, you have to start thinking about how the operation scales. We need to consider two things: Databricks is a great tool for creating embeddings at scale: it allows us to parallelize the process over multiple machines and leverage GPUs to accelerate the process. Pinecone lets us efficiently ingest, update and query hundreds of millions or even billions of embeddings. As a managed service, Pinecone can guarantee a very high degree of reliability and performance when it comes to datasets of this size. Pinecone provides a specialized connector for Databricks that is optimized to ingest data from Databricks and into Pinecone. That allows the ingestion process to be completed much faster than it would have if we were to use Pinecone’s REST or gRPC APIs on a large-scale dataset. |
a6154f9f69ad-64 | Together, Pinecone and Databricks make a great combination for managing the entire lifecycle of vector embeddings at scale. Databricks is a Unified Analytics Platform on top of Apache Spark. The primary advantage of using Spark is its ability to distribute the workload across a cluster of machines, allowing it to process large amounts of data quickly and efficiently. By adding more machines or increasing the number of cores on each machine, it is easy to horizontally scale the cluster as needed to handle larger workloads. At the core of Spark is the map-reduce pattern, where data is divided into partitions and a series of transformations is applied to each partition in parallel. The results from each partition are then automatically collected and aggregated into the final result. This approach makes Spark both fast and fault-tolerant, as it can retry failed tasks without requiring the entire workload to be reprocessed. In addition to its parallel processing capabilities, Spark allows developers to write code in popular languages like Python and Scala, which are then optimized for parallel execution under the covers. This makes it easier for developers to focus on the data processing itself, rather than worrying about the details of distributed computing. Vector embedding is a computationally intensive task, where parallelization can save many hours of precious computation time and resources. |
a6154f9f69ad-65 | Leveraging GPUs with Spark can produce even better results — enjoying the benefits of the fast computation of a GPU combined with parallelization will ensure optimal performance. Databricks makes it easier to work with Apache Spark: it provides easy set-up and tear-down of clusters, dependency management, compute allocation, storage solution integrations, and more. Pinecone is a vector database that makes it easy to build high-performance vector search applications. It offers a number of key benefits for dealing with vector embeddings at scale, including ultra-low query latency at any scale, live index updates when you add, edit, or delete data, and the ability to combine vector search with metadata filtering or keyword search for more relevant results. As mentioned before, Pinecone can easily handle very large scales of hundreds of millions and even billions of vector embeddings. Additionally, Pinecone is fully managed, so it's easy to use and scale. With Pinecone, you can easily index and search through vector embeddings. It is ideal for a variety of use cases such as semantic text search, question-answering, visual search, recommendation systems, and more. In this example, we'll create embeddings based on the sentence-transformers/all-MiniLM-L6-v2 model from Hugging Face. We'll then use a dataset with a large volume of documents to produce the embeddings and upsert them into Pinecone. Note that the actual model and dataset we'll use are immaterial for this example. |
a6154f9f69ad-66 | This method should work on any embeddings you may want to create, with whatever dataset you may choose. In order to create embeddings at scale, we need to do four things: Let's get started! Using Databricks makes it easy to speed up the creation of our embeddings even more by using GPUs instead of CPUs in our cluster. To do this, navigate to the "Compute" section in your Databricks console, and select the following options: Next, we'll add the Pinecone Spark connector to our cluster. Navigate to the "Libraries" tab and click "Install new". Select "DBFS/S3" and paste the following S3 URI: To complete the installation, click "Install". To use the new cluster, create a new notebook and attach it to the newly created cluster. We'll start by installing some dependencies: Next, we'll set up the connection to Pinecone. You'll have to retrieve the following information from your Pinecone console: Your index name will be the same index name used when we initialized the index (in this case, news). Next, we'll create a new index in Pinecone, where our vector embeddings will be saved: In this example, we'll use a collection of news articles as our example dataset. We'll use Hugging Face's datasets library and load the data into our environment: Next, we'll convert the dataset from the Hugging Face format and repartition it: Once the repartition is complete, we get back a DataFrame, which is a distributed collection of the data organized into named columns.
a6154f9f69ad-67 | It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood. As mentioned above, each partition in the dataframe has an equal amount of the original data. The dataset doesn't have identifiers associated with each document, so let's add them: As its name suggests, withColumn adds a column to the dataframe, containing a simple increasing identifier that we cast to a string. Great! Now we have identifiers for each document. Let's move on to creating the embeddings for each document. In this example, we will create a UDF (User Defined Function) to create the embeddings, using the AutoTokenizer and AutoModel classes from the Hugging Face transformers library. The UDF will be applied to each partition in a dataframe. When applied to a partition, a UDF is executed on each row in the partition. The UDF will tokenize the document using AutoTokenizer and then pass the result to the model (in this case we're using sentence-transformers/all-MiniLM-L6-v2). Finally, we'll produce the embeddings themselves by extracting the last hidden layer from the result. Once the UDF is created, it can be applied to a dataframe to transform the data in the specified column. The Python UDF will be sent to the Spark workers, where it will be used to transform the data. After the transformation is complete, the results will be sent back to the driver program and stored in a new column.
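A minimal sketch of that per-partition embedding function (the column names, mean-pooling strategy, and surrounding Spark plumbing are assumptions; the model is the one named above):

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "sentence-transformers/all-MiniLM-L6-v2"

def embed_partition(rows):
    """Embed every row in a partition; intended for use with rdd.mapPartitions(embed_partition)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModel.from_pretrained(MODEL_NAME)
    for row in rows:
        inputs = tokenizer(row["text"], truncation=True, padding=True, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state      # extract the last hidden layer
        embedding = hidden.mean(dim=1).squeeze().tolist()   # mean-pool to one vector per document
        yield (row["id"], embedding)
```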
a6154f9f69ad-68 | A dataframe in Spark is a higher-level abstraction built on top of a more fundamental building block called an RDD - or Resilient Distributed Dataset. We're going to use the mapPartitions function that gives us finer control over the execution of our UDF, by explicitly applying it to each partition of the RDD. Next, we’ll convert the resulting RDD back into a dataframe with the schema required by Pinecone: Lastly, we'll use the Pinecone Spark connector to save the embeddings to our index. The process of writing the embeddings to Pinecone should take approximately 15 seconds. When it completes, you’ll see the following: This means the process was completed successfully and the embeddings have been stored in Pinecone. Creating vector embeddings for large datasets can be challenging, but Databricks is a great tool for accomplishing the task. Databricks makes it easy to set up a GPU cluster and handle the required dependencies, allowing for efficient creation of embeddings at scale. Databricks and Pinecone are the perfect combination for working with very large vector datasets. Pinecone provides a way to efficiently store and retrieve the vectors created by Databricks, making it easy and performant to work with a huge number of vectors. Overall, the combination of Databricks and Pinecone provides a powerful and effective solution for creating embeddings for very large datasets.
By parallelizing the embedding generation and the data ingestion processes, we can create a fast and resilient pipeline that will be able to index and update large volumes of vectors.
a6154f9f69ad-69 | Page Title: LangChain
Paragraphs: Welcome to the integration guide for Pinecone and LangChain. This documentation covers the steps to integrate Pinecone, a high-performance vector database, with LangChain, a framework for building applications powered by large language models (LLMs). Pinecone enables developers to build scalable, real-time recommendation and search systems based on vector similarity search. LangChain, on the other hand, provides modules for managing and optimizing the use of language models in applications. Its core philosophy is to facilitate data-aware applications where the language model interacts with other data sources and its environment. By integrating Pinecone with LangChain, you can develop sophisticated applications that leverage both platforms' strengths, allowing us to add "long-term memory" to LLMs and greatly enhancing the capabilities of autonomous agents, chatbots, and question-answering systems, among others. There are naturally many ways to use these two tools together. We have covered the process in detail across our many examples and learning material, including: The remainder of this guide will walk you through a simple retrieval augmentation example using Pinecone and LangChain. LLMs have a data freshness problem. The most powerful LLMs in the world, like GPT-4, have no idea about recent world events. The world of LLMs is frozen in time. Their world exists as a static snapshot of the world as it was within their training data.
a6154f9f69ad-70 | A solution to this problem is retrieval augmentation. The idea behind this is that we retrieve relevant information from an external knowledge base and give that information to our LLM. In this notebook we will learn how to do that. To begin, we must install the prerequisite libraries that we will be using in this notebook. 🚨 Note: the above pip install is formatted for Jupyter notebooks. If running elsewhere you may need to drop the !. Every record contains a lot of text. Our first task is therefore to identify a good preprocessing methodology for chunking these articles into more "concise" chunks to later be embedded and stored in our Pinecone vector database. For this we use LangChain's RecursiveCharacterTextSplitter to split our text into chunks of a specified max length. Using the text_splitter we get much better-sized chunks of text. We'll use this functionality during the indexing process later. Now let's take a look at embedding. Building embeddings using LangChain's OpenAI embedding support is fairly straightforward. We first need to add our OpenAI API key by running the next cell: (Note that OpenAI is a paid service and so running the remainder of this notebook may incur some small cost) After initializing the API key we can initialize our text-embedding-ada-002 embedding model like so: Now we embed some text like so: From this we get two (aligning to our two chunks of text) 1536-dimensional embeddings. Now we move on to initializing our Pinecone vector database.
a6154f9f69ad-71 | To create our vector database we first need a free API key from Pinecone. Then we initialize like so: Then we connect to the new index: We should see that the new Pinecone index has a total_vector_count of 0, as we haven't added any vectors yet. We can perform the indexing task using the LangChain vector store object. But for now it is much faster to do it via the Pinecone Python client directly. We will do this in batches of 100 or more. We've now indexed everything. We can check the number of vectors in our index like so: Now that we've built our index we can switch back over to LangChain. We start by initializing a vector store using the same index we just built. We do that like so: All of these are good, relevant results. But what can we do with this? There are many tasks; one of the most interesting (and well supported by LangChain) is called "Generative Question-Answering" or GQA. In GQA we take the query as a question that is to be answered by an LLM, but the LLM must answer the question based on the information returned from the vector store. To do this we initialize a RetrievalQA object like so: We can also include the sources of information that the LLM is using to answer our question. We can do this using a slightly different version of RetrievalQA called RetrievalQAWithSourcesChain: Now we answer the question being asked, and return the source of this information being used by the LLM.
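A condensed sketch of those steps using the LangChain APIs of the period (the index name, keys, metadata text key, and sample question are placeholders):

```python
import pinecone
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key="PINECONE_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.Index("langchain-retrieval-augmentation")

embed = OpenAIEmbeddings(model="text-embedding-ada-002", openai_api_key="OPENAI_API_KEY")
vectorstore = Pinecone(index, embed.embed_query, "text")   # "text" is the metadata field holding each chunk

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.0, openai_api_key="OPENAI_API_KEY")
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
)

print(qa.run("What is retrieval augmentation and why is it useful?"))
```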
a6154f9f69ad-72 | Page Title: Amazon SageMaker
Paragraphs: Amazon SageMaker and Pinecone can be used together for high-performance, scalable, and reliable Retrieval Augmented Generation (RAG) use cases. The integration allows us to use SageMaker compute and model hosting for Large Language Models (LLMs) and Pinecone as the knowledge base that allows us to keep our LLMs up to date with the latest information and reduce the likelihood of hallucinations. In this example, we see how to use SageMaker to deploy LLM instances for hosting models like BloomZ 7B1, Flan T5 XL, and Flan T5 UL2 to respond to user questions with insightful answers.
Page Title: Datadog
Paragraphs: This topic describes how to use Datadog to monitor your Pinecone vector database. After you install a Datadog agent, follow these steps to set up your Pinecone integration: After you set up your Pinecone integration on Datadog, you can create monitors for your Pinecone vector database.
a6154f9f69ad-73 | Page Title: TruLens
Paragraphs: Using TruLens and Pinecone to evaluate grounded LLM applications TruLens is a powerful open source library for evaluating and tracking large language model-based applications. In this guide, we will show you how to use TruLens to evaluate applications built on top of a high performance Pinecone vector database. Systematic evaluation is needed to support reliable, non-hallucinatory LLM-based applications. TruLens contains instrumentation and evaluation tools for large language model (LLM)-based applications. For evaluation, TruLens provides a set of feedback functions, analogous to labeling functions, to programmatically score the input, output and intermediate text of an LLM app. Each LLM application request can be scored on its question-answer relevance, context relevance and groundedness. These feedback functions provide evidence that your LLM-application is non-hallucinatory. In addition to the above, feedback functions also support the evaluation of ground truth agreement, sentiment, model agreement, language match, toxicity, and a full suite of moderation evaluations, including hate, violence and more. TruLens implements feedback functions as an extensible framework that can evaluate your custom needs as well. During the development cycle, TruLens supports the iterative development of a wide range of LLM applications by wrapping your application to log cost, latency, key metadata and evaluations of each application run. |
a6154f9f69ad-74 | This allows you to track and identify failure modes, pinpoint their root cause, and measure improvement across experiments. Large language models alone have a hallucination problem. Several decades of machine learning research have optimized models, including modern LLMs, for generalization, while actively penalizing memorization. However, many of today’s applications require factual, grounded answers. LLMs are also expensive to train, and provided by third party APIs. This means the knowledge of an LLM is fixed. Retrieval-augmented generation (RAG) is a way to reliably ensure models are grounded, with Pinecone as the curated source of real world information, long term memory, application domain knowledge, or whitelisted data. In the RAG paradigm, rather than just passing a user question directly to a language model, the system retrieves any documents that could be relevant in answering the question from the knowledge base, and then passes those documents (along with the original question) to the language model to generate the final response. The most popular method for RAG involves chaining together LLMs with vector databases, such as the widely used Pinecone vector DB. In this process, a numerical vector (an embedding) is calculated for all documents, and those vectors are then stored in a database optimized for storing and querying vectors. Incoming queries are vectorized as well, typically using an encoder LLM to convert the query into an embedding. |
a6154f9f69ad-75 | The query embedding is then matched via embedding similarity against the document embeddings in the vector database to retrieve the documents that are relevant to the query. Pinecone makes it easy to build high-performance vector search applications, including retrieval-augmented question answering. Pinecone can easily handle very large scales of hundreds of millions and even billions of vector embeddings. Pinecone’s large scale allows it to handle long term memory or a large corpus of rich external and domain-appropriate data so that the LLM component of RAG application can focus on tasks like summarization, inference and planning. This setup is optimal for developing a non-hallucinatory application. In addition, Pinecone is fully managed, so it is easy to change configurations and components. Combined with the tracking and evaluation with TruLens, this is a powerful combination that enables fast iteration of your application. To build an effective RAG-style LLM application, it is important to experiment with various configuration choices while setting up the vector database, and study their impact on performance metrics. In this example, we explore the downstream impact of some of these configuration choices on response quality, cost and latency with a sample LLM application built with Pinecone as the vector DB. The evaluation and experiment tracking is done with the TruLens open source library. |
a6154f9f69ad-76 | TruLens offers an extensible set of feedback functions to evaluate LLM apps and enables developers to easily track their LLM app experiments. In each component of this application, different configuration choices can be made that can impact downstream performance. Some of these choices include the following: constructing the vector DB, retrieval, and the LLM. These configuration choices are useful to keep in mind when constructing your app. In general, there is no optimal choice for all use cases. Rather, we recommend that you experiment with and evaluate a variety of configurations to find the optimal selection as you are building your application. Here we’ll download a pre-embedded dataset from the pinecone-datasets library allowing us to skip the embedding and preprocessing steps. After downloading the data, we can initialize our pinecone environment and create our first index. Here, we have our first potentially important choice, by selecting the distance metric used for our index. Note - since all fields are currently indexed by default, we’ll also pass in an additional empty metadata_config parameter to avoid duplicative (and costly) indexing. Then, we can upsert our documents into the index in batches. Now that we’ve built our index, we can start using LangChain to initialize our vector store. In RAG, we take the query as a question that is to be answered by an LLM, but the LLM must answer the question based on the information it receives from the vectorstore.
a6154f9f69ad-77 | To do this, we initialize a RetrievalQA as our app: Once we’ve set up our app, we should put together our feedback functions. As a reminder, feedback functions are an extensible method for evaluating LLMs. Here we’ll set up two feedback functions: qs_relevance and qa_relevance. They’re defined as follows: QS Relevance: query-statement relevance is the average of relevance (0 to 1) for each context chunk returned by the semantic search. QA Relevance: question-answer relevance is the relevance (again, 0 to 1) of the final answer to the original question. Our use of selectors here also requires an explanation. QA Relevance is the simpler of the two. Here, we are using .on_input_output() to specify that the feedback function should be applied on both the input and output of the application. For QS Relevance, we use TruLens selectors to locate the context chunks retrieved by our application. Let's break it down into simple parts: The result of these lines is that f_qs_relevance can now be run on apps/records and will automatically select the specified components of those apps/records. To finish up, we just wrap our Retrieval QA app with TruLens along with a list of the feedback functions we will use for eval. After submitting a number of queries to our application, we can track our experiment and evaluations with the TruLens dashboard.
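A sketch of those two feedback functions and the wrapped app with trulens_eval (the selector path, provider method names, and app_id are assumptions; qa is the RetrievalQA app initialized earlier in this guide, and the path should be adapted to your app's structure):

```python
import numpy as np
from trulens_eval import Feedback, Select, TruChain
from trulens_eval.feedback import OpenAI as OpenAIProvider

provider = OpenAIProvider()

# QA relevance: relevance of the final answer to the original question.
f_qa_relevance = Feedback(provider.relevance).on_input_output()

# QS relevance: average relevance of each retrieved context chunk to the query.
f_qs_relevance = (
    Feedback(provider.qs_relevance)
    .on_input()
    .on(Select.Record.app.combine_documents_chain._call.args.inputs.input_documents[:].page_content)
    .aggregate(np.mean)
)

# Wrap the RetrievalQA app (qa) with TruLens and the feedback functions.
tru_app = TruChain(qa, app_id="rag-cosine", feedbacks=[f_qa_relevance, f_qs_relevance])
```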
a6154f9f69ad-78 | Here is a view of our first experiment. Now that we’ve walked through the process of building our tracked RAG application using cosine as the distance metric, all we have to do for the next two experiments is rebuild the index with euclidean or dotproduct as the metric and follow the rest of the steps above as is. Because we are using OpenAI embeddings, which are normalized to length 1, dot product and cosine distance are equivalent, and Euclidean distance will also yield the same ranking. See the OpenAI docs for more information. With the same document ranking, we should not expect a difference in response quality, but computation latency may vary across the metrics. Indeed, OpenAI advises that dot product computation may be a bit faster than cosine similarity. We will be able to confirm this expected latency difference with TruLens. After doing so, we can view our evaluations for all three LLM apps sitting on top of the different indexes. All three apps are struggling with query-statement relevance. In other words, the context retrieved is only somewhat relevant to the original query. We can also see that both the Euclidean and dot-product metrics performed at a lower latency than cosine at roughly the same evaluation quality. Digging deeper into the Query Statement Relevance, we notice one problem in particular with a question about famous dental floss brands. The app responds correctly, but its answer is not backed up by the retrieved context, which does not mention any specific brands. |
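Rebuilding for the other two experiments only changes the metric passed at index creation; the upserts, vector store, app, and feedback functions are then repeated unchanged. A small hedged sketch, with hypothetical per-metric index names:

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")

for metric in ("euclidean", "dotproduct"):
    pinecone.create_index(
        f"trulens-rag-{metric}",          # hypothetical index name for this experiment
        dimension=1536,
        metric=metric,
        metadata_config={"indexed": []},
    )
    # ...then upsert the same documents and rebuild the app on top of this index.
```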
a6154f9f69ad-79 | Using a less powerful model is a common way to reduce hallucination for some applications. We’ll evaluate ada-001 in our next experiment for this purpose. Swapping out components of apps built with frameworks like LangChain is easy; in this case, we just need to load ‘text-ada-001’ from the LangChain LLM store. Adding in easy evaluation with TruLens allows us to quickly iterate through different components to find our optimal app configuration. However, this configuration with a less powerful model struggles to return a relevant answer given the context provided. For example, when asked “Which year was Hawaii’s state song written?”, the app retrieves context that contains the correct answer but fails to respond with that answer, instead simply responding with the name of the song. While our relevance function is not doing a great job here of differentiating which context chunks are relevant, we can see manually that only one chunk (the 4th) mentions the year the song was written. Narrowing our top_k, the number of context chunks retrieved by the semantic search, may help; we can do so as shown in the sketch below. top_k is implemented in LangChain’s RetrievalQA such that the documents are still retrieved by semantic search and only the top_k are passed to the LLM. Therefore, TruLens also captures all of the context chunks that are being retrieved. |
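One way to make both changes, assuming the legacy (pre-0.1) LangChain API used at the time; `vectorstore` is the LangChain Pinecone vector store initialized earlier, and chain_type="stuff" is an assumption:

```python
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Swap in the smaller completion model from the LangChain LLM store.
llm = OpenAI(model_name="text-ada-001", temperature=0)

# `vectorstore` is the LangChain Pinecone vector store initialized earlier (assumed).
# Narrow the number of context chunks so only the single best one reaches the LLM.
chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(search_kwargs={"k": 1}),
)
```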
a6154f9f69ad-80 | To calculate a QS Relevance metric that accurately matches what is passed to the LLM, we calculate the relevance of only the top context chunk retrieved, by slicing the input_documents passed into the TruLens Select function. Once we’ve done so, our final application has much improved qs_relevance, qa_relevance, and latency! With that change, our application successfully retrieves the one piece of context it needs and successfully forms an answer from that context. Even better, the application now knows what it doesn’t know. In conclusion, exploring the downstream impact of Pinecone configuration choices on response quality, cost, and latency is an important part of the LLM app development process, ensuring that we make the choices that lead to the best-performing app. Overall, TruLens and Pinecone are the perfect combination for building reliable RAG-style applications: Pinecone provides a way to efficiently store and retrieve the context used by LLM apps, and TruLens provides a way to track and evaluate each iteration of your application. |
a6154f9f69ad-81 | Page Title: Python Client
Paragraphs: This page provides installation instructions, usage examples, and a reference for the Pinecone Python client. Use the following shell command to install the Python client for use with Python versions 3.6+: Alternatively, you can install Pinecone in a Jupyter notebook: We strongly recommend installing Pinecone in a virtual environment. For more information on using Python virtual environments, see: There is a gRPC flavor of the client available, which comes with more dependencies in return for faster upload speeds. To install it, use the following command: For the latest development version: For a specific development version: The following example creates an index without a metadata configuration. By default, Pinecone indexes all metadata. The following example creates an index that only indexes the "color" metadata field. Queries against this index cannot filter based on any other metadata field. The following example returns all indexes in your project. The following example returns information about the index example-index. The following example deletes example-index. The following example changes the number of replicas for example-index. The following example returns statistics about the index example-index. The following example upserts dense vectors to example-index. The following example queries the index example-index with metadata filtering. The following example deletes vectors by ID. The following example fetches vectors by ID. |
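The shell commands and code examples referenced above are not reproduced in this extract; the sketch below approximates them for the 2.x Python client. Index names, dimensions, and vector values are placeholders.

```python
# Shell: pip install pinecone-client
# gRPC flavor (extra dependencies, faster upserts): pip install "pinecone-client[grpc]"
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")

# Index without a metadata configuration (all metadata fields are indexed).
pinecone.create_index("example-index", dimension=1024, metric="cosine")

# Index that only indexes the "color" metadata field.
pinecone.create_index("example-index-2", dimension=1024,
                      metadata_config={"indexed": ["color"]})

pinecone.list_indexes()                                 # all indexes in the project
pinecone.describe_index("example-index")                # index configuration and status
pinecone.configure_index("example-index", replicas=2)   # change the number of replicas
pinecone.delete_index("example-index-2")

index = pinecone.Index("example-index")
index.describe_index_stats()                            # statistics about the index contents

# Upsert dense vectors (values shortened for readability).
index.upsert(vectors=[
    ("vec1", [0.1] * 1024, {"color": "blue"}),
    ("vec2", [0.2] * 1024, {"color": "red"}),
])

# Query with metadata filtering.
index.query(vector=[0.1] * 1024, top_k=2,
            filter={"color": {"$eq": "blue"}}, include_metadata=True)

index.delete(ids=["vec2"])          # delete vectors by ID
index.fetch(ids=["vec1"])           # fetch vectors by ID
```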
a6154f9f69ad-82 | The following example updates vectors by ID. The following example creates the collection example-collection from example-index. The following example returns a list of the collections in the current project. The following example returns a description of the collection example-collection. For the REST API or other clients, see the API reference.
pinecone.init(**kwargs): Initialize Pinecone.
pinecone.configure_index(index_name, **kwargs): Configure an index to change pod type and number of replicas.
pinecone.create_collection(**kwargs): Create a collection from an index.
pinecone.create_index(**kwargs): Create an index.
pinecone.delete_collection('example-collection'): Delete an existing collection.
pinecone.delete_index(indexName): Delete an existing index.
pinecone.describe_collection(collectionName): Get a description of a collection.
pinecone.describe_index(indexName): Get a description of an index.
pinecone.list_collections(): Return a list of the collections in your project.
pinecone.list_indexes(): Return a list of your Pinecone indexes.
pinecone.Index(indexName): Construct an Index object.
Index.delete(**kwargs): Delete items by their ID from a single namespace.
Index.describe_index_stats(): Returns statistics about the index's contents, including the vector count per namespace and the number of dimensions.
Index.fetch(ids, **kwargs): The Fetch operation looks up and returns vectors, by ID, from a single namespace. The returned vectors include the vector data and metadata. |
a6154f9f69ad-83 | Index.query(**kwargs): Search a namespace using a query vector. Retrieves the ids of the most similar items in a namespace, along with their similarity scores.
Index.update(**kwargs): Updates vectors in a namespace. If a value is included, it will overwrite the previous value. If set_metadata is included, the values of the fields specified in it will be added or overwrite the previous value.
Index.upsert(**kwargs): Writes vectors into a namespace. If a new value is upserted for an existing vector ID, it will overwrite the previous value.
The following example upserts vectors with sparse and dense values to example-index.
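A companion sketch for the remaining operations listed above (update, collections, and a sparse-dense upsert), again with placeholder names and values:

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.Index("example-index")

# Update a vector's metadata by ID without re-upserting its values.
index.update(id="vec1", set_metadata={"color": "green"})

# Collections are static copies of an index.
pinecone.create_collection("example-collection", "example-index")
pinecone.list_collections()
pinecone.describe_collection("example-collection")

# Upsert a vector with both sparse and dense values (Python client 2.2.0+).
index.upsert(vectors=[{
    "id": "vec3",
    "values": [0.1] * 1024,                                          # dense values
    "sparse_values": {"indices": [10, 45, 160], "values": [0.5, 0.5, 0.2]},
    "metadata": {"color": "yellow"},
}])
```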
Page Title: Node.JS Client
Paragraphs: This page provides installation instructions, usage examples, and a reference for the Pinecone Node.JS client. This is a public preview ("Beta") client. Test thoroughly before using this client for production workloads. No SLAs or technical support commitments are provided for this client. Expect potential breaking changes in future releases. Use the following shell command to install the Node.JS client for use with Node.JS versions 17 and above: Alternatively, you can install Pinecone with Yarn: To initialize the client, instantiate the PineconeClient class and call the init method. The init method takes an object with the apiKey and environment properties: The following example logs all indexes in your project. The following example logs information about the index example-index. The following example sets the number of replicas and pod type for example-index. The following example upserts vectors to example-index.
pinecone.init(configuration: PineconeClientConfiguration): Initialize the Pinecone client.
pinecone.configureIndex(indexName: string, patchRequest?: PatchRequest): Configure an index to change the number of replicas or pod type.
pinecone.createCollection(requestParameters: CreateCollectionOperationRequest): Create a collection from an index.
pinecone.createIndex(requestParameters?: CreateIndexRequest): Create an index.
pinecone.deleteCollection(requestParameters: DeleteCollectionRequest): Delete a collection.
pinecone.deleteIndex(requestParameters: DeleteIndexRequest): Delete an index. |
a6154f9f69ad-84 | pinecone.describeCollection(requestParameters: DescribeCollectionRequest): Get a description of a collection.
pinecone.describeIndex(requestParameters: DescribeIndexRequest): Get a description of an index.
pinecone.listCollections(): Return a list of the collections in your project.
pinecone.listIndexes(): Return a list of your Pinecone indexes.
pinecone.Index(indexName: string): Construct an Index object.
index.delete(requestParameters: Delete1Request): Delete items by their ID from a single namespace.
index.describeIndexStats(requestParameters: DescribeIndexStatsOperationRequest): Return statistics about the index's contents. Read more about filtering for more detail.
index.fetch(requestParameters: FetchRequest): Look up and return vectors by ID from a single namespace.
index.query(requestParameters: QueryOperationRequest): Search a namespace using a query vector.
index.update(requestParameters: UpdateOperationRequest): Updates vectors in a namespace. If a value is included, it will overwrite the previous value. If setMetadata is included in the updateRequest, the values of the fields specified in it will be added or will overwrite the previous value.
index.upsert(requestParameters: UpsertOperationRequest): Writes vectors into a namespace. Parameters: vectors (Array) is an array containing the vectors to upsert (recommended batch limit is 100 vectors), where each vector has an id (str, the vector's unique id), values ([float], the vector data), and optional metadata (object); namespace (string, optional) is the namespace to upsert the vectors into. |
a6154f9f69ad-85 | Page Title: Limits
Paragraphs: This is a summary of current Pinecone limitations. For many of these, there is a workaround or we're working on increasing the limits. Max vector dimensionality is 20,000. Max size for an upsert request is 2MB. Recommended upsert limit is 100 vectors per request. Vectors may not be visible to queries immediately after upserting. You can check if the vectors were indexed by looking at the total with describe_index_stats(), although this method may not work if the index has multiple replicas. Pinecone is eventually consistent. Max value for top_k, the number of results to return, is 10,000. Max value for top_k for queries with include_metadata=True or include_data=True is 1,000. Max vectors per fetch or delete request is 1,000. There is no limit to the number of namespaces per index. Each p1 pod has enough capacity for 1M vectors with 768 dimensions. Each s1 pod has enough capacity for 5M vectors with 768 dimensions. Max metadata size per vector is 40 KB. Null metadata values are not supported. Instead of setting a key to hold a null value, we recommend you remove that key from the metadata payload. Metadata with high cardinality, such as a unique value for every vector in a large index, uses more memory than expected and can cause the pods to become full.
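To check whether recently upserted vectors have been indexed (keeping in mind the eventual-consistency and multi-replica caveats above), you can inspect the index stats. A small sketch with a placeholder index name:

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.Index("example-index")

stats = index.describe_index_stats()
print(stats.total_vector_count)   # total vectors visible to the index so far
print(stats.index_fullness)       # approximate fraction of pod capacity in use
print(stats.namespaces)           # per-namespace vector counts
```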
a6154f9f69ad-86 | Page Title: Release notes
Paragraphs: This document contains details about Pinecone releases. For information about using specific features, see our API reference. Pinecone now supports deploying projects to Azure using the new eastus-azure region. This is a public preview environment, so test thoroughly before deploying to production. The new gcp-starter region is now in public preview. This region has distinct limitations from other Starter Plan regions. gcp-starter is the default region for some new users. Indexes in the starter plan now support approximately 100,000 1536-dimensional embeddings with metadata. Capacity is proportional for other dimensionalities. Pinecone now supports new US and EU cloud regions. Pinecone now supports enterprise SSO. Contact us at [email protected] to set up your integration. Pinecone now supports 40 KB of metadata per vector. Pinecone now supports vectors with sparse and dense values. To use sparse-dense embeddings in Python, upgrade to Python client version 2.2.0. Python client version 2.2.0 with support for sparse-dense embeddings is now available on GitHub and PyPI. You can now try out our new Node.js client for Pinecone. You can now monitor your current and projected Pinecone usage with the Usage dashboard. You can now sign up for Pinecone billing through Amazon Web Services Marketplace. The latest release of the Python client makes the following changes: You can now sign up for Pinecone billing through Google Cloud Platform Marketplace. |
a6154f9f69ad-87 | Pinecone now features organizations, which allow one or more users to control billing and project settings across multiple projects owned by the same organization. The p2 pod type is now generally available and ready for production workloads. p2 pods are now available in the Starter plan and support the dotproduct distance metric. Bulk vector deletes are now up to 10x faster in many circumstances. Creating collections is now faster. Pinecone now supports keyword-aware semantic search with the new hybrid search indexes and endpoints. Hybrid search enables improved relevance for semantic search results by combining them with keyword search. This is an early access feature and is available only by signing up. The new Pinecone Status Page displays information about the status of the Pinecone service, including the status of individual cloud regions and a log of recent incidents. You can now create indexes from public collections, which are collections containing public data from real-world data sources. Currently, public collections include the Glue - SSTB collection, the TREC Question classification collection, and the SQuAD collection. You can now make static copies of your index using collections. After you create a collection from an index, you can create a new index from that collection. The new index can use any pod type and any number of pods. Collections only consume storage. This is a public preview feature and is not appropriate for production workloads. |
a6154f9f69ad-88 | You can now change the size of the pods for a live index to accommodate more vectors or queries without interrupting reads or writes. The p1 and s1 pod types are now available in 4 different sizes: 1x, 2x, 4x, and 8x. Capacity and compute per pod double with each size increment. The new p2 pod type provides search speeds of around 5ms and throughput of 200 queries per second per replica, or approximately 10x faster speeds and higher throughput than the p1 pod type, depending on your data and network conditions. The s1 and p1 pod types now offer approximately 50% higher query throughput and 50% lower latency, depending on your workload. You can now specify a metadata filter to get results for a subset of the vectors in your index by calling describe_index_stats with a filter object. The describe_index_stats operation now uses the POST HTTP request type. The filter parameter is only accepted by describe_index_stats calls using the POST request type. Calls to describe_index_stats using the GET request type are now deprecated. You can now choose to follow a guided tour in the Pinecone Console. This interactive tutorial walks you through creating your first index, upserting vectors, and querying your data. The purpose of the tour is to show you all the steps you need to start your first project in Pinecone. The create_index, delete_index, and scale_index operations now use more specific HTTP response codes that describe the type of operation that succeeded. |
a6154f9f69ad-89 | You can now store more metadata and more unique metadata values! Select which metadata fields you want to index for filtering and which fields you only wish to store and retrieve. When you index metadata fields, you can filter vector search queries using those fields. When you store metadata fields without indexing them, you keep memory utilization low, especially when you have many unique metadata values, and therefore can fit more vectors per pod. You can now specify a single query vector using the vector input. We now encourage all users to query using a single vector rather than a batch of vectors, because batching queries can lead to long response messages and query times, and single queries execute just as fast on the server side. You can now query your Pinecone index using only the ID for another vector. This is useful when you want to search for the nearest neighbors of a vector that is already stored in Pinecone. The index fullness metric in describe_index_stats() results is now more accurate. You can now perform a partial update by ID and individual value pairs. This allows you to update individual metadata fields without having to upsert a matching vector or update all metadata fields at once. Users on all plans can now see metrics for the past one (1) week in the Pinecone console. |
a6154f9f69ad-90 | Users on the Enterprise and Enterprise Dedicated plan now have access to the following metrics via the Prometheus metrics endpoint: Note: The accuracy of the pinecone_index_fullness metric is improved. This may result in changes from historically reported values. This metric is in public preview. Spark users who want to manage parallel upserts into Pinecone can now use the official Spark connector for Pinecone to upsert their data from a Spark dataframe. You can now add Boolean and float64 values to metadata JSON objects associated with a Pinecone index. The describe_index operation results now contain a value for state, which describes the state of the index. The possible values for state are Initializing, ScalingUp, ScalingDown, Terminating, and Ready. The Delete operation now supports filtering by metadata.
Page Title: Architecture
Paragraphs: This document describes the basic architecture of the Pinecone database. The Pinecone vector database is a cloud-based service deployed partly on Kubernetes. Pinecone serves control plane requests from an API gateway and routes these requests to user indexes; clients make data plane requests directly to pods. See Fig. 1 below. Figure 1: Pinecone architecture diagram For more information about security and encryption, see Security.
a6154f9f69ad-91 | Page Title: Security
Paragraphs: This document describes the security protocols and practices in use by Pinecone. Each Pinecone organization can assign users roles with respect to the organization and projects within the organization. These roles determine what permissions users have to make changes to the organization's billing, projects, and other users. To learn more, see organization roles. Pinecone provides end-to-end encryption for user data, including encryption in transit and at rest. Pinecone uses standard protocols to encrypt user data in transit. Clients open HTTPS or gRPC connections to the Pinecone API; the Pinecone API gateway uses gRPC connections to user deployments in the cloud. These HTTPS and gRPC connections use the TLS 1.2 protocol with 256-bit Advanced Encryption Standard (AES-256) encryption. See Fig. 1 below. Figure 1: Pinecone encryption in transit Traffic is also encrypted in transit between the Pinecone backend and cloud infrastructure services, such as S3 and GCS. For more information, see Google Cloud Platform and AWS security documentation. Pinecone encrypts stored data using the 256-bit Advanced Encryption Standard (AES-256) encryption algorithm.
a6154f9f69ad-92 | Page Title: Metadata Filtered Search
Paragraphs: Pinecone offers a production-ready vector database for high performance and reliable semantic search at scale. But did you know Pinecone's semantic search can be paired with the more traditional keyword search? Semantic search is a compelling technology allowing us to search using abstract concepts and meaning rather than relying on specific words. However, sometimes a simple keyword search can be just as valuable — especially if we know the exact wording of what we're searching for. Pinecone allows you to pair semantic search with a basic keyword filter. If you know that the document you're looking for contains a specific word or set of words, you simply tell Pinecone to restrict the search to only include documents with those keywords. We even support functionality for keyword search using sets of words with AND, OR, NOT logic. In this article, we will explore these features through a start-to-finish example of basic keyword search in Pinecone. The first thing we need to do is create some data. We will keep things simple with 10 sentences. On the semantic side of our search, we will introduce a new query sentence and search for the most semantically similar. To do this, we will need to create some sentence embeddings using our sentences. We will use a pretrained model from sentence-transformers for this. We now have 10 sentence embeddings, each with a dimensionality of 768. |
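A hedged sketch of the embedding step follows. The sentences are stand-ins for the ten used in the walkthrough, and all-mpnet-base-v2 is simply one sentence-transformers model that produces 768-dimensional embeddings; the exact model used originally is not shown in this extract.

```python
from sentence_transformers import SentenceTransformer

# Two stand-in sentences; the walkthrough uses ten short sentences like these.
sentences = [
    "the bananas on the counter are starting to turn brown",
    "there is more than one way to eat a mango",
]

model = SentenceTransformer("all-mpnet-base-v2")  # produces 768-dimensional embeddings
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 768) here; (10, 768) with the full set of sentences
```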
a6154f9f69ad-93 | If we just wanted semantic search, we could move on to upserting the data, but there is one more step for keyword search. Keyword search requires keywords, so we make a list of words (or 'tokens') for each sentence. To do this, we can use a word-level tokenizer from Hugging Face’s transformers. We have all the data we need for our semantic and keyword search, so we can move on to initializing a connection to our Pinecone instance. All we need here is an API key, and then we can create a new index called keyword-search (you can name it anything you like). You can find your environment in the Pinecone console under API Keys. All we do now is upsert our data, which we reformat into a list of tuples where each tuple is structured as (id, values, metadata). It’s also possible to upsert data to Pinecone using cURL. For this, we reformat our data and save it as a JSON file. Here we’ve built a JSON file containing a list of 10 records within the vectors key. Each record contains the ID, embedding, and metadata in the format: To upsert with curl, we first need the index URL, which can be found in your Pinecone dashboard; it should look something like: With that, we upsert: Now that we've upserted the data to our index, we can move on to semantic and keyword search. We'll start with a semantic search without keywords. As we did with our indexed sentences, we need to encode a query sentence. |
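Continuing from the sketch above, the token lists and the upsert might look like the following. The regex tokenizer is a deliberate simplification standing in for the Hugging Face word-level tokenizer mentioned in the text, and the index settings are assumptions.

```python
import re
import pinecone

# `sentences` and `embeddings` come from the previous sketch.
# Stand-in for the word-level tokenizer: lowercase alphanumeric tokens only.
token_lists = [re.findall(r"[a-z0-9]+", s.lower()) for s in sentences]

pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
pinecone.create_index("keyword-search", dimension=768)
index = pinecone.Index("keyword-search")

# Reformat into (id, values, metadata) tuples and upsert.
upserts = [
    (str(i), embeddings[i].tolist(), {"tokens": token_lists[i]})
    for i in range(len(sentences))
]
index.upsert(vectors=upserts)
```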
a6154f9f69ad-94 | We then find the most semantically similar sentences to this query vector xq with query; we will return all ten sentences by setting top_k=10. The response shows both the most similar sentence IDs and their respective metadata field, which contains the list of tokens we created earlier. We perform a keyword search by filtering records on the tokens metadata field. If we want to return only records that contain the token 'bananas', we can filter like so: Immediately, we can see that we return far fewer sentences. This is because there are only four records that contain the word 'bananas'. We can use those IDs to see which sentences we've returned. Looks great! We can extend the keyword search filter to include multiple words, specifying whether we'd like to return results that contain all of the words using $and, or any of the words using $or/$in. If we wanted to return records that contain either 'bananas' or 'way' with metadata filtering: {'$or': [{'tokens': 'bananas'}, {'tokens': 'way'}]} This filter will return any records that satisfy one or more of these conditions: the tokens list contains 'bananas' or the tokens list contains 'way'. Alternatively, we can write these multi-keyword $or queries using the $in condition. This modifier tells Pinecone to filter for records where the tokens list contains any word from the list we define. Both $or and $in produce the same logic as above. What if we wanted records that contain both 'bananas' and 'way'? All we do is swap $or for $and. |
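The queries described above, continuing from the same sketch (the query sentence is hypothetical):

```python
# `model` and `index` come from the previous sketches.
# Encode a query sentence with the same model.
xq = model.encode(["is there a way to ripen bananas faster"])[0].tolist()

# Plain semantic search over everything in the index.
index.query(vector=xq, top_k=10, include_metadata=True)

# Keyword filter: only records whose tokens list contains 'bananas'.
index.query(vector=xq, top_k=10, include_metadata=True,
            filter={"tokens": "bananas"})

# Records containing either 'bananas' or 'way'.
index.query(vector=xq, top_k=10, include_metadata=True,
            filter={"$or": [{"tokens": "bananas"}, {"tokens": "way"}]})

# The same logic expressed with $in.
index.query(vector=xq, top_k=10, include_metadata=True,
            filter={"tokens": {"$in": ["bananas", "way"]}})

# Records containing both 'bananas' and 'way'.
index.query(vector=xq, top_k=10, include_metadata=True,
            filter={"$and": [{"tokens": "bananas"}, {"tokens": "way"}]})
```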
a6154f9f69ad-95 | If we have a lot of keywords, writing every single one into the $and condition manually would not be fun, so we write something like this instead: And now we're restricting our semantic search to records that contain any word from 'bananas', 'way', or 'green'. If we like, we can add negation to our logic too. For example, we may want all sentences that do not contain 'bananas' but do contain 'way'. To do this, we add the not-equals modifier $ne to the 'bananas' condition. Or, if we want to avoid returning sentences that contain any of several words, we use the not-in modifier $nin. That's it for this introduction to keyword search in Pinecone. We've set up and upserted our sentence embeddings for semantic search and a token list for keyword search. Then we explored how to restrict our search to records containing a specific keyword, or even a set of keywords, using the $and and $or modifiers. |
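As a recap of the remaining filters described in this walkthrough, still in the context of the sketches above (keyword lists and query vector are placeholders):

```python
# Build a multi-keyword filter programmatically instead of writing each condition by hand.
keywords = ["bananas", "way", "green"]
any_filter = {"tokens": {"$in": keywords}}                    # contains any of the keywords
all_filter = {"$and": [{"tokens": kw} for kw in keywords]}    # contains all of the keywords

# Must contain 'way' but not 'bananas'.
ne_filter = {"$and": [{"tokens": {"$ne": "bananas"}}, {"tokens": "way"}]}

# Must not contain any of several words.
nin_filter = {"tokens": {"$nin": ["bananas", "way"]}}

# `index` and `xq` come from the previous sketches.
index.query(vector=xq, top_k=10, include_metadata=True, filter=ne_filter)
```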