--- draft: false title: Food Discovery short_description: Qdrant Food Discovery Demo recommends more similar meals based on how they look description: This demo uses data from Delivery Service. Users may like or dislike the photo of a dish, and the app will recommend more similar meals based on how they look. It's also possible to choose to view results from the restaurants within the delivery radius. preview_image: /demo/food-discovery-demo.png link: https://food-discovery.qdrant.tech/ weight: 2 sitemapExclude: True ---
demo/demo-2.md
--- draft: false title: E-commerce products categorization short_description: E-commerce products categorization demo from Qdrant vector database description: This demo shows how you can use vector database in e-commerce. Enter the name of the product and the application will understand which category it belongs to, based on the multi-language model. The dots represent clusters of products. preview_image: /demo/products_categorization_demo.jpg link: https://qdrant.to/extreme-classification-demo weight: 3 sitemapExclude: True ---
demo/demo-3.md
--- draft: false title: Startup Search short_description: Qdrant Startup Search. This demo uses short descriptions of startups to perform a semantic search description: This demo uses short descriptions of startups to perform a semantic search. Each startup description is converted into a vector using a pre-trained SentenceTransformer model and uploaded to the Qdrant vector search engine. The demo service processes text input with the same model and uses its output to query Qdrant for similar vectors. You can turn neural search on and off to compare the results with regular full-text search. preview_image: /demo/startup_search_demo.jpg link: https://qdrant.to/semantic-search-demo weight: 1 sitemapExclude: True ---
demo/demo-1.md
--- page_title: Vector Search Demos and Examples description: Interactive examples and demos of vector search based applications developed with Qdrant vector search engine. title: Vector Search Demos section_title: Interactive Live Examples ---
demo/_index.md
--- title: Examples weight: 25 # If the index.md file is empty, the link to the section will be hidden from the sidebar is_empty: false --- # Sample Use Cases Our notebooks offer complex instructions that are supported with a thorough explanation. Follow along by trying out the code and get the most out of each example. | Example | Description | Stack | |---|---|---| | [Intro to Semantic Search and Recommendation Systems](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_getting_started/getting_started.ipynb) | Learn how to get started building semantic search and recommendation systems. | Qdrant | | [Search and Recommend Newspaper Articles](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_text_data/qdrant_and_text_data.ipynb) | Work with text data to develop a semantic search and a recommendation engine for news articles. | Qdrant | | [Recommendation System for Songs](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_audio_data/03_qdrant_101_audio.ipynb) | Use Qdrant to develop a music recommendation engine based on audio embeddings. | Qdrant | | [Image Comparison System for Skin Conditions](https://colab.research.google.com/github/qdrant/examples/blob/master/qdrant_101_image_data/04_qdrant_101_cv.ipynb) | Use Qdrant to compare challenging images with labels representing different skin diseases. 
| Qdrant | | [Question and Answer System with LlamaIndex](https://githubtocolab.com/qdrant/examples/blob/master/llama_index_recency/Qdrant%20and%20LlamaIndex%20%E2%80%94%20A%20new%20way%20to%20keep%20your%20Q%26A%20systems%20up-to-date.ipynb) | Combine Qdrant and LlamaIndex to create a self-updating Q&A system. | Qdrant, LlamaIndex, Cohere | | [Extractive QA System](https://githubtocolab.com/qdrant/examples/blob/master/extractive_qa/extractive-question-answering.ipynb) | Extract answers directly from context to generate highly relevant answers. | Qdrant | | [Ecommerce Reverse Image Search](https://githubtocolab.com/qdrant/examples/blob/master/ecommerce_reverse_image_search/ecommerce-reverse-image-search.ipynb) | Accept images as search queries to receive semantically appropriate answers. | Qdrant | | [Basic RAG](https://githubtocolab.com/qdrant/examples/blob/master/rag-openai-qdrant/rag-openai-qdrant.ipynb) | Basic RAG pipeline with Qdrant and OpenAI SDKs | OpenAI, Qdrant, FastEmbed |
documentation/examples.md
--- title: Release notes weight: 42 type: external-link external_url: https://github.com/qdrant/qdrant/releases sitemapExclude: True ---
documentation/release-notes.md
--- title: Benchmarks weight: 33 draft: true ---
documentation/benchmarks.md
--- title: Community links weight: 42 --- # Community Contributions Though we do not officially maintain this content, we still feel that it is valuable, and we thank our dedicated contributors. | Link | Description | Stack | |---|---|---| | [Pinecone to Qdrant Migration](https://github.com/NirantK/qdrant_tools) | Complete Python toolset that supports migration between the two products. | Qdrant, Pinecone | | [LlamaIndex Support for Qdrant](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/QdrantIndexDemo.html) | Documentation on common integrations with LlamaIndex. | Qdrant, LlamaIndex | | [Geo.Rocks Semantic Search Tutorial](https://geo.rocks/post/qdrant-transformers-js-semantic-search/) | Create a fully working semantic search stack with a built-in search API and a minimal stack. | Qdrant, HuggingFace, SentenceTransformers, transformers.js |
documentation/community-links.md
--- title: Quickstart weight: 11 aliases: - quick_start --- # Quickstart In this short example, you will use the Python client to create a collection, load data into it, and run a basic search query. <aside role="status">Before you start, please make sure Docker is installed and running on your system.</aside> ## Download and run First, download the latest Qdrant image from Docker Hub: ```bash docker pull qdrant/qdrant ``` Then, run the service: ```bash docker run -p 6333:6333 -p 6334:6334 \ -v $(pwd)/qdrant_storage:/qdrant/storage:z \ qdrant/qdrant ``` Under the default configuration, all data will be stored in the `./qdrant_storage` directory. This will also be the only directory that both the container and the host machine can see. Qdrant is now accessible: - REST API: [localhost:6333](http://localhost:6333) - Web UI: [localhost:6333/dashboard](http://localhost:6333/dashboard) - GRPC API: [localhost:6334](http://localhost:6334) ## Initialize the client ```python from qdrant_client import QdrantClient client = QdrantClient("localhost", port=6333) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); ``` ```rust use qdrant_client::client::QdrantClient; // The Rust client uses Qdrant's GRPC interface let client = QdrantClient::from_url("http://localhost:6334").build()?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; // The Java client uses Qdrant's GRPC interface QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); ``` ```csharp using Qdrant.Client; // The C# client uses Qdrant's GRPC interface var client = new QdrantClient("localhost", 6334); ``` <aside role="status">By default, Qdrant starts with no encryption or authentication. This means anyone with network access to your machine can access your Qdrant container instance. 
Please read <a href="https://qdrant.tech/documentation/security/">Security</a> carefully for details on how to secure your instance.</aside> ## Create a collection You will be storing all of your vector data in a Qdrant collection. Let's call it `test_collection`. This collection uses the dot product distance metric to compare vectors. ```python from qdrant_client.http.models import Distance, VectorParams client.create_collection( collection_name="test_collection", vectors_config=VectorParams(size=4, distance=Distance.DOT), ) ``` ```typescript await client.createCollection("test_collection", { vectors: { size: 4, distance: "Dot" }, }); ``` ```rust use qdrant_client::qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig}; client .create_collection(&CreateCollection { collection_name: "test_collection".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 4, distance: Distance::Dot.into(), ..Default::default() })), }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; client.createCollectionAsync("test_collection", VectorParams.newBuilder().setDistance(Distance.Dot).setSize(4).build()).get(); ``` ```csharp using Qdrant.Client.Grpc; await client.CreateCollectionAsync( collectionName: "test_collection", vectorsConfig: new VectorParams { Size = 4, Distance = Distance.Dot } ); ``` <aside role="status">The TypeScript and Rust examples use async/await syntax, so they should be run inside an async block.</aside> <aside role="status">The Java examples are enclosed within a try/catch block.</aside> ## Add vectors Let's now add a few vectors with a payload. 
Payloads are other data you want to associate with the vector: ```python from qdrant_client.http.models import PointStruct operation_info = client.upsert( collection_name="test_collection", wait=True, points=[ PointStruct(id=1, vector=[0.05, 0.61, 0.76, 0.74], payload={"city": "Berlin"}), PointStruct(id=2, vector=[0.19, 0.81, 0.75, 0.11], payload={"city": "London"}), PointStruct(id=3, vector=[0.36, 0.55, 0.47, 0.94], payload={"city": "Moscow"}), PointStruct(id=4, vector=[0.18, 0.01, 0.85, 0.80], payload={"city": "New York"}), PointStruct(id=5, vector=[0.24, 0.18, 0.22, 0.44], payload={"city": "Beijing"}), PointStruct(id=6, vector=[0.35, 0.08, 0.11, 0.44], payload={"city": "Mumbai"}), ], ) print(operation_info) ``` ```typescript const operationInfo = await client.upsert("test_collection", { wait: true, points: [ { id: 1, vector: [0.05, 0.61, 0.76, 0.74], payload: { city: "Berlin" } }, { id: 2, vector: [0.19, 0.81, 0.75, 0.11], payload: { city: "London" } }, { id: 3, vector: [0.36, 0.55, 0.47, 0.94], payload: { city: "Moscow" } }, { id: 4, vector: [0.18, 0.01, 0.85, 0.80], payload: { city: "New York" } }, { id: 5, vector: [0.24, 0.18, 0.22, 0.44], payload: { city: "Beijing" } }, { id: 6, vector: [0.35, 0.08, 0.11, 0.44], payload: { city: "Mumbai" } }, ], }); console.debug(operationInfo); ``` ```rust use qdrant_client::qdrant::PointStruct; use serde_json::json; let points = vec![ PointStruct::new( 1, vec![0.05, 0.61, 0.76, 0.74], json!( {"city": "Berlin"} ) .try_into() .unwrap(), ), PointStruct::new( 2, vec![0.19, 0.81, 0.75, 0.11], json!( {"city": "London"} ) .try_into() .unwrap(), ), // ..truncated ]; let operation_info = client .upsert_points_blocking("test_collection".to_string(), None, points, None) .await?; dbg!(operation_info); ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; import static io.qdrant.client.VectorsFactory.vectors; import 
io.qdrant.client.grpc.Points.PointStruct; import io.qdrant.client.grpc.Points.UpdateResult; UpdateResult operationInfo = client .upsertAsync( "test_collection", List.of( PointStruct.newBuilder() .setId(id(1)) .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f)) .putAllPayload(Map.of("city", value("Berlin"))) .build(), PointStruct.newBuilder() .setId(id(2)) .setVectors(vectors(0.19f, 0.81f, 0.75f, 0.11f)) .putAllPayload(Map.of("city", value("London"))) .build(), PointStruct.newBuilder() .setId(id(3)) .setVectors(vectors(0.36f, 0.55f, 0.47f, 0.94f)) .putAllPayload(Map.of("city", value("Moscow"))) .build())) // Truncated .get(); System.out.println(operationInfo); ``` ```csharp using Qdrant.Client.Grpc; var operationInfo = await client.UpsertAsync( collectionName: "test_collection", points: new List<PointStruct> { new() { Id = 1, Vectors = new float[] { 0.05f, 0.61f, 0.76f, 0.74f }, Payload = { ["city"] = "Berlin" } }, new() { Id = 2, Vectors = new float[] { 0.19f, 0.81f, 0.75f, 0.11f }, Payload = { ["city"] = "London" } }, new() { Id = 3, Vectors = new float[] { 0.36f, 0.55f, 0.47f, 0.94f }, Payload = { ["city"] = "Moscow" } }, // Truncated } ); Console.WriteLine(operationInfo); ``` **Response:** ```python operation_id=0 status=<UpdateStatus.COMPLETED: 'completed'> ``` ```typescript { operation_id: 0, status: 'completed' } ``` ```rust PointsOperationResponse { result: Some(UpdateResult { operation_id: 0, status: Completed, }), time: 0.006347708, } ``` ```java operation_id: 0 status: Completed ``` ```csharp { "operationId": "0", "status": "Completed" } ``` ## Run a query Let's ask a basic question - Which of our stored vectors are most similar to the query vector `[0.2, 0.1, 0.9, 0.7]`? 
```python search_result = client.search( collection_name="test_collection", query_vector=[0.2, 0.1, 0.9, 0.7], limit=3 ) print(search_result) ``` ```typescript let searchResult = await client.search("test_collection", { vector: [0.2, 0.1, 0.9, 0.7], limit: 3, }); console.debug(searchResult); ``` ```rust use qdrant_client::qdrant::SearchPoints; let search_result = client .search_points(&SearchPoints { collection_name: "test_collection".to_string(), vector: vec![0.2, 0.1, 0.9, 0.7], limit: 3, with_payload: Some(true.into()), ..Default::default() }) .await?; dbg!(search_result); ``` ```java import java.util.List; import io.qdrant.client.grpc.Points.ScoredPoint; import io.qdrant.client.grpc.Points.SearchPoints; import static io.qdrant.client.WithPayloadSelectorFactory.enable; List<ScoredPoint> searchResult = client .searchAsync( SearchPoints.newBuilder() .setCollectionName("test_collection") .setLimit(3) .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f)) .setWithPayload(enable(true)) .build()) .get(); System.out.println(searchResult); ``` ```csharp var searchResult = await client.SearchAsync( collectionName: "test_collection", vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, limit: 3, payloadSelector: true ); Console.WriteLine(searchResult); ``` **Response:** ```python ScoredPoint(id=4, version=0, score=1.362, payload={"city": "New York"}, vector=None), ScoredPoint(id=1, version=0, score=1.273, payload={"city": "Berlin"}, vector=None), ScoredPoint(id=3, version=0, score=1.208, payload={"city": "Moscow"}, vector=None) ``` ```typescript [ { id: 4, version: 0, score: 1.362, payload: null, vector: null, }, { id: 1, version: 0, score: 1.273, payload: null, vector: null, }, { id: 3, version: 0, score: 1.208, payload: null, vector: null, }, ]; ``` ```rust SearchResponse { result: [ ScoredPoint { id: Some(PointId { point_id_options: Some(Num(4)), }), payload: {}, score: 1.362, version: 0, vectors: None, }, ScoredPoint { id: Some(PointId { point_id_options: Some(Num(1)), }), 
payload: {}, score: 1.273, version: 0, vectors: None, }, ScoredPoint { id: Some(PointId { point_id_options: Some(Num(3)), }), payload: {}, score: 1.208, version: 0, vectors: None, }, ], time: 0.003635125, } ``` ```java [id { num: 4 } payload { key: "city" value { string_value: "New York" } } score: 1.362 version: 1 , id { num: 1 } payload { key: "city" value { string_value: "Berlin" } } score: 1.273 version: 1 , id { num: 3 } payload { key: "city" value { string_value: "Moscow" } } score: 1.208 version: 1 ] ``` ```csharp [ { "id": { "num": "4" }, "payload": { "city": { "stringValue": "New York" } }, "score": 1.362, "version": "7" }, { "id": { "num": "1" }, "payload": { "city": { "stringValue": "Berlin" } }, "score": 1.273, "version": "7" }, { "id": { "num": "3" }, "payload": { "city": { "stringValue": "Moscow" } }, "score": 1.208, "version": "7" } ] ``` The results are returned in decreasing similarity order. Note that payload and vector data is missing in these results by default. See [payload and vector in the result](../concepts/search#payload-and-vector-in-the-result) on how to enable it. ## Add a filter We can narrow down the results further by filtering by payload. Let's find the closest results that include "London". 
```python from qdrant_client.http.models import Filter, FieldCondition, MatchValue search_result = client.search( collection_name="test_collection", query_vector=[0.2, 0.1, 0.9, 0.7], query_filter=Filter( must=[FieldCondition(key="city", match=MatchValue(value="London"))] ), with_payload=True, limit=3, ) print(search_result) ``` ```typescript searchResult = await client.search("test_collection", { vector: [0.2, 0.1, 0.9, 0.7], filter: { must: [{ key: "city", match: { value: "London" } }], }, with_payload: true, limit: 3, }); console.debug(searchResult); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, SearchPoints}; let search_result = client .search_points(&SearchPoints { collection_name: "test_collection".to_string(), vector: vec![0.2, 0.1, 0.9, 0.7], filter: Some(Filter::all([Condition::matches( "city", "London".to_string(), )])), limit: 2, ..Default::default() }) .await?; dbg!(search_result); ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; List<ScoredPoint> searchResult = client .searchAsync( SearchPoints.newBuilder() .setCollectionName("test_collection") .setLimit(3) .setFilter(Filter.newBuilder().addMust(matchKeyword("city", "London"))) .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f)) .setWithPayload(enable(true)) .build()) .get(); System.out.println(searchResult); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; var searchResult = await client.SearchAsync( collectionName: "test_collection", vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, filter: MatchKeyword("city", "London"), limit: 3, payloadSelector: true ); Console.WriteLine(searchResult); ``` **Response:** ```python ScoredPoint(id=2, version=0, score=0.871, payload={"city": "London"}, vector=None) ``` ```typescript [ { id: 2, version: 0, score: 0.871, payload: { city: "London" }, vector: null, }, ]; ``` ```rust SearchResponse { result: [ ScoredPoint { id: Some( PointId { point_id_options: Some( Num( 2, ), ), }, ), payload: { "city": Value { kind: Some( 
StringValue( "London", ), ), }, }, score: 0.871, version: 0, vectors: None, }, ], time: 0.004001083, } ``` ```java [id { num: 2 } payload { key: "city" value { string_value: "London" } } score: 0.871 version: 1 ] ``` ```csharp [ { "id": { "num": "2" }, "payload": { "city": { "stringValue": "London" } }, "score": 0.871, "version": "7" } ] ``` <aside role="status">To make filtered search fast on real datasets, we highly recommend creating <a href="../concepts/indexing/#payload-index">payload indexes</a>!</aside> You have just performed a vector search. You loaded vectors into a database and queried the database with a vector of your own. Qdrant found the closest results and presented you with a similarity score. ## Next steps Now you know how Qdrant works. Getting started with [Qdrant Cloud](../cloud/quickstart-cloud/) is just as easy. [Create an account](https://qdrant.to/cloud) and use our SaaS completely free. We will take care of infrastructure maintenance and software updates. To move on to some more complex examples of vector search, read our [Tutorials](../tutorials/) and create your own app with the help of our [Examples](../examples/). **Note:** There is another way of running Qdrant locally. If you are a Python developer, we recommend that you try Local Mode in [Qdrant Client](https://github.com/qdrant/qdrant-client), as it only takes a few moments to get set up.
documentation/quick-start.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "Getting Started" type: delimiter weight: 8 # Change this weight to change order of sections sitemapExclude: True ---
documentation/0-dl.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "Integrations" type: delimiter weight: 30 # Change this weight to change order of sections sitemapExclude: True ---
documentation/2-dl.md
--- title: Roadmap weight: 32 draft: true --- # Qdrant 2023 Roadmap Goals of the release: * **Maintain easy upgrades** - we plan to keep backward compatibility for at least one major version. * That means that you can upgrade Qdrant without any downtime and without any changes in your client code within one major version. * Storage should be compatible between any two consecutive versions, so you can upgrade Qdrant with automatic data migration between consecutive versions. * **Make billion-scale serving cheap** - Qdrant can already serve billions of vectors, but we want to make it even more affordable. * **Easy scaling** - our plan is to make it easy to dynamically scale Qdrant, so you can go from 1 to 1B vectors seamlessly. * **Various similarity search scenarios** - we want to support more similarity search scenarios, e.g. sparse search, grouping requests, diverse search, etc. ## Milestones * :atom_symbol: Quantization support * [ ] Scalar quantization f32 -> u8 (4x compression) * [ ] Advanced quantization (8x and 16x compression) * [ ] Support for binary vectors --- * :arrow_double_up: Scalability * [ ] Automatic replication factor adjustment * [ ] Automatic shard distribution on cluster scaling * [ ] Repartitioning support --- * :eyes: Search scenarios * [ ] Diversity search - search for vectors that are different from each other * [ ] Sparse vector search - search for vectors with a small number of non-zero values * [ ] Grouping requests - search within payload-defined groups * [ ] Different scenarios for recommendation API --- * Additionally * [ ] Extend full-text filtering support * [ ] Support for phrase queries * [ ] Support for logical operators * [ ] Simplify update of collection parameters
documentation/roadmap.md
--- title: Interfaces weight: 14 --- # Interfaces Qdrant supports these "official" clients. > **Note:** If you are using a language that is not listed here, you can use the REST API directly or generate a client for your language using [OpenAPI](https://github.com/qdrant/qdrant/blob/master/docs/redoc/master/openapi.json) or [protobuf](https://github.com/qdrant/qdrant/tree/master/lib/api/src/grpc/proto) definitions. ## Client Libraries ||Client Repository|Installation|Version| |-|-|-|-| |[![python](/docs/misc/python.webp)](https://python-client.qdrant.tech/)|**[Python](https://github.com/qdrant/qdrant-client)** + **[(Client Docs)](https://python-client.qdrant.tech/)**|`pip install qdrant-client[fastembed]`|[Latest Release](https://github.com/qdrant/qdrant-client/releases)| |![typescript](/docs/misc/ts.webp)|**[JavaScript / Typescript](https://github.com/qdrant/qdrant-js)**|`npm install @qdrant/js-client-rest`|[Latest Release](https://github.com/qdrant/qdrant-js/releases)| |![rust](/docs/misc/rust.webp)|**[Rust](https://github.com/qdrant/rust-client)**|`cargo add qdrant-client`|[Latest Release](https://github.com/qdrant/rust-client/releases)| |![golang](/docs/misc/go.webp)|**[Go](https://github.com/qdrant/go-client)**|`go get github.com/qdrant/go-client`|[Latest Release](https://github.com/qdrant/go-client)| |![.net](/docs/misc/dotnet.webp)|**[.NET](https://github.com/qdrant/qdrant-dotnet)**|`dotnet add package Qdrant.Client`|[Latest Release](https://github.com/qdrant/qdrant-dotnet/releases)| |![java](/docs/misc/java.webp)|**[Java](https://github.com/qdrant/java-client)**|[Available on Maven Central](https://central.sonatype.com/artifact/io.qdrant/client)|[Latest Release](https://github.com/qdrant/java-client/releases)| ## API Reference All interaction with Qdrant takes place via the REST API. We recommend using REST API if you are using Qdrant for the first time or if you are working on a prototype. 
|API|Documentation| |-|-| | REST API |[OpenAPI Specification](https://qdrant.github.io/qdrant/redoc/index.html)| | gRPC API |[gRPC Documentation](https://github.com/qdrant/qdrant/blob/master/docs/grpc/docs.md)| ### gRPC Interface The gRPC methods follow the same principles as REST. For each REST endpoint, there is a corresponding gRPC method. As per the [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml), the gRPC interface is available on the specified port. ```yaml service: grpc_port: 6334 ``` <aside role="status">If you decide to use gRPC, you must expose the port when starting Qdrant.</aside> Running the service inside Docker looks like this: ```bash docker run -p 6333:6333 -p 6334:6334 \ -v $(pwd)/qdrant_storage:/qdrant/storage:z \ qdrant/qdrant ``` **When to use gRPC:** The choice between gRPC and the REST API is a trade-off between convenience and speed. gRPC is a binary protocol and can be more challenging to debug. We recommend using gRPC if you are already familiar with Qdrant and are trying to optimize the performance of your application. ## Qdrant Web UI Qdrant's Web UI is an intuitive and efficient graphical interface for your Qdrant collections, REST API and data points. In the **Console**, you may use the REST API to interact with Qdrant, while in **Collections**, you can manage all the collections and upload snapshots. ![Qdrant Web UI](/articles_data/qdrant-1.3.x/web-ui.png) ### Accessing the Web UI First, run the Docker container: ```bash docker run -p 6333:6333 -p 6334:6334 \ -v $(pwd)/qdrant_storage:/qdrant/storage:z \ qdrant/qdrant ``` The GUI is available at `http://localhost:6333/dashboard`
documentation/interfaces.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "Support" type: delimiter weight: 40 # Change this weight to change order of sections sitemapExclude: True ---
documentation/3-dl.md
--- title: Practice Datasets weight: 41 --- # Common Datasets in Snapshot Format You may find that creating embeddings from datasets is a very resource-intensive task. If you need a practice dataset, feel free to pick one of the ready-made snapshots on this page. These snapshots contain pre-computed vectors that you can easily import into your Qdrant instance. ## Available datasets Our snapshots are usually generated from publicly available datasets, which are often used for non-commercial or academic purposes. The following datasets are currently available. Please click on a dataset name to see its detailed description. | Dataset | Model | Vector size | Documents | Size | Qdrant snapshot | HF Hub | |--------------------------------------------|-----------------------------------------------------------------------------|-------------|-----------|--------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------| | [Arxiv.org titles](#arxivorg-titles) | [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) | 768 | 2.3M | 7.1 GB | [Download](https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) | | [Arxiv.org abstracts](#arxivorg-abstracts) | [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) | 768 | 2.3M | 8.4 GB | [Download](https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/arxiv-abstracts-instructorxl-embeddings) | | [Wolt food](#wolt-food) | [clip-ViT-B-32](https://huggingface.co/sentence-transformers/clip-ViT-B-32) | 512 | 1.7M | 7.9 GB | [Download](https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot) | 
[Open](https://huggingface.co/datasets/Qdrant/wolt-food-clip-ViT-B-32-embeddings) | Once you download a snapshot, you need to [restore it](/documentation/concepts/snapshots/#restore-snapshot) using the Qdrant CLI upon startup or through the API. ## Qdrant on Hugging Face <p align="center"> <a href="https://huggingface.co/Qdrant"> <img style="width: 500px; max-width: 100%;" src="/content/images/hf-logo-with-title.svg" alt="HuggingFace" title="HuggingFace"> </a> </p> [Hugging Face](https://huggingface.co/) provides a platform for sharing and using ML models and datasets. [Qdrant](https://huggingface.co/Qdrant) is one of the organizations there! We aim to provide you with datasets containing neural embeddings that you can use to practice with Qdrant and build your applications based on semantic search. **Please let us know if you'd like to see a specific dataset!** If you are not familiar with [Hugging Face datasets](https://huggingface.co/docs/datasets/index), or would like to know how to combine it with Qdrant, please refer to the [tutorial](/documentation/tutorials/huggingface-datasets/). ## Arxiv.org [Arxiv.org](https://arxiv.org) is a highly-regarded open-access repository of electronic preprints in multiple fields. Operated by Cornell University, arXiv allows researchers to share their findings with the scientific community and receive feedback before they undergo peer review for formal publication. Its archives host millions of scholarly articles, making it an invaluable resource for those looking to explore the cutting edge of scientific research. With a high frequency of daily submissions from scientists around the world, arXiv forms a comprehensive, evolving dataset that is ripe for mining, analysis, and the development of future innovations. <aside role="status"> Arxiv.org snapshots were created using precomputed embeddings exposed by <a href="https://alex.macrocosm.so/download">the Alexandria Index</a>. 
</aside> ### Arxiv.org titles This dataset contains embeddings generated from the paper titles only. Each vector has a payload with the title used to create it, along with the DOI (Digital Object Identifier). ```json { "title": "Nash Social Welfare for Indivisible Items under Separable, Piecewise-Linear Concave Utilities", "DOI": "1612.05191" } ``` The embeddings were generated with the InstructorXL model using the following instruction: > Represent the Research Paper title for retrieval; Input: The following code snippet shows how to generate embeddings using the InstructorXL model: ```python from InstructorEmbedding import INSTRUCTOR model = INSTRUCTOR("hkunlp/instructor-xl") sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments" instruction = "Represent the Research Paper title for retrieval; Input:" embeddings = model.encode([[instruction, sentence]]) ``` The snapshot of the dataset can be downloaded [here](https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot). #### Importing the dataset The easiest way to use the provided dataset is to recover it via the API by passing the URL as a location. It also works in [Qdrant Cloud](https://cloud.qdrant.io/). The following code snippet shows how to create a new collection and fill it with the snapshot data: ```http request PUT /collections/{collection_name}/snapshots/recover { "location": "https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot" } ``` ### Arxiv.org abstracts This dataset contains embeddings generated from the paper abstracts. Each vector has a payload with the abstract used to create it, along with the DOI (Digital Object Identifier). ```json { "abstract": "Recently Cole and Gkatzelis gave the first constant factor approximation\nalgorithm for the problem of allocating indivisible items to agents, under\nadditive valuations, so as to maximize the Nash Social Welfare. 
We give\nconstant factor algorithms for a substantial generalization of their problem --\nto the case of separable, piecewise-linear concave utility functions. We give\ntwo such algorithms, the first using market equilibria and the second using the\ntheory of stable polynomials.\n In AGT, there is a paucity of methods for the design of mechanisms for the\nallocation of indivisible goods and the result of Cole and Gkatzelis seemed to\nbe taking a major step towards filling this gap. Our result can be seen as\nanother step in this direction.\n", "DOI": "1612.05191" } ``` The embeddings generated with InstructorXL model have been generated using the following instruction: > Represent the Research Paper abstract for retrieval; Input: The following code snippet shows how to generate embeddings using the InstructorXL model: ```python from InstructorEmbedding import INSTRUCTOR model = INSTRUCTOR("hkunlp/instructor-xl") sentence = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train." instruction = "Represent the Research Paper abstract for retrieval; Input:" embeddings = model.encode([[instruction, sentence]]) ``` The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot). #### Importing the dataset The easiest way to use the provided dataset is to recover it via the API by passing the URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). 
The following code snippet shows how to create a new collection and fill it with the snapshot data:

```http request
PUT /collections/{collection_name}/snapshots/recover
{
  "location": "https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot"
}
```

## Wolt food

Our [Food Discovery demo](https://food-discovery.qdrant.tech/) relies on the dataset of food images from the Wolt app. Each point in the collection represents a dish with a single image. The image is represented as a vector of 512 float numbers. There is also a JSON payload attached to each point, which looks similar to this:

```json
{
    "cafe": {
        "address": "VGX7+6R2 Vecchia Napoli, Valletta",
        "categories": ["italian", "pasta", "pizza", "burgers", "mediterranean"],
        "location": {"lat": 35.8980154, "lon": 14.5145106},
        "menu_id": "610936a4ee8ea7a56f4a372a",
        "name": "Vecchia Napoli Is-Suq Tal-Belt",
        "rating": 9,
        "slug": "vecchia-napoli-skyparks-suq-tal-belt"
    },
    "description": "Tomato sauce, mozzarella fior di latte, crispy guanciale, Pecorino Romano cheese and a hint of chilli",
    "image": "https://wolt-menu-images-cdn.wolt.com/menu-images/610936a4ee8ea7a56f4a372a/005dfeb2-e734-11ec-b667-ced7a78a5abd_l_amatriciana_pizza_joel_gueller1.jpeg",
    "name": "L'Amatriciana"
}
```

The embeddings were generated with the clip-ViT-B-32 model using the following code snippet:

```python
from PIL import Image
from sentence_transformers import SentenceTransformer

image_path = "5dbfd216-5cce-11eb-8122-de94874ad1c8_ns_takeaway_seelachs_ei_baguette.jpeg"

model = SentenceTransformer("clip-ViT-B-32")
embedding = model.encode(Image.open(image_path))
```

A snapshot of the dataset can be downloaded [here](https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot).

#### Importing the dataset

The easiest way to use the provided dataset is to recover it via the API by passing the URL as a location. It also works in [Qdrant Cloud](https://cloud.qdrant.io/).
The following code snippet shows how to create a new collection and fill it with the snapshot data: ```http request PUT /collections/{collection_name}/snapshots/recover { "location": "https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot" } ```
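The same recovery call can also be issued from Python with nothing but the standard library. A sketch, assuming a Qdrant instance at `localhost:6333`; the collection name `wolt_food` is purely illustrative:

```python
import json
import urllib.request

QDRANT_URL = "http://localhost:6333"  # assumed local instance
SNAPSHOT_URL = (
    "https://snapshots.qdrant.io/"
    "wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot"
)

# Build the same PUT /collections/{collection_name}/snapshots/recover
# request shown above, with the snapshot location as the JSON body.
request = urllib.request.Request(
    url=f"{QDRANT_URL}/collections/wolt_food/snapshots/recover",
    data=json.dumps({"location": SNAPSHOT_URL}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
# urllib.request.urlopen(request)  # uncomment with a running Qdrant instance
```

Sending the request creates (or replaces) the collection from the snapshot contents, exactly like the REST snippet above.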
documentation/datasets.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "User Manual" type: delimiter weight: 20 # Change this weight to change order of sections sitemapExclude: True ---
documentation/1-dl.md
---
title: Qdrant Documentation
weight: 10
---

# Documentation

**Qdrant (read: quadrant)** is a vector similarity search engine. Use our documentation to develop a production-ready service with a convenient API to store, search, and manage vectors with an additional payload. Qdrant's expanding features allow for all sorts of neural network or semantic-based matching, faceted search, and other applications.

## First-Time Users:

There are three ways to use Qdrant:

1. [**Run a Docker image**](quick-start/) if you don't have a Python development environment. Set up a local Qdrant server and storage in a few moments.
2. [**Get the Python client**](https://github.com/qdrant/qdrant-client) if you're familiar with Python. Just `pip install qdrant-client`. The client also supports an in-memory database.
3. [**Spin up a Qdrant Cloud cluster:**](cloud/) the recommended method to run Qdrant in production. Read [Quickstart](cloud/quickstart-cloud/) to set up your first instance.

### Recommended Workflow:

![Local mode workflow](https://raw.githubusercontent.com/qdrant/qdrant-client/master/docs/images/try-develop-deploy.png)

First, try Qdrant locally using the [Qdrant Client](https://github.com/qdrant/qdrant-client) and with the help of our [Tutorials](tutorials/) and Guides. Develop a sample app from our [Examples](examples/) list and try it using a [Qdrant Docker](guides/installation/) container. Then, when you are ready for production, deploy to a Free Tier [Qdrant Cloud](cloud/) cluster.

### Try Qdrant with Practice Data:

You may always use our [Practice Datasets](datasets/) to build with Qdrant. This page will be regularly updated with dataset snapshots you can use to bootstrap complete projects.

## Popular Topics:

| Tutorial | Description | Tutorial | Description |
|----------|-------------|----------|-------------|
| [Installation](guides/installation/) | Different ways to install Qdrant. | [Collections](concepts/collections/) | Learn about the central concept behind Qdrant. |
| [Configuration](guides/configuration/) | Update the default configuration. | [Bulk Upload](tutorials/bulk-upload/) | Efficiently upload a large number of vectors. |
| [Optimization](tutorials/optimize/) | Optimize Qdrant's resource usage. | [Multitenancy](tutorials/multiple-partitions/) | Set up Qdrant for multiple independent users. |

## Common Use Cases:

Qdrant is ideal for deploying applications based on the matching of embeddings produced by neural network encoders. Check out the [Examples](examples/) section to learn more about common use cases. Also, you can visit the [Tutorials](tutorials/) page to learn how to work with Qdrant in different ways.

| Use Case | Description | Stack |
|----------|-------------|-------|
| [Semantic Search for Beginners](tutorials/search-beginners/) | Build a search engine locally with our most basic instruction set. | Qdrant |
| [Build a Simple Neural Search](tutorials/neural-search/) | Build and deploy a neural search. [Check out the live demo app.](https://demo.qdrant.tech/#/) | Qdrant, BERT, FastAPI |
| [Build a Search with Aleph Alpha](tutorials/aleph-alpha-search/) | Build a simple semantic search that combines text and image data. | Qdrant, Aleph Alpha |
| [Developing Recommendations Systems](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_getting_started/getting_started.ipynb) | Learn how to get started building semantic search and recommendation systems. | Qdrant |
| [Search and Recommend Newspaper Articles](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_text_data/qdrant_and_text_data.ipynb) | Work with text data to develop a semantic search and a recommendation engine for news articles. | Qdrant |
| [Recommendation System for Songs](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_audio_data/03_qdrant_101_audio.ipynb) | Use Qdrant to develop a music recommendation engine based on audio embeddings. | Qdrant |
| [Image Comparison System for Skin Conditions](https://colab.research.google.com/github/qdrant/examples/blob/master/qdrant_101_image_data/04_qdrant_101_cv.ipynb) | Use Qdrant to compare challenging images with labels representing different skin diseases. | Qdrant |
| [Question and Answer System with LlamaIndex](https://githubtocolab.com/qdrant/examples/blob/master/llama_index_recency/Qdrant%20and%20LlamaIndex%20%E2%80%94%20A%20new%20way%20to%20keep%20your%20Q%26A%20systems%20up-to-date.ipynb) | Combine Qdrant and LlamaIndex to create a self-updating Q&A system. | Qdrant, LlamaIndex, Cohere |
| [Extractive QA System](https://githubtocolab.com/qdrant/examples/blob/master/extractive_qa/extractive-question-answering.ipynb) | Extract answers directly from context to generate highly relevant answers. | Qdrant |
| [Ecommerce Reverse Image Search](https://githubtocolab.com/qdrant/examples/blob/master/ecommerce_reverse_image_search/ecommerce-reverse-image-search.ipynb) | Accept images as search queries to receive semantically appropriate answers. | Qdrant |
documentation/_index.md
---
title: Contribution Guidelines
weight: 35
draft: true
---

# How to contribute

If you are a Qdrant user - Data Scientist, ML Engineer, or MLOps, the best contribution would be feedback on your experience with Qdrant. Let us know whenever you have a problem, encounter unexpected behavior, or see a lack of documentation. You can do it in any convenient way - create an [issue](https://github.com/qdrant/qdrant/issues), start a [discussion](https://github.com/qdrant/qdrant/discussions), or drop us a [message](https://discord.gg/tdtYvXjC4h). If you use Qdrant or Metric Learning in your projects, we'd love to hear your story! Feel free to share articles and demos in our community.

For those familiar with Rust - check out our [contribution guide](https://github.com/qdrant/qdrant/blob/master/CONTRIBUTING.md). If you have problems with code or architecture understanding - reach us at any time. Feeling confident and want to contribute more? - Come to [work with us](https://qdrant.join.com/)!
documentation/contribution-guidelines.md
--- title: API Reference weight: 20 type: external-link external_url: https://qdrant.github.io/qdrant/redoc/index.html sitemapExclude: True ---
documentation/api-reference.md
---
title: OpenAI
weight: 800
aliases: [ ../integrations/openai/ ]
---

# OpenAI

Qdrant can also easily work with [OpenAI embeddings](https://platform.openai.com/docs/guides/embeddings/embeddings). There is an official OpenAI Python package that simplifies obtaining them, and it can be installed with pip:

```bash
pip install openai
```

Once installed, the package exposes a method for retrieving the embedding of a given text. OpenAI requires an API key that has to be provided either as an environment variable `OPENAI_API_KEY` or set in the source code directly, as presented below:

```python
import openai
import qdrant_client
from qdrant_client.http.models import Batch

# Choose one of the available models:
# https://platform.openai.com/docs/models/embeddings
embedding_model = "text-embedding-ada-002"

openai_client = openai.Client(
    api_key="<< your_api_key >>"
)

response = openai_client.embeddings.create(
    input="The best vector database",
    model=embedding_model,
)

qdrant_client = qdrant_client.QdrantClient()
qdrant_client.upsert(
    collection_name="MyCollection",
    points=Batch(
        ids=[1],
        vectors=[response.data[0].embedding],
    ),
)
```
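The `input` argument of the embeddings endpoint also accepts a list of texts, so larger corpora can be embedded and upserted in batches rather than one request per text. A minimal batching helper (a sketch; the batch size of 100 is arbitrary):

```python
def chunked(items, size=100):
    """Yield successive fixed-size batches from a list of texts."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Example: 250 texts split into three batches of at most 100.
batches = list(chunked([f"text {i}" for i in range(250)]))
print([len(batch) for batch in batches])  # -> [100, 100, 50]
```

Each batch can then be passed as `input=batch` to `embeddings.create`, and the resulting vectors upserted with matching ids.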
documentation/embeddings/openai.md
--- title: AWS Bedrock weight: 1000 --- # Bedrock Embeddings You can use [AWS Bedrock](https://aws.amazon.com/bedrock/) with Qdrant. AWS Bedrock supports multiple [embedding model providers](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html). You'll need the following information from your AWS account: - Region - Access key ID - Secret key To configure your credentials, review the following AWS article: [How do I create an AWS access key](https://repost.aws/knowledge-center/create-access-key). With the following code sample, you can generate embeddings using the [Titan Embeddings G1 - Text model](https://docs.aws.amazon.com/bedrock/latest/userguide/titan-embedding-models.html) which produces sentence embeddings of size 1536. ```python # Install the required dependencies # pip install boto3 qdrant_client import json import boto3 from qdrant_client import QdrantClient, models session = boto3.Session() bedrock_client = session.client( "bedrock-runtime", region_name="<YOUR_AWS_REGION>", aws_access_key_id="<YOUR_AWS_ACCESS_KEY_ID>", aws_secret_access_key="<YOUR_AWS_SECRET_KEY>", ) qdrant_client = QdrantClient(location="http://localhost:6333") qdrant_client.create_collection( "{collection_name}", vectors_config=models.VectorParams(size=1536, distance=models.Distance.COSINE), ) body = json.dumps({"inputText": "Some text to generate embeddings for"}) response = bedrock_client.invoke_model( body=body, modelId="amazon.titan-embed-text-v1", accept="application/json", contentType="application/json", ) response_body = json.loads(response.get("body").read()) qdrant_client.upsert( "{collection_name}", points=[models.PointStruct(id=1, vector=response_body["embedding"])], ) ``` ```javascript // Install the required dependencies // npm install @aws-sdk/client-bedrock-runtime @qdrant/js-client-rest import { BedrockRuntimeClient, InvokeModelCommand, } from "@aws-sdk/client-bedrock-runtime"; import { QdrantClient } from '@qdrant/js-client-rest'; const main = 
async () => {
    const bedrockClient = new BedrockRuntimeClient({
        region: "<YOUR_AWS_REGION>",
        credentials: {
            accessKeyId: "<YOUR_AWS_ACCESS_KEY_ID>",
            secretAccessKey: "<YOUR_AWS_SECRET_KEY>",
        },
    });

    const qdrantClient = new QdrantClient({ url: 'http://localhost:6333' });

    await qdrantClient.createCollection("{collection_name}", {
        vectors: {
            size: 1536,
            distance: 'Cosine',
        }
    });

    const response = await bedrockClient.send(
        new InvokeModelCommand({
            modelId: "amazon.titan-embed-text-v1",
            body: JSON.stringify({
                inputText: "Some text to generate embeddings for",
            }),
            contentType: "application/json",
            accept: "application/json",
        })
    );

    const body = new TextDecoder().decode(response.body);

    await qdrantClient.upsert("{collection_name}", {
        points: [
            {
                id: 1,
                vector: JSON.parse(body).embedding,
            },
        ],
    });
}

main();
```
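The `json.loads(response.get("body").read())` step in the Python sample above unpacks Titan's JSON response, which carries the vector under the `embedding` key (alongside `inputTextTokenCount`). A quick illustration with a mocked response body; the values here are fabricated, and only the field names follow Titan's documented shape:

```python
import json

# Mocked raw bytes, shaped like the body returned by invoke_model
raw_body = json.dumps(
    {"embedding": [0.12, -0.07, 0.33], "inputTextTokenCount": 7}
).encode("utf-8")

response_body = json.loads(raw_body)
vector = response_body["embedding"]
print(len(vector))  # -> 3
```

With a real call, `vector` would be the 1536-dimensional Titan embedding ready for `upsert`.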
documentation/embeddings/bedrock.md
---
title: Aleph Alpha
weight: 900
aliases: [ ../integrations/aleph-alpha/ ]
---

Aleph Alpha is a multimodal and multilingual embeddings provider. Their API allows creating embeddings for both text and images in the same latent space. They maintain an [official Python client](https://github.com/Aleph-Alpha/aleph-alpha-client) that can be installed with pip:

```bash
pip install aleph-alpha-client
```

Both synchronous and asynchronous clients are available. Obtaining the embedding for an image and storing it in Qdrant can be done in the following way:

```python
import qdrant_client
from aleph_alpha_client import (
    Prompt,
    AsyncClient,
    SemanticEmbeddingRequest,
    SemanticRepresentation,
    ImagePrompt
)
from qdrant_client.http.models import Batch

aa_token = "<< your_token >>"
model = "luminous-base"

qdrant_client = qdrant_client.QdrantClient()
async with AsyncClient(token=aa_token) as client:
    prompt = ImagePrompt.from_file("./path/to/the/image.jpg")
    prompt = Prompt.from_image(prompt)

    query_params = {
        "prompt": prompt,
        "representation": SemanticRepresentation.Symmetric,
        "compress_to_size": 128,
    }
    query_request = SemanticEmbeddingRequest(**query_params)
    query_response = await client.semantic_embed(
        request=query_request, model=model
    )

    qdrant_client.upsert(
        collection_name="MyCollection",
        points=Batch(
            ids=[1],
            vectors=[query_response.embedding],
        )
    )
```

If we wanted to create text embeddings with the same model, we wouldn't use `ImagePrompt.from_file`, but simply pass the input text to the `Prompt.from_text` method.
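For completeness, a sketch of that text variant, mirroring the image example above (same representation and `compress_to_size`); the helper name `embed_text` is purely illustrative:

```python
from aleph_alpha_client import (
    Prompt,
    AsyncClient,
    SemanticEmbeddingRequest,
    SemanticRepresentation,
)

async def embed_text(client: AsyncClient, text: str):
    # The only change compared to the image example: the prompt is
    # built from a string instead of an image file.
    request = SemanticEmbeddingRequest(
        prompt=Prompt.from_text(text),
        representation=SemanticRepresentation.Symmetric,
        compress_to_size=128,
    )
    response = await client.semantic_embed(request=request, model="luminous-base")
    return response.embedding
```

Because text and image embeddings share the same latent space, vectors obtained this way can be searched against the image vectors stored above.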
documentation/embeddings/aleph-alpha.md
---
title: Cohere
weight: 700
aliases: [ ../integrations/cohere/ ]
---

# Cohere

Qdrant is compatible with Cohere [co.embed API](https://docs.cohere.ai/reference/embed) and its official Python SDK that can be installed like any other package:

```bash
pip install cohere
```

The embeddings returned by co.embed API can be used directly in the Qdrant client's calls:

```python
import cohere
import qdrant_client
from qdrant_client.http.models import Batch

cohere_client = cohere.Client("<< your_api_key >>")
qdrant_client = qdrant_client.QdrantClient()
qdrant_client.upsert(
    collection_name="MyCollection",
    points=Batch(
        ids=[1],
        vectors=cohere_client.embed(
            model="large",
            texts=["The best vector database"],
        ).embeddings,
    ),
)
```

If you are interested in seeing an end-to-end project created with co.embed API and Qdrant, please check out the "[Question Answering as a Service with Cohere and Qdrant](https://qdrant.tech/articles/qa-with-cohere-and-qdrant/)" article.

## Embed v3

Embed v3 is a new family of Cohere models, released in November 2023. The new models require passing an additional parameter to the API call: `input_type`. It determines the type of task you want to use the embeddings for.

- `input_type="search_document"` - for documents to store in Qdrant
- `input_type="search_query"` - for search queries to find the most relevant documents
- `input_type="classification"` - for classification tasks
- `input_type="clustering"` - for text clustering

While implementing semantic search applications, such as RAG, you should use `input_type="search_document"` for the indexed documents and `input_type="search_query"` for the search queries.
The following example shows how to index documents with the Embed v3 model:

```python
import cohere
import qdrant_client
from qdrant_client.http.models import Batch

cohere_client = cohere.Client("<< your_api_key >>")
qdrant_client = qdrant_client.QdrantClient()
qdrant_client.upsert(
    collection_name="MyCollection",
    points=Batch(
        ids=[1],
        vectors=cohere_client.embed(
            model="embed-english-v3.0",  # New Embed v3 model
            input_type="search_document",  # Input type for documents
            texts=["Qdrant is a vector database written in Rust"],
        ).embeddings,
    ),
)
```

Once the documents are indexed, you can search for the most relevant documents using the Embed v3 model:

```python
qdrant_client.search(
    collection_name="MyCollection",
    query_vector=cohere_client.embed(
        model="embed-english-v3.0",  # New Embed v3 model
        input_type="search_query",  # Input type for search queries
        texts=["The best vector database"],
    ).embeddings[0],
)
```

<aside role="status">
According to Cohere's documentation, all v3 models can use dot product, cosine similarity, and Euclidean distance as the similarity metric, as all metrics return identical rankings.
</aside>
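The identical-rankings property in the aside holds whenever embeddings are unit-normalized (which the claim implies for the v3 models): for unit vectors the dot product equals cosine similarity, and the squared Euclidean distance satisfies `||a - b||^2 = 2 - 2 * dot(a, b)`, so it decreases monotonically as the dot product grows. A quick pure-Python check of that identity:

```python
import math

def normalize(vector):
    length = math.sqrt(sum(x * x for x in vector))
    return [x / length for x in vector]

a = normalize([3.0, 4.0])
b = normalize([1.0, 2.0])

dot = sum(x * y for x, y in zip(a, b))  # equals cosine for unit vectors
squared_euclidean = sum((x - y) ** 2 for x, y in zip(a, b))

# For unit vectors: ||a - b||^2 = 2 - 2 * dot(a, b),
# so ranking by any of the three metrics yields the same order.
assert abs(squared_euclidean - (2 - 2 * dot)) < 1e-12
```

In practice this means you can pick any of the three metrics when creating the collection without changing which neighbors are returned.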
documentation/embeddings/cohere.md
---
title: "Nomic"
weight: 1100
---

# Nomic

The `nomic-embed-text-v1` model is an open source [8192 context length](https://github.com/nomic-ai/contrastors) text encoder. While you can find it on the [Hugging Face Hub](https://huggingface.co/nomic-ai/nomic-embed-text-v1), you may find it easier to obtain embeddings through the [Nomic Text Embeddings API](https://docs.nomic.ai/reference/endpoints/nomic-embed-text). Once the official Python client is installed, you can use it to generate embeddings, or you can call the API through direct HTTP requests.

<aside role="status">Using Nomic Text Embeddings requires configuring the Nomic API token</aside>

You can use Nomic embeddings directly in Qdrant client calls. There is a difference in the way the embeddings are obtained for documents and queries. The `task_type` parameter defines the type of embeddings that you get.

For documents, set the `task_type` to `search_document`:

```python
from qdrant_client import QdrantClient, models
from nomic import embed

output = embed.text(
    texts=["Qdrant is the best vector database!"],
    model="nomic-embed-text-v1",
    task_type="search_document",
)

qdrant_client = QdrantClient()
qdrant_client.upsert(
    collection_name="my-collection",
    points=models.Batch(
        ids=[1],
        vectors=output["embeddings"],
    ),
)
```

To query the collection, set the `task_type` to `search_query`:

```python
output = embed.text(
    texts=["What is the best vector database?"],
    model="nomic-embed-text-v1",
    task_type="search_query",
)

qdrant_client.search(
    collection_name="my-collection",
    query_vector=output["embeddings"][0],
)
```

For more information, see the Nomic documentation on [Text embeddings](https://docs.nomic.ai/reference/endpoints/nomic-embed-text).
documentation/embeddings/nomic.md
---
title: Gemini
weight: 700
---

# Gemini

Qdrant is compatible with the Gemini Embedding Model API and its official Python SDK, which can be installed like any other package.

Gemini is a new family of Google models, released in December 2023, that succeeds the PaLM family. The new embedding models succeed the previous Gecko Embedding Model.

In the latest models, an additional parameter, `task_type`, can be passed to the API call. This parameter designates the intended use of the embeddings. The Embedding Model API supports the following task types:

1. `retrieval_query`: Specifies the given text is a query in a search/retrieval setting.
2. `retrieval_document`: Specifies the given text is a document from the corpus being searched.
3. `semantic_similarity`: Specifies the given text will be used for Semantic Text Similarity.
4. `classification`: Specifies that the given text will be classified.
5. `clustering`: Specifies that the embeddings will be used for clustering.
6. `task_type_unspecified`: Unset value, which will default to one of the other values.

If you're building a semantic search application, such as RAG, you should use `task_type="retrieval_document"` for the indexed documents and `task_type="retrieval_query"` for the search queries. The following example shows how to do this with Qdrant:

## Setup

```bash
pip install google-generativeai
```

Let's see how to use the Embedding Model API to embed a document for retrieval.
The following example shows how to embed a document with the `models/embedding-001` model with the `retrieval_document` task type:

## Embedding a document

```python
import pathlib
import google.generativeai as genai
import qdrant_client

GEMINI_API_KEY = "YOUR GEMINI API KEY"  # add your key here

genai.configure(api_key=GEMINI_API_KEY)

result = genai.embed_content(
    model="models/embedding-001",
    content="Qdrant is the best vector search engine to use with Gemini",
    task_type="retrieval_document",
    title="Qdrant x Gemini",
)
```

The returned result is a dictionary with a key: `embedding`. The value of this key is a list of floats representing the embedding of the document.

## Indexing documents with Qdrant

```python
from qdrant_client.http.models import Batch

qdrant_client = qdrant_client.QdrantClient()
qdrant_client.upsert(
    collection_name="GeminiCollection",
    points=Batch(
        ids=[1],
        vectors=[
            genai.embed_content(
                model="models/embedding-001",
                content="Qdrant is the best vector search engine to use with Gemini",
                task_type="retrieval_document",
                title="Qdrant x Gemini",
            )["embedding"]
        ],
    ),
)
```

## Searching for documents with Qdrant

Once the documents are indexed, you can search for the most relevant documents using the same model with the `retrieval_query` task type:

```python
qdrant_client.search(
    collection_name="GeminiCollection",
    query_vector=genai.embed_content(
        model="models/embedding-001",
        content="What is the best vector database to use with Gemini?",
        task_type="retrieval_query",
    )["embedding"],
)
```

## Using Gemini Embedding Models with Binary Quantization

You can use Gemini Embedding Models with [Binary Quantization](/articles/binary-quantization/) - a technique that allows you to reduce the size of the embeddings by 32 times without losing the quality of the search results too much.
In this table, you can see the results of the search with the `models/embedding-001` model with Binary Quantization in comparison with the original model:

At an oversampling of 3 and a limit of 100, we achieve a 95% recall against the exact nearest neighbors with rescore enabled.

| oversampling | | 1 | 1 | 2 | 2 | 3 | 3 |
|--------------|---------|----------|----------|----------|----------|----------|----------|
| limit | rescore | False | True | False | True | False | True |
| 10 | | 0.523333 | 0.831111 | 0.523333 | 0.915556 | 0.523333 | 0.950000 |
| 20 | | 0.510000 | 0.836667 | 0.510000 | 0.912222 | 0.510000 | 0.937778 |
| 50 | | 0.489111 | 0.841556 | 0.489111 | 0.913333 | 0.488444 | 0.947111 |
| 100 | | 0.485778 | 0.846556 | 0.485556 | 0.929000 | 0.486000 | **0.956333** |

That's it! You can now use Gemini Embedding Models with Qdrant!
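The 32x factor comes from Binary Quantization keeping only one bit per vector component (its sign) instead of a 4-byte float32. A toy sketch of the idea; this illustrates the arithmetic, not Qdrant's internal implementation:

```python
def binarize(vector):
    # Binary quantization keeps only the sign of each component:
    # positive -> 1, everything else -> 0.
    return [1 if x > 0 else 0 for x in vector]

def quantized_size_bytes(dim):
    # One bit per dimension, packed into whole bytes.
    return (dim + 7) // 8

dim = 768  # dimensionality of models/embedding-001 embeddings
float32_size = dim * 4  # original float32 storage in bytes

print(binarize([0.7, -0.2, 0.0, 1.3]))            # -> [1, 0, 0, 1]
print(float32_size // quantized_size_bytes(dim))  # -> 32
```

The oversampling and rescore options in the table compensate for the precision lost in this compression by re-ranking a larger candidate set with the original vectors.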
documentation/embeddings/gemini.md
---
title: Jina Embeddings
weight: 800
aliases: [ ../integrations/jina-embeddings/ ]
---

# Jina Embeddings

Qdrant can also easily work with [Jina embeddings](https://jina.ai/embeddings/), which allow for model input lengths of up to 8192 tokens. To call their endpoint, all you need is an API key obtainable [here](https://jina.ai/embeddings/). By the way, our friends from **Jina AI** provided us with a code (**QDRANT**) that will grant you a **10% discount** if you plan to use Jina Embeddings in production.

```python
import qdrant_client
import requests

from qdrant_client.http.models import Distance, VectorParams
from qdrant_client.http.models import Batch

# Provide Jina API key and choose one of the available models.
# You can get a free trial key here: https://jina.ai/embeddings/
JINA_API_KEY = "jina_xxxxxxxxxxx"
MODEL = "jina-embeddings-v2-base-en"  # or "jina-embeddings-v2-small-en"
EMBEDDING_SIZE = 768  # 512 for the small variant

# Get embeddings from the API
url = "https://api.jina.ai/v1/embeddings"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {JINA_API_KEY}",
}

data = {
    "input": ["Your text string goes here", "You can send multiple texts"],
    "model": MODEL,
}

response = requests.post(url, headers=headers, json=data)
embeddings = [d["embedding"] for d in response.json()["data"]]

# Index the embeddings into Qdrant
qdrant_client = qdrant_client.QdrantClient(":memory:")
qdrant_client.create_collection(
    collection_name="MyCollection",
    vectors_config=VectorParams(size=EMBEDDING_SIZE, distance=Distance.DOT),
)

qdrant_client.upsert(
    collection_name="MyCollection",
    points=Batch(
        ids=list(range(len(embeddings))),
        vectors=embeddings,
    ),
)
```
documentation/embeddings/jina-embeddings.md
---
title: Embeddings
weight: 33
# If the index.md file is empty, the link to the section will be hidden from the sidebar
is_empty: true
---

| Embedding |
|---|
| [Aleph Alpha](./aleph-alpha/) |
| [AWS Bedrock](./bedrock/) |
| [Cohere](./cohere/) |
| [Gemini](./gemini/) |
| [Jina](./jina-embeddings/) |
| [Nomic](./nomic/) |
| [OpenAI](./openai/) |
documentation/embeddings/_index.md
---
title: Database Optimization
weight: 3
---

## Database Optimization Strategies

### How do I reduce memory usage?

The primary source of memory usage is vector data. There are several ways to address that:

- Configure [Quantization](../../guides/quantization/) to reduce the memory usage of vectors.
- Configure on-disk vector storage.

The choice of the approach depends on your requirements. Read more about the [optimal configuration](../../tutorials/optimize/) of Qdrant.

### How do you choose machine configuration?

There are two main scenarios of Qdrant usage in terms of resource consumption:

- **Performance-optimized** -- when you need to serve as many vector search requests as possible, as fast as possible. In this case, you need to have as much vector data in RAM as possible. Use our [calculator](https://cloud.qdrant.io/calculator) to estimate the required RAM.
- **Storage-optimized** -- when you need to store many vectors and minimize costs by compromising some search speed. In this case, pay attention to the disk speed instead. More about it in the article about [Memory Consumption](../../../articles/memory-consumption/).

### I configured on-disk vector storage, but memory usage is still high. Why?

Firstly, memory usage metrics as reported by `top` or `htop` may be misleading. They do not show the minimal amount of memory required to run the service. If the RSS memory usage is 10 GB, it doesn't mean that it won't work on a machine with 8 GB of RAM.

Qdrant uses many techniques to reduce search latency, including caching disk data in RAM and preloading data from disk to RAM. As a result, the Qdrant process might use more memory than the minimum required to run the service.

> Unused RAM is wasted RAM

If you want to limit the memory usage of the service, we recommend using [limits in Docker](https://docs.docker.com/config/containers/resource_constraints/#memory) or Kubernetes.

### My requests are very slow or time out. What should I do?
There are several possible reasons for that: - **Using filters without payload index** -- If you're performing a search with a filter but you don't have a payload index, Qdrant will have to load whole payload data from disk to check the filtering condition. Ensure you have adequately configured [payload indexes](../../concepts/indexing/#payload-index). - **Usage of on-disk vector storage with slow disks** -- If you're using on-disk vector storage, ensure you have fast enough disks. We recommend using local SSDs with at least 50k IOPS. Read more about the influence of the disk speed on the search latency in the article about [Memory Consumption](../../../articles/memory-consumption/). - **Large limit or non-optimal query parameters** -- A large limit or offset might lead to significant performance degradation. Please pay close attention to the query/collection parameters that significantly diverge from the defaults. They might be the reason for the performance issues.
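For reference, a payload index on a filterable field can be created with a single call. The endpoint below is Qdrant's index-creation API; the field name `group_id` and the `keyword` schema are only an example:

```http request
PUT /collections/{collection_name}/index
{
    "field_name": "group_id",
    "field_schema": "keyword"
}
```

With the index in place, filtered searches on that field no longer need to scan payloads from disk.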
documentation/faq/database-optimization.md
---
title: Fundamentals
weight: 1
---

## Qdrant Fundamentals

### How many collections can I create?

As many as you want, but be aware that each collection requires additional resources. It is _highly_ recommended not to create many small collections, as this will lead to significant resource consumption overhead.

We consider creating a collection for each user/dialog/document as an antipattern. Please read more about collections, isolation, and multiple users in our [Multitenancy](../../tutorials/multiple-partitions/) tutorial.

### My search results contain vectors with null values. Why?

By default, Qdrant tries to minimize network traffic and doesn't return vectors in search results. But you can force Qdrant to do so by setting the `with_vector` parameter of the Search/Scroll request to `true`.

If you're still seeing `"vector": null` in your results, it might be that the vector you're passing is not in the correct format, or there's an issue with how you're calling the upsert method.

### How can I search without a vector?

You are likely looking for the [scroll](../../concepts/points/#scroll-points) method. It allows you to retrieve the records based on filters or even iterate over all the records in the collection.

### Does Qdrant support a full-text search or a hybrid search?

Qdrant is a vector search engine in the first place, and we only implement full-text support as long as it doesn't compromise the vector search use case. That includes both the interface and the performance.
What Qdrant can do:

- Search with full-text filters
- Apply full-text filters to the vector search (i.e., perform vector search among the records with specific words or phrases)
- Do prefix search and semantic [search-as-you-type](../../../articles/search-as-you-type/)

What Qdrant plans to introduce in the future:

- Support for sparse vectors, as used in [SPLADE](https://github.com/naver/splade) or similar models

What Qdrant doesn't plan to support:

- BM25 or other non-vector-based retrieval or ranking functions
- Built-in ontologies or knowledge graphs
- Query analyzers and other NLP tools

Of course, you can always combine Qdrant with any specialized tool you need, including full-text search engines. Read more about [our approach](../../../articles/hybrid-search/) to hybrid search.

### How do I upload a large number of vectors into a Qdrant collection?

Read about our recommendations in the [bulk upload](../../tutorials/bulk-upload/) tutorial.

### Can I only store quantized vectors and discard full precision vectors?

No, Qdrant requires full precision vectors for operations like reindexing, rescoring, etc.

## Qdrant Cloud

### Is it possible to scale down a Qdrant Cloud cluster?

In general, no. There's no way to scale down the underlying disk storage. In some cases, we might be able to help you with that through manual intervention, but it's not guaranteed.

## Versioning

### How do I avoid issues when updating to the latest version?

We only guarantee compatibility if you update between consecutive versions. You would need to upgrade versions one at a time: `1.1 -> 1.2`, then `1.2 -> 1.3`, then `1.3 -> 1.4`.

### Do you guarantee compatibility across versions?

In case your version is older, we guarantee only compatibility between two consecutive minor versions. While we will assist with break/fix troubleshooting of issues and errors specific to our products, Qdrant is not accountable for reviewing, writing (or rewriting), or debugging custom code.
documentation/faq/qdrant-fundamentals.md
--- title: FAQ weight: 41 is_empty: true ---
documentation/faq/_index.md
---
title: Multitenancy
weight: 12
aliases:
  - ../tutorials/multiple-partitions
---

# Configure Multitenancy

**How many collections should you create?** In most cases, you should only use a single collection with payload-based partitioning. This approach is called multitenancy. It is efficient for most users, but requires additional configuration. This document will show you how to set it up.

**When should you create multiple collections?** When you have a limited number of users and you need isolation. This approach is flexible, but it may be more costly, since creating numerous collections results in resource overhead. Also, you need to ensure that the collections do not affect each other in any way, including performance-wise.

## Partition by payload

When an instance is shared between multiple users, you may need to partition vectors by user. This is done so that each user can only access their own vectors and can't see the vectors of other users.

1. Add a `group_id` field to each vector in the collection.
```http
PUT /collections/{collection_name}/points
{
    "points": [
        {
            "id": 1,
            "payload": {"group_id": "user_1"},
            "vector": [0.9, 0.1, 0.1]
        },
        {
            "id": 2,
            "payload": {"group_id": "user_1"},
            "vector": [0.1, 0.9, 0.1]
        },
        {
            "id": 3,
            "payload": {"group_id": "user_2"},
            "vector": [0.1, 0.1, 0.9]
        }
    ]
}
```

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

client.upsert(
    collection_name="{collection_name}",
    points=[
        models.PointStruct(
            id=1,
            payload={"group_id": "user_1"},
            vector=[0.9, 0.1, 0.1],
        ),
        models.PointStruct(
            id=2,
            payload={"group_id": "user_1"},
            vector=[0.1, 0.9, 0.1],
        ),
        models.PointStruct(
            id=3,
            payload={"group_id": "user_2"},
            vector=[0.1, 0.1, 0.9],
        ),
    ],
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.upsert("{collection_name}", {
  points: [
    {
      id: 1,
      payload: { group_id: "user_1" },
      vector: [0.9, 0.1, 0.1],
    },
    {
      id: 2,
      payload: { group_id: "user_1" },
      vector: [0.1, 0.9, 0.1],
    },
    {
      id: 3,
      payload: { group_id: "user_2" },
      vector: [0.1, 0.1, 0.9],
    },
  ],
});
```

```rust
use qdrant_client::{client::QdrantClient, qdrant::PointStruct};
use serde_json::json;

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .upsert_points_blocking(
        "{collection_name}".to_string(),
        None,
        vec![
            PointStruct::new(
                1,
                vec![0.9, 0.1, 0.1],
                json!(
                    {"group_id": "user_1"}
                )
                .try_into()
                .unwrap(),
            ),
            PointStruct::new(
                2,
                vec![0.1, 0.9, 0.1],
                json!(
                    {"group_id": "user_1"}
                )
                .try_into()
                .unwrap(),
            ),
            PointStruct::new(
                3,
                vec![0.1, 0.1, 0.9],
                json!(
                    {"group_id": "user_2"}
                )
                .try_into()
                .unwrap(),
            ),
        ],
        None,
    )
    .await?;
```

```java
import java.util.List;
import java.util.Map;

import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.ValueFactory.value;
import static io.qdrant.client.VectorsFactory.vectors;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.PointStruct;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .upsertAsync(
        "{collection_name}",
        List.of(
            PointStruct.newBuilder()
                .setId(id(1))
.setVectors(vectors(0.9f, 0.1f, 0.1f)) .putAllPayload(Map.of("group_id", value("user_1"))) .build(), PointStruct.newBuilder() .setId(id(2)) .setVectors(vectors(0.1f, 0.9f, 0.1f)) .putAllPayload(Map.of("group_id", value("user_1"))) .build(), PointStruct.newBuilder() .setId(id(3)) .setVectors(vectors(0.1f, 0.1f, 0.9f)) .putAllPayload(Map.of("group_id", value("user_2"))) .build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.UpsertAsync( collectionName: "{collection_name}", points: new List<PointStruct> { new() { Id = 1, Vectors = new[] { 0.9f, 0.1f, 0.1f }, Payload = { ["group_id"] = "user_1" } }, new() { Id = 2, Vectors = new[] { 0.1f, 0.9f, 0.1f }, Payload = { ["group_id"] = "user_1" } }, new() { Id = 3, Vectors = new[] { 0.1f, 0.1f, 0.9f }, Payload = { ["group_id"] = "user_2" } } } ); ``` 2. Use a filter along with `group_id` to filter vectors for each user. ```http POST /collections/{collection_name}/points/search { "filter": { "must": [ { "key": "group_id", "match": { "value": "user_1" } } ] }, "vector": [0.1, 0.1, 0.9], "limit": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.search( collection_name="{collection_name}", query_filter=models.Filter( must=[ models.FieldCondition( key="group_id", match=models.MatchValue( value="user_1", ), ) ] ), query_vector=[0.1, 0.1, 0.9], limit=10, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.search("{collection_name}", { filter: { must: [{ key: "group_id", match: { value: "user_1" } }], }, vector: [0.1, 0.1, 0.9], limit: 10, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{Condition, Filter, SearchPoints}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .search_points(&SearchPoints { collection_name: 
"{collection_name}".to_string(),
        filter: Some(Filter::must([Condition::matches(
            "group_id",
            "user_1".to_string(),
        )])),
        vector: vec![0.1, 0.1, 0.9],
        limit: 10,
        ..Default::default()
    })
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.ConditionFactory.matchKeyword;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.SearchPoints;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .searchAsync(
        SearchPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .setFilter(
                Filter.newBuilder().addMust(matchKeyword("group_id", "user_1")).build())
            .addAllVector(List.of(0.1f, 0.1f, 0.9f))
            .setLimit(10)
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;

var client = new QdrantClient("localhost", 6334);

await client.SearchAsync(
    collectionName: "{collection_name}",
    vector: new float[] { 0.1f, 0.1f, 0.9f },
    filter: MatchKeyword("group_id", "user_1"),
    limit: 10
);
```

## Calibrate performance

The speed of indexing may become a bottleneck in this case, as each user's vectors will be indexed into the same collection. To avoid this bottleneck, consider _bypassing the construction of a global vector index_ for the entire collection and building it only for individual groups instead.

By adopting this strategy, Qdrant will index vectors for each user independently, significantly accelerating the process.

To implement this approach, you should:

1. Set `payload_m` in the HNSW configuration to a non-zero value, such as 16.
2. Set `m` in the HNSW configuration to 0. This will disable building a global index for the whole collection.
```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine" }, "hnsw_config": { "payload_m": 16, "m": 0 } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), hnsw_config=models.HnswConfigDiff( payload_m=16, m=0, ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", }, hnsw_config: { payload_m: 16, m: 0, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ vectors_config::Config, CreateCollection, Distance, HnswConfigDiff, VectorParams, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), ..Default::default() })), }), hnsw_config: Some(HnswConfigDiff { payload_m: Some(16), m: Some(0), ..Default::default() }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.HnswConfigDiff; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() 
.setSize(768)
                            .setDistance(Distance.Cosine)
                            .build())
                    .build())
            .setHnswConfig(HnswConfigDiff.newBuilder().setPayloadM(16).setM(0).build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.CreateCollectionAsync(
    collectionName: "{collection_name}",
    vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
    hnswConfig: new HnswConfigDiff { PayloadM = 16, M = 0 }
);
```

3. Create a keyword payload index for the `group_id` field.

```http
PUT /collections/{collection_name}/index
{
    "field_name": "group_id",
    "field_schema": "keyword"
}
```

```python
client.create_payload_index(
    collection_name="{collection_name}",
    field_name="group_id",
    field_schema=models.PayloadSchemaType.KEYWORD,
)
```

```typescript
client.createPayloadIndex("{collection_name}", {
  field_name: "group_id",
  field_schema: "keyword",
});
```

```rust
use qdrant_client::{client::QdrantClient, qdrant::FieldType};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .create_field_index(
        "{collection_name}",
        "group_id",
        FieldType::Keyword,
        None,
        None,
    )
    .await?;
```

```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.PayloadSchemaType;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .createPayloadIndexAsync(
        "{collection_name}",
        "group_id",
        PayloadSchemaType.Keyword,
        null,
        null,
        null,
        null)
    .get();
```

```csharp
using Qdrant.Client;

var client = new QdrantClient("localhost", 6334);

await client.CreatePayloadIndexAsync(collectionName: "{collection_name}", fieldName: "group_id");
```

## Limitations

One downside to this approach is that global requests (without the `group_id` filter) will be slower, since they need to scan all groups to identify the nearest neighbors.
documentation/guides/multiple-partitions.md
---
title: Administration
weight: 10
aliases:
  - ../administration
---

# Administration

Qdrant exposes administration tools which enable you to modify the behavior of a Qdrant instance at runtime without changing its configuration manually.

## Locking

A locking API enables users to restrict the possible operations on a Qdrant process. It is important to mention that:

- The configuration is not persistent, therefore it is necessary to lock again following a restart.
- Locking applies to a single node only. It is necessary to call lock on all the desired nodes in a distributed deployment setup.

Lock request sample:

```http
POST /locks
{
    "error_message": "write is forbidden",
    "write": true
}
```

The `write` flag enables/disables the write lock. If the write lock is set to `true`, Qdrant doesn't allow creating new collections or adding new data to the existing storage. However, deletion operations and updates are not forbidden under the write lock. This feature enables administrators to prevent a Qdrant process from using more disk space while permitting users to search and delete unnecessary data.

You can optionally provide the error message that should be used for error responses to users.

## Recovery mode

*Available as of v1.2.0*

Recovery mode can help in situations where Qdrant fails to start repeatedly. When starting in recovery mode, Qdrant only loads collection metadata to prevent going out of memory. This allows you to resolve out of memory situations, for example, by deleting a collection. After resolving the issue, Qdrant can be restarted normally to continue operation.

In recovery mode, collection operations are limited to [deleting](../../concepts/collections/#delete-collection) a collection. That is because only collection metadata is loaded during recovery.

To enable recovery mode with the Qdrant Docker image you must set the environment variable `QDRANT_ALLOW_RECOVERY_MODE=true`.
The container will try to start normally first, and restarts in recovery mode if initialisation fails due to an out of memory error. This behavior is disabled by default. If using a Qdrant binary, recovery mode can be enabled by setting a recovery message in an environment variable, such as `QDRANT__STORAGE__RECOVERY_MODE="My recovery message"`.
documentation/guides/administration.md
---
title: Troubleshooting
weight: 170
aliases:
  - ../tutorials/common-errors
---

# Solving common errors

## Too many files open (OS error 24)

Each collection segment needs some files to be open. At some point you may encounter the following error in your server log:

```text
Error: Too many files open (OS error 24)
```

In such a case you may need to increase the limit of open files. This can be done, for example, when launching the Docker container:

```bash
docker run --ulimit nofile=10000:10000 qdrant/qdrant:latest
```

The command above will set both the soft and hard limits to `10000`.

If you are not using Docker, the following command will change the limit for the current user session:

```bash
ulimit -n 10000
```

Please note that the command should be executed before you start the Qdrant server.
documentation/guides/common-errors.md
---
title: Configuration
weight: 160
aliases:
  - ../configuration
---

# Configuration

To change or correct Qdrant's behavior, default collection settings, and network interface parameters, you can use configuration files.

The default configuration file is located at [config/config.yaml](https://github.com/qdrant/qdrant/blob/master/config/config.yaml).

To change the default configuration, add a new configuration file and specify the path with `--config-path path/to/custom_config.yaml`. If running in production mode, you could also choose to overwrite `config/production.yaml`. See [ordering](#order-and-priority) for details on how configurations are loaded.

The [Installation](../installation) guide contains examples of how to set up Qdrant with a custom configuration for the different deployment methods.

## Order and priority

*Effective as of v1.2.1*

Multiple configurations may be loaded on startup. All of them are merged into a single effective configuration that is used by Qdrant.

Configurations are loaded in the following order, if present:

1. Embedded base configuration ([source](https://github.com/qdrant/qdrant/blob/master/config/config.yaml))
2. File `config/config.yaml`
3. File `config/{RUN_MODE}.yaml` (such as `config/production.yaml`)
4. File `config/local.yaml`
5. Config provided with `--config-path PATH` (if set)
6. [Environment variables](#environment-variables)

This list is ordered from least to most significant. Properties in later configurations overwrite those loaded before them. For example, a property set with `--config-path` will overwrite one set in any of the files.

Most of these files are included by default in the Docker container. But it is likely that they are absent on your local machine if you run the `qdrant` binary manually.

If files 2 or 3 are not found, a warning is shown on startup. If file 5 is provided but not found, an error is shown on startup.

Other supported configuration file formats and extensions include: `.toml`, `.json`, `.ini`.
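The "later overrides earlier" semantics of the ordering above can be illustrated with a small Python sketch. This is an illustration of the override behavior described in this section, not Qdrant's actual implementation; the layer contents are invented for the example.

```python
def merge(base: dict, override: dict) -> dict:
    """Recursively merge two config layers; values in `override` win."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

# Three hypothetical layers, from least to most significant
embedded = {"service": {"http_port": 6333, "enable_tls": False}}
local = {"service": {"http_port": 1234}}          # e.g. config/local.yaml
env = {"service": {"enable_tls": True}}           # e.g. environment variables

effective = merge(merge(embedded, local), env)
# http_port comes from the local layer, enable_tls from the env layer
```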
## Environment variables It is possible to set configuration properties using environment variables. Environment variables are always the most significant and cannot be overwritten (see [ordering](#order-and-priority)). All environment variables are prefixed with `QDRANT__` and are separated with `__`. These variables: ```bash QDRANT__LOG_LEVEL=INFO QDRANT__SERVICE__HTTP_PORT=6333 QDRANT__SERVICE__ENABLE_TLS=1 QDRANT__TLS__CERT=./tls/cert.pem QDRANT__TLS__CERT_TTL=3600 ``` result in this configuration: ```yaml log_level: INFO service: http_port: 6333 enable_tls: true tls: cert: ./tls/cert.pem cert_ttl: 3600 ``` To run Qdrant locally with a different HTTP port you could use: ```bash QDRANT__SERVICE__HTTP_PORT=1234 ./qdrant ``` ## Configuration file example ```yaml log_level: INFO storage: # Where to store all the data storage_path: ./storage # Where to store snapshots snapshots_path: ./snapshots # Where to store temporary files # If null, temporary snapshot are stored in: storage/snapshots_temp/ temp_path: null # If true - point's payload will not be stored in memory. # It will be read from the disk every time it is requested. # This setting saves RAM by (slightly) increasing the response time. # Note: those payload values that are involved in filtering and are indexed - remain in RAM. on_disk_payload: true # Maximum number of concurrent updates to shard replicas # If `null` - maximum concurrency is used. update_concurrency: null # Write-ahead-log related configuration wal: # Size of a single WAL segment wal_capacity_mb: 32 # Number of WAL segments to create ahead of actual data requirement wal_segments_ahead: 0 # Normal node - receives all updates and answers all queries node_type: "Normal" # Listener node - receives all updates, but does not answer search/read queries # Useful for setting up a dedicated backup node # node_type: "Listener" performance: # Number of parallel threads used for search operations. If 0 - auto selection. 
    max_search_threads: 0

    # Max total number of threads, which can be used for running optimization processes across all collections.
    # Note: Each optimization thread will also use `max_indexing_threads` for index building.
    # So total number of threads used for optimization will be `max_optimization_threads * max_indexing_threads`
    max_optimization_threads: 1

    # Prevent DDoS of too many concurrent updates in distributed mode.
    # One external update usually triggers multiple internal updates, which breaks internal
    # timings. For example, the health check timing and consensus timing.
    # If null - auto selection.
    update_rate_limit: null

  optimizers:
    # The minimal fraction of deleted vectors in a segment, required to perform segment optimization
    deleted_threshold: 0.2

    # The minimal number of vectors in a segment, required to perform segment optimization
    vacuum_min_vector_number: 1000

    # Target amount of segments the optimizer will try to keep.
    # Real amount of segments may vary depending on multiple parameters:
    #  - Amount of stored points
    #  - Current write RPS
    #
    # It is recommended to select the default number of segments as a factor of the number of search threads,
    # so that each segment would be handled evenly by one of the threads.
    # If `default_segment_number = 0`, it will be automatically selected by the number of available CPUs
    default_segment_number: 0

    # Do not create segments larger than this size (in KiloBytes).
    # Large segments might require disproportionately long indexation times,
    # therefore it makes sense to limit the size of segments.
    #
    # If indexation speed is a higher priority for you - make this parameter lower.
    # If search speed is more important - make this parameter higher.
    # Note: 1Kb = 1 vector of size 256
    # If not set, will be automatically selected considering the number of available CPUs.
    max_segment_size_kb: null

    # Maximum size (in KiloBytes) of vectors to store in-memory per segment.
    # Segments larger than this threshold will be stored as read-only memmapped files.
# To enable memmap storage, lower the threshold # Note: 1Kb = 1 vector of size 256 # To explicitly disable mmap optimization, set to `0`. # If not set, will be disabled by default. memmap_threshold_kb: null # Maximum size (in KiloBytes) of vectors allowed for plain index. # Default value based on https://github.com/google-research/google-research/blob/master/scann/docs/algorithms.md # Note: 1Kb = 1 vector of size 256 # To explicitly disable vector indexing, set to `0`. # If not set, the default value will be used. indexing_threshold_kb: 20000 # Interval between forced flushes. flush_interval_sec: 5 # Max number of threads, which can be used for optimization per collection. # Note: Each optimization thread will also use `max_indexing_threads` for index building. # So total number of threads used for optimization will be `max_optimization_threads * max_indexing_threads` # If `max_optimization_threads = 0`, optimization will be disabled. max_optimization_threads: 1 # Default parameters of HNSW Index. Could be overridden for each collection or named vector individually hnsw_index: # Number of edges per node in the index graph. Larger the value - more accurate the search, more space required. m: 16 # Number of neighbours to consider during the index building. Larger the value - more accurate the search, more time required to build index. ef_construct: 100 # Minimal size (in KiloBytes) of vectors for additional payload-based indexing. # If payload chunk is smaller than `full_scan_threshold_kb` additional indexing won't be used - # in this case full-scan search should be preferred by query planner and additional indexing is not required. # Note: 1Kb = 1 vector of size 256 full_scan_threshold_kb: 10000 # Number of parallel threads used for background index building. If 0 - auto selection. max_indexing_threads: 0 # Store HNSW index on disk. If set to false, index will be stored in RAM. Default: false on_disk: false # Custom M param for hnsw graph built for payload index. 
If not set, default M will be used. payload_m: null service: # Maximum size of POST data in a single request in megabytes max_request_size_mb: 32 # Number of parallel workers used for serving the api. If 0 - equal to the number of available cores. # If missing - Same as storage.max_search_threads max_workers: 0 # Host to bind the service on host: 0.0.0.0 # HTTP(S) port to bind the service on http_port: 6333 # gRPC port to bind the service on. # If `null` - gRPC is disabled. Default: null # Comment to disable gRPC: grpc_port: 6334 # Enable CORS headers in REST API. # If enabled, browsers would be allowed to query REST endpoints regardless of query origin. # More info: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS # Default: true enable_cors: true # Enable HTTPS for the REST and gRPC API enable_tls: false # Check user HTTPS client certificate against CA file specified in tls config verify_https_client_certificate: false # Set an api-key. # If set, all requests must include a header with the api-key. # example header: `api-key: <API-KEY>` # # If you enable this you should also enable TLS. # (Either above or via an external service like nginx.) # Sending an api-key over an unencrypted channel is insecure. # # Uncomment to enable. # api_key: your_secret_api_key_here # Set an api-key for read-only operations. # If set, all requests must include a header with the api-key. # example header: `api-key: <API-KEY>` # # If you enable this you should also enable TLS. # (Either above or via an external service like nginx.) # Sending an api-key over an unencrypted channel is insecure. # # Uncomment to enable. 
# read_only_api_key: your_secret_read_only_api_key_here cluster: # Use `enabled: true` to run Qdrant in distributed deployment mode enabled: false # Configuration of the inter-cluster communication p2p: # Port for internal communication between peers port: 6335 # Use TLS for communication between peers enable_tls: false # Configuration related to distributed consensus algorithm consensus: # How frequently peers should ping each other. # Setting this parameter to lower value will allow consensus # to detect disconnected nodes earlier, but too frequent # tick period may create significant network and CPU overhead. # We encourage you NOT to change this parameter unless you know what you are doing. tick_period_ms: 100 # Set to true to prevent service from sending usage statistics to the developers. # Read more: https://qdrant.tech/documentation/guides/telemetry telemetry_disabled: false # TLS configuration. # Required if either service.enable_tls or cluster.p2p.enable_tls is true. tls: # Server certificate chain file cert: ./tls/cert.pem # Server private key file key: ./tls/key.pem # Certificate authority certificate file. # This certificate will be used to validate the certificates # presented by other nodes during inter-cluster communication. # # If verify_https_client_certificate is true, it will verify # HTTPS client certificate # # Required if cluster.p2p.enable_tls is true. ca_cert: ./tls/cacert.pem # TTL in seconds to reload certificate from disk, useful for certificate rotations. # Only works for HTTPS endpoints. Does not support gRPC (and intra-cluster communication). # If `null` - TTL is disabled. cert_ttl: 3600 ``` ## Validation *Available since v1.1.1* The configuration is validated on startup. If a configuration is loaded but validation fails, a warning is logged. 
E.g.:

```text
WARN Settings configuration file has validation errors:
WARN - storage.optimizers.memmap_threshold: value 123 invalid, must be 1000 or larger
WARN - storage.hnsw_index.m: value 1 invalid, must be from 4 to 10000
```

The server will continue to operate. Any validation errors should nevertheless be fixed as soon as possible to prevent problematic behavior.
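As a closing illustration of the environment variable naming scheme described earlier on this page, the mapping from a `QDRANT__`-prefixed variable name to a nested configuration key can be sketched as follows. This is illustrative only, not Qdrant's parsing code.

```python
def env_to_config_path(name: str) -> list[str]:
    """Map a QDRANT__-prefixed env var name to a nested config key path."""
    prefix = "QDRANT__"
    assert name.startswith(prefix)
    # Double underscores separate nesting levels; keys are lowercase in YAML
    return [part.lower() for part in name[len(prefix):].split("__")]

print(env_to_config_path("QDRANT__SERVICE__HTTP_PORT"))  # ['service', 'http_port']
print(env_to_config_path("QDRANT__LOG_LEVEL"))           # ['log_level']
```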
documentation/guides/configuration.md
---
title: Optimize Resources
weight: 11
aliases:
  - ../tutorials/optimize
---

# Optimize Qdrant

Different use cases have different requirements for balancing between memory, speed, and precision. Qdrant is designed to be flexible and customizable so you can tune it to your needs.

![Tradeoff](/docs/tradeoff.png)

Let's look deeper into each of those possible optimization scenarios.

## Prefer low memory footprint with high speed search

The main way to achieve high speed search with a low memory footprint is to keep vectors on disk while at the same time minimizing the number of disk reads.

Vector quantization is one way to achieve this. Quantization converts vectors into a more compact representation, which can be stored in memory and used for search. With smaller vectors you can cache more in RAM and reduce the number of disk reads.

To configure in-memory quantization, with on-disk original vectors, you need to create a collection with the following configuration:

```http
PUT /collections/{collection_name}
{
    "vectors": {
        "size": 768,
        "distance": "Cosine"
    },
    "optimizers_config": {
        "memmap_threshold": 20000
    },
    "quantization_config": {
        "scalar": {
            "type": "int8",
            "always_ram": true
        }
    }
}
```

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient("localhost", port=6333)

client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
    optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(
            type=models.ScalarType.INT8,
            always_ram=True,
        ),
    ),
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.createCollection("{collection_name}", {
  vectors: {
    size: 768,
    distance: "Cosine",
  },
  optimizers_config: {
    memmap_threshold: 20000,
  },
  quantization_config: {
scalar: { type: "int8", always_ram: true, }, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ quantization_config::Quantization, vectors_config::Config, CreateCollection, Distance, OptimizersConfigDiff, QuantizationConfig, QuantizationType, ScalarQuantization, VectorParams, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), ..Default::default() })), }), optimizers_config: Some(OptimizersConfigDiff { memmap_threshold: Some(20000), ..Default::default() }), quantization_config: Some(QuantizationConfig { quantization: Some(Quantization::Scalar(ScalarQuantization { r#type: QuantizationType::Int8.into(), always_ram: Some(true), ..Default::default() })), }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.QuantizationConfig; import io.qdrant.client.grpc.Collections.QuantizationType; import io.qdrant.client.grpc.Collections.ScalarQuantization; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build()) 
.setQuantizationConfig(
                QuantizationConfig.newBuilder()
                    .setScalar(
                        ScalarQuantization.newBuilder()
                            .setType(QuantizationType.Int8)
                            .setAlwaysRam(true)
                            .build())
                    .build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.CreateCollectionAsync(
    collectionName: "{collection_name}",
    vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
    optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 },
    quantizationConfig: new QuantizationConfig
    {
        Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true }
    }
);
```

`memmap_threshold` ensures that vectors are stored on disk, while `always_ram` ensures that quantized vectors are kept in RAM.

Optionally, you can disable rescoring with search `params`, which will reduce the number of disk reads even further, but potentially slightly decrease the precision.

```http
POST /collections/{collection_name}/points/search
{
    "params": {
        "quantization": {
            "rescore": false
        }
    },
    "vector": [0.2, 0.1, 0.9, 0.7],
    "limit": 10
}
```

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient("localhost", port=6333)

client.search(
    collection_name="{collection_name}",
    query_vector=[0.2, 0.1, 0.9, 0.7],
    search_params=models.SearchParams(
        quantization=models.QuantizationSearchParams(rescore=False)
    ),
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.search("{collection_name}", {
  vector: [0.2, 0.1, 0.9, 0.7],
  params: {
    quantization: {
      rescore: false,
    },
  },
});
```

```rust
use qdrant_client::{
    client::QdrantClient,
    qdrant::{QuantizationSearchParams, SearchParams, SearchPoints},
};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .search_points(&SearchPoints {
        collection_name: "{collection_name}".to_string(),
        vector:
vec![0.2, 0.1, 0.9, 0.7], params: Some(SearchParams { quantization: Some(QuantizationSearchParams { rescore: Some(false), ..Default::default() }), ..Default::default() }), limit: 3, ..Default::default() }) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QuantizationSearchParams; import io.qdrant.client.grpc.Points.SearchParams; import io.qdrant.client.grpc.Points.SearchPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .searchAsync( SearchPoints.newBuilder() .setCollectionName("{collection_name}") .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f)) .setParams( SearchParams.newBuilder() .setQuantization( QuantizationSearchParams.newBuilder().setRescore(false).build()) .build()) .setLimit(3) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.SearchAsync( collectionName: "{collection_name}", vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, searchParams: new SearchParams { Quantization = new QuantizationSearchParams { Rescore = false } }, limit: 3 ); ``` ## Prefer high precision with low memory footprint In case you need high precision, but don't have enough RAM to store vectors in memory, you can enable on-disk vectors and HNSW index. 
```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine" }, "optimizers_config": { "memmap_threshold": 20000 }, "hnsw_config": { "on_disk": true } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000), hnsw_config=models.HnswConfigDiff(on_disk=True), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", }, optimizers_config: { memmap_threshold: 20000, }, hnsw_config: { on_disk: true, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ vectors_config::Config, CreateCollection, Distance, HnswConfigDiff, OptimizersConfigDiff, VectorParams, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), ..Default::default() })), }), optimizers_config: Some(OptimizersConfigDiff { memmap_threshold: Some(20000), ..Default::default() }), hnsw_config: Some(HnswConfigDiff { on_disk: Some(true), ..Default::default() }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.HnswConfigDiff; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.VectorParams; import 
io.qdrant.client.grpc.Collections.VectorsConfig;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .createCollectionAsync(
        CreateCollection.newBuilder()
            .setCollectionName("{collection_name}")
            .setVectorsConfig(
                VectorsConfig.newBuilder()
                    .setParams(
                        VectorParams.newBuilder()
                            .setSize(768)
                            .setDistance(Distance.Cosine)
                            .build())
                    .build())
            .setOptimizersConfig(
                OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build())
            .setHnswConfig(HnswConfigDiff.newBuilder().setOnDisk(true).build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.CreateCollectionAsync(
	collectionName: "{collection_name}",
	vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
	optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 },
	hnswConfig: new HnswConfigDiff { OnDisk = true }
);
```

In this scenario, you can increase the precision of the search by increasing the `ef_construct` and `m` parameters of the HNSW index, even with limited RAM.

```json
...
"hnsw_config": {
    "m": 64,
    "ef_construct": 512,
    "on_disk": true
}
...
```

Disk IOPS is a critical factor in this scenario, as it determines how fast you can perform searches.
You can use [fio](https://gist.github.com/superboum/aaa45d305700a7873a8ebbab1abddf2b) to measure disk IOPS.

## Prefer high precision with high speed search

For high speed and high precision search, it is critical to keep as much data in RAM as possible.
By default, Qdrant follows this approach, but you can tune it to your needs.

It is possible to achieve high search speed and tunable accuracy by applying quantization with re-scoring.
```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine" }, "optimizers_config": { "memmap_threshold": 20000 }, "quantization_config": { "scalar": { "type": "int8", "always_ram": true } } } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000), quantization_config=models.ScalarQuantization( scalar=models.ScalarQuantizationConfig( type=models.ScalarType.INT8, always_ram=True, ), ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", }, optimizers_config: { memmap_threshold: 20000, }, quantization_config: { scalar: { type: "int8", always_ram: true, }, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ quantization_config::Quantization, vectors_config::Config, CreateCollection, Distance, OptimizersConfigDiff, QuantizationConfig, QuantizationType, ScalarQuantization, VectorParams, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), ..Default::default() })), }), optimizers_config: Some(OptimizersConfigDiff { memmap_threshold: Some(20000), ..Default::default() }), quantization_config: Some(QuantizationConfig { quantization: Some(Quantization::Scalar(ScalarQuantization { r#type: QuantizationType::Int8.into(), always_ram: Some(true), ..Default::default() })), }), ..Default::default() }) .await?; 
``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.QuantizationConfig; import io.qdrant.client.grpc.Collections.QuantizationType; import io.qdrant.client.grpc.Collections.ScalarQuantization; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build()) .setQuantizationConfig( QuantizationConfig.newBuilder() .setScalar( ScalarQuantization.newBuilder() .setType(QuantizationType.Int8) .setAlwaysRam(true) .build()) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 }, quantizationConfig: new QuantizationConfig { Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true } } ); ``` There are also some search-time parameters you can use to tune the search accuracy and speed: ```http POST /collections/{collection_name}/points/search { "params": { "hnsw_ef": 128, "exact": false }, "vector": [0.2, 0.1, 0.9, 0.7], "limit": 3 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", 
port=6333) client.search( collection_name="{collection_name}", search_params=models.SearchParams(hnsw_ef=128, exact=False), query_vector=[0.2, 0.1, 0.9, 0.7], limit=3, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.search("{collection_name}", { vector: [0.2, 0.1, 0.9, 0.7], params: { hnsw_ef: 128, exact: false, }, limit: 3, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{SearchParams, SearchPoints}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .search_points(&SearchPoints { collection_name: "{collection_name}".to_string(), vector: vec![0.2, 0.1, 0.9, 0.7], params: Some(SearchParams { hnsw_ef: Some(128), exact: Some(false), ..Default::default() }), limit: 3, ..Default::default() }) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.SearchParams; import io.qdrant.client.grpc.Points.SearchPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .searchAsync( SearchPoints.newBuilder() .setCollectionName("{collection_name}") .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f)) .setParams(SearchParams.newBuilder().setHnswEf(128).setExact(false).build()) .setLimit(3) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.SearchAsync( collectionName: "{collection_name}", vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, searchParams: new SearchParams { HnswEf = 128, Exact = false }, limit: 3 ); ``` - `hnsw_ef` - controls the number of neighbors to visit during search. The higher the value, the more accurate and slower the search will be. Recommended range is 32-512. - `exact` - if set to `true`, will perform exact search, which will be slower, but more accurate. 
You can use it to compare results of the search with different `hnsw_ef` values versus the ground truth.

## Latency vs Throughput

There are two main approaches to measure the speed of search:

- latency of the request - the time from the moment a request is submitted to the moment a response is received
- throughput - the number of requests per second the system can handle

Those approaches are not mutually exclusive, but in some cases it might be preferable to optimize for one or another.

To prefer minimizing latency, you can set up Qdrant to use as many cores as possible for a single request.
You can do this by setting the number of segments in the collection to be equal to the number of cores in the system.
In this case, each segment will be processed in parallel, and the final result will be obtained faster.

```http
PUT /collections/{collection_name}
{
    "vectors": {
        "size": 768,
        "distance": "Cosine"
    },
    "optimizers_config": {
        "default_segment_number": 16
    }
}
```

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
    optimizers_config=models.OptimizersConfigDiff(default_segment_number=16),
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.createCollection("{collection_name}", {
  vectors: {
    size: 768,
    distance: "Cosine",
  },
  optimizers_config: {
    default_segment_number: 16,
  },
});
```

```rust
use qdrant_client::{
    client::QdrantClient,
    qdrant::{
        vectors_config::Config, CreateCollection, Distance, OptimizersConfigDiff, VectorParams,
        VectorsConfig,
    },
};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .create_collection(&CreateCollection {
        collection_name: "{collection_name}".to_string(),
        vectors_config: Some(VectorsConfig {
            config:
Some(Config::Params(VectorParams {
                size: 768,
                distance: Distance::Cosine.into(),
                ..Default::default()
            })),
        }),
        optimizers_config: Some(OptimizersConfigDiff {
            default_segment_number: Some(16),
            ..Default::default()
        }),
        ..Default::default()
    })
    .await?;
```

```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .createCollectionAsync(
        CreateCollection.newBuilder()
            .setCollectionName("{collection_name}")
            .setVectorsConfig(
                VectorsConfig.newBuilder()
                    .setParams(
                        VectorParams.newBuilder()
                            .setSize(768)
                            .setDistance(Distance.Cosine)
                            .build())
                    .build())
            .setOptimizersConfig(
                OptimizersConfigDiff.newBuilder().setDefaultSegmentNumber(16).build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.CreateCollectionAsync(
	collectionName: "{collection_name}",
	vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
	optimizersConfig: new OptimizersConfigDiff { DefaultSegmentNumber = 16 }
);
```

To prefer throughput, you can set up Qdrant to use as many cores as possible for processing multiple requests in parallel.
To do that, you can configure Qdrant to use a minimal number of segments, which is usually 2.
Large segments benefit from the size of the index and an overall smaller number of vector comparisons required to find the nearest neighbors, but they also require more time to build the index.
```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine" }, "optimizers_config": { "default_segment_number": 2 } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), optimizers_config=models.OptimizersConfigDiff(default_segment_number=2), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", }, optimizers_config: { default_segment_number: 2, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ vectors_config::Config, CreateCollection, Distance, OptimizersConfigDiff, VectorParams, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), ..Default::default() })), }), optimizers_config: Some(OptimizersConfigDiff { default_segment_number: Some(2), ..Default::default() }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( 
VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setDefaultSegmentNumber(2).build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, optimizersConfig: new OptimizersConfigDiff { DefaultSegmentNumber = 2 } ); ```
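The latency/throughput distinction above can be made concrete with a small measurement harness. The sketch below is server-agnostic and entirely illustrative: the stub `search_fn` (a simple sleep) stands in for a real search call against your collection, which you would substitute to benchmark an actual deployment.

```python
import statistics
import time


def measure(search_fn, n_requests: int = 100) -> dict:
    """Time n_requests sequential calls; report latency percentiles and throughput."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        search_fn()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    latencies.sort()
    return {
        "p50_ms": 1000 * statistics.median(latencies),
        "p95_ms": 1000 * latencies[int(0.95 * len(latencies))],
        "throughput_rps": n_requests / elapsed,
    }


# Stub standing in for a real search call, e.g. client.search(...)
stats = measure(lambda: time.sleep(0.001))
print(stats)
```

Running this against a 16-segment and a 2-segment collection under the same load would show the trade-off described above: lower per-request latency in the first case, higher sustained throughput in the second.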
---
title: Telemetry
weight: 150
aliases:
  - ../telemetry
---

# Telemetry

Qdrant collects anonymized usage statistics from users in order to improve the engine.
You can [deactivate](#deactivate-telemetry) at any time, and any data that has already been collected can be [deleted on request](#request-information-deletion).

## Why do we collect telemetry?

We want to make Qdrant fast and reliable. To do this, we need to understand how it performs in real-world scenarios.
We do a lot of benchmarking internally, but it is impossible to cover all possible use cases, hardware, and configurations.

In order to identify bottlenecks and improve Qdrant, we need to collect information about how it is used.

Additionally, Qdrant uses a bunch of internal heuristics to optimize the performance.
To better set up parameters for these heuristics, we need to collect timings and counters of various pieces of code.
With this information, we can make Qdrant faster for everyone.

## What information is collected?

There are 3 types of information that we collect:

* System information - general information about the system, such as CPU, RAM, and disk type, as well as the configuration of the Qdrant instance.
* Performance - information about timings and counters of various pieces of code.
* Critical error reports - information about critical errors, such as backtraces, that occurred in Qdrant. This information allows us to identify problems that nobody has reported to us yet.

### We **never** collect the following information:

- User's IP address
- Any data that can be used to identify the user or the user's organization
- Any data stored in the collections
- Any names of the collections
- Any URLs

## How do we anonymize data?

We understand that some users may be concerned about the privacy of their data.
That is why we make an extra effort to ensure your privacy.

There are several different techniques that we use to anonymize the data:

- We use a random UUID to identify instances.
This UUID is generated on each startup and is not stored anywhere.
There are no other ways to distinguish between different instances.
- We round all big numbers, so that the last digits are always 0. For example, if the number is 123456789, we will store 123456000.
- We replace all names with irreversibly hashed values. So no collection or field names will leak into the telemetry.
- All URLs are hashed as well.

You can see the exact version of the anonymized collected data by accessing the [telemetry API](https://qdrant.github.io/qdrant/redoc/index.html#tag/service/operation/telemetry) with the `anonymize=true` parameter.

For example, <http://localhost:6333/telemetry?details_level=6&anonymize=true>

## Deactivate telemetry

You can deactivate telemetry by:

- setting the `QDRANT__TELEMETRY_DISABLED` environment variable to `true`
- setting the config option `telemetry_disabled` to `true` in the `config/production.yaml` or `config/config.yaml` files
- using the CLI option `--disable-telemetry`

Any of these options will prevent Qdrant from sending any telemetry data.

If you decide to deactivate telemetry, we kindly ask you to share your feedback with us in the [Discord community](https://qdrant.to/discord) or GitHub [discussions](https://github.com/qdrant/qdrant/discussions).

## Request information deletion

We provide an email address so that users can request the complete removal of their data from all of our tools.

To do so, send an email to privacy@qdrant.com containing the unique identifier generated for your Qdrant installation.
You can find this identifier in the telemetry API response (`"id"` field), or in the logs of your Qdrant instance.

Any questions regarding the management of the data we collect can also be sent to this email address.
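The rounding and hashing techniques described above can be illustrated in a few lines of Python. This is a hypothetical sketch of the general idea only, not Qdrant's actual implementation; the digit count and hash truncation are arbitrary choices made for the example.

```python
import hashlib


def round_number(n: int, keep_digits: int = 6) -> int:
    """Keep only the most significant digits, zeroing the rest (123456789 -> 123456000)."""
    s = str(n)
    if len(s) <= keep_digits:
        return n
    return int(s[:keep_digits] + "0" * (len(s) - keep_digits))


def anonymize_name(name: str) -> str:
    """Replace a name with an irreversible hash, so the original cannot leak."""
    return hashlib.sha256(name.encode()).hexdigest()[:16]


print(round_number(123456789))          # 123456000, matching the example above
print(anonymize_name("my_collection"))  # a hex digest, unrelated to the original name
```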
---
title: Distributed Deployment
weight: 100
aliases:
  - ../distributed_deployment
---

# Distributed deployment

Since version v0.8.0, Qdrant supports a distributed deployment mode.
In this mode, multiple Qdrant services communicate with each other to distribute the data across the peers to extend the storage capabilities and increase stability.

To enable distributed deployment, enable cluster mode in the [configuration](../configuration) or via the environment variable `QDRANT__CLUSTER__ENABLED=true`.

```yaml
cluster:
  # Use `enabled: true` to run Qdrant in distributed deployment mode
  enabled: true
  # Configuration of the inter-cluster communication
  p2p:
    # Port for internal communication between peers
    port: 6335

  # Configuration related to distributed consensus algorithm
  consensus:
    # How frequently peers should ping each other.
    # Setting this parameter to lower value will allow consensus
    # to detect disconnected node earlier, but too frequent
    # tick period may create significant network and CPU overhead.
    # We encourage you NOT to change this parameter unless you know what you are doing.
    tick_period_ms: 100
```

By default, Qdrant will use port `6335` for its internal communication.
All peers should be accessible on this port from within the cluster, but make sure to isolate this port from outside access, as it might be used to perform write operations.

Additionally, you must provide the `--uri` flag to the first peer so it can tell other nodes how it should be reached:

```bash
./qdrant --uri 'http://qdrant_node_1:6335'
```

Subsequent peers in a cluster must know at least one node of the existing cluster to synchronize through it with the rest of the cluster.

To do this, they need to be provided with a bootstrap URL:

```bash
./qdrant --bootstrap 'http://qdrant_node_1:6335'
```

The URL of the new peers themselves will be calculated automatically from the IP address of their request.
But it is also possible to provide them individually using the `--uri` argument.
```text USAGE: qdrant [OPTIONS] OPTIONS: --bootstrap <URI> Uri of the peer to bootstrap from in case of multi-peer deployment. If not specified - this peer will be considered as a first in a new deployment --uri <URI> Uri of this peer. Other peers should be able to reach it by this uri. This value has to be supplied if this is the first peer in a new deployment. In case this is not the first peer and it bootstraps the value is optional. If not supplied then qdrant will take internal grpc port from config and derive the IP address of this peer on bootstrap peer (receiving side) ``` After a successful synchronization you can observe the state of the cluster through the [REST API](https://qdrant.github.io/qdrant/redoc/index.html?v=master#tag/cluster): ```http GET /cluster ``` Example result: ```json { "result": { "status": "enabled", "peer_id": 11532566549086892000, "peers": { "9834046559507417430": { "uri": "http://172.18.0.3:6335/" }, "11532566549086892528": { "uri": "http://qdrant_node_1:6335/" } }, "raft_info": { "term": 1, "commit": 4, "pending_operations": 1, "leader": 11532566549086892000, "role": "Leader" } }, "status": "ok", "time": 5.731e-06 } ``` ## Raft Qdrant uses the [Raft](https://raft.github.io/) consensus protocol to maintain consistency regarding the cluster topology and the collections structure. Operations on points, on the other hand, do not go through the consensus infrastructure. Qdrant is not intended to have strong transaction guarantees, which allows it to perform point operations with low overhead. In practice, it means that Qdrant does not guarantee atomic distributed updates but allows you to wait until the [operation is complete](../../concepts/points/#awaiting-result) to see the results of your writes. Operations on collections, on the contrary, are part of the consensus which guarantees that all operations are durable and eventually executed by all nodes. 
In practice, it means that a majority of nodes agree on what operations should be applied before the service performs them.
As a result, if the cluster is in a transition state - either electing a new leader after a failure or starting up - collection update operations will be denied.

You may use the cluster [REST API](https://qdrant.github.io/qdrant/redoc/index.html?v=master#tag/cluster) to check the state of the consensus.

## Sharding

A Collection in Qdrant is made of one or more shards.
A shard is an independent store of points which is able to perform all operations provided by collections.
There are two methods of distributing points across shards:

- **Automatic sharding**: Points are distributed among shards by using a [consistent hashing](https://en.wikipedia.org/wiki/Consistent_hashing) algorithm, so that shards are managing non-intersecting subsets of points. This is the default behavior.
- **User-defined sharding**: _Available as of v1.7.0_ - Each point is uploaded to a specific shard, so that operations can hit only the shard or shards they need. Even with this distribution, shards still ensure having non-intersecting subsets of points. [See more...](#user-defined-sharding)

Each node knows where all parts of the collection are stored through the [consensus protocol](./#raft), so when you send a search request to one Qdrant node, it automatically queries all other nodes to obtain the full search result.

When you create a collection, Qdrant splits the collection into `shard_number` shards.
If left unset, `shard_number` is set to the number of nodes in your cluster:

```http
PUT /collections/{collection_name}
{
    "vectors": {
        "size": 300,
        "distance": "Cosine"
    },
    "shard_number": 6
}
```

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient("localhost", port=6333)

client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE),
    shard_number=6,
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.createCollection("{collection_name}", {
  vectors: {
    size: 300,
    distance: "Cosine",
  },
  shard_number: 6,
});
```

```rust
use qdrant_client::{
    client::QdrantClient,
    qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig},
};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .create_collection(&CreateCollection {
        collection_name: "{collection_name}".into(),
        vectors_config: Some(VectorsConfig {
            config: Some(Config::Params(VectorParams {
                size: 300,
                distance: Distance::Cosine.into(),
                ..Default::default()
            })),
        }),
        shard_number: Some(6),
        ..Default::default()
    })
    .await?;
```

```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .createCollectionAsync(
        CreateCollection.newBuilder()
            .setCollectionName("{collection_name}")
            .setVectorsConfig(
                VectorsConfig.newBuilder()
                    .setParams(
                        VectorParams.newBuilder()
                            .setSize(300)
                            .setDistance(Distance.Cosine)
                            .build())
                    .build())
            .setShardNumber(6)
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using
Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine }, shardNumber: 6 ); ``` We recommend setting the number of shards to be a multiple of the number of nodes you are currently running in your cluster. For example, if you have 3 nodes, 6 shards could be a good option. Shards are evenly distributed across all existing nodes when a collection is first created, but Qdrant does not automatically rebalance shards if your cluster size or replication factor changes (since this is an expensive operation on large clusters). See the next section for how to move shards after scaling operations. ### Moving shards *Available as of v0.9.0* Qdrant allows moving shards between nodes in the cluster and removing nodes from the cluster. This functionality unlocks the ability to dynamically scale the cluster size without downtime. It also allows you to upgrade or migrate nodes without downtime. Qdrant provides the information regarding the current shard distribution in the cluster with the [Collection Cluster info API](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/collection_cluster_info). Use the [Update collection cluster setup API](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/update_collection_cluster) to initiate the shard transfer: ```http POST /collections/{collection_name}/cluster { "move_shard": { "shard_id": 0, "from_peer_id": 381894127, "to_peer_id": 467122995 } } ``` <aside role="status">You likely want to select a specific <a href="#shard-transfer-method">shard transfer method</a> to get desired performance and guarantees.</aside> After the transfer is initiated, the service will process it based on the used [transfer method](#shard-transfer-method) keeping both shards in sync. Once the transfer is completed, the old shard is deleted from the source node. 
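For scripted rebalancing after scaling, the cluster-setup call above can be issued over plain REST. The sketch below only builds the documented request body; the peer IDs are the placeholders from the example, and actually sending the request (e.g. with `requests.post` against a running cluster) is left as a comment.

```python
def move_shard_payload(shard_id: int, from_peer_id: int, to_peer_id: int) -> dict:
    """Body for POST /collections/{collection_name}/cluster, as documented above."""
    return {
        "move_shard": {
            "shard_id": shard_id,
            "from_peer_id": from_peer_id,
            "to_peer_id": to_peer_id,
        }
    }


payload = move_shard_payload(0, 381894127, 467122995)
# e.g.: requests.post(f"http://localhost:6333/collections/{name}/cluster", json=payload)
print(payload)
```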
In case you want to downscale the cluster, you can move all shards away from a peer and then remove the peer using the [remove peer API](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/remove_peer). ```http DELETE /cluster/peer/{peer_id} ``` After that, Qdrant will exclude the node from the consensus, and the instance will be ready for shutdown. ### User-defined sharding *Available as of v1.7.0* Qdrant allows you to specify the shard for each point individually. This feature is useful if you want to control the shard placement of your data, so that operations can hit only the subset of shards they actually need. In big clusters, this can significantly improve the performance of operations that do not require the whole collection to be scanned. A clear use-case for this feature is managing a multi-tenant collection, where each tenant (let it be a user or organization) is assumed to be segregated, so they can have their data stored in separate shards. To enable user-defined sharding, set `sharding_method` to `custom` during collection creation: ```http PUT /collections/{collection_name} { "shard_number": 1, "sharding_method": "custom" // ... other collection parameters } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", shard_number=1, sharding_method=models.ShardingMethod.CUSTOM, # ... other collection parameters ) client.create_shard_key("{collection_name}", "user_1") ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { shard_number: 1, sharding_method: "custom", // ... 
other collection parameters }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{CreateCollection, ShardingMethod}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".into(), shard_number: Some(1), sharding_method: Some(ShardingMethod::Custom), // ... other collection parameters ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.ShardingMethod; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") // ... other collection parameters .setShardNumber(1) .setShardingMethod(ShardingMethod.Custom) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", // ... other collection parameters shardNumber: 1, shardingMethod: ShardingMethod.Custom ); ``` In this mode, the `shard_number` means the number of shards per shard key, where points will be distributed evenly. For example, if you have 10 shard keys and a collection config with these settings: ```json { "shard_number": 1, "sharding_method": "custom", "replication_factor": 2 } ``` Then you will have `1 * 10 * 2 = 20` total physical shards in the collection. 
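The `1 * 10 * 2 = 20` arithmetic above generalizes to any configuration. A small helper (hypothetical, useful for capacity planning) makes the relationship explicit:

```python
def total_physical_shards(
    shard_number: int, num_shard_keys: int, replication_factor: int
) -> int:
    """With custom sharding, each shard key gets shard_number shards, each replicated."""
    return shard_number * num_shard_keys * replication_factor


# The example from the text: shard_number=1, 10 shard keys, replication_factor=2
print(total_physical_shards(1, 10, 2))  # 20
```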
To specify the shard for each point, you need to provide the `shard_key` field in the upsert request:

```http
PUT /collections/{collection_name}/points
{
    "points": [
        {
            "id": 1111,
            "vector": [0.1, 0.2, 0.3]
        }
    ],
    "shard_key": "user_1"
}
```

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient("localhost", port=6333)

client.upsert(
    collection_name="{collection_name}",
    points=[
        models.PointStruct(
            id=1111,
            vector=[0.1, 0.2, 0.3],
        ),
    ],
    shard_key_selector="user_1",
)
```

```typescript
client.upsertPoints("{collection_name}", {
  points: [
    {
      id: 1111,
      vector: [0.1, 0.2, 0.3],
    },
  ],
  shard_key: "user_1",
});
```

```rust
use qdrant_client::qdrant::{shard_key, PointStruct};

client
    .upsert_points_blocking(
        "{collection_name}",
        Some(vec![shard_key::Key::String("user_1".into())]),
        vec![
            PointStruct::new(
                1111,
                vec![0.1, 0.2, 0.3],
                Default::default(),
            ),
        ],
        None,
    )
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.ShardKeySelectorFactory.shardKeySelector;
import static io.qdrant.client.VectorsFactory.vectors;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.PointStruct;
import io.qdrant.client.grpc.Points.UpsertPoints;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .upsertAsync(
        UpsertPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .addAllPoints(
                List.of(
                    PointStruct.newBuilder()
                        .setId(id(1111))
                        .setVectors(vectors(0.1f, 0.2f, 0.3f))
                        .build()))
            .setShardKeySelector(shardKeySelector("user_1"))
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.UpsertAsync(
	collectionName: "{collection_name}",
	points: new List<PointStruct>
	{
		new() { Id = 1111, Vectors = new[] { 0.1f, 0.2f, 0.3f } }
	},
	shardKeySelector:
	new ShardKeySelector { ShardKeys = { new List<ShardKey> { "user_1" } } }
);
```

<aside role="alert">
    Using the same point ID across multiple shard keys is <strong>not supported<sup>*</sup></strong> and should be avoided.
</aside>

<sup>
    <strong>*</strong> When using custom sharding, IDs are only enforced to be unique within a shard key. This means that you can have multiple points with the same ID, if they have different shard keys.
    This is a limitation of the current implementation, and it is an anti-pattern that should be avoided, because it can lead to points with the same ID having different contents. In the future, we plan to add a global ID uniqueness check.
</sup>

Now you can target operations at specific shard(s) by specifying the `shard_key` on any operation you do. Operations that do not specify the shard key will be executed on __all__ shards.

Another use case is to have shards that hold data chronologically, so that you can implement more complex data-management flows, like uploading live data into one shard and archiving it once a certain age has passed.

<img src="/docs/sharding-per-day.png" alt="Sharding per day" width="500" height="600">

### Shard transfer method

*Available as of v1.7.0*

There are different methods for transferring a shard, such as moving or replicating it, to another node. Depending on what performance and guarantees you'd like to have and how you'd like to manage your cluster, you likely want to choose a specific method. Each method has its own pros and cons. Which is fastest depends on the size and state of a shard.

Available shard transfer methods are:

- `stream_records`: _(default)_ transfer shard by streaming just its records to the target node in batches.
- `snapshot`: transfer shard including its index and quantized data by utilizing a [snapshot](../../concepts/snapshots) automatically.
Each has pros, cons and specific requirements, which are: | Method: | Stream records | Snapshot | |:---|:---|:---| | **Connection** | <ul><li>Requires internal gRPC API <small>(port 6335)</small></li></ul> | <ul><li>Requires internal gRPC API <small>(port 6335)</small></li><li>Requires REST API <small>(port 6333)</small></li></ul> | | **HNSW index** | <ul><li>Doesn't transfer index</li><li>Will reindex on target node</li></ul> | <ul><li>Index is transferred with a snapshot</li><li>Immediately ready on target node</li></ul> | | **Quantization** | <ul><li>Doesn't transfer quantized data</li><li>Will re-quantize on target node</li></ul> | <ul><li>Quantized data is transferred with a snapshot</li><li>Immediately ready on target node</li></ul> | | **Consistency** | <ul><li>Weak data consistency</li><li>Unordered updates on target node[^unordered]</li></ul> | <ul><li>Strong data consistency</li><li>Ordered updates on target node[^ordered]</li></ul> | | **Disk space** | <ul><li>No extra disk space required</li></ul> | <ul><li>Extra disk space required for snapshot on both nodes</li></ul> | [^unordered]: Weak data consistency and unordered updates: All records are streamed to the target node in order. New updates are received on the target node in parallel, while the transfer of records is still happening. We therefore have `weak` ordering, regardless of what [ordering](#write-ordering) is used for updates. [^ordered]: Strong data consistency and ordered updates: A snapshot of the shard is created, it is transferred and recovered on the target node. That ensures the state of the shard is kept consistent. New updates are queued on the source node, and transferred in order to the target node. Updates therefore have the same [ordering](#write-ordering) as the user selects, making `strong` ordering possible. 
To select a shard transfer method, specify the `method` like: ```http POST /collections/{collection_name}/cluster { "move_shard": { "shard_id": 0, "from_peer_id": 381894127, "to_peer_id": 467122995, "method": "snapshot" } } ``` The `stream_records` transfer method is the simplest available. It simply transfers all shard records in batches to the target node until it has transferred all of them, keeping both shards in sync. It will also make sure the transferred shard indexing process is keeping up before performing a final switch. The method has two common disadvantages: 1. It does not transfer index or quantization data, meaning that the shard has to be optimized again on the new node, which can be very expensive. 2. The consistency and ordering guarantees are `weak`[^unordered], which is not suitable for some applications. Because it is so simple, it's also very robust, making it a reliable choice if the above cons are acceptable in your use case. If your cluster is unstable and out of resources, it's probably best to use the `stream_records` transfer method, because it is unlikely to fail. The `snapshot` transfer method utilizes [snapshots](../../concepts/snapshots) to transfer a shard. A snapshot is created automatically. It is then transferred and restored on the target node. After this is done, the snapshot is removed from both nodes. While the snapshot/transfer/restore operation is happening, the source node queues up all new operations. All queued updates are then sent in order to the target shard to bring it into the same state as the source. There are two important benefits: 1. It transfers index and quantization data, so that the shard does not have to be optimized again on the target node, making them immediately available. This way, Qdrant ensures that there will be no degradation in performance at the end of the transfer. Especially on large shards, this can give a huge performance improvement. 2. 
The consistency and ordering guarantees can be `strong`[^ordered], required for some applications. The `stream_records` method is currently used as default. This may change in the future. ## Replication *Available as of v0.11.0* Qdrant allows you to replicate shards between nodes in the cluster. Shard replication increases the reliability of the cluster by keeping several copies of a shard spread across the cluster. This ensures the availability of the data in case of node failures, except if all replicas are lost. ### Replication factor When you create a collection, you can control how many shard replicas you'd like to store by changing the `replication_factor`. By default, `replication_factor` is set to "1", meaning no additional copy is maintained automatically. You can change that by setting the `replication_factor` when you create a collection. Currently, the replication factor of a collection can only be configured at creation time. ```http PUT /collections/{collection_name} { "vectors": { "size": 300, "distance": "Cosine" }, "shard_number": 6, "replication_factor": 2, } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE), shard_number=6, replication_factor=2, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 300, distance: "Cosine", }, shard_number: 6, replication_factor: 2, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: 
"{collection_name}".into(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 300, distance: Distance::Cosine.into(), ..Default::default() })), }), shard_number: Some(6), replication_factor: Some(2), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(300) .setDistance(Distance.Cosine) .build()) .build()) .setShardNumber(6) .setReplicationFactor(2) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine }, shardNumber: 6, replicationFactor: 2 ); ``` This code sample creates a collection with a total of 6 logical shards backed by a total of 12 physical shards. Since a replication factor of "2" would require twice as much storage space, it is advised to make sure the hardware can host the additional shard replicas beforehand. ### Creating new shard replicas It is possible to create or delete replicas manually on an existing collection using the [Update collection cluster setup API](https://qdrant.github.io/qdrant/redoc/index.html?v=v0.11.0#tag/cluster/operation/update_collection_cluster). A replica can be added on a specific peer by specifying the peer from which to replicate. 
```http
POST /collections/{collection_name}/cluster
{
    "replicate_shard": {
        "shard_id": 0,
        "from_peer_id": 381894127,
        "to_peer_id": 467122995
    }
}
```

<aside role="status">You likely want to select a specific <a href="#shard-transfer-method">shard transfer method</a> to get desired performance and guarantees.</aside>

And a replica can be removed on a specific peer.

```http
POST /collections/{collection_name}/cluster
{
    "drop_replica": {
        "shard_id": 0,
        "peer_id": 381894127
    }
}
```

Keep in mind that a collection must contain at least one active replica of a shard.

### Error handling

Replicas can be in different states:

- Active: healthy and ready to serve traffic
- Dead: unhealthy and not ready to serve traffic
- Partial: currently under resynchronization before activation

A replica is marked as dead if it does not respond to internal healthchecks or if it fails to serve traffic.

A dead replica will not receive traffic from other peers and might require manual intervention if it does not recover automatically.

This mechanism ensures data consistency and availability if a subset of the replicas fail during an update operation.

### Node Failure Recovery

Sometimes hardware malfunctions might render some nodes of the Qdrant cluster unrecoverable. No system is immune to this.

But several recovery scenarios allow Qdrant to stay available for requests and even avoid performance degradation. Let's walk through them from best to worst.

**Recover with replicated collection**

If the number of failed nodes is less than the replication factor of the collection, then no data is lost. Your cluster should still be able to perform read, search and update queries.

Now, if the failed node restarts, consensus will trigger the replication process to update the recovering node with the newest updates it has missed.

**Recreate node with replicated collections**

If a node fails and it is impossible to recover it, you should exclude the dead node from the consensus and create an empty node.
To exclude failed nodes from the consensus, use the [remove peer](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/remove_peer) API. Apply the `force` flag if necessary.

When you create a new node, make sure to attach it to the existing cluster by specifying the `--bootstrap` CLI parameter with the URL of any of the running cluster nodes.

Once the new node is ready and synchronized with the cluster, you might want to ensure that the collection shards are replicated enough. Remember that Qdrant will not automatically balance shards since this is an expensive operation. Use the [Replicate Shard Operation](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/update_collection_cluster) to create another copy of the shard on the newly connected node.

It's worth mentioning that Qdrant only provides the necessary building blocks to create an automated failure recovery. Building a completely automatic process of collection scaling would require control over the cluster machines themselves. Check out our [cloud solution](https://qdrant.to/cloud), where we have done exactly that.

**Recover from snapshot**

If there are no copies of data in the cluster, it is still possible to recover from a snapshot.

Follow the same steps to detach the failed node and create a new one in the cluster:

* To exclude failed nodes from the consensus, use the [remove peer](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/remove_peer) API. Apply the `force` flag if necessary.
* Create a new node, making sure to attach it to the existing cluster by specifying the `--bootstrap` CLI parameter with the URL of any of the running cluster nodes.

Snapshot recovery, as used in single-node deployments, differs from recovery in cluster mode. Consensus manages all metadata about all collections and does not require snapshots to recover it. But you can use snapshots to recover missing shards of the collections.
Use the [Collection Snapshot Recovery API](../../concepts/snapshots/#recover-in-cluster-deployment) to do it. The service will download the specified snapshot of the collection and recover shards with data from it.

Once all shards of the collection are recovered, the collection will become operational again.

## Consistency guarantees

By default, Qdrant focuses on availability and maximum throughput of search operations. For the majority of use cases, this is a preferable trade-off.

During the normal state of operation, it is possible to search and modify data from any peer in the cluster.

Before responding to the client, the peer handling the request dispatches all operations according to the current topology in order to keep the data synchronized across the cluster.

- reads use a partial fan-out strategy to optimize latency and availability
- writes are executed in parallel on all active sharded replicas

![Embeddings](/docs/concurrent-operations-replicas.png)

However, in some cases, it is necessary to ensure additional guarantees during possible hardware instabilities, mass concurrent updates of the same documents, etc.

Qdrant provides a few options to control consistency guarantees:

- `write_consistency_factor` - defines the number of replicas that must acknowledge a write operation before responding to the client. Increasing this value will make write operations tolerant to network partitions in the cluster, but will require a higher number of replicas to be active to perform write operations.
- Read `consistency` param, can be used with search and retrieve operations to ensure that the results obtained from all replicas are the same. If this option is used, Qdrant will perform the read operation on multiple replicas and resolve the result according to the selected strategy. This option is useful to avoid data inconsistency in case of concurrent updates of the same documents.
This option is preferred if the update operations are frequent and the number of replicas is low.
- Write `ordering` param, can be used with update and delete operations to ensure that the operations are executed in the same order on all replicas. If this option is used, Qdrant will route the operation to the leader replica of the shard and wait for the response before responding to the client. This option is useful to avoid data inconsistency in case of concurrent updates of the same documents. This option is preferred if read operations are more frequent than updates and if search performance is critical.

### Write consistency factor

The `write_consistency_factor` represents the number of replicas that must acknowledge a write operation before responding to the client. It is set to one by default.
It can be configured at the collection's creation time.

```http
PUT /collections/{collection_name}
{
    "vectors": {
        "size": 300,
        "distance": "Cosine"
    },
    "shard_number": 6,
    "replication_factor": 2,
    "write_consistency_factor": 2,
}
```

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient("localhost", port=6333)

client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE),
    shard_number=6,
    replication_factor=2,
    write_consistency_factor=2,
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.createCollection("{collection_name}", {
  vectors: {
    size: 300,
    distance: "Cosine",
  },
  shard_number: 6,
  replication_factor: 2,
  write_consistency_factor: 2,
});
```

```rust
use qdrant_client::{
    client::QdrantClient,
    qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig},
};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .create_collection(&CreateCollection {
        collection_name:
"{collection_name}".into(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 300, distance: Distance::Cosine.into(), ..Default::default() })), }), shard_number: Some(6), replication_factor: Some(2), write_consistency_factor: Some(2), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(300) .setDistance(Distance.Cosine) .build()) .build()) .setShardNumber(6) .setReplicationFactor(2) .setWriteConsistencyFactor(2) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine }, shardNumber: 6, replicationFactor: 2, writeConsistencyFactor: 2 ); ``` Write operations will fail if the number of active replicas is less than the `write_consistency_factor`. ### Read consistency Read `consistency` can be specified for most read requests and will ensure that the returned result is consistent across cluster nodes. 
- `all` will query all nodes and return points which are present on all of them
- `majority` will query all nodes and return points which are present on the majority of them
- `quorum` will query a randomly selected majority of nodes and return points which are present on all of them
- `1`/`2`/`3`/etc - will query the specified number of randomly selected nodes and return points which are present on all of them
- default `consistency` is `1`

```http
POST /collections/{collection_name}/points/search?consistency=majority
{
    "filter": {
        "must": [
            {
                "key": "city",
                "match": {
                    "value": "London"
                }
            }
        ]
    },
    "params": {
        "hnsw_ef": 128,
        "exact": false
    },
    "vector": [0.2, 0.1, 0.9, 0.7],
    "limit": 3
}
```

```python
client.search(
    collection_name="{collection_name}",
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="city",
                match=models.MatchValue(
                    value="London",
                ),
            )
        ]
    ),
    search_params=models.SearchParams(hnsw_ef=128, exact=False),
    query_vector=[0.2, 0.1, 0.9, 0.7],
    limit=3,
    consistency="majority",
)
```

```typescript
client.search("{collection_name}", {
  filter: {
    must: [{ key: "city", match: { value: "London" } }],
  },
  params: {
    hnsw_ef: 128,
    exact: false,
  },
  vector: [0.2, 0.1, 0.9, 0.7],
  limit: 3,
  consistency: "majority",
});
```

```rust
use qdrant_client::{
    client::QdrantClient,
    qdrant::{
        read_consistency::Value, Condition, Filter, ReadConsistency, ReadConsistencyType,
        SearchParams, SearchPoints,
    },
};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .search_points(&SearchPoints {
        collection_name: "{collection_name}".into(),
        filter: Some(Filter::must([Condition::matches(
            "city",
            "London".into(),
        )])),
        params: Some(SearchParams {
            hnsw_ef: Some(128),
            exact: Some(false),
            ..Default::default()
        }),
        vector: vec![0.2, 0.1, 0.9, 0.7],
        limit: 3,
        read_consistency: Some(ReadConsistency {
            value: Some(Value::Type(ReadConsistencyType::Majority.into())),
        }),
        ..Default::default()
    })
    .await?;
```

```java
import java.util.List;

import static
io.qdrant.client.ConditionFactory.matchKeyword;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ReadConsistency;
import io.qdrant.client.grpc.Points.ReadConsistencyType;
import io.qdrant.client.grpc.Points.SearchParams;
import io.qdrant.client.grpc.Points.SearchPoints;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .searchAsync(
        SearchPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .setFilter(Filter.newBuilder().addMust(matchKeyword("city", "London")).build())
            .setParams(SearchParams.newBuilder().setHnswEf(128).setExact(false).build())
            .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
            .setLimit(3)
            .setReadConsistency(
                ReadConsistency.newBuilder().setType(ReadConsistencyType.Majority).build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;

var client = new QdrantClient("localhost", 6334);

await client.SearchAsync(
	collectionName: "{collection_name}",
	vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
	filter: MatchKeyword("city", "London"),
	searchParams: new SearchParams { HnswEf = 128, Exact = false },
	limit: 3,
	readConsistency: new ReadConsistency { Type = ReadConsistencyType.Majority }
);
```

### Write ordering

Write `ordering` can be specified for any write request to serialize it through a single "leader" node, which ensures that all write operations (issued with the same `ordering`) are performed and observed sequentially.

- `weak` _(default)_ ordering does not provide any additional guarantees, so write operations can be freely reordered.
- `medium` ordering serializes all write operations through a dynamically elected leader, which might cause minor inconsistencies in case of leader change.
- `strong` ordering serializes all write operations through the permanent leader, which provides strong consistency, but write operations may be unavailable if the leader is down. ```http PUT /collections/{collection_name}/points?ordering=strong { "batch": { "ids": [1, 2, 3], "payloads": [ {"color": "red"}, {"color": "green"}, {"color": "blue"} ], "vectors": [ [0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9] ] } } ``` ```python client.upsert( collection_name="{collection_name}", points=models.Batch( ids=[1, 2, 3], payloads=[ {"color": "red"}, {"color": "green"}, {"color": "blue"}, ], vectors=[ [0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9], ], ), ordering="strong", ) ``` ```typescript client.upsert("{collection_name}", { batch: { ids: [1, 2, 3], payloads: [{ color: "red" }, { color: "green" }, { color: "blue" }], vectors: [ [0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9], ], }, ordering: "strong", }); ``` ```rust use qdrant_client::qdrant::{PointStruct, WriteOrdering, WriteOrderingType}; use serde_json::json; client .upsert_points_blocking( "{collection_name}", None, vec![ PointStruct::new( 1, vec![0.9, 0.1, 0.1], json!({ "color": "red" }) .try_into() .unwrap(), ), PointStruct::new( 2, vec![0.1, 0.9, 0.1], json!({ "color": "green" }) .try_into() .unwrap(), ), PointStruct::new( 3, vec![0.1, 0.1, 0.9], json!({ "color": "blue" }) .try_into() .unwrap(), ), ], Some(WriteOrdering { r#type: WriteOrderingType::Strong.into(), }), ) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; import static io.qdrant.client.VectorsFactory.vectors; import io.qdrant.client.grpc.Points.PointStruct; import io.qdrant.client.grpc.Points.UpsertPoints; import io.qdrant.client.grpc.Points.WriteOrdering; import io.qdrant.client.grpc.Points.WriteOrderingType; client .upsertAsync( UpsertPoints.newBuilder() .setCollectionName("{collection_name}") .addAllPoints( List.of( 
PointStruct.newBuilder()
                        .setId(id(1))
                        .setVectors(vectors(0.9f, 0.1f, 0.1f))
                        .putAllPayload(Map.of("color", value("red")))
                        .build(),
                    PointStruct.newBuilder()
                        .setId(id(2))
                        .setVectors(vectors(0.1f, 0.9f, 0.1f))
                        .putAllPayload(Map.of("color", value("green")))
                        .build(),
                    PointStruct.newBuilder()
                        .setId(id(3))
                        .setVectors(vectors(0.1f, 0.1f, 0.9f))
                        .putAllPayload(Map.of("color", value("blue")))
                        .build()))
            .setOrdering(WriteOrdering.newBuilder().setType(WriteOrderingType.Strong).build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.UpsertAsync(
	collectionName: "{collection_name}",
	points: new List<PointStruct>
	{
		new() { Id = 1, Vectors = new[] { 0.9f, 0.1f, 0.1f }, Payload = { ["color"] = "red" } },
		new() { Id = 2, Vectors = new[] { 0.1f, 0.9f, 0.1f }, Payload = { ["color"] = "green" } },
		new() { Id = 3, Vectors = new[] { 0.1f, 0.1f, 0.9f }, Payload = { ["color"] = "blue" } }
	},
	ordering: WriteOrderingType.Strong
);
```

## Listener mode

<aside role="alert">This is an experimental feature, its behavior may change in the future.</aside>

In some cases it might be useful to have a Qdrant node that only accumulates data and does not participate in search operations. There are several scenarios where this can be useful:

- A listener node can be used to store data on a separate node, for backup purposes or for long-term retention.
- A listener node can be used to synchronize data into another region, while still performing search operations in the local region.

To enable listener mode, set `node_type` to `Listener` in the config file:

```yaml
storage:
  node_type: "Listener"
```

The listener node will not participate in search operations, but will still accept write operations and store the data in local storage. All shards stored on the listener node will be converted to the `Listener` state.
Additionally, all write requests sent to the listener node will be processed with the `wait=false` option, which means that write operations will be considered successful once they are written to the WAL. This mechanism should help minimize upsert latency in case of parallel snapshotting.

## Consensus Checkpointing

Consensus checkpointing is a technique used in Raft to improve performance and simplify log management by periodically creating a consistent snapshot of the system state. This snapshot represents a point in time where all nodes in the cluster have reached agreement on the state, and it can be used to truncate the log, reducing the amount of data that needs to be stored and transferred between nodes.

For example, if you attach a new node to the cluster, it should replay all the log entries to catch up with the current state. In long-running clusters, this can take a long time, and the log can grow very large. To prevent this, you can use a special checkpointing mechanism that will truncate the log and create a snapshot of the current state.

To use this feature, simply call the `/cluster/recover` API on the required node:

```http
POST /cluster/recover
```

This API can be triggered on any non-leader node; it will send a request to the current consensus leader to create a snapshot. The leader will in turn send the snapshot back to the requesting node for application.

In some cases, this API can be used to recover from an inconsistent cluster state by forcing a snapshot creation.
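As a sketch, the `/cluster/recover` call shown above could be issued from Python using only the standard library (the helper name is ours; it assumes a node reachable at `base_url`):

```python
import urllib.request

def trigger_checkpoint(base_url: str, timeout: float = 5.0) -> int:
    """POST /cluster/recover on a (non-leader) node, asking the consensus
    leader to create a snapshot. Returns the HTTP status code."""
    req = urllib.request.Request(f"{base_url}/cluster/recover", method="POST")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status

# Example, assuming a node listens on the default REST port:
# trigger_checkpoint("http://localhost:6333")
```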
---
title: Installation
weight: 10
aliases:
  - ../install
  - ../installation
---

## Installation requirements

The following sections describe the requirements for deploying Qdrant.

### CPU and memory

The CPU and RAM that you need depend on:

- Number of vectors
- Vector dimensions
- [Payloads](/documentation/concepts/payload/) and their indexes
- Storage
- Replication
- How you configure quantization

Our [Cloud Pricing Calculator](https://cloud.qdrant.io/calculator) can help you estimate required resources without payload or index data.

### Storage

For persistent storage, Qdrant requires block-level access to storage devices with a [POSIX-compatible file system](https://www.quobyte.com/storage-explained/posix-filesystem/). Network systems such as [iSCSI](https://en.wikipedia.org/wiki/ISCSI) that provide block-level access are also acceptable. Qdrant won't work with [Network file systems](https://en.wikipedia.org/wiki/File_system#Network_file_systems) such as NFS, or [Object storage](https://en.wikipedia.org/wiki/Object_storage) systems such as S3.

If you offload vectors to a local disk, we recommend you use a solid-state (SSD or NVMe) drive.

### Networking

Each Qdrant instance requires three open ports:

* `6333` - For the HTTP API, for the [Monitoring](/documentation/guides/monitoring/) health and metrics endpoints
* `6334` - For the [gRPC](/documentation/interfaces/#grpc-interface) API
* `6335` - For [Distributed deployment](/documentation/guides/distributed_deployment/)

All Qdrant instances in a cluster must be able to:

- Communicate with each other over these ports
- Allow incoming connections to ports `6333` and `6334` from clients that use Qdrant.

## Installation options

Qdrant can be installed in different ways depending on your needs:

For production, you can use our Qdrant Cloud to run Qdrant either fully managed in our infrastructure or with Hybrid SaaS in yours.

For testing or development setups, you can run Qdrant in a container or as a binary executable.
If you want to run Qdrant in your own infrastructure, without any cloud connection, we recommend installing Qdrant in a Kubernetes cluster with our Helm chart, or using our Qdrant Enterprise Operator. ## Production For production, we recommend that you configure Qdrant in the cloud, with Kubernetes, or with a Qdrant Enterprise Operator. ### Qdrant Cloud You can set up production with [Qdrant Cloud](https://qdrant.to/cloud), which provides fully managed Qdrant databases. It offers horizontal and vertical scaling, one-click installation and upgrades, monitoring, logging, as well as backup and disaster recovery. For more information, see the [Qdrant Cloud documentation](/documentation/cloud). ### Kubernetes You can use a ready-made [Helm Chart](https://helm.sh/docs/) to run Qdrant in your Kubernetes cluster: ```bash helm repo add qdrant https://qdrant.to/helm helm install qdrant qdrant/qdrant ``` For more information, see the [qdrant-helm](https://github.com/qdrant/qdrant-helm/tree/main/charts/qdrant) README. ### Qdrant Kubernetes Operator We provide a Qdrant Enterprise Operator for Kubernetes installations. For more information, [use this form](https://qdrant.to/contact-us) to contact us. ### Docker and Docker Compose Usually, we recommend running Qdrant in Kubernetes or using Qdrant Cloud for production setups. This makes setting up highly available and scalable Qdrant clusters with backups and disaster recovery a lot easier. However, you can also use Docker and Docker Compose to run Qdrant in production, by following the setup instructions in the [Docker](#docker) and [Docker Compose](#docker-compose) Development sections. 
In addition, you have to make sure: * To use a performant [persistent storage](#storage) for your data * To configure the [security settings](/documentation/guides/security/) for your deployment * To set up and configure Qdrant on multiple nodes for a highly available [distributed deployment](/documentation/guides/distributed_deployment/) * To set up a load balancer for your Qdrant cluster * To create a [backup and disaster recovery strategy](/documentation/concepts/snapshots/) for your data * To integrate Qdrant with your [monitoring](/documentation/guides/monitoring/) and logging solutions ## Development For development and testing, we recommend that you set up Qdrant in Docker. We also have different client libraries. ### Docker The easiest way to start using Qdrant for testing or development is to run the Qdrant container image. The latest versions are always available on [DockerHub](https://hub.docker.com/r/qdrant/qdrant/tags?page=1&ordering=last_updated). Make sure that [Docker](https://docs.docker.com/engine/install/), [Podman](https://podman.io/docs/installation) or the container runtime of your choice is installed and running. The following instructions use Docker. Pull the image: ```bash docker pull qdrant/qdrant ``` In the following command, revise `$(pwd)/path/to/data` for your Docker configuration. Then use the updated command to run the container: ```bash docker run -p 6333:6333 \ -v $(pwd)/path/to/data:/qdrant/storage \ qdrant/qdrant ``` With this command, you start a Qdrant instance with the default configuration. It stores all data in the `./path/to/data` directory. By default, Qdrant uses port 6333, so at [localhost:6333](http://localhost:6333) you should see the welcome message. 
To change the Qdrant configuration, you can overwrite the production configuration: ```bash docker run -p 6333:6333 \ -v $(pwd)/path/to/data:/qdrant/storage \ -v $(pwd)/path/to/custom_config.yaml:/qdrant/config/production.yaml \ qdrant/qdrant ``` Alternatively, you can use your own `custom_config.yaml` configuration file: ```bash docker run -p 6333:6333 \ -v $(pwd)/path/to/data:/qdrant/storage \ -v $(pwd)/path/to/custom_config.yaml:/qdrant/config/custom_config.yaml \ qdrant/qdrant \ ./qdrant --config-path config/custom_config.yaml ``` For more information, see the [Configuration](/documentation/guides/configuration/) documentation. ### Docker Compose You can also use [Docker Compose](https://docs.docker.com/compose/) to run Qdrant. Here is an example compose file for a single-node Qdrant cluster: ```yaml services: qdrant: image: qdrant/qdrant:latest restart: always container_name: qdrant ports: - 6333:6333 - 6334:6334 expose: - 6333 - 6334 - 6335 configs: - source: qdrant_config target: /qdrant/config/production.yaml volumes: - ./qdrant_data:/qdrant/storage configs: qdrant_config: content: | log_level: INFO ``` <aside role="status">Providing the inline <code>content</code> in the <a href="https://docs.docker.com/compose/compose-file/08-configs/">configs top-level element</a> requires <a href="https://docs.docker.com/compose/release-notes/#2231">Docker Compose v2.23.1</a> or above. This functionality is supported starting <a href="https://docs.docker.com/engine/release-notes/25.0/#2500">Docker Engine v25.0.0</a> and <a href="https://docs.docker.com/desktop/release-notes/#4260">Docker Desktop v4.26.0</a> onwards.</aside> ### From source Qdrant is written in Rust and can be compiled into a binary executable. This installation method can be helpful if you want to compile Qdrant for a specific processor architecture or if you do not want to use Docker. 
Before compiling, make sure that the necessary libraries and the [Rust toolchain](https://www.rust-lang.org/tools/install) are installed. The current list of required libraries can be found in the [Dockerfile](https://github.com/qdrant/qdrant/blob/master/Dockerfile). Build Qdrant with Cargo: ```bash cargo build --release --bin qdrant ``` After a successful build, you can find the binary at `./target/release/qdrant`. ## Client libraries In addition to the service, Qdrant provides a variety of client libraries for different programming languages. For a full list, see our [Client libraries](../../interfaces/#client-libraries) documentation.
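The CPU-and-memory factors listed at the top of this guide can be turned into a rough back-of-the-envelope estimate. A commonly cited rule of thumb is `memory = number_of_vectors * vector_dimension * 4 bytes * 1.5`, where the 1.5 multiplier accounts for index overhead; the sketch below (plain Python, with hypothetical helper names, and ignoring payloads and payload indexes) applies it:

```python
def estimate_memory_bytes(num_vectors, dim, bytes_per_component=4, overhead=1.5):
    """Rough RAM estimate for storing float32 vectors.

    Uses the rule of thumb: vectors * dim * 4 bytes * 1.5, where the
    1.5 factor covers index overhead. Payloads and payload indexes
    require additional memory on top of this estimate.
    """
    return int(num_vectors * dim * bytes_per_component * overhead)

# Example: 1 million vectors of 768 dimensions -> roughly 4.3 GiB
gib = estimate_memory_bytes(1_000_000, 768) / 2**30
```

For replicated deployments, multiply the result by the replication factor; quantization (covered in the Quantization guide) can reduce the vector portion of this footprint by 4x to 32x.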
--- title: Quantization weight: 120 aliases: - ../quantization --- # Quantization Quantization is an optional feature in Qdrant that enables efficient storage and search of high-dimensional vectors. By transforming original vectors into new representations, quantization compresses data while preserving close to original relative distances between vectors. Different quantization methods have different mechanics and tradeoffs. We will cover them in this section. Quantization is primarily used to reduce the memory footprint and accelerate the search process in high-dimensional vector spaces. In the context of Qdrant, quantization allows you to optimize the search engine for specific use cases, striking a balance between accuracy, storage efficiency, and search speed. There are tradeoffs associated with quantization. On the one hand, quantization allows for significant reductions in storage requirements and faster search times. This can be particularly beneficial in large-scale applications where minimizing the use of resources is a top priority. On the other hand, quantization introduces an approximation error, which can lead to a slight decrease in search quality. The level of this tradeoff depends on the quantization method and its parameters, as well as the characteristics of the data. ## Scalar Quantization *Available as of v1.1.0* Scalar quantization, in the context of vector search engines, is a compression technique that reduces the number of bits used to represent each vector component. For instance, Qdrant uses 32-bit floating-point numbers to represent the original vector components. Scalar quantization allows you to reduce the number of bits used to 8. In other words, Qdrant performs `float32 -> uint8` conversion for each vector component. Effectively, this means that the amount of memory required to store a vector is reduced by a factor of 4. 
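As an illustration, the `float32 -> uint8` conversion can be sketched in a few lines of plain Python. This is a simplified model with hypothetical helper names, not Qdrant's actual implementation:

```python
def scalar_quantize(vector, quantile=1.0):
    """Map float components to uint8 codes (0..255).

    A quantile below 1.0 clips the most extreme components before
    computing the quantization bounds, mirroring the `quantile`
    parameter described later on this page.
    """
    values = sorted(vector)
    # Number of components to clip from each tail.
    cut = int((len(values) - 1) * (1.0 - quantile) / 2)
    lo, hi = values[cut], values[len(values) - 1 - cut]
    step = (hi - lo) / 255 or 1.0  # avoid zero step for constant vectors
    return [max(0, min(255, round((x - lo) / step))) for x in vector], lo, step

def dequantize(codes, lo, step):
    # Reconstruct approximate float values from the one-byte codes.
    return [lo + c * step for c in codes]

vector = [0.12, -0.50, 0.33, 0.99, -0.07, 0.41]
codes, lo, step = scalar_quantize(vector)
approx = dequantize(codes, lo, step)
# Each code fits in one byte instead of four -> 4x smaller storage,
# and the reconstruction error stays within one quantization step.
assert all(abs(a, ) < step for a in []) or True
assert all(abs(a - b) < step for a, b in zip(vector, approx))
```

The price of storing one byte per component is the reconstruction error visible in `approx`, which is the accuracy loss discussed below.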
In addition to reducing the memory footprint, scalar quantization also speeds up the search process. Qdrant uses a special SIMD CPU instruction to perform fast vector comparison. This instruction works with 8-bit integers, so the conversion to `uint8` allows Qdrant to perform the comparison faster. The main drawback of scalar quantization is the loss of accuracy. The `float32 -> uint8` conversion introduces an error that can lead to a slight decrease in search quality. However, this error is usually negligible, and tends to be less significant for high-dimensional vectors. In our experiments, we found that the error introduced by scalar quantization is usually less than 1%. However, this value depends on the data and the quantization parameters. Please refer to the [Quantization Tips](#quantization-tips) section for more information on how to optimize the quantization parameters for your use case. ## Binary Quantization *Available as of v1.5.0* Binary quantization is an extreme case of scalar quantization. This feature lets you represent each vector component as a single bit, effectively reducing the memory footprint by a **factor of 32**. This is the fastest quantization method, since it lets you perform a vector comparison with a few CPU instructions. Binary quantization can achieve up to a **40x** speedup compared to the original vectors. However, binary quantization is only efficient for high-dimensional vectors and requires a centered distribution of vector components. 
At the moment, binary quantization shows good accuracy results with the following models: - OpenAI `text-embedding-ada-002` - 1536d tested with [dbpedia dataset](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) achieving 0.98 recall@100 with 4x oversampling - Cohere AI `embed-english-v2.0` - 4096d tested on [wikipedia embeddings](https://huggingface.co/datasets/nreimers/wikipedia-22-12-large/tree/main) - 0.98 recall@50 with 2x oversampling Models with a lower dimensionality or a different distribution of vector components may require additional experiments to find the optimal quantization parameters. We recommend using binary quantization only with rescoring enabled, as it can significantly improve the search quality with just a minor performance impact. Additionally, oversampling can be used to tune the tradeoff between search speed and search quality at query time. ### Binary Quantization as Hamming Distance The additional benefit of this method is that you can efficiently emulate Hamming distance with dot product. Specifically, if the original vectors contain `{-1, 1}` as possible values, then the dot product of two such vectors corresponds to the Hamming distance between their binary counterparts, obtained by replacing `-1` with `0`. <!-- hidden section --> <details> <summary><b>Sample truth table</b></summary> | Vector 1 | Vector 2 | Dot product | |----------|----------|-------------| | 1 | 1 | 1 | | 1 | -1 | -1 | | -1 | 1 | -1 | | -1 | -1 | 1 | | Vector 1 | Vector 2 | Hamming distance | |----------|----------|------------------| | 1 | 1 | 0 | | 1 | 0 | 1 | | 0 | 1 | 1 | | 0 | 0 | 0 | </details> As you can see, the two measures are related by a fixed linear transformation: for `d`-dimensional vectors, `dot = d - 2 * hamming`. Ranking by one is therefore equivalent to ranking by the other, which makes similarity search equivalent. Binary quantization makes it efficient to compare vectors using this representation. 
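The binarization step and the `dot = d - 2 * hamming` relationship can be checked with a short sketch (plain Python, illustrative only; the sign-based binarization rule is one simple choice for a centered distribution, not necessarily what Qdrant does internally):

```python
def binarize(vector):
    # One bit per component: positive -> 1, non-positive -> 0.
    return [1 if x > 0 else 0 for x in vector]

def hamming(a, b):
    # Number of positions where the bit vectors differ.
    return sum(x != y for x, y in zip(a, b))

def dot_pm1(a, b):
    # Dot product over the {-1, 1} representation of the same bits.
    return sum((2 * x - 1) * (2 * y - 1) for x, y in zip(a, b))

u = binarize([0.3, -1.2, 0.7, -0.4])   # -> [1, 0, 1, 0]
v = binarize([0.1, 0.9, -0.5, -0.2])   # -> [1, 1, 0, 0]
d = len(u)
# Ranking by dot product equals ranking by (negative) Hamming distance.
assert dot_pm1(u, v) == d - 2 * hamming(u, v)
```

Because both scores induce the same ordering, nearest-neighbor results are identical whichever one is computed, and the bitwise form is much cheaper.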
## Product Quantization *Available as of v1.2.0* Product quantization is a method of compressing vectors to minimize their memory usage by dividing them into chunks and quantizing each segment individually. Each chunk is approximated by a centroid index that represents the original vector components. The centroid positions are determined using a clustering algorithm such as k-means. For now, Qdrant uses only 256 centroids, so each centroid index can be represented by a single byte. Product quantization can achieve a higher compression factor than scalar quantization. But there are some tradeoffs. Product quantization distance calculations are not SIMD-friendly, so it is slower than scalar quantization. Also, product quantization has a loss of accuracy, so it is recommended to use it only for high-dimensional vectors. Please refer to the [Quantization Tips](#quantization-tips) section for more information on how to optimize the quantization parameters for your use case. ## How to choose the right quantization method Here is a brief table of the pros and cons of each quantization method: | Quantization method | Accuracy | Speed | Compression | |---------------------|----------|--------------|-------------| | Scalar | 0.99 | up to 2x | 4x | | Product | 0.7 | 0.5x | up to 64x | | Binary | 0.95* | up to 40x | 32x | `*` - for compatible models * **Binary Quantization** is the fastest method and the most memory-efficient, but it requires a centered distribution of vector components. It is recommended to use with tested models only. * **Scalar Quantization** is the most universal method, as it provides a good balance between accuracy, speed, and compression. It is recommended as default quantization if binary quantization is not applicable. * **Product Quantization** may provide a better compression ratio, but it has a significant loss of accuracy and is slower than scalar quantization. 
It is recommended if the memory footprint is the top priority and the search speed is not critical. ## Setting up Quantization in Qdrant You can configure quantization for a collection by specifying the quantization parameters in the `quantization_config` section of the collection configuration. Quantization will be automatically applied to all vectors during the indexation process. Quantized vectors are stored alongside the original vectors in the collection, so you will still have access to the original vectors if you need them. *Available as of v1.1.1* The `quantization_config` can also be set on a per vector basis by specifying it in a named vector. ### Setting up Scalar Quantization To enable scalar quantization, you need to specify the quantization parameters in the `quantization_config` section of the collection configuration. ```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine" }, "quantization_config": { "scalar": { "type": "int8", "quantile": 0.99, "always_ram": true } } } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), quantization_config=models.ScalarQuantization( scalar=models.ScalarQuantizationConfig( type=models.ScalarType.INT8, quantile=0.99, always_ram=True, ), ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", }, quantization_config: { scalar: { type: "int8", quantile: 0.99, always_ram: true, }, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ quantization_config::Quantization, vectors_config::Config, CreateCollection, Distance, QuantizationConfig, QuantizationType, 
ScalarQuantization, VectorParams, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), ..Default::default() })), }), quantization_config: Some(QuantizationConfig { quantization: Some(Quantization::Scalar(ScalarQuantization { r#type: QuantizationType::Int8.into(), quantile: Some(0.99), always_ram: Some(true), })), }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.QuantizationConfig; import io.qdrant.client.grpc.Collections.QuantizationType; import io.qdrant.client.grpc.Collections.ScalarQuantization; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setQuantizationConfig( QuantizationConfig.newBuilder() .setScalar( ScalarQuantization.newBuilder() .setType(QuantizationType.Int8) .setQuantile(0.99f) .setAlwaysRam(true) .build()) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, quantizationConfig: new QuantizationConfig { Scalar = new 
ScalarQuantization { Type = QuantizationType.Int8, Quantile = 0.99f, AlwaysRam = true } } ); ``` There are 3 parameters that you can specify in the `quantization_config` section: `type` - the type of the quantized vector components. Currently, Qdrant supports only `int8`. `quantile` - the quantile of the quantized vector components. The quantile is used to calculate the quantization bounds. For instance, if you specify `0.99` as the quantile, 1% of extreme values will be excluded from the quantization bounds. Using quantiles lower than `1.0` might be useful if there are outliers in your vector components. This parameter only affects the resulting precision and not the memory footprint. It might be worth tuning this parameter if you experience a significant decrease in search quality. `always_ram` - whether to keep quantized vectors always cached in RAM or not. By default, quantized vectors are loaded in the same way as the original vectors. However, in some setups you might want to keep quantized vectors in RAM to speed up the search process. In this case, you can set `always_ram` to `true` to store quantized vectors in RAM. ### Setting up Binary Quantization To enable binary quantization, you need to specify the quantization parameters in the `quantization_config` section of the collection configuration. 
```http PUT /collections/{collection_name} { "vectors": { "size": 1536, "distance": "Cosine" }, "quantization_config": { "binary": { "always_ram": true } } } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=1536, distance=models.Distance.COSINE), quantization_config=models.BinaryQuantization( binary=models.BinaryQuantizationConfig( always_ram=True, ), ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 1536, distance: "Cosine", }, quantization_config: { binary: { always_ram: true, }, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ quantization_config::Quantization, vectors_config::Config, BinaryQuantization, CreateCollection, Distance, QuantizationConfig, VectorParams, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 1536, distance: Distance::Cosine.into(), ..Default::default() })), }), quantization_config: Some(QuantizationConfig { quantization: Some(Quantization::Binary(BinaryQuantization { always_ram: Some(true), })), }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.BinaryQuantization; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.QuantizationConfig; import io.qdrant.client.grpc.Collections.VectorParams; import 
io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(1536) .setDistance(Distance.Cosine) .build()) .build()) .setQuantizationConfig( QuantizationConfig.newBuilder() .setBinary(BinaryQuantization.newBuilder().setAlwaysRam(true).build()) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 1536, Distance = Distance.Cosine }, quantizationConfig: new QuantizationConfig { Binary = new BinaryQuantization { AlwaysRam = true } } ); ``` `always_ram` - whether to keep quantized vectors always cached in RAM or not. By default, quantized vectors are loaded in the same way as the original vectors. However, in some setups you might want to keep quantized vectors in RAM to speed up the search process. In this case, you can set `always_ram` to `true` to store quantized vectors in RAM. ### Setting up Product Quantization To enable product quantization, you need to specify the quantization parameters in the `quantization_config` section of the collection configuration. 
```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine" }, "quantization_config": { "product": { "compression": "x16", "always_ram": true } } } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), quantization_config=models.ProductQuantization( product=models.ProductQuantizationConfig( compression=models.CompressionRatio.X16, always_ram=True, ), ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", }, quantization_config: { product: { compression: "x16", always_ram: true, }, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ quantization_config::Quantization, vectors_config::Config, CompressionRatio, CreateCollection, Distance, ProductQuantization, QuantizationConfig, VectorParams, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), ..Default::default() })), }), quantization_config: Some(QuantizationConfig { quantization: Some(Quantization::Product(ProductQuantization { compression: CompressionRatio::X16.into(), always_ram: Some(true), })), }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CompressionRatio; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import 
io.qdrant.client.grpc.Collections.ProductQuantization; import io.qdrant.client.grpc.Collections.QuantizationConfig; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setQuantizationConfig( QuantizationConfig.newBuilder() .setProduct( ProductQuantization.newBuilder() .setCompression(CompressionRatio.x16) .setAlwaysRam(true) .build()) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, quantizationConfig: new QuantizationConfig { Product = new ProductQuantization { Compression = CompressionRatio.X16, AlwaysRam = true } } ); ``` There are two parameters that you can specify in the `quantization_config` section: `compression` - compression ratio. Compression ratio represents the size of the quantized vector in bytes divided by the size of the original vector in bytes. In this case, the quantized vector will be 16 times smaller than the original vector. `always_ram` - whether to keep quantized vectors always cached in RAM or not. By default, quantized vectors are loaded in the same way as the original vectors. However, in some setups you might want to keep quantized vectors in RAM to speed up the search process. Then set `always_ram` to `true`. ### Searching with Quantization Once you have configured quantization for a collection, you don't need to do anything extra to search with quantization. 
Qdrant will automatically use quantized vectors if they are available. However, there are a few options that you can use to control the search process: ```http POST /collections/{collection_name}/points/search { "params": { "quantization": { "ignore": false, "rescore": true, "oversampling": 2.0 } }, "vector": [0.2, 0.1, 0.9, 0.7], "limit": 10 } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.search( collection_name="{collection_name}", query_vector=[0.2, 0.1, 0.9, 0.7], search_params=models.SearchParams( quantization=models.QuantizationSearchParams( ignore=False, rescore=True, oversampling=2.0, ) ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.search("{collection_name}", { vector: [0.2, 0.1, 0.9, 0.7], params: { quantization: { ignore: false, rescore: true, oversampling: 2.0, }, }, limit: 10, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{QuantizationSearchParams, SearchParams, SearchPoints}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .search_points(&SearchPoints { collection_name: "{collection_name}".to_string(), vector: vec![0.2, 0.1, 0.9, 0.7], params: Some(SearchParams { quantization: Some(QuantizationSearchParams { ignore: Some(false), rescore: Some(true), oversampling: Some(2.0), ..Default::default() }), ..Default::default() }), limit: 10, ..Default::default() }) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QuantizationSearchParams; import io.qdrant.client.grpc.Points.SearchParams; import io.qdrant.client.grpc.Points.SearchPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .searchAsync( SearchPoints.newBuilder() 
.setCollectionName("{collection_name}") .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f)) .setParams( SearchParams.newBuilder() .setQuantization( QuantizationSearchParams.newBuilder() .setIgnore(false) .setRescore(true) .setOversampling(2.0) .build()) .build()) .setLimit(10) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.SearchAsync( collectionName: "{collection_name}", vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, searchParams: new SearchParams { Quantization = new QuantizationSearchParams { Ignore = false, Rescore = true, Oversampling = 2.0 } }, limit: 10 ); ``` `ignore` - Toggle whether to ignore quantized vectors during the search process. By default, Qdrant will use quantized vectors if they are available. `rescore` - Having the original vectors available, Qdrant can re-evaluate top-k search results using the original vectors. This can improve the search quality, but may slightly decrease the search speed, compared to the search without rescore. It is recommended to disable rescore only if the original vectors are stored on a slow storage (e.g. HDD or network storage). By default, rescore is enabled. **Available as of v1.3.0** `oversampling` - Defines how many extra vectors should be pre-selected using quantized index, and then re-scored using original vectors. For example, if oversampling is 2.4 and limit is 100, then 240 vectors will be pre-selected using quantized index, and then top-100 will be returned after re-scoring. Oversampling is useful if you want to tune the tradeoff between search speed and search quality in the query time. ## Quantization tips #### Accuracy tuning In this section, we will discuss how to tune the search precision. The fastest way to understand the impact of quantization on the search quality is to compare the search results with and without quantization. 
In order to disable quantization, you can set `ignore` to `true` in the search request: ```http POST /collections/{collection_name}/points/search { "params": { "quantization": { "ignore": true } }, "vector": [0.2, 0.1, 0.9, 0.7], "limit": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.search( collection_name="{collection_name}", query_vector=[0.2, 0.1, 0.9, 0.7], search_params=models.SearchParams( quantization=models.QuantizationSearchParams( ignore=True, ) ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.search("{collection_name}", { vector: [0.2, 0.1, 0.9, 0.7], params: { quantization: { ignore: true, }, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{QuantizationSearchParams, SearchParams, SearchPoints}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .search_points(&SearchPoints { collection_name: "{collection_name}".to_string(), vector: vec![0.2, 0.1, 0.9, 0.7], params: Some(SearchParams { quantization: Some(QuantizationSearchParams { ignore: Some(true), ..Default::default() }), ..Default::default() }), limit: 3, ..Default::default() }) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QuantizationSearchParams; import io.qdrant.client.grpc.Points.SearchParams; import io.qdrant.client.grpc.Points.SearchPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .searchAsync( SearchPoints.newBuilder() .setCollectionName("{collection_name}") .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f)) .setParams( SearchParams.newBuilder() .setQuantization( QuantizationSearchParams.newBuilder().setIgnore(true).build()) .build()) .setLimit(10) .build()) .get(); ``` ```csharp using 
Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.SearchAsync( collectionName: "{collection_name}", vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, searchParams: new SearchParams { Quantization = new QuantizationSearchParams { Ignore = true } }, limit: 10 ); ``` - **Adjust the quantile parameter**: The quantile parameter in scalar quantization determines the quantization bounds. By setting it to a value lower than 1.0, you can exclude extreme values (outliers) from the quantization bounds. For example, if you set the quantile to 0.99, 1% of the extreme values will be excluded. By adjusting the quantile, you can find the value that provides the best search quality for your collection. - **Enable rescore**: Having the original vectors available, Qdrant can re-evaluate top-k search results using the original vectors. On large collections, this can improve the search quality, with just minor performance impact. #### Memory and speed tuning In this section, we will discuss how to tune the memory and speed of the search process with quantization. There are three possible modes for storing vectors within a Qdrant collection: - **All in RAM** - all vectors, original and quantized, are loaded and kept in RAM. This is the fastest mode, but requires a lot of RAM. Enabled by default. - **Original on Disk, quantized in RAM** - a hybrid mode that provides a good balance between speed and memory usage. This is the recommended scenario if you aim to shrink the memory footprint while keeping the search speed high. 
This mode is enabled by setting `always_ram` to `true` in the quantization config while using memmap storage: ```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine" }, "optimizers_config": { "memmap_threshold": 20000 }, "quantization_config": { "scalar": { "type": "int8", "always_ram": true } } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000), quantization_config=models.ScalarQuantization( scalar=models.ScalarQuantizationConfig( type=models.ScalarType.INT8, always_ram=True, ), ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", }, optimizers_config: { memmap_threshold: 20000, }, quantization_config: { scalar: { type: "int8", always_ram: true, }, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ quantization_config::Quantization, vectors_config::Config, CreateCollection, Distance, OptimizersConfigDiff, QuantizationConfig, QuantizationType, ScalarQuantization, VectorParams, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), ..Default::default() })), }), optimizers_config: Some(OptimizersConfigDiff { memmap_threshold: Some(20000), ..Default::default() }), quantization_config: Some(QuantizationConfig { quantization: Some(Quantization::Scalar(ScalarQuantization { r#type: QuantizationType::Int8.into(), 
always_ram: Some(true), ..Default::default() })), }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.QuantizationConfig; import io.qdrant.client.grpc.Collections.QuantizationType; import io.qdrant.client.grpc.Collections.ScalarQuantization; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build()) .setQuantizationConfig( QuantizationConfig.newBuilder() .setScalar( ScalarQuantization.newBuilder() .setType(QuantizationType.Int8) .setAlwaysRam(true) .build()) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 }, quantizationConfig: new QuantizationConfig { Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true } } ); ``` In this scenario, the number of disk reads may play a significant role in the search speed. In a system with high disk latency, the re-scoring step may become a bottleneck. 
Consider disabling `rescore` to improve the search speed: ```http POST /collections/{collection_name}/points/search { "params": { "quantization": { "rescore": false } }, "vector": [0.2, 0.1, 0.9, 0.7], "limit": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.search( collection_name="{collection_name}", query_vector=[0.2, 0.1, 0.9, 0.7], search_params=models.SearchParams( quantization=models.QuantizationSearchParams(rescore=False) ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.search("{collection_name}", { vector: [0.2, 0.1, 0.9, 0.7], params: { quantization: { rescore: false, }, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{QuantizationSearchParams, SearchParams, SearchPoints}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .search_points(&SearchPoints { collection_name: "{collection_name}".to_string(), vector: vec![0.2, 0.1, 0.9, 0.7], params: Some(SearchParams { quantization: Some(QuantizationSearchParams { rescore: Some(false), ..Default::default() }), ..Default::default() }), limit: 3, ..Default::default() }) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QuantizationSearchParams; import io.qdrant.client.grpc.Points.SearchParams; import io.qdrant.client.grpc.Points.SearchPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .searchAsync( SearchPoints.newBuilder() .setCollectionName("{collection_name}") .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f)) .setParams( SearchParams.newBuilder() .setQuantization( QuantizationSearchParams.newBuilder().setRescore(false).build()) .build()) .setLimit(3) .build()) .get(); ``` ```csharp using Qdrant.Client; using 
Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.SearchAsync(
    collectionName: "{collection_name}",
    vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
    searchParams: new SearchParams
    {
        Quantization = new QuantizationSearchParams { Rescore = false }
    },
    limit: 3
);
```

- **All on Disk** - all vectors, original and quantized, are stored on disk. This mode achieves the smallest memory footprint, but at the cost of search speed.

It is recommended to use this mode if you have a large collection and fast storage (e.g. SSD or NVMe).

This mode is enabled by setting `always_ram` to `false` in the quantization config while using mmap storage:

```http
PUT /collections/{collection_name}
{
    "vectors": {
      "size": 768,
      "distance": "Cosine"
    },
    "optimizers_config": {
        "memmap_threshold": 20000
    },
    "quantization_config": {
        "scalar": {
            "type": "int8",
            "always_ram": false
        }
    }
}
```

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
    optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(
            type=models.ScalarType.INT8,
            always_ram=False,
        ),
    ),
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.createCollection("{collection_name}", {
  vectors: {
    size: 768,
    distance: "Cosine",
  },
  optimizers_config: {
    memmap_threshold: 20000,
  },
  quantization_config: {
    scalar: {
      type: "int8",
      always_ram: false,
    },
  },
});
```

```rust
use qdrant_client::{
    client::QdrantClient,
    qdrant::{
        quantization_config::Quantization, vectors_config::Config, CreateCollection, Distance,
        OptimizersConfigDiff, QuantizationConfig, QuantizationType, ScalarQuantization,
        VectorParams, VectorsConfig,
    },
};

let
client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), ..Default::default() })), }), optimizers_config: Some(OptimizersConfigDiff { memmap_threshold: Some(20000), ..Default::default() }), quantization_config: Some(QuantizationConfig { quantization: Some(Quantization::Scalar(ScalarQuantization { r#type: QuantizationType::Int8.into(), always_ram: Some(false), ..Default::default() })), }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.QuantizationConfig; import io.qdrant.client.grpc.Collections.QuantizationType; import io.qdrant.client.grpc.Collections.ScalarQuantization; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build()) .setQuantizationConfig( QuantizationConfig.newBuilder() .setScalar( ScalarQuantization.newBuilder() .setType(QuantizationType.Int8) .setAlwaysRam(false) .build()) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( 
collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 }, quantizationConfig: new QuantizationConfig { Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = false } } ); ```
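The memory modes above differ mainly in how many bytes per vector component have to stay in RAM. As a rough, illustrative sketch of the savings (assuming scalar int8 quantization stores one byte per component instead of a 4-byte float; the collection size and dimensionality below are made-up examples):

```python
# Back-of-the-envelope estimate of the memory saved by scalar int8
# quantization: each vector component shrinks from a 4-byte float32 to 1 byte.
# The collection figures here are illustrative assumptions, not measurements.

def vector_storage_bytes(num_vectors: int, dim: int, bytes_per_component: int) -> int:
    return num_vectors * dim * bytes_per_component

num_vectors = 1_000_000
dim = 768

original = vector_storage_bytes(num_vectors, dim, 4)   # float32 vectors
quantized = vector_storage_bytes(num_vectors, dim, 1)  # int8 vectors

print(f"original:  {original / 1024**3:.2f} GiB")
print(f"quantized: {quantized / 1024**3:.2f} GiB")
print(f"ratio:     {original // quantized}x")
```

With `always_ram: true`, only the quantized copy competes for RAM, which is where the roughly 4x reduction for int8 comes from.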
documentation/guides/quantization.md
---
title: Monitoring
weight: 155
aliases:
  - ../monitoring
---

# Monitoring

Qdrant exposes its metrics in Prometheus format, so you can integrate them easily with compatible tools and monitor Qdrant with your own monitoring system. You can use the `/metrics` endpoint and configure it as a scrape target.

Metrics endpoint: <http://localhost:6333/metrics>

The integration with Qdrant is easy to [configure](https://prometheus.io/docs/prometheus/latest/getting_started/#configure-prometheus-to-monitor-the-sample-targets) with Prometheus and Grafana.

## Exposed metrics

Each Qdrant server will expose the following metrics.

| Name                                | Type    | Meaning                                            |
|-------------------------------------|---------|----------------------------------------------------|
| app_info                            | counter | Information about Qdrant server                    |
| app_status_recovery_mode            | counter | If Qdrant is currently started in recovery mode    |
| collections_total                   | gauge   | Number of collections                              |
| collections_vector_total            | gauge   | Total number of vectors in all collections         |
| collections_full_total              | gauge   | Number of full collections                         |
| collections_aggregated_total        | gauge   | Number of aggregated collections                   |
| rest_responses_total                | counter | Total number of responses through REST API         |
| rest_responses_fail_total           | counter | Total number of failed responses through REST API  |
| rest_responses_avg_duration_seconds | gauge   | Average response duration in REST API              |
| rest_responses_min_duration_seconds | gauge   | Minimum response duration in REST API              |
| rest_responses_max_duration_seconds | gauge   | Maximum response duration in REST API              |
| grpc_responses_total                | counter | Total number of responses through gRPC API         |
| grpc_responses_fail_total           | counter | Total number of failed responses through gRPC API  |
| grpc_responses_avg_duration_seconds | gauge   | Average response duration in gRPC API              |
| grpc_responses_min_duration_seconds | gauge   | Minimum response duration in gRPC API              |
| grpc_responses_max_duration_seconds | gauge   | Maximum response duration in gRPC API              |
| cluster_enabled                     | gauge   | Whether the cluster support is enabled             |

### Cluster related metrics

There are also some metrics which are exposed in distributed mode only.

| Name                             | Type    | Meaning                                                                 |
|----------------------------------|---------|-------------------------------------------------------------------------|
| cluster_peers_total              | gauge   | Total number of cluster peers                                           |
| cluster_term                     | counter | Current cluster term                                                    |
| cluster_commit                   | counter | Index of last committed (finalized) operation the cluster peer is aware of |
| cluster_pending_operations_total | gauge   | Total number of pending operations for the cluster peer                 |
| cluster_voter                    | gauge   | Whether the cluster peer is a voter or learner                          |

## Kubernetes health endpoints

*Available as of v1.5.0*

Qdrant exposes three endpoints, namely [`/healthz`](http://localhost:6333/healthz), [`/livez`](http://localhost:6333/livez) and [`/readyz`](http://localhost:6333/readyz), to indicate the current status of the Qdrant server.

These currently provide the most basic status response, returning HTTP 200 if Qdrant is started and ready to be used.

Regardless of whether an [API key](../security#authentication) is configured, the endpoints are always accessible.

You can read more about Kubernetes health endpoints [here](https://kubernetes.io/docs/reference/using-api/health-checks/).
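Because the `/metrics` endpoint returns the standard Prometheus text exposition format, the values are easy to consume even without a full Prometheus setup. A minimal sketch, using a hard-coded sample payload instead of a live instance (the metric values are made up):

```python
# Sketch: reading Qdrant's Prometheus-format metrics. The sample payload is
# illustrative; a real check would fetch http://localhost:6333/metrics.

SAMPLE_METRICS = """\
# HELP collections_total Number of collections
# TYPE collections_total gauge
collections_total 3
rest_responses_total 120
rest_responses_fail_total 2
"""

def parse_metrics(text: str) -> dict:
    """Parse simple `name value` lines, skipping comments and blanks."""
    metrics = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        name, value = line.rsplit(" ", 1)
        metrics[name] = float(value)
    return metrics

metrics = parse_metrics(SAMPLE_METRICS)
failure_rate = metrics["rest_responses_fail_total"] / metrics["rest_responses_total"]
print(f"collections: {metrics['collections_total']:.0f}, REST failure rate: {failure_rate:.1%}")
```

A real health dashboard would apply the same parsing to the live endpoint, or simply point a Prometheus scrape job at it.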
documentation/guides/monitoring.md
--- title: Guides weight: 22 # If the index.md file is empty, the link to the section will be hidden from the sidebar is_empty: true ---
documentation/guides/_index.md
---
title: Security
weight: 165
aliases:
  - ../security
---

# Security

Please read this page carefully. Although there are various ways to secure your Qdrant instances, **they are unsecured by default**. You need to enable security measures before production use. Otherwise, they are completely open to anyone.

## Authentication

*Available as of v1.2.0*

Qdrant supports a simple form of client authentication using a static API key. This can be used to secure your instance.

To enable API key based authentication in your own Qdrant instance, you must specify a key in the configuration:

```yaml
service:
  # Set an api-key.
  # If set, all requests must include a header with the api-key.
  # example header: `api-key: <API-KEY>`
  #
  # If you enable this you should also enable TLS.
  # (Either above or via an external service like nginx.)
  # Sending an api-key over an unencrypted channel is insecure.
  api_key: your_secret_api_key_here
```

Or alternatively, you can use the environment variable:

```bash
export QDRANT__SERVICE__API_KEY=your_secret_api_key_here
```

<aside role="alert"><a href="#tls">TLS</a> must be used to prevent leaking the API key over an unencrypted connection.</aside>

To use API key based authentication in Qdrant Cloud, see the cloud [Authentication](https://qdrant.tech/documentation/cloud/authentication) section.

The API key then needs to be present in all REST or gRPC requests to your instance. All official Qdrant clients for Python, Go, Rust, .NET and Java support the API key parameter.
<!--- Examples with clients --> ```bash curl \ -X GET https://localhost:6333 \ --header 'api-key: your_secret_api_key_here' ``` ```python from qdrant_client import QdrantClient client = QdrantClient( url="https://localhost", port=6333, api_key="your_secret_api_key_here", ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ url: "http://localhost", port: 6333, apiKey: "your_secret_api_key_here", }); ``` ```rust use qdrant_client::client::QdrantClient; let client = QdrantClient::from_url("https://xyz-example.eu-central.aws.cloud.qdrant.io:6334") .with_api_key("<paste-your-api-key-here>") .build()?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder( "xyz-example.eu-central.aws.cloud.qdrant.io", 6334, true) .withApiKey("<paste-your-api-key-here>") .build()); ``` ```csharp using Qdrant.Client; var client = new QdrantClient( host: "xyz-example.eu-central.aws.cloud.qdrant.io", https: true, apiKey: "<paste-your-api-key-here>" ); ``` <aside role="alert">Internal communication channels are <strong>never</strong> protected by an API key. Internal gRPC uses port 6335 by default if running in distributed mode. You must ensure that this port is not publicly reachable and can only be used for node communication. By default, this setting is disabled for Qdrant Cloud and the Qdrant Helm chart.</aside> ### Read-only API key *Available as of v1.7.0* In addition to the regular API key, Qdrant also supports a read-only API key. This key can be used to access read-only operations on the instance. ```yaml service: read_only_api_key: your_secret_read_only_api_key_here ``` Or with the environment variable: ```bash export QDRANT__SERVICE__READ_ONLY_API_KEY=your_secret_read_only_api_key_here ``` Both API keys can be used simultaneously. 
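If you are not using one of the official clients, the key is simply an HTTP header attached to every request. A minimal sketch with the Python standard library (the URL and key are placeholders, and the request is only constructed, not sent):

```python
# Sketch: attaching the api-key header to a raw REST request using only the
# standard library. The URL and key below are placeholders.
import urllib.request

def authed_request(url: str, api_key: str) -> urllib.request.Request:
    """Build a GET request carrying the `api-key` header Qdrant expects."""
    return urllib.request.Request(url, headers={"api-key": api_key})

req = authed_request("https://localhost:6333", "your_secret_api_key_here")
# urllib normalizes the header name's casing when storing it.
print(req.get_header("Api-key"))
```

The same header must accompany every REST call; for gRPC, the official clients attach the key as request metadata for you.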
## TLS *Available as of v1.2.0* TLS for encrypted connections can be enabled on your Qdrant instance to secure connections. <aside role="alert">Connections are unencrypted by default. This allows sniffing and <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack">MitM</a> attacks.</aside> First make sure you have a certificate and private key for TLS, usually in `.pem` format. On your local machine you may use [mkcert](https://github.com/FiloSottile/mkcert#readme) to generate a self signed certificate. To enable TLS, set the following properties in the Qdrant configuration with the correct paths and restart: ```yaml service: # Enable HTTPS for the REST and gRPC API enable_tls: true # TLS configuration. # Required if either service.enable_tls or cluster.p2p.enable_tls is true. tls: # Server certificate chain file cert: ./tls/cert.pem # Server private key file key: ./tls/key.pem ``` For internal communication when running cluster mode, TLS can be enabled with: ```yaml cluster: # Configuration of the inter-cluster communication p2p: # Use TLS for communication between peers enable_tls: true ``` With TLS enabled, you must start using HTTPS connections. For example: ```bash curl -X GET https://localhost:6333 ``` ```python from qdrant_client import QdrantClient client = QdrantClient( url="https://localhost", port=6333, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ url: "https://localhost", port: 6333 }); ``` ```rust use qdrant_client::client::QdrantClient; let client = QdrantClient::from_url("https://localhost:6334").build()?; ``` Certificate rotation is enabled with a default refresh time of one hour. This reloads certificate files every hour while Qdrant is running. This way changed certificates are picked up when they get updated externally. The refresh time can be tuned by changing the `tls.cert_ttl` setting. You can leave this on, even if you don't plan to update your certificates. 
Currently this is only supported for the REST API. Optionally, you can enable client certificate validation on the server against a local certificate authority. Set the following properties and restart: ```yaml service: # Check user HTTPS client certificate against CA file specified in tls config verify_https_client_certificate: false # TLS configuration. # Required if either service.enable_tls or cluster.p2p.enable_tls is true. tls: # Certificate authority certificate file. # This certificate will be used to validate the certificates # presented by other nodes during inter-cluster communication. # # If verify_https_client_certificate is true, it will verify # HTTPS client certificate # # Required if cluster.p2p.enable_tls is true. ca_cert: ./tls/cacert.pem ```
documentation/guides/security.md
---
title: Quickstart
weight: 10
aliases:
  - ../cloud-quick-start
  - cloud-quick-start
---

# Quickstart

This page shows you how to use the Qdrant Cloud Console to create a free tier cluster and then connect to it with a Qdrant client.

## Step 1: Create a Free Tier cluster

1. Start in the **Overview** section of the [Cloud Dashboard](https://cloud.qdrant.io).
2. Under **Set a Cluster Up** enter a **Cluster name**.
3. Click **Create Free Tier** and then **Continue**.
4. Under **Get an API Key**, select the cluster and click **Get API Key**.
5. Save the API key, as you won't be able to request it again. Click **Continue**.
6. Save the code snippet provided to access your cluster. Click **Complete** to finish setup.

![Embeddings](/docs/cloud/quickstart-cloud.png)

## Step 2: Test cluster access

After creation, you will receive a code snippet to access your cluster. Your generated request should look very similar to this one:

```bash
curl \
  -X GET 'https://xyz-example.eu-central.aws.cloud.qdrant.io:6333' \
  --header 'api-key: <paste-your-api-key-here>'
```

Open a terminal and run the request. You should get a response that looks like this:

```bash
{"title":"qdrant - vector search engine","version":"1.4.1"}
```

> **Note:** The API key needs to be present in the request header every time you make a request via the REST or gRPC interface.

## Step 3: Authenticate via SDK

Now that you have created your first cluster and key, you might want to access Qdrant Cloud from within your application. Our official Qdrant clients for Python, TypeScript, Go, Rust, and .NET all support the API key parameter.
```python from qdrant_client import QdrantClient qdrant_client = QdrantClient( "xyz-example.eu-central.aws.cloud.qdrant.io", api_key="<paste-your-api-key-here>", ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "xyz-example.eu-central.aws.cloud.qdrant.io", apiKey: "<paste-your-api-key-here>", }); ``` ```rust use qdrant_client::client::QdrantClient; let client = QdrantClient::from_url("xyz-example.eu-central.aws.cloud.qdrant.io:6334") .with_api_key("<paste-your-api-key-here>") .build() .unwrap(); ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder( "xyz-example.eu-central.aws.cloud.qdrant.io", 6334, true) .withApiKey("<paste-your-api-key-here>") .build()); ``` ```csharp using Qdrant.Client; var client = new QdrantClient( host: "xyz-example.eu-central.aws.cloud.qdrant.io", https: true, apiKey: "<paste-your-api-key-here>" ); ```
documentation/cloud/quickstart-cloud.md
---
title: Authentication
weight: 30
---

# Authentication

This page shows you how to use the Qdrant Cloud Console to create a custom API key for a cluster. You will learn how to connect to your cluster using the new API key.

## Create API keys

The API key is only shown once after creation. If you lose it, you will need to create a new one. However, we recommend rotating the keys from time to time.

To create additional API keys, do the following:

1. Go to the [Cloud Dashboard](https://qdrant.to/cloud).
2. Select **Access Management** to display available API keys.
3. Click **Create** and choose a cluster name from the dropdown menu.
   > **Note:** You can create a key that provides access to multiple clusters. Select the desired clusters in the dropdown box.
4. Click **OK** and retrieve your API key.

## Authenticate via SDK

Now that you have created your first cluster and key, you might want to access Qdrant Cloud from within your application. Our official Qdrant clients for Python, TypeScript, Go, Rust, .NET and Java all support the API key parameter.
```bash curl \ -X GET https://xyz-example.eu-central.aws.cloud.qdrant.io:6333 \ --header 'api-key: <provide-your-own-key>' # Alternatively, you can use the `Authorization` header with the `Bearer` prefix curl \ -X GET https://xyz-example.eu-central.aws.cloud.qdrant.io:6333 \ --header 'Authorization: Bearer <provide-your-own-key>' ``` ```python from qdrant_client import QdrantClient qdrant_client = QdrantClient( "xyz-example.eu-central.aws.cloud.qdrant.io", api_key="<paste-your-api-key-here>", ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "xyz-example.eu-central.aws.cloud.qdrant.io", apiKey: "<paste-your-api-key-here>", }); ``` ```rust use qdrant_client::client::QdrantClient; let client = QdrantClient::from_url("xyz-example.eu-central.aws.cloud.qdrant.io:6334") .with_api_key("<paste-your-api-key-here>") .build() .unwrap(); ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder( "xyz-example.eu-central.aws.cloud.qdrant.io", 6334, true) .withApiKey("<paste-your-api-key-here>") .build()); ``` ```csharp using Qdrant.Client; var client = new QdrantClient( host: "xyz-example.eu-central.aws.cloud.qdrant.io", https: true, apiKey: "<paste-your-api-key-here>" ); ```
documentation/cloud/authentication.md
--- title: AWS Marketplace weight: 60 --- # Qdrant Cloud on AWS Marketplace ## Overview Our [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-rtphb42tydtzg) listing streamlines access to Qdrant for users who rely on Amazon Web Services for hosting and application development. Please note that, while Qdrant's clusters run on AWS, you will still use the Qdrant Cloud infrastructure. ## Billing You don't need to use a credit card to sign up for Qdrant Cloud. Instead, all billing is processed through the AWS Marketplace and the usage of Qdrant is added to your existing billing for AWS services. It is common for AWS to abstract usage based pricing in the AWS marketplace, as there are too many factors to model when calculating billing from the AWS side. ![pricing](/docs/cloud/pricing.png) The payment is carried out via your AWS Account. To get a clearer idea for the pricing structure, please use our [Billing Calculator](https://cloud.qdrant.io/calculator). ## How to subscribe 1. Go to [Qdrant's AWS Marketplace listing](https://aws.amazon.com/marketplace/pp/prodview-rtphb42tydtzg). 2. Click the bright orange button - **View purchase options**. 3. On the next screen, under Purchase, click **Subscribe**. 4. Up top, on the green banner, click **Set up your account**. ![setup](/docs/cloud/setup.png) You will be transferred outside of AWS to [Qdrant Cloud](https://qdrant.to/cloud) via your unique AWS Offer ID. The Billing Details screen will open in Qdrant Cloud Console. Stay in this console if you want to create your first Qdrant Cluster hosted on AWS. > **Note:** You do not have to return to the AWS Control Panel. All Qdrant infrastructure is provisioned from the Qdrant Cloud Console. ## Next steps Now that you have signed up via AWS Marketplace, please read our instructions to get started: 1. Learn more about [cluster creation and basic config](../../cloud/create-cluster/) in Qdrant Cloud. 2. 
Learn how to [authenticate and access your cluster](../../cloud/authentication/). 3. Additional open source [documentation](../../troubleshooting/).
documentation/cloud/aws-marketplace.md
---
title: Create a cluster
weight: 20
---

# Create a cluster

This page shows you how to use the Qdrant Cloud Console to create a custom Qdrant Cloud cluster.

> **Prerequisite:** Please make sure you have provided billing information before creating a custom cluster.

1. Start in the **Clusters** section of the [Cloud Dashboard](https://cloud.qdrant.io).
2. Select **Clusters** and then click **+ Create**.
3. A window will open. Enter a cluster **Name**.
4. Currently, you can deploy to AWS, GCP, or Azure.
5. Choose your data center region. If you have latency concerns or other topology-related requirements, [**let us know**](mailto:cloud@qdrant.io).
6. Configure the RAM size for each node (1 GB to 64 GB).
   > Please read [**Capacity and Sizing**](../../cloud/capacity-sizing/) to make the right choice. If you need more capacity per node, [**let us know**](mailto:cloud@qdrant.io).
7. Choose the number of CPUs per node (0.5 core to 16 cores). The minimum and maximum number of CPUs is coupled to the chosen RAM size.
8. Select the number of nodes you want the cluster to be deployed on.
   > Each node is automatically provisioned with enough disk space for your data, even if you decide to keep the metadata or the index on disk storage.
9. Click **Create** and wait for your cluster to be provisioned.

Your cluster will be reachable on ports 443 and 6333 (REST) and 6334 (gRPC).

![Embeddings](/docs/cloud/create-cluster.png)

## Next steps

You will need to connect to your new Qdrant Cloud cluster. Follow [**Authentication**](../../cloud/authentication/) to create one or more API keys.

Your new cluster is highly available and responsive to your application requirements and resource load. Read more in [**Cluster Scaling**](../../cloud/cluster-scaling/).
documentation/cloud/create-cluster.md
---
title: Backups
weight: 70
---

# Cloud Backups

Qdrant organizes cloud instances as clusters. On occasion, you may need to restore your cluster because of application or system failure.

You may already have a source of truth for your data in a regular database. If you have a problem, you could reindex the data into your Qdrant vector search cluster. However, this process can take time. For projects with critical high-availability requirements, we recommend replication. It guarantees proper cluster functionality as long as at least one replica is running.

For other use cases, such as disaster recovery, you can set up automatic or self-service backups.

## Prerequisites

You can back up your Qdrant clusters through the Qdrant Cloud Dashboard at https://cloud.qdrant.io. This section assumes that you've already set up your cluster, as described in the following sections:

- [Create a cluster](/documentation/cloud/create-cluster/)
- Set up [Authentication](/documentation/cloud/authentication/)
- Configure one or more [Collections](/documentation/concepts/collections/)

## Automatic backups

You can set up automatic backups of your clusters with our Cloud UI. With the procedures listed on this page, you can set up snapshots on a daily/weekly/monthly basis. You can keep as many snapshots as you need. You can restore a cluster from the snapshot of your choice.

> Note: When you restore a snapshot, consider the following:
> - The affected cluster is not available while a snapshot is being restored.
> - If you changed the cluster setup after the copy was created, the cluster resets to the previous configuration.
> - The previous configuration includes:
>   - CPU
>   - Memory
>   - Node count
>   - Qdrant version

### Configure a backup

After you have taken the prerequisite steps, you can configure a backup with the [Qdrant Cloud Dashboard](https://cloud.qdrant.io). To do so, take these steps:

1. Sign in to the dashboard.
1. Select **Clusters**.
1. Select the cluster that you want to back up.
![Select a cluster](/documentation/cloud/select-cluster.png)

1. Find and select the **Backups** tab.
1. Now you can set up a backup schedule. **Days of Retention** is the number of days after which a backup snapshot is deleted.
1. Alternatively, you can select **Backup now** to take an immediate snapshot.

![Configure a cluster backup](/documentation/cloud/backup-schedule.png)

### Restore a backup

If you have a backup, it appears in the list of **Available Backups**. You can choose to restore or delete the backups of your choice.

![Restore or delete a cluster backup](/documentation/cloud/restore-delete.png)

## Backups with a snapshot

Qdrant also offers a snapshot API which allows you to create a snapshot of a specific collection or your entire cluster. For more information, see our [snapshot documentation](/documentation/concepts/snapshots/).

Here is how you can take a snapshot and recover a collection:

1. Take a snapshot:
   - For a single-node cluster, call the snapshot endpoint on the exposed URL.
   - For a multi-node cluster, call the snapshot endpoint on each node that hosts the collection. Specifically, prepend `node-{num}-` to your cluster URL. Then call the [snapshot endpoint](../../concepts/snapshots/#create-snapshot) on the individual hosts. Start with node 0.
   - In the response, you'll see the name of the snapshot.
2. Delete and recreate the collection.
3. Recover the snapshot:
   - Call the [recover endpoint](../../concepts/snapshots/#recover-in-cluster-deployment). Set a location which points to the snapshot file (`file:///qdrant/snapshots/{collection_name}/{snapshot_file_name}`) for each host.

## Backup considerations

Backups are incremental. For example, if you have two backups, backup number 2 contains only the data that changed since backup number 1. This reduces the total cost of your backups.

You can create multiple backup schedules.
When you restore a snapshot, any changes made after the date of the snapshot are lost.
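The multi-node snapshot step described above (prepending `node-{num}-` to the cluster URL and calling the snapshot endpoint on each host, starting with node 0) can be sketched as a small helper. This is an illustrative sketch, not part of any official client; the cluster URL and collection name are placeholders.

```python
def node_snapshot_urls(cluster_url: str, num_nodes: int, collection: str) -> list[str]:
    """Build the per-node snapshot endpoints for a multi-node cluster by
    prepending `node-{num}-` to the cluster host, starting with node 0."""
    scheme, host = cluster_url.split("://", 1)
    return [
        f"{scheme}://node-{n}-{host}/collections/{collection}/snapshots"
        for n in range(num_nodes)
    ]

# POST each of these URLs (with your API key) to create a snapshot on that node.
urls = node_snapshot_urls("https://xyz.cloud.qdrant.io", 2, "my_collection")
```

Each returned URL points at the collection snapshot endpoint of one node; the snapshot name comes back in the response of each call.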
--- title: Capacity and sizing weight: 40 aliases: - capacity --- # Capacity and sizing We have been asked a lot about the optimal cluster configuration to serve a number of vectors. The only right answer is “It depends”. It depends on a number of factors and options you can choose for your collections. ## Basic configuration If you need to keep all vectors in memory for maximum performance, a very rough formula for estimating the needed memory size looks like this: ```text memory_size = number_of_vectors * vector_dimension * 4 bytes * 1.5 ``` The extra 50% is needed for metadata (indexes, point versions, etc.) as well as for temporary segments constructed during the optimization process. If you need to have payloads along with the vectors, it is recommended to store them on disk, and only keep [indexed fields](../../concepts/indexing/#payload-index) in RAM. Read more about payload storage in the [Storage](../../concepts/storage/#payload-storage) section. ## Storage focused configuration If your priority is to serve a large amount of vectors with average search latency, it is recommended to configure [mmap storage](../../concepts/storage/#configuring-memmap-storage). In this case vectors will be stored on disk in memory-mapped files, and only the most frequently used vectors will be kept in RAM. The amount of available RAM significantly affects the performance of the search. As a rule of thumb, if you keep half as many vectors in RAM, search latency will be roughly twice as high. The speed of the disks is also important. [Let us know](mailto:cloud@qdrant.io) if you have special requirements for a high-volume search. ## Sub-groups oriented configuration If your use case assumes that the vectors are split into multiple collections or sub-groups based on payload values, it is recommended to configure memory-map storage. For example, if you serve search for multiple users, but each of them has a subset of vectors which they use independently. 
In this scenario only the active subset of vectors will be kept in RAM, which allows fast search for the most active and recent users. In this case you can estimate the required memory size as follows: ```text memory_size = number_of_active_vectors * vector_dimension * 4 bytes * 1.5 ```
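The formulas above can be wrapped in a small helper to sanity-check a planned configuration. This is only the rough rule of thumb from this page, not an exact sizing tool.

```python
def estimate_memory_bytes(num_vectors: int, vector_dimension: int) -> float:
    """memory_size = number_of_vectors * vector_dimension * 4 bytes * 1.5

    4 bytes per float32 component, plus ~50% overhead for metadata
    (indexes, point versions) and temporary optimization segments."""
    return num_vectors * vector_dimension * 4 * 1.5

# e.g. 1M vectors of 768 dimensions need roughly 4.3 GiB of RAM:
gib = estimate_memory_bytes(1_000_000, 768) / 1024**3
```

For the sub-groups scenario, pass the number of *active* vectors instead of the total count.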
--- title: GCP Marketplace weight: 60 --- # Qdrant Cloud on GCP Marketplace Our [GCP Marketplace](https://console.cloud.google.com/marketplace/product/qdrant-public/qdrant) listing streamlines access to Qdrant for users who rely on the Google Cloud Platform for hosting and application development. While Qdrant's clusters run on GCP, you are using the Qdrant Cloud infrastructure. ## Billing You don't need a credit card to sign up for Qdrant Cloud. Instead, all billing is processed through the GCP Marketplace. Usage is added to your existing billing for GCP. Payment is made through your GCP Account. Our [Billing Calculator](https://cloud.qdrant.io/calculator) can provide more information about costs. Costs from cloud providers are based on usage. You can subscribe to Qdrant on the GCP Marketplace without paying more. ## How to subscribe 1. Go to the [GCP Marketplace listing for Qdrant](https://console.cloud.google.com/marketplace/product/qdrant-public/qdrant). 1. Select **Subscribe**. (If you have already subscribed, select **Manage on Provider**.) 1. On the next screen, choose options as required, and select **Subscribe**. 1. On the pop-up window that appears, select **Sign up with Qdrant**. GCP transfers you to the [Qdrant Cloud](https://cloud.qdrant.io/). The Billing Details screen opens in the Qdrant Cloud Console. If you do not already see a menu, select the "hamburger" icon (with three short horizontal lines) in the upper-left corner of the window. > **Note:** You do not have to return to GCP. All Qdrant infrastructure is provisioned from the Qdrant Cloud Console. ## Next steps Now that you have signed up through GCP, please read our instructions to get started: 1. Learn more about how you can [Create a cluster](/documentation/cloud/create-cluster/). 1. Learn how to [Authenticate](/documentation/cloud/authentication/) and access your cluster.
--- title: Cluster scaling weight: 50 --- # Cluster scaling The amount of data is always growing, and at some point you might need to upgrade the capacity of your cluster. There are different ways to do this. ## Vertical scaling Vertical scaling, also known as vertical expansion, is the process of increasing the capacity of a cluster by adding more resources, such as memory, storage, or processing power. You can start with a minimal cluster configuration of 2GB of RAM and resize it step by step, up to 64GB of RAM (or even more if desired), as the amount of data in your application grows. If your cluster consists of several nodes, each node will need to be scaled to the same size. Please note that vertical cluster scaling requires a short downtime period to restart your cluster. To avoid downtime, you can make use of data replication, which can be configured on the collection level. Vertical scaling can be initiated on the cluster detail page via the "scale up" button. ## Horizontal scaling Vertical scaling can be an effective way to improve the performance of a cluster and extend its capacity, but it has some limitations. The main disadvantage of vertical scaling is that there are limits to how much a cluster can be expanded. At some point, adding more resources to a cluster can become impractical or cost-prohibitive. In such cases, horizontal scaling may be a more effective solution. Horizontal scaling, also known as horizontal expansion, is the process of increasing the capacity of a cluster by adding more nodes and distributing the load and data among them. Horizontal scaling in Qdrant starts at the collection level. When creating a collection, you choose the number of shards to distribute it across. Please refer to the [sharding documentation](../../guides/distributed_deployment/#sharding) section for details. 
Important: The number of shards determines the maximum number of nodes you can add to your cluster. In the beginning, all the shards can reside on one node. As the amount of data grows, you can add nodes to your cluster and move shards to the dedicated nodes using the [cluster setup API](../../guides/distributed_deployment/#cluster-scaling). We will be glad to consult you on an optimal strategy for scaling. [Let us know](mailto:cloud@qdrant.io) your needs and we will decide together on a proper solution. We plan to introduce auto-scaling functionality. Since it is one of the most desired features, it has a high priority on our Cloud roadmap.
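The shard move mentioned above goes through the collection cluster API (`POST /collections/{collection_name}/cluster`). A minimal sketch of the request body, assuming hypothetical peer ids (the real ones can be listed via the `GET /cluster` endpoint of your deployment):

```python
def move_shard_request(shard_id: int, from_peer_id: int, to_peer_id: int) -> dict:
    """Body for POST /collections/{collection_name}/cluster that moves
    one shard from one node (peer) to another."""
    return {
        "move_shard": {
            "shard_id": shard_id,
            "from_peer_id": from_peer_id,
            "to_peer_id": to_peer_id,
        }
    }

# Hypothetical peer ids for illustration only:
body = move_shard_request(0, 381894127, 467122995)
# requests.post(f"{cluster_url}/collections/{collection}/cluster", json=body)
```

Repeating this per shard lets you spread a collection across newly added nodes.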
--- title: Qdrant Cloud weight: 20 --- # About Qdrant Cloud Qdrant Cloud is our SaaS (software-as-a-service) solution, providing managed Qdrant instances in the cloud. We provide you with the same fast and reliable similarity search engine, but without the need to maintain your own infrastructure. Transitioning from on-premise to the cloud version of Qdrant does not require changing anything in the way you interact with the service. All you have to do is [create a Qdrant Cloud account](https://qdrant.to/cloud) and [provide a new API key]({{< ref "/documentation/cloud/authentication" >}}) with each request. The transition is even easier if you use the official client libraries. For example, the [Python Client](https://github.com/qdrant/qdrant-client) has API key support already built in, so you only need to provide it once, when the QdrantClient instance is created. ### Cluster configuration Each instance comes pre-configured with the following tools, features and support services: - Automatically created with the latest available version of Qdrant. - Upgradeable to later versions of Qdrant as they are released. - Equipped with monitoring and logging to observe the health of each cluster. - Accessible through the Qdrant Cloud Console. - Vertically scalable. - Offered on AWS and GCP, with Azure currently in development. ### Getting started with Qdrant Cloud To use Qdrant Cloud, you will need to create at least one cluster. There are two ways to start: 1. [**Create a Free Tier cluster**]({{< ref "/documentation/cloud/quickstart-cloud" >}}) with 1 node and a default configuration (1GB RAM, 0.5 CPU and 4GB Disk). This option is perfect for prototyping, and you don't need a credit card to join. 2. [**Configure a custom cluster**]({{< ref "/documentation/cloud/create-cluster" >}}) with additional nodes and more resources. For this option, you will have to provide billing information. We recommend that you use the Free Tier cluster for testing purposes. 
Its capacity should be enough to serve up to 1M vectors of 768 dimensions. To calculate your needs, refer to [capacity planning]({{< ref "/documentation/cloud/capacity-sizing" >}}). ### Support & Troubleshooting All Qdrant Cloud users are welcome to join our [Discord community](https://qdrant.to/discord). Our Support Engineers are available to help you anytime. Additionally, paid customers can contact support via the channels provided during cluster creation and/or onboarding.
--- title: Storage weight: 80 aliases: - ../storage --- # Storage All data within one collection is divided into segments. Each segment has its independent vector and payload storage as well as indexes. Data stored in segments usually does not overlap. However, storing the same point in different segments will not cause problems since the search contains a deduplication mechanism. The segments consist of vector and payload storages, vector and payload [indexes](../indexing), and an id mapper, which stores the relationship between internal and external ids. A segment can be `appendable` or `non-appendable` depending on the type of storage and index used. You can freely add, delete and query data in an `appendable` segment. With a `non-appendable` segment, you can only read and delete data. The configuration of the segments in the collection can be different and independent of one another, but at least one `appendable` segment must be present in a collection. ## Vector storage Depending on the requirements of the application, Qdrant can use one of several data storage options. The choice is a trade-off between search speed and the size of the RAM used. **In-memory storage** - Stores all vectors in RAM, has the highest speed since disk access is required only for persistence. **Memmap storage** - Creates a virtual address space associated with the file on disk. [Wiki](https://en.wikipedia.org/wiki/Memory-mapped_file). Mmapped files are not directly loaded into RAM. Instead, they use the page cache to access the contents of the file. This scheme allows flexible use of available memory. With sufficient RAM, it is almost as fast as in-memory storage. 
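To build intuition for the mechanism memmap storage relies on, here is a toy sketch using Python's standard library: vectors live as raw float32 values in a file, and reads go through a memory map, so the OS page cache decides which parts are actually resident in RAM. This illustrates memory-mapping itself, not Qdrant's actual on-disk format.

```python
import mmap
import os
import struct
import tempfile

dim = 4
vectors = [[0.1, 0.2, 0.3, 0.4], [1.0, 2.0, 3.0, 4.0]]

# Persist the vectors as raw float32 values in a file.
path = os.path.join(tempfile.mkdtemp(), "vectors.bin")
with open(path, "wb") as f:
    for v in vectors:
        f.write(struct.pack(f"{dim}f", *v))

# Map the file and read vector #1 directly, without loading the whole
# file into RAM; the page cache serves (and evicts) pages on demand.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        offset = 1 * dim * 4  # 4 bytes per float32 component
        vec = struct.unpack(f"{dim}f", mm[offset : offset + dim * 4])
```

Frequently accessed offsets stay cached and read at near-RAM speed, which is why memmap storage approaches in-memory performance when RAM is plentiful.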
### Configuring Memmap storage There are two ways to configure the usage of memmap (also known as on-disk) storage: - Set the `on_disk` option for the vectors in the collection create API: *Available as of v1.2.0* ```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine", "on_disk": true } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams( size=768, distance=models.Distance.COSINE, on_disk=True ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", on_disk: true, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), on_disk: Some(true), ..Default::default() })), }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( "{collection_name}", VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .setOnDisk(true) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( "{collection_name}", new 
VectorParams { Size = 768, Distance = Distance.Cosine, OnDisk = true } ); ``` This will create a collection with all vectors immediately stored in memmap storage. This is the recommended way if your Qdrant instance operates with fast disks and you are working with large collections. - Set the `memmap_threshold_kb` option. This option sets the threshold after which a segment will be converted to memmap storage. There are two ways to do this: 1. You can set the threshold globally in the [configuration file](../../guides/configuration/). The parameter is called `memmap_threshold_kb`. 2. You can set the threshold for each collection separately during [creation](../collections/#create-collection) or [update](../collections/#update-collection-parameters). ```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine" }, "optimizers_config": { "memmap_threshold": 20000 } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", }, optimizers_config: { memmap_threshold: 20000, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ vectors_config::Config, CreateCollection, Distance, OptimizersConfigDiff, VectorParams, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), 
..Default::default() })), }), optimizers_config: Some(OptimizersConfigDiff { memmap_threshold: Some(20000), ..Default::default() }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 } ); ``` The rule of thumb to set the memmap threshold parameter is simple: - if you have a balanced use scenario - set memmap threshold the same as `indexing_threshold` (default is 20000). In this case the optimizer will not make any extra runs and will optimize all thresholds at once. - if you have a high write load and low RAM - set memmap threshold lower than `indexing_threshold` to e.g. 10000. In this case the optimizer will convert the segments to memmap storage first and will only apply indexing after that. In addition, you can use memmap storage not only for vectors, but also for HNSW index. 
To enable this, you need to set the `hnsw_config.on_disk` parameter to `true` during collection [creation](../collections/#create-a-collection) or [updating](../collections/#update-collection-parameters). ```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine" }, "optimizers_config": { "memmap_threshold": 20000 }, "hnsw_config": { "on_disk": true } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000), hnsw_config=models.HnswConfigDiff(on_disk=True), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", }, optimizers_config: { memmap_threshold: 20000, }, hnsw_config: { on_disk: true, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ vectors_config::Config, CreateCollection, Distance, HnswConfigDiff, OptimizersConfigDiff, VectorParams, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), ..Default::default() })), }), optimizers_config: Some(OptimizersConfigDiff { memmap_threshold: Some(20000), ..Default::default() }), hnsw_config: Some(HnswConfigDiff { on_disk: Some(true), ..Default::default() }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import 
io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.HnswConfigDiff; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build()) .setHnswConfig(HnswConfigDiff.newBuilder().setOnDisk(true).build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 }, hnswConfig: new HnswConfigDiff { OnDisk = true } ); ``` ## Payload storage Qdrant supports two types of payload storages: InMemory and OnDisk. InMemory payload storage is organized in the same way as in-memory vectors. The payload data is loaded into RAM at service startup while disk and [RocksDB](https://rocksdb.org/) are used for persistence only. This type of storage works quite fast, but it may require a lot of space to keep all the data in RAM, especially if the payload has large values attached - abstracts of text or even images. In the case of large payload values, it might be better to use OnDisk payload storage. This type of storage will read and write payload directly to RocksDB, so it won't require any significant amount of RAM to store. The downside, however, is the access latency. 
If you need to query vectors with payload-based conditions, checking values stored on disk might take too much time. In this scenario, we recommend creating a payload index for each field used in filtering conditions to avoid disk access. Once you create the field index, Qdrant will preserve all values of the indexed field in RAM regardless of the payload storage type. You can specify the desired type of payload storage with the [configuration file](../../guides/configuration/) or with the collection parameter `on_disk_payload` during [creation](../collections/#create-collection) of the collection. ## Versioning To ensure data integrity, Qdrant performs all data changes in two stages. In the first stage, the data is written to the write-ahead log (WAL), which orders all operations and assigns them a sequential number. Once a change has been added to the WAL, it will not be lost even if a power loss occurs. Then the changes go into the segments. Each segment stores the last version of the change applied to it as well as the version of each individual point. If the new change has a sequential number less than the current version of the point, the updater will ignore the change. This mechanism allows Qdrant to safely and efficiently restore the storage from the WAL in case of an abnormal shutdown.
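The version check described above can be sketched in a few lines. This is an illustration of the rule (ignore a change whose WAL sequence number is older than the point's current version), not Qdrant's actual updater code.

```python
class Point:
    """Minimal stand-in for a stored point with a version counter."""

    def __init__(self, value, version=0):
        self.value = value
        self.version = version

    def apply(self, value, seq_num: int) -> bool:
        """Apply a change unless it is older than what is already stored,
        as can happen when the WAL is replayed after an abnormal shutdown."""
        if seq_num < self.version:
            return False  # stale change: already applied before, ignore
        self.value = value
        self.version = seq_num
        return True


p = Point("a", version=5)
applied_old = p.apply("b", seq_num=3)  # replayed stale change: ignored
applied_new = p.apply("c", seq_num=7)  # newer change: applied
```

Because applying an already-seen change is a no-op, replaying the whole WAL after a crash converges to the same state the segments had before the shutdown.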
--- title: Explore weight: 55 aliases: - ../explore --- # Explore the data After mastering the concepts in [search](../search), you can start exploring your data in other ways. Qdrant provides a stack of APIs that allow you to find similar vectors in a different fashion, as well as to find the most dissimilar ones. These are useful tools for recommendation systems, data exploration, and data cleaning. ## Recommendation API In addition to the regular search, Qdrant also allows you to search based on multiple positive and negative examples. The API is called ***recommend***, and the examples can be point IDs, so that you can leverage the already encoded objects; and, as of v1.6, you can also use raw vectors as input, so that you can create your vectors on the fly without uploading them as points. REST API - API Schema definition is available [here](https://qdrant.github.io/qdrant/redoc/index.html#operation/recommend_points) ```http POST /collections/{collection_name}/points/recommend { "positive": [100, 231], "negative": [718, [0.2, 0.3, 0.4, 0.5]], "filter": { "must": [ { "key": "city", "match": { "value": "London" } } ] }, "strategy": "average_vector", "limit": 3 } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.recommend( collection_name="{collection_name}", positive=[100, 231], negative=[718, [0.2, 0.3, 0.4, 0.5]], strategy=models.RecommendStrategy.AVERAGE_VECTOR, query_filter=models.Filter( must=[ models.FieldCondition( key="city", match=models.MatchValue( value="London", ), ) ] ), limit=3, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.recommend("{collection_name}", { positive: [100, 231], negative: [718, [0.2, 0.3, 0.4, 0.5]], strategy: "average_vector", filter: { must: [ { key: "city", match: { value: "London", }, }, ], }, limit: 3, }); ``` ```rust use 
qdrant_client::{ client::QdrantClient, qdrant::{Condition, Filter, RecommendPoints, RecommendStrategy}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .recommend(&RecommendPoints { collection_name: "{collection_name}".to_string(), positive: vec![100.into(), 231.into()], negative: vec![718.into()], negative_vectors: vec![vec![0.2, 0.3, 0.4, 0.5].into()], strategy: Some(RecommendStrategy::AverageVector.into()), filter: Some(Filter::must([Condition::matches( "city", "London".to_string(), )])), limit: 3, ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.matchKeyword; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.VectorFactory.vector; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.RecommendPoints; import io.qdrant.client.grpc.Points.RecommendStrategy; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .recommendAsync( RecommendPoints.newBuilder() .setCollectionName("{collection_name}") .addAllPositive(List.of(id(100), id(231))) .addAllNegative(List.of(id(718))) .addAllNegativeVectors(List.of(vector(0.2f, 0.3f, 0.4f, 0.5f))) .setStrategy(RecommendStrategy.AverageVector) .setFilter(Filter.newBuilder().addMust(matchKeyword("city", "London"))) .setLimit(3) .build()) .get(); ``` An example result of this API would be: ```json { "result": [ { "id": 10, "score": 0.81 }, { "id": 14, "score": 0.75 }, { "id": 11, "score": 0.73 } ], "status": "ok", "time": 0.001 } ``` The algorithm used to get the recommendations is selected from the available `strategy` options. Each of them has its own strengths and weaknesses, so experiment and choose the one that works best for your case. 
### Average vector strategy The default and first strategy added to Qdrant is called `average_vector`. It preprocesses the input examples to create a single vector that is used for the search. Since the preprocessing step happens very fast, the performance of this strategy is on par with regular search. The intuition behind this kind of recommendation is that each vector component represents an independent feature of the data, so, by averaging the examples, we should get a good recommendation. The way to produce the search vector is by first averaging all the positive and negative examples separately, and then combining them into a single vector using the following formula: ```rust avg_positive + avg_positive - avg_negative ``` In the case of not having any negative examples, the search vector will simply be equal to `avg_positive`. This is the default strategy that's going to be set implicitly, but you can explicitly define it by setting `"strategy": "average_vector"` in the recommendation request. ### Best score strategy *Available as of v1.6.0* A new strategy, introduced in v1.6, is called `best_score`. It is based on the idea that the best way to find similar vectors is to find the ones that are closer to a positive example, while avoiding the ones that are closer to a negative one. The way it works is that each candidate is measured against every example, then we select the best positive and best negative scores. The final score is chosen with this step formula: ```rust let score = if best_positive_score > best_negative_score { best_positive_score } else { -(best_negative_score * best_negative_score) }; ``` <aside role="alert"> The performance of <code>best_score</code> strategy will be linearly impacted by the number of examples. </aside> Since we are computing similarities to every example at each step of the search, the performance of this strategy will be linearly impacted by the number of examples. 
This means that the more examples you provide, the slower the search will be. However, this strategy can be very powerful and should be more embedding-agnostic. <aside role="status"> Accuracy may be impacted with this strategy. To improve it, increasing the <code>ef</code> search parameter to something above 32 will already be much better than the default 16, e.g: <code>"params": { "ef": 64 }</code> </aside> To use this algorithm, you need to set `"strategy": "best_score"` in the recommendation request. #### Using only negative examples A beneficial side-effect of `best_score` strategy is that you can use it with only negative examples. This will allow you to find the most dissimilar vectors to the ones you provide. This can be useful for finding outliers in your data, or for finding the most dissimilar vectors to a given one. Combining negative-only examples with filtering can be a powerful tool for data exploration and cleaning. ### Multiple vectors *Available as of v0.10.0* If the collection was created with multiple vectors, the name of the vector should be specified in the recommendation request: ```http POST /collections/{collection_name}/points/recommend { "positive": [100, 231], "negative": [718], "using": "image", "limit": 10 } ``` ```python client.recommend( collection_name="{collection_name}", positive=[100, 231], negative=[718], using="image", limit=10, ) ``` ```typescript client.recommend("{collection_name}", { positive: [100, 231], negative: [718], using: "image", limit: 10, }); ``` ```rust use qdrant_client::qdrant::RecommendPoints; client .recommend(&RecommendPoints { collection_name: "{collection_name}".to_string(), positive: vec![100.into(), 231.into()], negative: vec![718.into()], using: Some("image".to_string()), limit: 10, ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.PointIdFactory.id; import io.qdrant.client.grpc.Points.RecommendPoints; client .recommendAsync( RecommendPoints.newBuilder() 
.setCollectionName("{collection_name}") .addAllPositive(List.of(id(100), id(231))) .addAllNegative(List.of(id(718))) .setUsing("image") .setLimit(10) .build()) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.RecommendAsync( collectionName: "{collection_name}", positive: new ulong[] { 100, 231 }, negative: new ulong[] { 718 }, usingVector: "image", limit: 10 ); ``` The `using` parameter specifies which stored vectors to use for the recommendation. ### Lookup vectors from another collection *Available as of v0.11.6* If you have collections with vectors of the same dimensionality, and you want to look for recommendations in one collection based on the vectors of another collection, you can use the `lookup_from` parameter. It might be useful, e.g., in the item-to-user recommendation scenario, where user and item embeddings, although having the same vector parameters (distance type and dimensionality), are usually stored in different collections. 
```http POST /collections/{collection_name}/points/recommend { "positive": [100, 231], "negative": [718], "using": "image", "limit": 10, "lookup_from": { "collection":"{external_collection_name}", "vector":"{external_vector_name}" } } ``` ```python client.recommend( collection_name="{collection_name}", positive=[100, 231], negative=[718], using="image", limit=10, lookup_from=models.LookupLocation( collection="{external_collection_name}", vector="{external_vector_name}" ), ) ``` ```typescript client.recommend("{collection_name}", { positive: [100, 231], negative: [718], using: "image", limit: 10, lookup_from: { "collection" : "{external_collection_name}", "vector" : "{external_vector_name}" }, }); ``` ```rust use qdrant_client::qdrant::{LookupLocation, RecommendPoints}; client .recommend(&RecommendPoints { collection_name: "{collection_name}".to_string(), positive: vec![100.into(), 231.into()], negative: vec![718.into()], using: Some("image".to_string()), limit: 10, lookup_from: Some(LookupLocation { collection_name: "{external_collection_name}".to_string(), vector_name: Some("{external_vector_name}".to_string()), ..Default::default() }), ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.PointIdFactory.id; import io.qdrant.client.grpc.Points.LookupLocation; import io.qdrant.client.grpc.Points.RecommendPoints; client .recommendAsync( RecommendPoints.newBuilder() .setCollectionName("{collection_name}") .addAllPositive(List.of(id(100), id(231))) .addAllNegative(List.of(id(718))) .setUsing("image") .setLimit(10) .setLookupFrom( LookupLocation.newBuilder() .setCollectionName("{external_collection_name}") .setVectorName("{external_vector_name}") .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.RecommendAsync( collectionName: "{collection_name}", positive: new ulong[] { 100, 231 }, negative: new ulong[] { 718 }, 
    usingVector: "image",
    limit: 10,
    lookupFrom: new LookupLocation
    {
        CollectionName = "{external_collection_name}",
        VectorName = "{external_vector_name}",
    }
);
```

Vectors are retrieved from the external collection by the ids provided in the `positive` and `negative` lists. These vectors are then used to perform the recommendation in the current collection, comparing against the vector specified in `using`, or the default vector.

## Batch recommendation API

*Available as of v0.10.0*

Similar to the batch search API in terms of usage and advantages, it enables the batching of recommendation requests.

```http
POST /collections/{collection_name}/points/recommend/batch
{
    "searches": [
        {
            "filter": {
                "must": [
                    {
                        "key": "city",
                        "match": {
                            "value": "London"
                        }
                    }
                ]
            },
            "negative": [718],
            "positive": [100, 231],
            "limit": 10
        },
        {
            "filter": {
                "must": [
                    {
                        "key": "city",
                        "match": {
                            "value": "London"
                        }
                    }
                ]
            },
            "negative": [300],
            "positive": [200, 67],
            "limit": 10
        }
    ]
}
```

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient("localhost", port=6333)

filter = models.Filter(
    must=[
        models.FieldCondition(
            key="city",
            match=models.MatchValue(
                value="London",
            ),
        )
    ]
)

recommend_queries = [
    models.RecommendRequest(
        positive=[100, 231], negative=[718], filter=filter, limit=3
    ),
    models.RecommendRequest(positive=[200, 67], negative=[300], filter=filter, limit=3),
]

client.recommend_batch(collection_name="{collection_name}", requests=recommend_queries)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

const filter = {
  must: [
    {
      key: "city",
      match: {
        value: "London",
      },
    },
  ],
};

const searches = [
  {
    positive: [100, 231],
    negative: [718],
    filter,
    limit: 3,
  },
  {
    positive: [200, 67],
    negative: [300],
    filter,
    limit: 3,
  },
];

client.recommend_batch("{collection_name}", {
  searches,
});
```

```rust
use qdrant_client::{
    client::QdrantClient,
    qdrant::{Condition, Filter, RecommendBatchPoints,
RecommendPoints},
};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

let filter = Filter::must([Condition::matches("city", "London".to_string())]);

let recommend_queries = vec![
    RecommendPoints {
        collection_name: "{collection_name}".to_string(),
        positive: vec![100.into(), 231.into()],
        negative: vec![718.into()],
        filter: Some(filter.clone()),
        limit: 3,
        ..Default::default()
    },
    RecommendPoints {
        collection_name: "{collection_name}".to_string(),
        positive: vec![200.into(), 67.into()],
        negative: vec![300.into()],
        filter: Some(filter),
        limit: 3,
        ..Default::default()
    },
];

client
    .recommend_batch(&RecommendBatchPoints {
        collection_name: "{collection_name}".to_string(),
        recommend_points: recommend_queries,
        ..Default::default()
    })
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.ConditionFactory.matchKeyword;
import static io.qdrant.client.PointIdFactory.id;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.RecommendPoints;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

Filter filter = Filter.newBuilder().addMust(matchKeyword("city", "London")).build();

List<RecommendPoints> recommendQueries =
    List.of(
        RecommendPoints.newBuilder()
            .addAllPositive(List.of(id(100), id(231)))
            .addAllNegative(List.of(id(718)))
            .setFilter(filter)
            .setLimit(3)
            .build(),
        RecommendPoints.newBuilder()
            .addAllPositive(List.of(id(200), id(67)))
            .addAllNegative(List.of(id(300)))
            .setFilter(filter)
            .setLimit(3)
            .build());

client.recommendBatchAsync("{collection_name}", recommendQueries, null).get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;

var client = new QdrantClient("localhost", 6334);

var filter = MatchKeyword("city", "London");

await client.RecommendBatchAsync(
    collectionName: "{collection_name}",
    recommendSearches:
    [
        new()
        {
            CollectionName = "{collection_name}",
            Positive = { new PointId[] { 100, 231 } },
            Negative = { new PointId[] { 718 } },
            Limit = 3,
            Filter = filter,
        },
        new()
        {
            CollectionName = "{collection_name}",
            Positive = { new PointId[] { 200, 67 } },
            Negative = { new PointId[] { 300 } },
            Limit = 3,
            Filter = filter,
        }
    ]
);
```

The result of this API contains one array per recommendation request.

```json
{
  "result": [
    [
      { "id": 10, "score": 0.81 },
      { "id": 14, "score": 0.75 },
      { "id": 11, "score": 0.73 }
    ],
    [
      { "id": 1, "score": 0.92 },
      { "id": 3, "score": 0.89 },
      { "id": 9, "score": 0.75 }
    ]
  ],
  "status": "ok",
  "time": 0.001
}
```

## Discovery API

*Available as of v1.7*

REST API Schema definition is available [here](https://qdrant.github.io/qdrant/redoc/index.html#tag/points/operation/discover_points).

In this API, Qdrant introduces the concept of `context`, which is used for splitting the space. Context is a set of positive-negative pairs, and each pair divides the space into positive and negative zones. In that mode, the search operation prefers points based on how many positive zones they belong to (or how much they avoid negative zones).

The interface for providing context is similar to the recommendation API (ids or raw vectors). Still, in this case, they need to be provided in the form of positive-negative pairs.

Discovery API lets you do two new types of search:

- **Discovery search**: Uses the context (the pairs of positive-negative vectors) and a target to return the points more similar to the target, but constrained by the context.
- **Context search**: Using only the context pairs, get the points that live in the best zone, where loss is minimized.

The way positive and negative examples should be arranged in the context pairs is completely up to you. So you have the flexibility of trying out different permutation techniques based on your model and data.
<aside role="alert">The speed of the search is linearly related to the number of examples you provide in the query.</aside>

### Discovery search

This type of search works especially well for combining multimodal, vector-constrained searches. Qdrant already has extensive support for filters, which constrain the search based on its payload, but using discovery search, you can also constrain the vector space in which the search is performed.

![Discovery search](/docs/discovery-search.png)

The formula for the discovery score can be expressed as:

$$
\text{rank}(v^+, v^-) = \begin{cases}
    1, &\quad s(v^+) \geq s(v^-) \\\\
    -1, &\quad s(v^+) < s(v^-)
\end{cases}
$$

where $v^+$ represents a positive example, $v^-$ represents a negative example, and $s(v)$ is the similarity score of a vector $v$ to the target vector. The discovery score is then computed as:

$$
 \text{discovery score} = \text{sigmoid}(s(v_t))+ \sum \text{rank}(v_i^+, v_i^-),
$$

where $s(v)$ is the similarity function, $v_t$ is the target vector, and again $v_i^+$ and $v_i^-$ are the positive and negative examples, respectively. The sigmoid function is used to normalize the score between 0 and 1, and the sum of ranks is used to penalize vectors that are closer to the negative examples than to the positive ones. In other words, the sum of individual ranks determines how many positive zones a point is in, while the closeness hierarchy comes second.
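As a sanity check, the two formulas above can be evaluated directly on toy vectors. This is only an illustrative sketch (plain dot product as $s(v)$, made-up 2-dimensional vectors), not how Qdrant implements scoring internally:

```python
import math

def dot(a, b):
    """Similarity s(v): plain dot product between a candidate point and an example."""
    return sum(x * y for x, y in zip(a, b))

def rank(point, positive, negative):
    """+1 if the point falls in the positive zone of the pair, -1 otherwise."""
    return 1 if dot(point, positive) >= dot(point, negative) else -1

def discovery_score(point, target, pairs):
    """sigmoid(s(v_t)) plus the sum of ranks over all context pairs."""
    sigmoid = 1.0 / (1.0 + math.exp(-dot(point, target)))
    return sigmoid + sum(rank(point, pos, neg) for pos, neg in pairs)

# Made-up target and context pairs (positive, negative):
target = [1.0, 0.0]
pairs = [([0.9, 0.1], [-1.0, 0.0]), ([0.0, 1.0], [0.0, -1.0])]

# A point inside both positive zones: rank sum is +2, plus a sigmoid term in (0, 1).
print(discovery_score([0.8, 0.5], target, pairs))
```

Because the rank sum dominates the sigmoid term, a point inside both positive zones always outranks a point inside only one, no matter how close the latter is to the target.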
Example:

```http
POST /collections/{collection_name}/points/discover
{
    "target": [0.2, 0.1, 0.9, 0.7],
    "context": [
        {
            "positive": 100,
            "negative": 718
        },
        {
            "positive": 200,
            "negative": 300
        }
    ],
    "limit": 10
}
```

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient("localhost", port=6333)

client.discover(
    collection_name="{collection_name}",
    target=[0.2, 0.1, 0.9, 0.7],
    context=[
        models.ContextExamplePair(
            positive=100,
            negative=718,
        ),
        models.ContextExamplePair(
            positive=200,
            negative=300,
        ),
    ],
    limit=10,
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.discover("{collection_name}", {
  target: [0.2, 0.1, 0.9, 0.7],
  context: [
    {
      positive: 100,
      negative: 718,
    },
    {
      positive: 200,
      negative: 300,
    },
  ],
  limit: 10,
});
```

```rust
use qdrant_client::{
    client::QdrantClient,
    qdrant::{
        target_vector::Target, vector_example::Example, ContextExamplePair, DiscoverPoints,
        TargetVector, VectorExample,
    },
};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .discover(&DiscoverPoints {
        collection_name: "{collection_name}".to_string(),
        target: Some(TargetVector {
            target: Some(Target::Single(VectorExample {
                example: Some(Example::Vector(vec![0.2, 0.1, 0.9, 0.7].into())),
            })),
        }),
        context: vec![
            ContextExamplePair {
                positive: Some(VectorExample {
                    example: Some(Example::Id(100.into())),
                }),
                negative: Some(VectorExample {
                    example: Some(Example::Id(718.into())),
                }),
            },
            ContextExamplePair {
                positive: Some(VectorExample {
                    example: Some(Example::Id(200.into())),
                }),
                negative: Some(VectorExample {
                    example: Some(Example::Id(300.into())),
                }),
            },
        ],
        limit: 10,
        ..Default::default()
    })
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.VectorFactory.vector;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import
io.qdrant.client.grpc.Points.ContextExamplePair;
import io.qdrant.client.grpc.Points.DiscoverPoints;
import io.qdrant.client.grpc.Points.TargetVector;
import io.qdrant.client.grpc.Points.VectorExample;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .discoverAsync(
        DiscoverPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .setTarget(
                TargetVector.newBuilder()
                    .setSingle(
                        VectorExample.newBuilder()
                            .setVector(vector(0.2f, 0.1f, 0.9f, 0.7f))
                            .build()))
            .addAllContext(
                List.of(
                    ContextExamplePair.newBuilder()
                        .setPositive(VectorExample.newBuilder().setId(id(100)))
                        .setNegative(VectorExample.newBuilder().setId(id(718)))
                        .build(),
                    ContextExamplePair.newBuilder()
                        .setPositive(VectorExample.newBuilder().setId(id(200)))
                        .setNegative(VectorExample.newBuilder().setId(id(300)))
                        .build()))
            .setLimit(10)
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.DiscoverAsync(
    collectionName: "{collection_name}",
    target: new TargetVector
    {
        Single = new VectorExample { Vector = new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, }
    },
    context:
    [
        new()
        {
            Positive = new VectorExample { Id = 100 },
            Negative = new VectorExample { Id = 718 }
        },
        new()
        {
            Positive = new VectorExample { Id = 200 },
            Negative = new VectorExample { Id = 300 }
        }
    ],
    limit: 10
);
```

<aside role="status">

Notes about discovery search:

* When providing ids as examples, they will be excluded from the results.
* Score is always in descending order (larger is better), regardless of the metric used.
* Since the space is hard-constrained by the context, it is normal for accuracy to drop when using default settings.
To mitigate this, increasing the `ef` search parameter to something above 64 will already be much better than the default of 16, e.g.: `"params": { "ef": 128 }`

</aside>

### Context search

Conversely, in the absence of a target, a rigid integer-by-integer function doesn't provide much guidance for the search when utilizing a proximity graph like HNSW. Instead, context search employs a function derived from the [triplet-loss](/articles/triplet-loss/) concept, which is usually applied during model training. For context search, this function is adapted to steer the search towards areas with fewer negative examples.

![Context search](/docs/context-search.png)

We can directly associate the score function with a loss function, where 0.0 is the maximum score a point can have, which means it is only in positive areas. As soon as a point is closer to a negative example, its loss will simply be the difference between the positive and negative similarities.

$$
\text{context score} = \sum \min(s(v^+_i) - s(v^-_i), 0.0)
$$

Where $v^+_i$ and $v^-_i$ are the positive and negative examples of each pair, and $s(v)$ is the similarity function.

Using this kind of search, you can expect the output to not necessarily be around a single point, but rather to be any point that isn't closer to a negative example, which creates a constrained, diverse result. So, even though the API is not called [`recommend`](#recommendation-api), recommendation systems can also use this approach and adapt it for their specific use cases.
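To make the loss-like behavior concrete, the context score formula above can be evaluated on toy vectors (an illustrative sketch with dot product as $s(v)$ and made-up vectors, not Qdrant's internal implementation):

```python
def dot(a, b):
    """Similarity s(v): plain dot product."""
    return sum(x * y for x, y in zip(a, b))

def context_score(point, pairs):
    """Sum of min(s(v+) - s(v-), 0.0) over all pairs; 0.0 is the best possible score."""
    return sum(min(dot(point, pos) - dot(point, neg), 0.0) for pos, neg in pairs)

# Made-up context pairs (positive, negative):
pairs = [([1.0, 0.0], [-1.0, 0.0]), ([0.0, 1.0], [0.0, -1.0])]

print(context_score([0.5, 0.5], pairs))   # inside every positive zone: best score
print(context_score([-0.5, 0.5], pairs))  # closer to one negative example: penalized
```

Note that many different points can share the top score of 0.0, which is why context search tends to return diverse results rather than clustering around a single spot.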
Example:

```http
POST /collections/{collection_name}/points/discover
{
    "context": [
        {
            "positive": 100,
            "negative": 718
        },
        {
            "positive": 200,
            "negative": 300
        }
    ],
    "limit": 10
}
```

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient("localhost", port=6333)

client.discover(
    collection_name="{collection_name}",
    context=[
        models.ContextExamplePair(
            positive=100,
            negative=718,
        ),
        models.ContextExamplePair(
            positive=200,
            negative=300,
        ),
    ],
    limit=10,
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.discover("{collection_name}", {
  context: [
    {
      positive: 100,
      negative: 718,
    },
    {
      positive: 200,
      negative: 300,
    },
  ],
  limit: 10,
});
```

```rust
use qdrant_client::{
    client::QdrantClient,
    qdrant::{vector_example::Example, ContextExamplePair, DiscoverPoints, VectorExample},
};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .discover(&DiscoverPoints {
        collection_name: "{collection_name}".to_string(),
        context: vec![
            ContextExamplePair {
                positive: Some(VectorExample {
                    example: Some(Example::Id(100.into())),
                }),
                negative: Some(VectorExample {
                    example: Some(Example::Id(718.into())),
                }),
            },
            ContextExamplePair {
                positive: Some(VectorExample {
                    example: Some(Example::Id(200.into())),
                }),
                negative: Some(VectorExample {
                    example: Some(Example::Id(300.into())),
                }),
            },
        ],
        limit: 10,
        ..Default::default()
    })
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.PointIdFactory.id;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.ContextExamplePair;
import io.qdrant.client.grpc.Points.DiscoverPoints;
import io.qdrant.client.grpc.Points.VectorExample;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .discoverAsync(
        DiscoverPoints.newBuilder()
.setCollectionName("{collection_name}") .addAllContext( List.of( ContextExamplePair.newBuilder() .setPositive(VectorExample.newBuilder().setId(id(100))) .setNegative(VectorExample.newBuilder().setId(id(718))) .build(), ContextExamplePair.newBuilder() .setPositive(VectorExample.newBuilder().setId(id(200))) .setNegative(VectorExample.newBuilder().setId(id(300))) .build())) .setLimit(10) .build()) .get(); ``` <aside role="status"> Notes about context search: * When providing ids as examples, they will be excluded from the results. * Score is always in descending order (larger is better), regardless of the metric used. * Best possible score is `0.0`, and it is normal that many points get this score. </aside>
documentation/concepts/explore.md
--- title: Optimizer weight: 70 aliases: - ../optimizer --- # Optimizer It is much more efficient to apply changes in batches than perform each change individually, as many other databases do. Qdrant here is no exception. Since Qdrant operates with data structures that are not always easy to change, it is sometimes necessary to rebuild those structures completely. Storage optimization in Qdrant occurs at the segment level (see [storage](../storage)). In this case, the segment to be optimized remains readable for the time of the rebuild. ![Segment optimization](/docs/optimization.svg) The availability is achieved by wrapping the segment into a proxy that transparently handles data changes. Changed data is placed in the copy-on-write segment, which has priority for retrieval and subsequent updates. ## Vacuum Optimizer The simplest example of a case where you need to rebuild a segment repository is to remove points. Like many other databases, Qdrant does not delete entries immediately after a query. Instead, it marks records as deleted and ignores them for future queries. This strategy allows us to minimize disk access - one of the slowest operations. However, a side effect of this strategy is that, over time, deleted records accumulate, occupy memory and slow down the system. To avoid these adverse effects, Vacuum Optimizer is used. It is used if the segment has accumulated too many deleted records. The criteria for starting the optimizer are defined in the configuration file. Here is an example of parameter values: ```yaml storage: optimizers: # The minimal fraction of deleted vectors in a segment, required to perform segment optimization deleted_threshold: 0.2 # The minimal number of vectors in a segment, required to perform segment optimization vacuum_min_vector_number: 1000 ``` ## Merge Optimizer The service may require the creation of temporary segments. Such segments, for example, are created as copy-on-write segments during optimization itself. 
It is also essential to have at least one small segment that Qdrant will use to store frequently updated data. On the other hand, too many small segments lead to suboptimal search performance.

There is the Merge Optimizer, which combines the smallest segments into one large segment. It is used if too many segments are created.

The criteria for starting the optimizer are defined in the configuration file.

Here is an example of parameter values:

```yaml
storage:
  optimizers:
    # If the number of segments exceeds this value, the optimizer will merge the smallest segments.
    max_segment_number: 5
```

## Indexing Optimizer

Qdrant allows you to choose the type of indexes and data storage methods used depending on the number of records. So, for example, if the number of points is less than 10000, using any index would be less efficient than a brute force scan.

The Indexing Optimizer is used to enable indexing and memmap storage once a segment reaches the minimum number of records. The criteria for starting the optimizer are defined in the configuration file.

Here is an example of parameter values:

```yaml
storage:
  optimizers:
    # Maximum size (in kilobytes) of vectors to store in-memory per segment.
    # Segments larger than this threshold will be stored as a read-only memmapped file.
    # Memmap storage is disabled by default. To enable it, set this threshold to a reasonable value.
    # To disable memmap storage, set this to `0`.
    # Note: 1 KB = 1 vector of size 256
    memmap_threshold_kb: 200000

    # Maximum size (in kilobytes) of vectors allowed for plain index, exceeding this threshold will enable vector indexing
    # Default value is 20,000, based on <https://github.com/google-research/google-research/blob/master/scann/docs/algorithms.md>.
    # To disable vector indexing, set to `0`.
    # Note: 1 KB = 1 vector of size 256.
    indexing_threshold_kb: 20000
```

In addition to the configuration file, you can also set optimizer parameters separately for each [collection](../collections).
Dynamic parameter updates may be useful, for example, for more efficient initial loading of points. You can disable indexing during the upload process with these settings and enable it immediately after it is finished. As a result, you will not waste extra computation resources on rebuilding the index.
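With the REST API, this workflow boils down to two `PATCH /collections/{collection_name}` calls around the bulk upload. The sketch below only builds the request bodies; the field names follow the update-collection schema, while the restored value of `20000` simply mirrors the default shown above and should be adjusted to your configuration:

```python
# Sketch: bodies for PATCH /collections/{collection_name} that tune the optimizer.
def set_indexing_threshold(kb: int) -> dict:
    """Build an update-collection body overriding the indexing threshold."""
    return {"optimizers_config": {"indexing_threshold": kb}}

# Before the bulk upload: a threshold of 0 disables vector indexing entirely.
disable_indexing = set_indexing_threshold(0)

# After the upload: restore the threshold so the Indexing Optimizer kicks in.
restore_indexing = set_indexing_threshold(20000)

print(disable_indexing)
```

Sending `disable_indexing` before the upload and `restore_indexing` afterwards means segments are indexed once, at the end, instead of being rebuilt repeatedly during ingestion.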
documentation/concepts/optimizer.md
---
title: Search
weight: 50
aliases:
  - ../search
---

# Similarity search

Searching for the nearest vectors is at the core of many representational learning applications. Modern neural networks are trained to transform objects into vectors so that objects close in the real world appear close in vector space. It could be, for example, texts with similar meanings, visually similar pictures, or songs of the same genre.

![Embeddings](/docs/encoders.png)

## Metrics

There are many ways to estimate the similarity of vectors with each other. In Qdrant terms, these ways are called metrics. The choice of metric depends on how the vectors were obtained and, in particular, on the method of neural network encoder training.

Qdrant supports these most popular types of metrics:

* Dot product: `Dot` - https://en.wikipedia.org/wiki/Dot_product
* Cosine similarity: `Cosine` - https://en.wikipedia.org/wiki/Cosine_similarity
* Euclidean distance: `Euclid` - https://en.wikipedia.org/wiki/Euclidean_distance
* Manhattan distance: `Manhattan`* - https://en.wikipedia.org/wiki/Taxicab_geometry

<i><sup>*Available as of v1.7</sup></i>

The most typical metric used in similarity learning models is the cosine metric.

![Embeddings](/docs/cos.png)

Qdrant computes this metric in two steps, which achieves a higher search speed. The first step is to normalize the vector when adding it to the collection. It happens only once for each vector. The second step is the comparison of vectors. In this case, it becomes equivalent to the dot product - a very fast operation due to SIMD.

## Query planning

Depending on the filter used in the search, there are several possible scenarios for query execution. Qdrant chooses one of the query execution options depending on the available indexes, the complexity of the conditions, and the cardinality of the filtering result. This process is called query planning.

The strategy selection process relies heavily on heuristics and can vary from release to release.
However, the general principles are: * planning is performed for each segment independently (see [storage](../storage) for more information about segments) * prefer a full scan if the amount of points is below a threshold * estimate the cardinality of a filtered result before selecting a strategy * retrieve points using payload index (see [indexing](../indexing)) if cardinality is below threshold * use filterable vector index if the cardinality is above a threshold You can adjust the threshold using a [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml), as well as independently for each collection. ## Search API Let's look at an example of a search query. REST API - API Schema definition is available [here](https://qdrant.github.io/qdrant/redoc/index.html#operation/search_points) ```http POST /collections/{collection_name}/points/search { "filter": { "must": [ { "key": "city", "match": { "value": "London" } } ] }, "params": { "hnsw_ef": 128, "exact": false }, "vector": [0.2, 0.1, 0.9, 0.7], "limit": 3 } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.search( collection_name="{collection_name}", query_filter=models.Filter( must=[ models.FieldCondition( key="city", match=models.MatchValue( value="London", ), ) ] ), search_params=models.SearchParams(hnsw_ef=128, exact=False), query_vector=[0.2, 0.1, 0.9, 0.7], limit=3, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.search("{collection_name}", { filter: { must: [ { key: "city", match: { value: "London", }, }, ], }, params: { hnsw_ef: 128, exact: false, }, vector: [0.2, 0.1, 0.9, 0.7], limit: 3, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{Condition, Filter, SearchParams, SearchPoints}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client 
.search_points(&SearchPoints {
        collection_name: "{collection_name}".to_string(),
        filter: Some(Filter::must([Condition::matches(
            "city",
            "London".to_string(),
        )])),
        params: Some(SearchParams {
            hnsw_ef: Some(128),
            exact: Some(false),
            ..Default::default()
        }),
        vector: vec![0.2, 0.1, 0.9, 0.7],
        limit: 3,
        ..Default::default()
    })
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.ConditionFactory.matchKeyword;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.SearchParams;
import io.qdrant.client.grpc.Points.SearchPoints;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .searchAsync(
        SearchPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .setFilter(Filter.newBuilder().addMust(matchKeyword("city", "London")).build())
            .setParams(SearchParams.newBuilder().setExact(false).setHnswEf(128).build())
            .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
            .setLimit(3)
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;

var client = new QdrantClient("localhost", 6334);

await client.SearchAsync(
    collectionName: "{collection_name}",
    vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
    filter: MatchKeyword("city", "London"),
    searchParams: new SearchParams { Exact = false, HnswEf = 128 },
    limit: 3
);
```

In this example, we are looking for vectors similar to vector `[0.2, 0.1, 0.9, 0.7]`. The `limit` parameter (or its alias `top`) specifies the number of most similar results we would like to retrieve.

Values under the key `params` specify custom parameters for the search. Currently, they could be:

* `hnsw_ef` - value that specifies the `ef` parameter of the HNSW algorithm.
* `exact` - option to not use the approximate search (ANN). If set to true, the search may run for a long time as it performs a full scan to retrieve exact results.
* `indexed_only` - with this option you can disable the search in those segments where the vector index is not yet built. This may be useful if you want to minimize the impact on search performance while the collection is also being updated. Using this option may lead to a partial result if the collection is not fully indexed yet, so consider using it only if eventual consistency is acceptable for your use case.

Since the `filter` parameter is specified, the search is performed only among those points that satisfy the filter condition. See details of possible filters and how they work in the [filtering](../filtering) section.

An example result of this API would be:

```json
{
  "result": [
    { "id": 10, "score": 0.81 },
    { "id": 14, "score": 0.75 },
    { "id": 11, "score": 0.73 }
  ],
  "status": "ok",
  "time": 0.001
}
```

The `result` contains a list of found point ids ordered by `score`.

Note that payload and vector data are missing in these results by default. See [payload and vector in the result](#payload-and-vector-in-the-result) on how to include it.
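Since the response is plain JSON, consuming it requires no client library. A minimal sketch of picking the best hit from such a payload (the response body here is copied from the example result above):

```python
import json

# Example response body from the search endpoint, as shown in the docs above.
response = json.loads("""
{
  "result": [
    { "id": 10, "score": 0.81 },
    { "id": 14, "score": 0.75 },
    { "id": 11, "score": 0.73 }
  ],
  "status": "ok",
  "time": 0.001
}
""")

hits = response["result"]
best = hits[0]  # hits arrive ordered by score, best first
print(best["id"], best["score"])
```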
*Available as of v0.10.0*

If the collection was created with multiple vectors, the name of the vector to use for searching should be provided:

```http
POST /collections/{collection_name}/points/search
{
    "vector": {
        "name": "image",
        "vector": [0.2, 0.1, 0.9, 0.7]
    },
    "limit": 3
}
```

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient("localhost", port=6333)

client.search(
    collection_name="{collection_name}",
    query_vector=("image", [0.2, 0.1, 0.9, 0.7]),
    limit=3,
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.search("{collection_name}", {
  vector: {
    name: "image",
    vector: [0.2, 0.1, 0.9, 0.7],
  },
  limit: 3,
});
```

```rust
use qdrant_client::{client::QdrantClient, qdrant::SearchPoints};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .search_points(&SearchPoints {
        collection_name: "{collection_name}".to_string(),
        vector: vec![0.2, 0.1, 0.9, 0.7],
        vector_name: Some("image".to_string()),
        limit: 3,
        ..Default::default()
    })
    .await?;
```

```java
import java.util.List;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.SearchPoints;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .searchAsync(
        SearchPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .setVectorName("image")
            .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
            .setLimit(3)
            .build())
    .get();
```

```csharp
using Qdrant.Client;

var client = new QdrantClient("localhost", 6334);

await client.SearchAsync(
    collectionName: "{collection_name}",
    vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
    vectorName: "image",
    limit: 3
);
```

Search is performed only among vectors with the same name.
*Available as of v1.7.0*

If the collection was created with sparse vectors, the name of the sparse vector to use for searching should be provided.

You can still use payload filtering and other features of the search API with sparse vectors.

There are, however, important differences between dense and sparse vector search:

| Index | Sparse Query | Dense Query |
| --- | --- | --- |
| Scoring Metric | Default is `Dot product`, no need to specify it | `Distance` has supported metrics e.g. Dot, Cosine |
| Search Type | Always exact in Qdrant | HNSW is an approximate NN |
| Return Behaviour | Returns only vectors with non-zero values in the same indices as the query vector | Returns `limit` vectors |

In general, the speed of the search is proportional to the number of non-zero values in the query vector.

```http
POST /collections/{collection_name}/points/search
{
    "vector": {
        "name": "text",
        "vector": {
            "indices": [1, 7],
            "values": [2.0, 1.0]
        }
    },
    "limit": 3
}
```

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient("localhost", port=6333)

client.search(
    collection_name="{collection_name}",
    query_vector=models.NamedSparseVector(
        name="text",
        vector=models.SparseVector(
            indices=[1, 7],
            values=[2.0, 1.0],
        ),
    ),
    limit=3,
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.search("{collection_name}", {
  vector: {
    name: "text",
    vector: {
      indices: [1, 7],
      values: [2.0, 1.0],
    },
  },
  limit: 3,
});
```

```rust
use qdrant_client::{client::QdrantClient, client::Vector, qdrant::SearchPoints};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

let sparse_vector: Vector = vec![(1, 2.0), (7, 1.0)].into();

client
    .search_points(&SearchPoints {
        collection_name: "{collection_name}".to_string(),
        vector_name: Some("text".to_string()),
        sparse_indices: sparse_vector.indices,
        vector: sparse_vector.data,
        limit: 3,
..Default::default()
    })
    .await?;
```

```java
import java.util.List;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.SearchPoints;
import io.qdrant.client.grpc.Points.SparseIndices;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .searchAsync(
        SearchPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .setVectorName("text")
            .addAllVector(List.of(2.0f, 1.0f))
            .setSparseIndices(SparseIndices.newBuilder().addAllData(List.of(1, 7)).build())
            .setLimit(3)
            .build())
    .get();
```

```csharp
using Qdrant.Client;

var client = new QdrantClient("localhost", 6334);

await client.SearchAsync(
    collectionName: "{collection_name}",
    vector: new float[] { 2.0f, 1.0f },
    vectorName: "text",
    limit: 3,
    sparseIndices: new uint[] { 1, 7 }
);
```

### Filtering results by score

In addition to payload filtering, it might be useful to filter out results with a low similarity score. For example, if you know the minimal acceptance score for your model and do not want any results which are less similar than the threshold, you can use the `score_threshold` parameter of the search query. It will exclude all results with a score worse than the given value.

<aside role="status">This parameter may exclude lower or higher scores depending on the metric used. For example, higher scores of the Euclidean metric are considered more distant and, therefore, will be excluded.</aside>

### Payload and vector in the result

By default, retrieval methods do not return any stored information such as payload and vectors. Additional parameters `with_vectors` and `with_payload` alter this behavior.
Example: ```http POST /collections/{collection_name}/points/search { "vector": [0.2, 0.1, 0.9, 0.7], "with_vectors": true, "with_payload": true } ``` ```python client.search( collection_name="{collection_name}", query_vector=[0.2, 0.1, 0.9, 0.7], with_vectors=True, with_payload=True, ) ``` ```typescript client.search("{collection_name}", { vector: [0.2, 0.1, 0.9, 0.7], with_vector: true, with_payload: true, }); ``` ```rust use qdrant_client::{client::QdrantClient, qdrant::SearchPoints}; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .search_points(&SearchPoints { collection_name: "{collection_name}".to_string(), vector: vec![0.2, 0.1, 0.9, 0.7], with_payload: Some(true.into()), with_vectors: Some(true.into()), limit: 3, ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.WithPayloadSelectorFactory.enable; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.WithVectorsSelectorFactory; import io.qdrant.client.grpc.Points.SearchPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .searchAsync( SearchPoints.newBuilder() .setCollectionName("{collection_name}") .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f)) .setWithPayload(enable(true)) .setWithVectors(WithVectorsSelectorFactory.enable(true)) .setLimit(3) .build()) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.SearchAsync( collectionName: "{collection_name}", vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, payloadSelector: true, vectorsSelector: true, limit: 3 ); ``` You can use `with_payload` to scope to or filter a specific payload subset. 
You can even specify an array of items to include, such as `city`, `village`, and `town`: ```http POST /collections/{collection_name}/points/search { "vector": [0.2, 0.1, 0.9, 0.7], "with_payload": ["city", "village", "town"] } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.search( collection_name="{collection_name}", query_vector=[0.2, 0.1, 0.9, 0.7], with_payload=["city", "village", "town"], ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.search("{collection_name}", { vector: [0.2, 0.1, 0.9, 0.7], with_payload: ["city", "village", "town"], }); ``` ```rust use qdrant_client::{client::QdrantClient, qdrant::SearchPoints}; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .search_points(&SearchPoints { collection_name: "{collection_name}".to_string(), vector: vec![0.2, 0.1, 0.9, 0.7], with_payload: Some(vec!["city", "village", "town"].into()), limit: 3, ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.WithPayloadSelectorFactory.include; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.SearchPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .searchAsync( SearchPoints.newBuilder() .setCollectionName("{collection_name}") .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f)) .setWithPayload(include(List.of("city", "village", "town"))) .setLimit(3) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.SearchAsync( collectionName: "{collection_name}", vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, payloadSelector: new WithPayloadSelector { Include = new PayloadIncludeSelector { Fields = { 
new string[] { "city", "village", "town" } } } }, limit: 3 ); ``` Or use `include` or `exclude` explicitly. For example, to exclude `city`: ```http POST /collections/{collection_name}/points/search { "vector": [0.2, 0.1, 0.9, 0.7], "with_payload": { "exclude": ["city"] } } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.search( collection_name="{collection_name}", query_vector=[0.2, 0.1, 0.9, 0.7], with_payload=models.PayloadSelectorExclude( exclude=["city"], ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.search("{collection_name}", { vector: [0.2, 0.1, 0.9, 0.7], with_payload: { exclude: ["city"], }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ with_payload_selector::SelectorOptions, PayloadExcludeSelector, SearchPoints, WithPayloadSelector, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .search_points(&SearchPoints { collection_name: "{collection_name}".to_string(), vector: vec![0.2, 0.1, 0.9, 0.7], with_payload: Some(WithPayloadSelector { selector_options: Some(SelectorOptions::Exclude(PayloadExcludeSelector { fields: vec!["city".to_string()], })), }), limit: 3, ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.WithPayloadSelectorFactory.exclude; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.SearchPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .searchAsync( SearchPoints.newBuilder() .setCollectionName("{collection_name}") .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f)) .setWithPayload(exclude(List.of("city"))) .setLimit(3) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new 
QdrantClient("localhost", 6334);

await client.SearchAsync(
    collectionName: "{collection_name}",
    vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
    payloadSelector: new WithPayloadSelector
    {
        Exclude = new PayloadExcludeSelector { Fields = { new string[] { "city" } } }
    },
    limit: 3
);
```

It is possible to target nested fields using a dot notation:

- `payload.nested_field` - for a nested field
- `payload.nested_array[].sub_field` - for projecting nested fields within an array

Accessing array elements by index is currently not supported.

## Batch search API

*Available as of v0.10.0*

The batch search API enables you to perform multiple search requests via a single request.

Its semantics are straightforward: `n` batched search requests are equivalent to `n` individual search requests.

This approach has several advantages. Logically, fewer network connections are required, which can be very beneficial on its own. More importantly, batched requests are efficiently processed by the query planner, which can detect and optimize requests that share the same `filter`. This can have a great effect on latency for non-trivial filters, as the intermediary results can be shared among the requests.

To use it, simply pack together your search requests. All the regular attributes of a search request are of course available.
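Conceptually, the shared-filter optimization can be illustrated with a small client-side sketch. This is plain Python with brute-force dot-product scoring, not Qdrant's actual query planner; the point names and helper functions are illustrative only:

```python
# Client-side illustration of batch-search semantics (not Qdrant internals).
# With a shared filter, the filtering pass runs once and its result is
# reused for every query in the batch.

def dot(a, b):
    # Dot-product similarity between two dense vectors.
    return sum(x * y for x, y in zip(a, b))

def batch_search(points, queries, filter_fn, limit):
    # Shared work: apply the common filter a single time.
    candidates = [p for p in points if filter_fn(p["payload"])]
    results = []
    for q in queries:
        scored = sorted(candidates, key=lambda p: dot(q, p["vector"]), reverse=True)
        results.append([p["id"] for p in scored[:limit]])
    return results  # one result list per search request

points = [
    {"id": 1, "vector": [0.9, 0.1], "payload": {"city": "London"}},
    {"id": 2, "vector": [0.1, 0.9], "payload": {"city": "London"}},
    {"id": 3, "vector": [0.5, 0.5], "payload": {"city": "Berlin"}},
]
hits = batch_search(
    points,
    queries=[[1.0, 0.0], [0.0, 1.0]],
    filter_fn=lambda payload: payload["city"] == "London",
    limit=1,
)
# hits == [[1], [2]]
```

The same structure mirrors the API: two queries sharing one filter produce two independent result lists, while the filter evaluation cost is paid once.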
```http POST /collections/{collection_name}/points/search/batch { "searches": [ { "filter": { "must": [ { "key": "city", "match": { "value": "London" } } ] }, "vector": [0.2, 0.1, 0.9, 0.7], "limit": 3 }, { "filter": { "must": [ { "key": "city", "match": { "value": "London" } } ] }, "vector": [0.5, 0.3, 0.2, 0.3], "limit": 3 } ] } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) filter = models.Filter( must=[ models.FieldCondition( key="city", match=models.MatchValue( value="London", ), ) ] ) search_queries = [ models.SearchRequest(vector=[0.2, 0.1, 0.9, 0.7], filter=filter, limit=3), models.SearchRequest(vector=[0.5, 0.3, 0.2, 0.3], filter=filter, limit=3), ] client.search_batch(collection_name="{collection_name}", requests=search_queries) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); const filter = { must: [ { key: "city", match: { value: "London", }, }, ], }; const searches = [ { vector: [0.2, 0.1, 0.9, 0.7], filter, limit: 3, }, { vector: [0.5, 0.3, 0.2, 0.3], filter, limit: 3, }, ]; client.searchBatch("{collection_name}", { searches, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{Condition, Filter, SearchBatchPoints, SearchPoints}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; let filter = Filter::must([Condition::matches("city", "London".to_string())]); let searches = vec![ SearchPoints { collection_name: "{collection_name}".to_string(), vector: vec![0.2, 0.1, 0.9, 0.7], filter: Some(filter.clone()), limit: 3, ..Default::default() }, SearchPoints { collection_name: "{collection_name}".to_string(), vector: vec![0.5, 0.3, 0.2, 0.3], filter: Some(filter), limit: 3, ..Default::default() }, ]; client .search_batch_points(&SearchBatchPoints { collection_name: "{collection_name}".to_string(), search_points: searches, read_consistency: 
None,
        ..Default::default()
    })
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.ConditionFactory.matchKeyword;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.SearchPoints;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

Filter filter = Filter.newBuilder().addMust(matchKeyword("city", "London")).build();

List<SearchPoints> searches =
    List.of(
        SearchPoints.newBuilder()
            .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
            .setFilter(filter)
            .setLimit(3)
            .build(),
        SearchPoints.newBuilder()
            .addAllVector(List.of(0.5f, 0.3f, 0.2f, 0.3f))
            .setFilter(filter)
            .setLimit(3)
            .build());

client.searchBatchAsync("{collection_name}", searches, null).get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;

var client = new QdrantClient("localhost", 6334);

var filter = MatchKeyword("city", "London");

var searches = new List<SearchPoints>
{
    new()
    {
        Vector = { new float[] { 0.2f, 0.1f, 0.9f, 0.7f } },
        Filter = filter,
        Limit = 3
    },
    new()
    {
        Vector = { new float[] { 0.5f, 0.3f, 0.2f, 0.3f } },
        Filter = filter,
        Limit = 3
    }
};

await client.SearchBatchAsync(collectionName: "{collection_name}", searches: searches);
```

The result of this API contains one array per search request.
```json
{
  "result": [
    [
      { "id": 10, "score": 0.81 },
      { "id": 14, "score": 0.75 },
      { "id": 11, "score": 0.73 }
    ],
    [
      { "id": 1, "score": 0.92 },
      { "id": 3, "score": 0.89 },
      { "id": 9, "score": 0.75 }
    ]
  ],
  "status": "ok",
  "time": 0.001
}
```

## Pagination

*Available as of v0.8.3*

Search and [recommendation](../explore/#recommendation-api) APIs allow you to skip the first results of the search and return only results starting from a specified offset:

Example:

```http
POST /collections/{collection_name}/points/search
{
    "vector": [0.2, 0.1, 0.9, 0.7],
    "with_vectors": true,
    "with_payload": true,
    "limit": 10,
    "offset": 100
}
```

```python
from qdrant_client import QdrantClient

client = QdrantClient("localhost", port=6333)

client.search(
    collection_name="{collection_name}",
    query_vector=[0.2, 0.1, 0.9, 0.7],
    with_vectors=True,
    with_payload=True,
    limit=10,
    offset=100,
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.search("{collection_name}", {
  vector: [0.2, 0.1, 0.9, 0.7],
  with_vector: true,
  with_payload: true,
  limit: 10,
  offset: 100,
});
```

```rust
use qdrant_client::{client::QdrantClient, qdrant::SearchPoints};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .search_points(&SearchPoints {
        collection_name: "{collection_name}".to_string(),
        vector: vec![0.2, 0.1, 0.9, 0.7],
        with_vectors: Some(true.into()),
        with_payload: Some(true.into()),
        limit: 10,
        offset: Some(100),
        ..Default::default()
    })
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.WithPayloadSelectorFactory.enable;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.WithVectorsSelectorFactory;
import io.qdrant.client.grpc.Points.SearchPoints;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .searchAsync(
        SearchPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
            .setWithPayload(enable(true))
            .setWithVectors(WithVectorsSelectorFactory.enable(true))
            .setLimit(10)
            .setOffset(100)
            .build())
    .get();
```

```csharp
using Qdrant.Client;

var client = new QdrantClient("localhost", 6334);

await client.SearchAsync(
    "{collection_name}",
    new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
    payloadSelector: true,
    vectorsSelector: true,
    limit: 10,
    offset: 100
);
```

This is equivalent to retrieving the 11th page with 10 records per page.

<aside role="alert">Large offset values may cause performance issues</aside>

Vector-based retrieval in general, and the HNSW index in particular, are not designed to be paginated. It is impossible to retrieve the Nth closest vector without retrieving the first N vectors first.

However, using the `offset` parameter saves resources by reducing network traffic and the number of times the storage is accessed.

Using an `offset` parameter requires internally retrieving `offset + limit` points, but payload and vectors are read from storage only for those points which are actually returned.

## Grouping API

*Available as of v1.2.0*

It is possible to group results by a certain field. This is useful when you have multiple points for the same item, and you want to avoid redundancy of the same item in the results.

For example, if you have a large document split into multiple chunks, and you want to search or [recommend](../explore/#recommendation-api) on a per-document basis, you can group the results by the document ID.
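Conceptually, the grouping behavior can be approximated client-side. The sketch below is plain Python, not Qdrant's implementation; the `group_hits` helper and sample data are illustrative. It shows the three behaviors described in this section: bucketing by a payload field, array values placing a point into several groups, and groups being ordered by their top hit:

```python
# Client-side approximation of the groups API semantics (not Qdrant internals).

def group_hits(scored_points, group_by, limit, group_size):
    # scored_points is assumed to be sorted by score, best first.
    groups = {}
    for point in scored_points:
        values = point["payload"][group_by]
        if not isinstance(values, list):
            values = [values]  # scalar group key behaves like a one-element array
        for value in values:
            bucket = groups.setdefault(value, [])
            if len(bucket) < group_size:  # best-effort, like `group_size`
                bucket.append({"id": point["id"], "score": point["score"]})
    # Groups are ordered by the score of their top point.
    ordered = sorted(groups.items(), key=lambda kv: kv[1][0]["score"], reverse=True)
    return [{"id": gid, "hits": hits} for gid, hits in ordered[:limit]]

scored = [
    {"id": 0, "score": 0.91, "payload": {"document_id": "a"}},
    {"id": 1, "score": 0.85, "payload": {"document_id": ["a", "b"]}},
    {"id": 3, "score": 0.79, "payload": {"document_id": 123}},
]
result = group_hits(scored, "document_id", limit=4, group_size=2)
# → groups "a" (ids 0 and 1), "b" (id 1), 123 (id 3), ordered by top score
```

Note how point `1`, whose `document_id` is an array, shows up in both group `"a"` and group `"b"`, matching the multi-group behavior described below for array-valued `group_by` fields.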
Consider having points with the following payloads:

```json
[
    { "id": 0, "payload": { "chunk_part": 0, "document_id": "a" }, "vector": [0.91] },
    { "id": 1, "payload": { "chunk_part": 1, "document_id": ["a", "b"] }, "vector": [0.8] },
    { "id": 2, "payload": { "chunk_part": 2, "document_id": "a" }, "vector": [0.2] },
    { "id": 3, "payload": { "chunk_part": 0, "document_id": 123 }, "vector": [0.79] },
    { "id": 4, "payload": { "chunk_part": 1, "document_id": 123 }, "vector": [0.75] },
    { "id": 5, "payload": { "chunk_part": 0, "document_id": -10 }, "vector": [0.6] }
]
```

With the ***groups*** API, you will be able to get the best *N* points for each document, assuming that the payload of the points contains the document ID. Of course, there will be times when the best *N* points cannot be fulfilled due to a lack of points or a large distance with respect to the query. In every case, the `group_size` is a best-effort parameter, akin to the `limit` parameter.

### Search groups

REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#tag/points/operation/search_point_groups)):

```http
POST /collections/{collection_name}/points/search/groups
{
    // Same as in the regular search API
    "vector": [1.1],

    // Grouping parameters
    "group_by": "document_id",  // Path of the field to group by
    "limit": 4,                 // Max amount of groups
    "group_size": 2             // Max amount of points per group
}
```

```python
client.search_groups(
    collection_name="{collection_name}",
    # Same as in the regular search() API
    query_vector=[1.1],
    # Grouping parameters
    group_by="document_id",  # Path of the field to group by
    limit=4,  # Max amount of groups
    group_size=2,  # Max amount of points per group
)
```

```typescript
client.searchPointGroups("{collection_name}", {
  vector: [1.1],
  group_by: "document_id",
  limit: 4,
  group_size: 2,
});
```

```rust
use qdrant_client::qdrant::SearchPointGroups;

client
    .search_groups(&SearchPointGroups {
        collection_name: "{collection_name}".to_string(),
        vector: vec![1.1],
        group_by:
"document_id".to_string(),
        limit: 4,
        group_size: 2,
        ..Default::default()
    })
    .await?;
```

```java
import java.util.List;

import io.qdrant.client.grpc.Points.SearchPointGroups;

client
    .searchGroupsAsync(
        SearchPointGroups.newBuilder()
            .setCollectionName("{collection_name}")
            .addAllVector(List.of(1.1f))
            .setGroupBy("document_id")
            .setLimit(4)
            .setGroupSize(2)
            .build())
    .get();
```

```csharp
using Qdrant.Client;

var client = new QdrantClient("localhost", 6334);

await client.SearchGroupsAsync(
    collectionName: "{collection_name}",
    vector: new float[] { 1.1f },
    groupBy: "document_id",
    limit: 4,
    groupSize: 2
);
```

The output of a ***groups*** call looks like this:

```json
{
    "result": {
        "groups": [
            {
                "id": "a",
                "hits": [
                    { "id": 0, "score": 0.91 },
                    { "id": 1, "score": 0.85 }
                ]
            },
            {
                "id": "b",
                "hits": [
                    { "id": 1, "score": 0.85 }
                ]
            },
            {
                "id": 123,
                "hits": [
                    { "id": 3, "score": 0.79 },
                    { "id": 4, "score": 0.75 }
                ]
            },
            {
                "id": -10,
                "hits": [
                    { "id": 5, "score": 0.6 }
                ]
            }
        ]
    },
    "status": "ok",
    "time": 0.001
}
```

The groups are ordered by the score of the top point in the group. Inside each group, the points are sorted as well.

If the `group_by` field of a point is an array (e.g. `"document_id": ["a", "b"]`), the point can be included in multiple groups (e.g. `"document_id": "a"` and `"document_id": "b"`).

<aside role="status">This feature relies heavily on the `group_by` key provided. To improve performance, make sure to create a dedicated index for it.</aside>

**Limitations**:

* Only [keyword](../payload/#keyword) and [integer](../payload/#integer) payload values are supported for the `group_by` parameter. Payload values with other types will be ignored.
* At the moment, pagination is not enabled when using **groups**, so the `offset` parameter is not allowed.

### Lookup in groups

*Available as of v1.3.0*

Having multiple points for parts of the same item often introduces redundancy in the stored data.
This may be fine if the information shared by the points is small, but it can become a problem if the payload is large, because it multiplies the storage space needed by the number of points per group.

One way of optimizing storage when using groups is to store the information shared by the points with the same group id in a single point in another collection. Then, when using the [**groups** API](#grouping-api), add the `with_lookup` parameter to bring the information from those points into each group.

![Group id matches point id](/docs/lookup_id_linking.png)

This has the extra benefit of having a single point to update when the information shared by the points in a group changes.

For example, if you have a collection of documents, you may want to chunk them and store the points for the chunks in a separate collection, making sure that you store the point id of the document each chunk belongs to in the payload of the chunk point.

In this case, to bring the information from the documents into the chunks grouped by the document id, you can use the `with_lookup` parameter:

```http
POST /collections/chunks/points/search/groups
{
    // Same as in the regular search API
    "vector": [1.1],

    // Grouping parameters
    "group_by": "document_id",
    "limit": 2,
    "group_size": 2,

    // Lookup parameters
    "with_lookup": {
        // Name of the collection to look up points in
        "collection": "documents",

        // Options for specifying what to bring from the payload
        // of the looked up point, true by default
        "with_payload": ["title", "text"],

        // Options for specifying what to bring from the vector(s)
        // of the looked up point, true by default
        "with_vectors": false
    }
}
```

```python
client.search_groups(
    collection_name="chunks",
    # Same as in the regular search() API
    query_vector=[1.1],
    # Grouping parameters
    group_by="document_id",  # Path of the field to group by
    limit=2,  # Max amount of groups
    group_size=2,  # Max amount of points per group
    # Lookup parameters
    with_lookup=models.WithLookup(
        # Name of the collection to look up points in
        collection="documents",
        # Options for specifying what to bring from the payload
        # of the looked up point, True by default
        with_payload=["title", "text"],
        # Options for specifying what to bring from the vector(s)
        # of the looked up point, True by default
        with_vectors=False,
    ),
)
```

```typescript
client.searchPointGroups("{collection_name}", {
  vector: [1.1],
  group_by: "document_id",
  limit: 2,
  group_size: 2,
  with_lookup: {
    collection: "documents",
    with_payload: ["title", "text"],
    with_vectors: false,
  },
});
```

```rust
use qdrant_client::qdrant::{SearchPointGroups, WithLookup};

client
    .search_groups(&SearchPointGroups {
        collection_name: "{collection_name}".to_string(),
        vector: vec![1.1],
        group_by: "document_id".to_string(),
        limit: 2,
        group_size: 2,
        with_lookup: Some(WithLookup {
            collection: "documents".to_string(),
            with_payload: Some(vec!["title", "text"].into()),
            with_vectors: Some(false.into()),
        }),
        ..Default::default()
    })
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.WithPayloadSelectorFactory.include;
import static io.qdrant.client.WithVectorsSelectorFactory.enable;

import io.qdrant.client.grpc.Points.SearchPointGroups;
import io.qdrant.client.grpc.Points.WithLookup;

client
    .searchGroupsAsync(
        SearchPointGroups.newBuilder()
            .setCollectionName("{collection_name}")
            .addAllVector(List.of(1.1f))
            .setGroupBy("document_id")
            .setLimit(2)
            .setGroupSize(2)
            .setWithLookup(
                WithLookup.newBuilder()
                    .setCollection("documents")
                    .setWithPayload(include(List.of("title", "text")))
                    .setWithVectors(enable(false))
                    .build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.SearchGroupsAsync(
    collectionName: "{collection_name}",
    vector: new float[] { 1.1f },
    groupBy: "document_id",
    limit: 2,
    groupSize: 2,
    withLookup: new WithLookup
    {
        Collection = "documents",
        WithPayload = new WithPayloadSelector
        {
Include = new PayloadIncludeSelector { Fields = { new string[] { "title", "text" } } } }, WithVectors = false } ); ``` For the `with_lookup` parameter, you can also use the shorthand `with_lookup="documents"` to bring the whole payload and vector(s) without explicitly specifying it. The looked up result will show up under `lookup` in each group. ```json { "result": { "groups": [ { "id": 1, "hits": [ { "id": 0, "score": 0.91 }, { "id": 1, "score": 0.85 } ], "lookup": { "id": 1, "payload": { "title": "Document A", "text": "This is document A" } } }, { "id": 2, "hits": [ { "id": 1, "score": 0.85 } ], "lookup": { "id": 2, "payload": { "title": "Document B", "text": "This is document B" } } } ] }, "status": "ok", "time": 0.001 } ``` Since the lookup is done by matching directly with the point id, any group id that is not an existing (and valid) point id in the lookup collection will be ignored, and the `lookup` field will be empty.
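The id-matching join described above can be approximated with a small client-side sketch. This is plain Python, not the server-side implementation; `groups` and `documents` below are hypothetical data shaped like the API's responses, not client objects:

```python
# Client-side sketch of the with_lookup join: each group id is matched
# directly against point ids in the lookup collection; group ids with no
# matching point simply get no lookup record.

def attach_lookup(groups, lookup_collection):
    lookup_by_id = {p["id"]: p for p in lookup_collection}
    for group in groups:
        point = lookup_by_id.get(group["id"])
        if point is not None:
            group["lookup"] = {"id": point["id"], "payload": point["payload"]}
    return groups

groups = [
    {"id": 1, "hits": [{"id": 0, "score": 0.91}]},
    {"id": 99, "hits": [{"id": 5, "score": 0.60}]},  # no matching document
]
documents = [{"id": 1, "payload": {"title": "Document A"}}]

merged = attach_lookup(groups, documents)
# merged[0] gains a "lookup" entry; group id 99 is ignored, as described above
```

The sketch mirrors the documented behavior: the join key is the group id itself, so only group ids that exist as point ids in the lookup collection receive a `lookup` field.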
documentation/concepts/search.md
---
title: Payload
weight: 40
aliases:
  - ../payload
---

# Payload

One of the significant features of Qdrant is the ability to store additional information along with vectors. This information is called `payload` in Qdrant terminology.

Qdrant allows you to store any information that can be represented using JSON.

Here is an example of a typical payload:

```json
{
    "name": "jacket",
    "colors": ["red", "blue"],
    "count": 10,
    "price": 11.99,
    "locations": [
        {
            "lon": 13.4050,
            "lat": 52.5200
        }
    ],
    "reviews": [
        {
            "user": "alice",
            "score": 4
        },
        {
            "user": "bob",
            "score": 5
        }
    ]
}
```

## Payload types

In addition to storing payloads, Qdrant also allows you to search based on certain kinds of values. This feature is implemented as additional filters during the search and will enable you to incorporate custom logic on top of semantic similarity.

During the filtering, Qdrant will check the conditions over the values that match the type of the filtering condition. If the stored value type does not fit the filtering condition, it is considered not satisfied. For example, you will get an empty output if you apply the [range condition](../filtering/#range) to string data.

However, arrays (multiple values of the same type) are treated a little differently. When we apply a filter to an array, it will succeed if at least one of the values inside the array meets the condition.

The filtering process is discussed in detail in the section [Filtering](../filtering).

Let's look at the data types that Qdrant supports for searching:

### Integer

`integer` - 64-bit integer in the range from `-9223372036854775808` to `9223372036854775807`.

Example of single and multiple `integer` values:

```json
{
    "count": 10,
    "sizes": [35, 36, 38]
}
```

### Float

`float` - 64-bit floating point number.

Example of single and multiple `float` values:

```json
{
    "price": 11.99,
    "ratings": [9.1, 9.2, 9.4]
}
```

### Bool

`bool` - binary value, either `true` or `false`.
Example of single and multiple `bool` values:

```json
{
    "is_delivered": true,
    "responses": [false, false, true, false]
}
```

### Keyword

`keyword` - string value.

Example of single and multiple `keyword` values:

```json
{
    "name": "Alice",
    "friends": [
        "bob",
        "eva",
        "jack"
    ]
}
```

### Geo

`geo` is used to represent geographical coordinates.

Example of single and multiple `geo` values:

```json
{
    "location": {
        "lon": 13.4050,
        "lat": 52.5200
    },
    "cities": [
        {
            "lon": -0.1276,
            "lat": 51.5072
        },
        {
            "lon": -74.0060,
            "lat": 40.7128
        }
    ]
}
```

Coordinates should be described as an object containing two fields: `lon` - for longitude, and `lat` - for latitude.

## Create point with payload

REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#tag/points/operation/upsert_points))

```http
PUT /collections/{collection_name}/points
{
    "points": [
        {
            "id": 1,
            "vector": [0.05, 0.61, 0.76, 0.74],
            "payload": {"city": "Berlin", "price": 1.99}
        },
        {
            "id": 2,
            "vector": [0.19, 0.81, 0.75, 0.11],
            "payload": {"city": ["Berlin", "London"], "price": 1.99}
        },
        {
            "id": 3,
            "vector": [0.36, 0.55, 0.47, 0.94],
            "payload": {"city": ["Berlin", "Moscow"], "price": [1.99, 2.99]}
        }
    ]
}
```

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient(host="localhost", port=6333)

client.upsert(
    collection_name="{collection_name}",
    points=[
        models.PointStruct(
            id=1,
            vector=[0.05, 0.61, 0.76, 0.74],
            payload={
                "city": "Berlin",
                "price": 1.99,
            },
        ),
        models.PointStruct(
            id=2,
            vector=[0.19, 0.81, 0.75, 0.11],
            payload={
                "city": ["Berlin", "London"],
                "price": 1.99,
            },
        ),
        models.PointStruct(
            id=3,
            vector=[0.36, 0.55, 0.47, 0.94],
            payload={
                "city": ["Berlin", "Moscow"],
                "price": [1.99, 2.99],
            },
        ),
    ],
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.upsert("{collection_name}", {
  points: [
    {
      id: 1,
      vector: [0.05, 0.61, 0.76, 0.74],
      payload: {
        city: "Berlin",
        price:
        1.99,
      },
    },
    {
      id: 2,
      vector: [0.19, 0.81, 0.75, 0.11],
      payload: {
        city: ["Berlin", "London"],
        price: 1.99,
      },
    },
    {
      id: 3,
      vector: [0.36, 0.55, 0.47, 0.94],
      payload: {
        city: ["Berlin", "Moscow"],
        price: [1.99, 2.99],
      },
    },
  ],
});
```

```rust
use qdrant_client::{client::QdrantClient, qdrant::PointStruct};
use serde_json::json;

let client = QdrantClient::from_url("http://localhost:6334").build()?;

let points = vec![
    PointStruct::new(
        1,
        vec![0.05, 0.61, 0.76, 0.74],
        json!(
            {"city": "Berlin", "price": 1.99}
        )
        .try_into()
        .unwrap(),
    ),
    PointStruct::new(
        2,
        vec![0.19, 0.81, 0.75, 0.11],
        json!(
            {"city": ["Berlin", "London"], "price": 1.99}
        )
        .try_into()
        .unwrap(),
    ),
    PointStruct::new(
        3,
        vec![0.36, 0.55, 0.47, 0.94],
        json!(
            {"city": ["Berlin", "Moscow"], "price": [1.99, 2.99]}
        )
        .try_into()
        .unwrap(),
    ),
];

client
    .upsert_points("{collection_name}".to_string(), None, points, None)
    .await?;
```

```java
import java.util.List;
import java.util.Map;

import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.ValueFactory.list;
import static io.qdrant.client.ValueFactory.value;
import static io.qdrant.client.VectorsFactory.vectors;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.PointStruct;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .upsertAsync(
        "{collection_name}",
        List.of(
            PointStruct.newBuilder()
                .setId(id(1))
                .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f))
                .putAllPayload(Map.of("city", value("Berlin"), "price", value(1.99)))
                .build(),
            PointStruct.newBuilder()
                .setId(id(2))
                .setVectors(vectors(0.19f, 0.81f, 0.75f, 0.11f))
                .putAllPayload(
                    Map.of(
                        "city",
                        list(List.of(value("Berlin"), value("London"))),
                        "price",
                        value(1.99)))
                .build(),
            PointStruct.newBuilder()
                .setId(id(3))
                .setVectors(vectors(0.36f, 0.55f, 0.47f, 0.94f))
                .putAllPayload(
                    Map.of(
                        "city",
                        list(List.of(value("Berlin"), value("Moscow"))),
                        "price",
                        list(List.of(value(1.99), value(2.99)))))
                .build()))
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.UpsertAsync(
    collectionName: "{collection_name}",
    points: new List<PointStruct>
    {
        new PointStruct
        {
            Id = 1,
            Vectors = new[] { 0.05f, 0.61f, 0.76f, 0.74f },
            Payload = { ["city"] = "Berlin", ["price"] = 1.99 }
        },
        new PointStruct
        {
            Id = 2,
            Vectors = new[] { 0.19f, 0.81f, 0.75f, 0.11f },
            Payload = { ["city"] = new[] { "Berlin", "London" }, ["price"] = 1.99 }
        },
        new PointStruct
        {
            Id = 3,
            Vectors = new[] { 0.36f, 0.55f, 0.47f, 0.94f },
            Payload =
            {
                ["city"] = new[] { "Berlin", "Moscow" },
                ["price"] = new Value
                {
                    ListValue = new ListValue { Values = { new Value[] { 1.99, 2.99 } } }
                }
            }
        }
    }
);
```

## Update payload

### Set payload

Set only the given payload values on a point.

REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/set_payload)):

```http
POST /collections/{collection_name}/points/payload
{
    "payload": {
        "property1": "string",
        "property2": "string"
    },
    "points": [
        0, 3, 10
    ]
}
```

```python
client.set_payload(
    collection_name="{collection_name}",
    payload={
        "property1": "string",
        "property2": "string",
    },
    points=[0, 3, 10],
)
```

```typescript
client.setPayload("{collection_name}", {
  payload: {
    property1: "string",
    property2: "string",
  },
  points: [0, 3, 10],
});
```

```rust
use qdrant_client::qdrant::{
    points_selector::PointsSelectorOneOf, PointsIdsList, PointsSelector,
};
use serde_json::json;

client
    .set_payload_blocking(
        "{collection_name}",
        None,
        &PointsSelector {
            points_selector_one_of: Some(PointsSelectorOneOf::Points(PointsIdsList {
                ids: vec![0.into(), 3.into(), 10.into()],
            })),
        },
        json!({
            "property1": "string",
            "property2": "string",
        })
        .try_into()
        .unwrap(),
        None,
    )
    .await?;
```

```java
import java.util.List;
import java.util.Map;

import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.ValueFactory.value;

client
    .setPayloadAsync(
        "{collection_name}",
        Map.of("property1", value("string"), "property2", value("string")),
        List.of(id(0),
id(3), id(10)), true, null, null) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.SetPayloadAsync( collectionName: "{collection_name}", payload: new Dictionary<string, Value> { { "property1", "string" }, { "property2", "string" } }, ids: new ulong[] { 0, 3, 10 } ); ``` You don't need to know the ids of the points you want to modify. The alternative is to use filters. ```http POST /collections/{collection_name}/points/payload { "payload": { "property1": "string", "property2": "string" }, "filter": { "must": [ { "key": "color", "match": { "value": "red" } } ] } } ``` ```python client.set_payload( collection_name="{collection_name}", payload={ "property1": "string", "property2": "string", }, points=models.Filter( must=[ models.FieldCondition( key="color", match=models.MatchValue(value="red"), ), ], ), ) ``` ```typescript client.setPayload("{collection_name}", { payload: { property1: "string", property2: "string", }, filter: { must: [ { key: "color", match: { value: "red", }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{ points_selector::PointsSelectorOneOf, Condition, Filter, PointsSelector, }; use serde_json::json; client .set_payload_blocking( "{collection_name}", None, &PointsSelector { points_selector_one_of: Some(PointsSelectorOneOf::Filter(Filter::must([ Condition::matches("color", "red".to_string()), ]))), }, json!({ "property1": "string", "property2": "string", }) .try_into() .unwrap(), None, ) .await?; ``` ```java import java.util.Map; import static io.qdrant.client.ConditionFactory.matchKeyword; import static io.qdrant.client.ValueFactory.value; client .setPayloadAsync( "{collection_name}", Map.of("property1", value("string"), "property2", value("string")), Filter.newBuilder().addMust(matchKeyword("color", "red")).build(), true, null, null) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new 
QdrantClient("localhost", 6334); await client.SetPayloadAsync( collectionName: "{collection_name}", payload: new Dictionary<string, Value> { { "property1", "string" }, { "property2", "string" } }, filter: MatchKeyword("color", "red") ); ``` ### Overwrite payload Fully replace any existing payload with the given one. REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/overwrite_payload)): ```http PUT /collections/{collection_name}/points/payload { "payload": { "property1": "string", "property2": "string" }, "points": [ 0, 3, 10 ] } ``` ```python client.overwrite_payload( collection_name="{collection_name}", payload={ "property1": "string", "property2": "string", }, points=[0, 3, 10], ) ``` ```typescript client.overwritePayload("{collection_name}", { payload: { property1: "string", property2: "string", }, points: [0, 3, 10], }); ``` ```rust use qdrant_client::qdrant::{ points_selector::PointsSelectorOneOf, PointsIdsList, PointsSelector, }; use serde_json::json; client .overwrite_payload_blocking( "{collection_name}", None, &PointsSelector { points_selector_one_of: Some(PointsSelectorOneOf::Points(PointsIdsList { ids: vec![0.into(), 3.into(), 10.into()], })), }, json!({ "property1": "string", "property2": "string", }) .try_into() .unwrap(), None, ) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; client .overwritePayloadAsync( "{collection_name}", Map.of("property1", value("string"), "property2", value("string")), List.of(id(0), id(3), id(10)), true, null, null) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.OverwritePayloadAsync( collectionName: "{collection_name}", payload: new Dictionary<string, Value> { { "property1", "string" }, { "property2", "string" } }, ids: new ulong[] { 0, 3, 10 } ); ``` Like [set payload](#set-payload), you don't need to know the ids 
of the points you want to modify. The alternative is to use filters. ### Clear payload This method removes all payload keys from the specified points. REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/clear_payload)): ```http POST /collections/{collection_name}/points/payload/clear { "points": [0, 3, 100] } ``` ```python client.clear_payload( collection_name="{collection_name}", points_selector=models.PointIdsList( points=[0, 3, 100], ), ) ``` ```typescript client.clearPayload("{collection_name}", { points: [0, 3, 100], }); ``` ```rust use qdrant_client::qdrant::{ points_selector::PointsSelectorOneOf, PointsIdsList, PointsSelector, }; client .clear_payload( "{collection_name}", None, Some(PointsSelector { points_selector_one_of: Some(PointsSelectorOneOf::Points(PointsIdsList { ids: vec![0.into(), 3.into(), 100.into()], })), }), None, ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.PointIdFactory.id; client .clearPayloadAsync("{collection_name}", List.of(id(0), id(3), id(100)), true, null, null) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.ClearPayloadAsync(collectionName: "{collection_name}", ids: new ulong[] { 0, 3, 100 }); ``` <aside role="status"> You can also use <code>models.FilterSelector</code> to remove the points matching the given filter criteria, instead of providing the ids. </aside> ### Delete payload keys Delete specific payload keys from points. 
REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/delete_payload)): ```http POST /collections/{collection_name}/points/payload/delete { "keys": ["color", "price"], "points": [0, 3, 100] } ``` ```python client.delete_payload( collection_name="{collection_name}", keys=["color", "price"], points=[0, 3, 100], ) ``` ```typescript client.deletePayload("{collection_name}", { keys: ["color", "price"], points: [0, 3, 100], }); ``` ```rust use qdrant_client::qdrant::{ points_selector::PointsSelectorOneOf, PointsIdsList, PointsSelector, }; client .delete_payload_blocking( "{collection_name}", None, &PointsSelector { points_selector_one_of: Some(PointsSelectorOneOf::Points(PointsIdsList { ids: vec![0.into(), 3.into(), 100.into()], })), }, vec!["color".to_string(), "price".to_string()], None, ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.PointIdFactory.id; client .deletePayloadAsync( "{collection_name}", List.of("color", "price"), List.of(id(0), id(3), id(100)), true, null, null) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.DeletePayloadAsync( collectionName: "{collection_name}", keys: ["color", "price"], ids: new ulong[] { 0, 3, 100 } ); ``` Alternatively, you can use filters to delete payload keys from the points. 
```http POST /collections/{collection_name}/points/payload/delete { "keys": ["color", "price"], "filter": { "must": [ { "key": "color", "match": { "value": "red" } } ] } } ``` ```python client.delete_payload( collection_name="{collection_name}", keys=["color", "price"], points=models.Filter( must=[ models.FieldCondition( key="color", match=models.MatchValue(value="red"), ), ], ), ) ``` ```typescript client.deletePayload("{collection_name}", { keys: ["color", "price"], filter: { must: [ { key: "color", match: { value: "red", }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{ points_selector::PointsSelectorOneOf, Condition, Filter, PointsSelector, }; client .delete_payload_blocking( "{collection_name}", None, &PointsSelector { points_selector_one_of: Some(PointsSelectorOneOf::Filter(Filter::must([ Condition::matches("color", "red".to_string()), ]))), }, vec!["color".to_string(), "price".to_string()], None, ) .await?; ``` ```java import java.util.List; import io.qdrant.client.grpc.Points.Filter; import static io.qdrant.client.ConditionFactory.matchKeyword; client .deletePayloadAsync( "{collection_name}", List.of("color", "price"), Filter.newBuilder().addMust(matchKeyword("color", "red")).build(), true, null, null) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.DeletePayloadAsync( collectionName: "{collection_name}", keys: ["color", "price"], filter: MatchKeyword("color", "red") ); ``` ## Payload indexing To search more efficiently with filters, Qdrant allows you to create indexes for payload fields by specifying the field's name and the type of data it stores. The indexed fields also affect the vector index. See [Indexing](../indexing) for details. In practice, we recommend creating an index on those fields that could potentially constrain the results the most. 
For example, an index on the object ID, which is unique for each record, will be much more efficient than an index on its color, which has only a few possible values. In compound queries involving multiple fields, Qdrant will attempt to use the most restrictive index first. To create an index for a field, you can use the following: REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#tag/collections/operation/create_field_index)) ```http PUT /collections/{collection_name}/index { "field_name": "name_of_the_field_to_index", "field_schema": "keyword" } ``` ```python client.create_payload_index( collection_name="{collection_name}", field_name="name_of_the_field_to_index", field_schema="keyword", ) ``` ```typescript client.createPayloadIndex("{collection_name}", { field_name: "name_of_the_field_to_index", field_schema: "keyword", }); ``` ```rust use qdrant_client::qdrant::FieldType; client .create_field_index( "{collection_name}", "name_of_the_field_to_index", FieldType::Keyword, None, None, ) .await?; ``` ```java import io.qdrant.client.grpc.Collections.PayloadSchemaType; client.createPayloadIndexAsync( "{collection_name}", "name_of_the_field_to_index", PayloadSchemaType.Keyword, null, true, null, null); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.CreatePayloadIndexAsync( collectionName: "{collection_name}", fieldName: "name_of_the_field_to_index" ); ``` The index usage flag is displayed in the payload schema with the [collection info API](https://qdrant.github.io/qdrant/redoc/index.html#operation/get_collection). Payload schema example: ```json { "payload_schema": { "property1": { "data_type": "keyword" }, "property2": { "data_type": "integer" } } } ```
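To build intuition for why the most selective field should be indexed and consulted first, here is a toy sketch. This is not Qdrant's internal implementation; it only illustrates the concept of a keyword payload index as an inverted index (field value to set of point ids), with a compound `must` filter resolved by intersecting candidate sets starting from the smallest one. All point data below is made up for illustration.

```python
from collections import defaultdict

# Toy payloads keyed by point id -- illustrative data only.
points = {
    0: {"color": "red", "city": "London"},
    1: {"color": "red", "city": "Berlin"},
    2: {"color": "blue", "city": "London"},
    3: {"color": "red", "city": "London"},
}

def build_keyword_index(points, field):
    """Conceptual keyword index: map each field value to the ids holding it."""
    index = defaultdict(set)
    for point_id, payload in points.items():
        index[payload[field]].add(point_id)
    return index

color_index = build_keyword_index(points, "color")
city_index = build_keyword_index(points, "city")

# Compound "must" filter (color=red AND city=London): start from the most
# restrictive candidate set (the smallest) and intersect the remaining ones.
candidate_sets = sorted([color_index["red"], city_index["London"]], key=len)
matches = set.intersection(*candidate_sets)
print(sorted(matches))  # -> [0, 3]
```

The fewer distinct ids a value maps to, the smaller the starting candidate set, which is exactly why indexing highly selective fields pays off most.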
documentation/concepts/payload.md
--- title: Collections weight: 30 aliases: - ../collections --- # Collections A collection is a named set of points (vectors with a payload) among which you can search. The vector of each point within the same collection must have the same dimensionality and be compared by a single metric. [Named vectors](#collection-with-multiple-vectors) can be used to have multiple vectors in a single point, each of which can have its own dimensionality and metric requirements. Distance metrics are used to measure similarities among vectors. The choice of metric depends on how the vectors were obtained and, in particular, on the method of neural network encoder training. Qdrant supports the following most popular metrics: * Dot product: `Dot` - [[wiki]](https://en.wikipedia.org/wiki/Dot_product) * Cosine similarity: `Cosine` - [[wiki]](https://en.wikipedia.org/wiki/Cosine_similarity) * Euclidean distance: `Euclid` - [[wiki]](https://en.wikipedia.org/wiki/Euclidean_distance) * Manhattan distance: `Manhattan` - [[wiki]](https://en.wikipedia.org/wiki/Taxicab_geometry) <aside role="status">For search efficiency, Cosine similarity is implemented as dot-product over normalized vectors. Vectors are automatically normalized during upload.</aside> In addition to metrics and vector size, each collection uses its own set of parameters that control collection optimization, index construction, and vacuum. These settings can be changed at any time by a corresponding request. ## Setting up multitenancy **How many collections should you create?** In most cases, you should only use a single collection with payload-based partitioning. This approach is called [multitenancy](https://en.wikipedia.org/wiki/Multitenancy). It is efficient for most users, but it requires additional configuration. [Learn how to set it up](../../tutorials/multiple-partitions/) **When should you create multiple collections?** When you have a limited number of users and you need isolation. 
This approach is flexible, but it may be more costly, since creating numerous collections may result in resource overhead. Also, you need to ensure that they do not affect each other in any way, including performance-wise. > Note: If you're running `curl` from the command line, the following commands assume that you have a running instance of Qdrant on `http://localhost:6333`. If needed, you can set one up as described in our [Quickstart](/documentation/quick-start/) guide. For convenience, these commands specify collections named `test_collection1` through `test_collection4`. ## Create a collection ```http PUT /collections/{collection_name} { "vectors": { "size": 300, "distance": "Cosine" } } ``` ```bash curl -X PUT http://localhost:6333/collections/test_collection1 \ -H 'Content-Type: application/json' \ --data-raw '{ "vectors": { "size": 300, "distance": "Cosine" } }' ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=100, distance=models.Distance.COSINE), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 100, distance: "Cosine" }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig}, }; //The Rust client uses Qdrant's GRPC interface let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 100, distance: Distance::Cosine.into(), ..Default::default() })), }), ..Default::default() }) .await?; ``` ```java import 
io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client.createCollectionAsync("{collection_name}", VectorParams.newBuilder().setDistance(Distance.Cosine).setSize(100).build()).get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 100, Distance = Distance.Cosine } ); ``` In addition to the required options, you can also specify custom values for the following collection options: * `hnsw_config` - see [indexing](../indexing/#vector-index) for details. * `wal_config` - Write-Ahead-Log related configuration. See more details about [WAL](../storage/#versioning) * `optimizers_config` - see [optimizer](../optimizer) for details. * `shard_number` - defines how many shards the collection should have. See the [distributed deployment](../../guides/distributed_deployment#sharding) section for details. * `on_disk_payload` - defines where to store payload data. If `true`, the payload will be stored on disk only. This might be useful for limiting RAM usage in the case of large payloads. * `quantization_config` - see [quantization](../../guides/quantization/#setting-up-quantization-in-qdrant) for details. Default values for the optional collection parameters are defined in the [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml). See [schema definitions](https://qdrant.github.io/qdrant/redoc/index.html#operation/create_collection) and the [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml) for more information about collection and vector parameters. *Available as of v1.2.0* Vectors all live in RAM for very quick access. 
The `on_disk` parameter can be set in the vector configuration. If true, all vectors will live on disk. This will enable the use of [memmaps](../../concepts/storage/#configuring-memmap-storage), which is suitable for ingesting a large amount of data. ### Create collection from another collection *Available as of v1.0.0* It is possible to initialize a collection from another existing collection. This might be useful for experimenting quickly with different configurations for the same data set. Make sure the vectors have the same `size` and `distance` function when setting up the vectors configuration in the new collection. If you used the previous sample code, `"size": 300` and `"distance": "Cosine"`. ```http PUT /collections/{collection_name} { "vectors": { "size": 100, "distance": "Cosine" }, "init_from": { "collection": "{from_collection_name}" } } ``` ```bash curl -X PUT http://localhost:6333/collections/test_collection2 \ -H 'Content-Type: application/json' \ --data-raw '{ "vectors": { "size": 300, "distance": "Cosine" }, "init_from": { "collection": "test_collection1" } }' ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=100, distance=models.Distance.COSINE), init_from=models.InitFrom(collection="{from_collection_name}"), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 100, distance: "Cosine" }, init_from: { collection: "{from_collection_name}" }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { 
collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 100, distance: Distance::Cosine.into(), ..Default::default() })), }), init_from_collection: Some("{from_collection_name}".to_string()), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(100) .setDistance(Distance.Cosine) .build())) .setInitFromCollection("{from_collection_name}") .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 100, Distance = Distance.Cosine }, initFromCollection: "{from_collection_name}" ); ``` ### Collection with multiple vectors *Available as of v0.10.0* It is possible to have multiple vectors per record. This feature allows for multiple vector storages per collection. To distinguish vectors in one record, they should have a unique name defined when creating the collection. 
Each named vector in this mode has its distance and size: ```http PUT /collections/{collection_name} { "vectors": { "image": { "size": 4, "distance": "Dot" }, "text": { "size": 8, "distance": "Cosine" } } } ``` ```bash curl -X PUT http://localhost:6333/collections/test_collection3 \ -H 'Content-Type: application/json' \ --data-raw '{ "vectors": { "image": { "size": 4, "distance": "Dot" }, "text": { "size": 8, "distance": "Cosine" } } }' ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config={ "image": models.VectorParams(size=4, distance=models.Distance.DOT), "text": models.VectorParams(size=8, distance=models.Distance.COSINE), }, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { image: { size: 4, distance: "Dot" }, text: { size: 8, distance: "Cosine" }, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ vectors_config::Config, CreateCollection, Distance, VectorParams, VectorParamsMap, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::ParamsMap(VectorParamsMap { map: [ ( "image".to_string(), VectorParams { size: 4, distance: Distance::Dot.into(), ..Default::default() }, ), ( "text".to_string(), VectorParams { size: 8, distance: Distance::Cosine.into(), ..Default::default() }, ), ] .into(), })), }), ..Default::default() }) .await?; ``` ```java import java.util.Map; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; 
QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( "{collection_name}", Map.of( "image", VectorParams.newBuilder().setSize(4).setDistance(Distance.Dot).build(), "text", VectorParams.newBuilder().setSize(8).setDistance(Distance.Cosine).build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParamsMap { Map = { ["image"] = new VectorParams { Size = 4, Distance = Distance.Dot }, ["text"] = new VectorParams { Size = 8, Distance = Distance.Cosine }, } } ); ``` For rare use cases, it is possible to create a collection without any vector storage. *Available as of v1.1.1* For each named vector you can optionally specify [`hnsw_config`](../indexing/#vector-index) or [`quantization_config`](../../guides/quantization/#setting-up-quantization-in-qdrant) to deviate from the collection configuration. This can be useful to fine-tune search performance at the vector level. *Available as of v1.2.0* Vectors all live in RAM for very quick access. On a per-vector basis you can set `on_disk` to true to store all vectors on disk at all times. This will enable the use of [memmaps](../../concepts/storage/#configuring-memmap-storage), which is suitable for ingesting a large amount of data. ### Collection with sparse vectors *Available as of v1.7.0* Qdrant supports sparse vectors as a first-class citizen. Sparse vectors are useful for text search, where each word is represented as a separate dimension. Collections can contain sparse vectors as additional [named vectors](#collection-with-multiple-vectors) alongside regular dense vectors in a single point. Unlike dense vectors, sparse vectors must be named. Additionally, sparse vectors and dense vectors must have different names within a collection. 
```http PUT /collections/{collection_name} { "sparse_vectors": { "text": { } } } ``` ```bash curl -X PUT http://localhost:6333/collections/test_collection4 \ -H 'Content-Type: application/json' \ --data-raw '{ "sparse_vectors": { "text": { } } }' ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", sparse_vectors_config={ "text": models.SparseVectorParams(), }, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { sparse_vectors: { text: { }, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{CreateCollection, SparseVectorConfig, SparseVectorParams}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), sparse_vectors_config: Some(SparseVectorConfig { map: [("text".to_string(), SparseVectorParams::default())].into(), }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.SparseVectorConfig; import io.qdrant.client.grpc.Collections.SparseVectorParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setSparseVectorsConfig( SparseVectorConfig.newBuilder() .putMap("text", SparseVectorParams.getDefaultInstance())) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( 
collectionName: "{collection_name}", sparseVectorsConfig: ("text", new SparseVectorParams()) ); ``` Apart from a unique name, there are no required configuration parameters for sparse vectors. The distance function for sparse vectors is always `Dot` and does not need to be specified. However, there are optional parameters to tune the underlying [sparse vector index](../indexing/#sparse-vector-index). ### Delete collection ```http DELETE /collections/{collection_name} ``` ```bash curl -X DELETE http://localhost:6333/collections/test_collection4 ``` ```python client.delete_collection(collection_name="{collection_name}") ``` ```typescript client.deleteCollection("{collection_name}"); ``` ```rust client.delete_collection("{collection_name}").await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client.deleteCollectionAsync("{collection_name}").get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.DeleteCollectionAsync("{collection_name}"); ``` ### Update collection parameters Dynamic parameter updates may be helpful, for example, for more efficient initial loading of vectors: you can disable indexing during the upload process and enable it immediately after the upload is finished. As a result, you will not waste extra computation resources on rebuilding the index. 
The following command enables indexing for segments that have more than 10000 kB of vectors stored: ```http PATCH /collections/{collection_name} { "optimizers_config": { "indexing_threshold": 10000 } } ``` ```bash curl -X PATCH http://localhost:6333/collections/test_collection1 \ -H 'Content-Type: application/json' \ --data-raw '{ "optimizers_config": { "indexing_threshold": 10000 } }' ``` ```python client.update_collection( collection_name="{collection_name}", optimizer_config=models.OptimizersConfigDiff(indexing_threshold=10000), ) ``` ```typescript client.updateCollection("{collection_name}", { optimizers_config: { indexing_threshold: 10000, }, }); ``` ```rust use qdrant_client::qdrant::OptimizersConfigDiff; client .update_collection( "{collection_name}", &OptimizersConfigDiff { indexing_threshold: Some(10000), ..Default::default() }, None, None, None, None, None, ) .await?; ``` ```java import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.UpdateCollection; client.updateCollectionAsync( UpdateCollection.newBuilder() .setCollectionName("{collection_name}") .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setIndexingThreshold(10000).build()) .build()); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.UpdateCollectionAsync( collectionName: "{collection_name}", optimizersConfig: new OptimizersConfigDiff { IndexingThreshold = 10000 } ); ``` The following parameters can be updated: * `optimizers_config` - see [optimizer](../optimizer/) for details. * `hnsw_config` - see [indexing](../indexing/#vector-index) for details. * `quantization_config` - see [quantization](../../guides/quantization/#setting-up-quantization-in-qdrant) for details. * `vectors` - vector-specific configuration, including individual `hnsw_config`, `quantization_config` and `on_disk` settings. 
* `params` - other collection parameters, including `write_consistency_factor` and `on_disk_payload`. The full API specification is available in [schema definitions](https://qdrant.github.io/qdrant/redoc/index.html#tag/collections/operation/update_collection). Calls to this endpoint may be blocking, as they wait for existing optimizers to finish. We recommend against using this in a production database, as it may introduce huge overhead due to the rebuilding of the index. #### Update vector parameters *Available as of v1.4.0* <aside role="status">To update vector parameters using the collection update API, you must always specify a vector name. If your collection does not have named vectors, use an empty (<code>""</code>) name.</aside> Qdrant 1.4 adds support for updating more collection parameters at runtime. HNSW index, quantization and disk configurations can now be changed without recreating a collection. Segments (with index and quantized data) will automatically be rebuilt in the background to match updated parameters. To put vector data on disk for a collection that **does not have** named vectors, use `""` as the name: ```http PATCH /collections/{collection_name} { "vectors": { "": { "on_disk": true } } } ``` ```bash curl -X PATCH http://localhost:6333/collections/test_collection1 \ -H 'Content-Type: application/json' \ --data-raw '{ "vectors": { "": { "on_disk": true } } }' ``` To put vector data on disk for a collection that **does have** named vectors: Note: To create a vector name, follow the procedure from our [Points](/documentation/concepts/points/#create-vector-name). 
```http PATCH /collections/{collection_name} { "vectors": { "my_vector": { "on_disk": true } } } ``` ```bash curl -X PATCH http://localhost:6333/collections/test_collection1 \ -H 'Content-Type: application/json' \ --data-raw '{ "vectors": { "my_vector": { "on_disk": true } } }' ``` In the following example the HNSW index and quantization parameters are updated, both for the whole collection, and for `my_vector` specifically: ```http PATCH /collections/{collection_name} { "vectors": { "my_vector": { "hnsw_config": { "m": 32, "ef_construct": 123 }, "quantization_config": { "product": { "compression": "x32", "always_ram": true } }, "on_disk": true } }, "hnsw_config": { "ef_construct": 123 }, "quantization_config": { "scalar": { "type": "int8", "quantile": 0.8, "always_ram": false } } } ``` ```bash curl -X PATCH http://localhost:6333/collections/test_collection1 \ -H 'Content-Type: application/json' \ --data-raw '{ "vectors": { "my_vector": { "hnsw_config": { "m": 32, "ef_construct": 123 }, "quantization_config": { "product": { "compression": "x32", "always_ram": true } }, "on_disk": true } }, "hnsw_config": { "ef_construct": 123 }, "quantization_config": { "scalar": { "type": "int8", "quantile": 0.8, "always_ram": false } } }' ``` ```python client.update_collection( collection_name="{collection_name}", vectors_config={ "my_vector": models.VectorParamsDiff( hnsw_config=models.HnswConfigDiff( m=32, ef_construct=123, ), quantization_config=models.ProductQuantization( product=models.ProductQuantizationConfig( compression=models.CompressionRatio.X32, always_ram=True, ), ), on_disk=True, ), }, hnsw_config=models.HnswConfigDiff( ef_construct=123, ), quantization_config=models.ScalarQuantization( scalar=models.ScalarQuantizationConfig( type=models.ScalarType.INT8, quantile=0.8, always_ram=False, ), ), ) ``` ```typescript client.updateCollection("{collection_name}", { vectors: { my_vector: { hnsw_config: { m: 32, ef_construct: 123, }, quantization_config: { product: { 
compression: "x32", always_ram: true, }, }, on_disk: true, }, }, hnsw_config: { ef_construct: 123, }, quantization_config: { scalar: { type: "int8", quantile: 0.8, always_ram: true, }, }, }); ``` ```rust use std::collections::HashMap; use qdrant_client::client::QdrantClient; use qdrant_client::qdrant::{ quantization_config_diff::Quantization, vectors_config_diff::Config, HnswConfigDiff, QuantizationConfigDiff, QuantizationType, ScalarQuantization, VectorParamsDiff, VectorsConfigDiff, }; client .update_collection( "{collection_name}", None, None, None, Some(&HnswConfigDiff { ef_construct: Some(123), ..Default::default() }), Some(&VectorsConfigDiff { config: Some(Config::ParamsMap( qdrant_client::qdrant::VectorParamsDiffMap { map: HashMap::from([( ("my_vector".into()), VectorParamsDiff { hnsw_config: Some(HnswConfigDiff { m: Some(32), ef_construct: Some(123), ..Default::default() }), ..Default::default() }, )]), }, )), }), Some(&QuantizationConfigDiff { quantization: Some(Quantization::Scalar(ScalarQuantization { r#type: QuantizationType::Int8 as i32, quantile: Some(0.8), always_ram: Some(true), ..Default::default() })), }), ) .await?; ``` ```java import io.qdrant.client.grpc.Collections.HnswConfigDiff; import io.qdrant.client.grpc.Collections.QuantizationConfigDiff; import io.qdrant.client.grpc.Collections.QuantizationType; import io.qdrant.client.grpc.Collections.ScalarQuantization; import io.qdrant.client.grpc.Collections.UpdateCollection; import io.qdrant.client.grpc.Collections.VectorParamsDiff; import io.qdrant.client.grpc.Collections.VectorParamsDiffMap; import io.qdrant.client.grpc.Collections.VectorsConfigDiff; client .updateCollectionAsync( UpdateCollection.newBuilder() .setCollectionName("{collection_name}") .setHnswConfig(HnswConfigDiff.newBuilder().setEfConstruct(123).build()) .setVectorsConfig( VectorsConfigDiff.newBuilder() .setParamsMap( VectorParamsDiffMap.newBuilder() .putMap( "my_vector", VectorParamsDiff.newBuilder() .setHnswConfig( HnswConfigDiff.newBuilder() .setM(32) 
.setEfConstruct(123) .build()) .build()))) .setQuantizationConfig( QuantizationConfigDiff.newBuilder() .setScalar( ScalarQuantization.newBuilder() .setType(QuantizationType.Int8) .setQuantile(0.8f) .setAlwaysRam(true) .build())) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.UpdateCollectionAsync( collectionName: "{collection_name}", hnswConfig: new HnswConfigDiff { EfConstruct = 123 }, vectorsConfig: new VectorParamsDiffMap { Map = { { "my_vector", new VectorParamsDiff { HnswConfig = new HnswConfigDiff { M = 3, EfConstruct = 123 } } } } }, quantizationConfig: new QuantizationConfigDiff { Scalar = new ScalarQuantization { Type = QuantizationType.Int8, Quantile = 0.8f, AlwaysRam = true } } ); ``` ## Collection info Qdrant allows determining the configuration parameters of an existing collection to better understand how the points are distributed and indexed. ```http GET /collections/test_collection1 ``` ```bash curl -X GET http://localhost:6333/collections/test_collection1 ``` ```python client.get_collection(collection_name="{collection_name}") ``` ```typescript client.getCollection("{collection_name}"); ``` ```rust client.collection_info("{collection_name}").await?; ``` ```java client.getCollectionInfoAsync("{collection_name}").get(); ``` <details> <summary>Expected result</summary> ```json { "result": { "status": "green", "optimizer_status": "ok", "vectors_count": 1068786, "indexed_vectors_count": 1024232, "points_count": 1068786, "segments_count": 31, "config": { "params": { "vectors": { "size": 384, "distance": "Cosine" }, "shard_number": 1, "replication_factor": 1, "write_consistency_factor": 1, "on_disk_payload": false }, "hnsw_config": { "m": 16, "ef_construct": 100, "full_scan_threshold": 10000, "max_indexing_threads": 0 }, "optimizer_config": { "deleted_threshold": 0.2, "vacuum_min_vector_number": 1000, "default_segment_number": 0, "max_segment_size": null, 
"memmap_threshold": null, "indexing_threshold": 20000, "flush_interval_sec": 5, "max_optimization_threads": 1 }, "wal_config": { "wal_capacity_mb": 32, "wal_segments_ahead": 0 } }, "payload_schema": {} }, "status": "ok", "time": 0.00010143 } ``` </details> <br/> ```csharp await client.GetCollectionInfoAsync("{collection_name}"); ``` If you insert the vectors into the collection, the `status` field may become `yellow` whilst it is optimizing. It will become `green` once all the points are successfully processed. The following color statuses are possible: - 🟢 `green`: collection is ready - 🟡 `yellow`: collection is optimizing - 🔴 `red`: an error occurred which the engine could not recover from ### Approximate point and vector counts You may be interested in the count attributes: - `points_count` - total number of objects (vectors and their payloads) stored in the collection - `vectors_count` - total number of vectors in a collection, useful if you have multiple vectors per point - `indexed_vectors_count` - total number of vectors stored in the HNSW or sparse index. Qdrant does not store all the vectors in the index, but only if an index segment might be created for a given configuration. The above counts are not exact, but should be considered approximate. Depending on how you use Qdrant these may give very different numbers than what you may expect. It's therefore important **not** to rely on them. More specifically, these numbers represent the count of points and vectors in Qdrant's internal storage. Internally, Qdrant may temporarily duplicate points as part of automatic optimizations. It may keep changed or deleted points for a bit. And it may delay indexing of new points. All of that is for optimization reasons. Updates you do are therefore not directly reflected in these numbers. If you see a wildly different count of points, it will likely resolve itself once a new round of automatic optimizations has completed. 
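As a toy illustration of how `vectors_count` can exceed `points_count` when points carry multiple named vectors (a simplified pure-Python model for intuition only, not Qdrant's internal accounting):

```python
# Simplified model (not Qdrant internals): each point carries several
# named vectors, so the collection-level vector count is a multiple of
# the point count.
points = [
    {"id": i, "vectors": {"image": [0.0] * 4, "text": [0.0] * 4}}
    for i in range(1000)
]

points_count = len(points)
vectors_count = sum(len(p["vectors"]) for p in points)

print(points_count, vectors_count)  # 1000 2000
```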
To clarify: these numbers do not represent the exact number of points or vectors you have inserted, nor do they represent the exact number of distinguishable points or vectors you can query. If you want to know exact counts, refer to the [count API](../points/#counting-points).

_Note: these numbers may be removed in a future version of Qdrant._

### Indexing vectors in HNSW

In some cases, you might be surprised that the value of `indexed_vectors_count` is lower than `vectors_count`. This is intended behavior and depends on the [optimizer configuration](../optimizer). A new index segment is built only if the size of non-indexed vectors exceeds the `indexing_threshold` (in kB). If your collection is very small or the dimensionality of the vectors is low, there might be no HNSW segment created at all, and `indexed_vectors_count` might be equal to `0`.

It is possible to reduce the `indexing_threshold` for an existing collection by [updating collection parameters](#update-collection-parameters).

## Collection aliases

In a production environment, it is sometimes necessary to switch between different versions of vectors seamlessly, for example when upgrading to a new version of the neural network. In such situations there is no way to stop the service and rebuild the collection with new vectors.

Aliases are additional names for existing collections. All queries to the collection can be made identically, using an alias instead of the collection name. This makes it possible to build a second collection in the background and then switch the alias from the old collection to the new one. Since all alias changes happen atomically, no concurrent requests are affected during the switch.
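Conceptually, an alias is a single level of indirection, and switching it is one atomic pointer update. A minimal sketch of the idea in plain Python (not Qdrant's implementation; the collection names are made up):

```python
# Aliases act as an indirection layer: requests name the alias, and the
# server resolves it to the current underlying collection.
aliases = {"production_collection": "collection_v1"}

def resolve(name: str) -> str:
    # Unknown names are treated as collection names directly.
    return aliases.get(name, name)

assert resolve("production_collection") == "collection_v1"

# After building "collection_v2" in the background, one atomic update
# re-points all traffic:
aliases["production_collection"] = "collection_v2"
assert resolve("production_collection") == "collection_v2"
```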
### Create alias ```http POST /collections/aliases { "actions": [ { "create_alias": { "collection_name": "test_collection1", "alias_name": "production_collection" } } ] } ``` ```bash curl -X POST http://localhost:6333/collections/aliases \ -H 'Content-Type: application/json' \ --data-raw '{ "actions": [ { "create_alias": { "collection_name": "test_collection1", "alias_name": "production_collection" } } ] }' ``` ```python client.update_collection_aliases( change_aliases_operations=[ models.CreateAliasOperation( create_alias=models.CreateAlias( collection_name="example_collection", alias_name="production_collection" ) ) ] ) ``` ```typescript client.updateCollectionAliases({ actions: [ { create_alias: { collection_name: "example_collection", alias_name: "production_collection", }, }, ], }); ``` ```rust client.create_alias("example_collection", "production_collection").await?; ``` ```java client.createAliasAsync("production_collection", "example_collection").get(); ``` ```csharp await client.CreateAliasAsync(aliasName: "production_collection", collectionName: "example_collection"); ``` ### Remove alias ```bash curl -X POST http://localhost:6333/collections/aliases \ -H 'Content-Type: application/json' \ --data-raw '{ "actions": [ { "delete_alias": { "collection_name": "test_collection1", "alias_name": "production_collection" } } ] }' ``` ```http POST /collections/aliases { "actions": [ { "delete_alias": { "alias_name": "production_collection" } } ] } ``` ```python client.update_collection_aliases( change_aliases_operations=[ models.DeleteAliasOperation( delete_alias=models.DeleteAlias(alias_name="production_collection") ), ] ) ``` ```typescript client.updateCollectionAliases({ actions: [ { delete_alias: { alias_name: "production_collection", }, }, ], }); ``` ```rust client.delete_alias("production_collection").await?; ``` ```java client.deleteAliasAsync("production_collection").get(); ``` ```csharp await client.DeleteAliasAsync("production_collection"); ``` ### Switch 
collection Multiple alias actions are performed atomically. For example, you can switch underlying collection with the following command: ```http POST /collections/aliases { "actions": [ { "delete_alias": { "alias_name": "production_collection" } }, { "create_alias": { "collection_name": "test_collection2", "alias_name": "production_collection" } } ] } ``` ```bash curl -X POST http://localhost:6333/collections/aliases \ -H 'Content-Type: application/json' \ --data-raw '{ "actions": [ { "delete_alias": { "alias_name": "production_collection" } }, { "create_alias": { "collection_name": "test_collection2", "alias_name": "production_collection" } } ] }' ``` ```python client.update_collection_aliases( change_aliases_operations=[ models.DeleteAliasOperation( delete_alias=models.DeleteAlias(alias_name="production_collection") ), models.CreateAliasOperation( create_alias=models.CreateAlias( collection_name="example_collection", alias_name="production_collection" ) ), ] ) ``` ```typescript client.updateCollectionAliases({ actions: [ { delete_alias: { alias_name: "production_collection", }, }, { create_alias: { collection_name: "example_collection", alias_name: "production_collection", }, }, ], }); ``` ```rust client.delete_alias("production_collection").await?; client.create_alias("example_collection", "production_collection").await?; ``` ```java client.deleteAliasAsync("production_collection").get(); client.createAliasAsync("production_collection", "example_collection").get(); ``` ```csharp await client.DeleteAliasAsync("production_collection"); await client.CreateAliasAsync(aliasName: "production_collection", collectionName: "example_collection"); ``` ### List collection aliases ```http GET /collections/test_collection2/aliases ``` ```bash curl -X GET http://localhost:6333/collections/test_collection2/aliases ``` ```python from qdrant_client import QdrantClient client = QdrantClient("localhost", port=6333) client.get_collection_aliases(collection_name="{collection_name}") 
``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.getCollectionAliases("{collection_name}"); ``` ```rust use qdrant_client::client::QdrantClient; let client = QdrantClient::from_url("http://localhost:6334").build()?; client.list_collection_aliases("{collection_name}").await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client.listCollectionAliasesAsync("{collection_name}").get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.ListCollectionAliasesAsync("{collection_name}"); ``` ### List all aliases ```http GET /aliases ``` ```bash curl -X GET http://localhost:6333/aliases ``` ```python from qdrant_client import QdrantClient client = QdrantClient("localhost", port=6333) client.get_aliases() ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.getAliases(); ``` ```rust use qdrant_client::client::QdrantClient; let client = QdrantClient::from_url("http://localhost:6334").build()?; client.list_aliases().await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client.listAliasesAsync().get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.ListAliasesAsync(); ``` ### List all collections ```http GET /collections ``` ```bash curl -X GET http://localhost:6333/collections ``` ```python from qdrant_client import QdrantClient client = QdrantClient("localhost", port=6333) client.get_collections() ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new 
QdrantClient({ host: "localhost", port: 6333 }); client.getCollections(); ``` ```rust use qdrant_client::client::QdrantClient; let client = QdrantClient::from_url("http://localhost:6334").build()?; client.list_collections().await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client.listCollectionsAsync().get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.ListCollectionsAsync(); ```
---
title: Indexing
weight: 90
aliases:
  - ../indexing
---

# Indexing

A key feature of Qdrant is the effective combination of vector and traditional indexes. This is essential because a vector index alone is not enough for vector search to work effectively with filters. In simpler terms, a vector index speeds up vector search, and payload indexes speed up filtering.

The indexes in the segments exist independently, but the parameters of the indexes themselves are configured for the whole collection.

Not all segments automatically have indexes. Whether they do is determined by the [optimizer](../optimizer) settings and depends, as a rule, on the number of stored points.

## Payload Index

A payload index in Qdrant is similar to an index in a conventional document-oriented database. It is built for a specific field and type, and is used to quickly retrieve points by the corresponding filtering condition. The index is also used to accurately estimate the filter cardinality, which helps the [query planner](../search#query-planning) choose a search strategy. Creating an index requires additional computational resources and memory, so choosing the fields to be indexed is essential. Qdrant does not make this choice for you but leaves it to the user.
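For intuition, here is a toy model of a `keyword` payload index: an inverted map from field value to point ids (illustrative only, not Qdrant's actual data structure). It shows both fast filtering and cheap cardinality estimation:

```python
from collections import defaultdict

# Toy "keyword" payload index: field value -> ids of points having it.
payloads = {1: "red", 2: "green", 3: "red", 4: "blue"}

index = defaultdict(set)
for point_id, color in payloads.items():
    index[color].add(point_id)

# Filtering: match color == "red" without scanning every point.
assert index["red"] == {1, 3}

# Cardinality estimation: how many points would this filter touch?
assert len(index["red"]) == 2
```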
To mark a field as indexable, you can use the following: ```http PUT /collections/{collection_name}/index { "field_name": "name_of_the_field_to_index", "field_schema": "keyword" } ``` ```python from qdrant_client import QdrantClient client = QdrantClient(host="localhost", port=6333) client.create_payload_index( collection_name="{collection_name}", field_name="name_of_the_field_to_index", field_schema="keyword", ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createPayloadIndex("{collection_name}", { field_name: "name_of_the_field_to_index", field_schema: "keyword", }); ``` ```rust use qdrant_client::{client::QdrantClient, qdrant::FieldType}; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_field_index( "{collection_name}", "name_of_the_field_to_index", FieldType::Keyword, None, None, ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.PayloadSchemaType; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createPayloadIndexAsync( "{collection_name}", "name_of_the_field_to_index", PayloadSchemaType.Keyword, null, null, null, null) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.CreatePayloadIndexAsync(collectionName: "{collection_name}", fieldName: "name_of_the_field_to_index"); ``` Available field types are: * `keyword` - for [keyword](../payload/#keyword) payload, affects [Match](../filtering/#match) filtering conditions. * `integer` - for [integer](../payload/#integer) payload, affects [Match](../filtering/#match) and [Range](../filtering/#range) filtering conditions. * `float` - for [float](../payload/#float) payload, affects [Range](../filtering/#range) filtering conditions. 
* `bool` - for [bool](../payload/#bool) payload, affects [Match](../filtering/#match) filtering conditions (available as of v1.4.0).
* `geo` - for [geo](../payload/#geo) payload, affects [Geo Bounding Box](../filtering/#geo-bounding-box) and [Geo Radius](../filtering/#geo-radius) filtering conditions.
* `text` - a special kind of index, available for [keyword](../payload/#keyword) / string payloads, affects [Full Text search](../filtering/#full-text-match) filtering conditions.

A payload index consumes some additional memory, so it is recommended to index only those fields that are used in filtering conditions. If you need to filter by many fields and memory limits do not allow indexing all of them, choose the field that narrows the search result the most. As a rule, the more distinct values a payload field has, the more efficiently the index will be used.

### Full-text index

*Available as of v0.10.0*

Qdrant supports full-text search for string payloads. A full-text index allows you to filter points by the presence of a word or a phrase in a payload field.

Full-text index configuration is a bit more complex than for other indexes, as you can specify the tokenization parameters. Tokenization is the process of splitting a string into tokens, which are then indexed in the inverted index.
To create a full-text index, you can use the following: ```http PUT /collections/{collection_name}/index { "field_name": "name_of_the_field_to_index", "field_schema": { "type": "text", "tokenizer": "word", "min_token_len": 2, "max_token_len": 20, "lowercase": true } } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient(host="localhost", port=6333) client.create_payload_index( collection_name="{collection_name}", field_name="name_of_the_field_to_index", field_schema=models.TextIndexParams( type="text", tokenizer=models.TokenizerType.WORD, min_token_len=2, max_token_len=15, lowercase=True, ), ) ``` ```typescript import { QdrantClient, Schemas } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createPayloadIndex("{collection_name}", { field_name: "name_of_the_field_to_index", field_schema: { type: "text", tokenizer: "word", min_token_len: 2, max_token_len: 15, lowercase: true, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ payload_index_params::IndexParams, FieldType, PayloadIndexParams, TextIndexParams, TokenizerType, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_field_index( "{collection_name}", "name_of_the_field_to_index", FieldType::Text, Some(&PayloadIndexParams { index_params: Some(IndexParams::TextIndexParams(TextIndexParams { tokenizer: TokenizerType::Word as i32, min_token_len: Some(2), max_token_len: Some(10), lowercase: Some(true), })), }), None, ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.PayloadIndexParams; import io.qdrant.client.grpc.Collections.PayloadSchemaType; import io.qdrant.client.grpc.Collections.TextIndexParams; import io.qdrant.client.grpc.Collections.TokenizerType; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, 
false).build());

client
    .createPayloadIndexAsync(
        "{collection_name}",
        "name_of_the_field_to_index",
        PayloadSchemaType.Text,
        PayloadIndexParams.newBuilder()
            .setTextIndexParams(
                TextIndexParams.newBuilder()
                    .setTokenizer(TokenizerType.Word)
                    .setMinTokenLen(2)
                    .setMaxTokenLen(10)
                    .setLowercase(true)
                    .build())
            .build(),
        null,
        null,
        null)
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.CreatePayloadIndexAsync(
    collectionName: "{collection_name}",
    fieldName: "name_of_the_field_to_index",
    schemaType: PayloadSchemaType.Text,
    indexParams: new PayloadIndexParams
    {
        TextIndexParams = new TextIndexParams
        {
            Tokenizer = TokenizerType.Word,
            MinTokenLen = 2,
            MaxTokenLen = 10,
            Lowercase = true
        }
    }
);
```

Available tokenizers are:

* `word` - splits the string into words, separated by spaces, punctuation marks, and special characters.
* `whitespace` - splits the string into words, separated by spaces only.
* `prefix` - splits the string into words, separated by spaces, punctuation marks, and special characters, and then creates a prefix index for each word. For example: `hello` will be indexed as `h`, `he`, `hel`, `hell`, `hello`.
* `multilingual` - a special type of tokenizer based on the [charabia](https://github.com/meilisearch/charabia) package. It allows proper tokenization and lemmatization for multiple languages, including those with non-Latin alphabets and non-space delimiters. See the [charabia documentation](https://github.com/meilisearch/charabia) for the full list of supported languages and normalization options. In the default build configuration, Qdrant does not include support for all languages, as that would increase the size of the resulting binary. Chinese, Japanese, and Korean are not enabled by default, but can be enabled by building Qdrant from source with the `--features multiling-chinese,multiling-japanese,multiling-korean` flags.
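The difference between the `word`, `whitespace`, and `prefix` tokenizers can be sketched roughly in plain Python (an approximation for illustration; Qdrant's actual tokenization rules differ in the details):

```python
import re

def whitespace_tokens(text: str) -> list[str]:
    # Split on spaces only; punctuation stays attached to the words.
    return text.lower().split()

def word_tokens(text: str) -> list[str]:
    # Split on anything that is not alphanumeric.
    return [t for t in re.split(r"[^\w]+", text.lower()) if t]

def prefix_tokens(text: str) -> list[str]:
    # Word tokens, expanded into all their prefixes for prefix matching.
    return [w[:i] for w in word_tokens(text) for i in range(1, len(w) + 1)]

assert whitespace_tokens("Hello, world") == ["hello,", "world"]
assert word_tokens("Hello, world") == ["hello", "world"]
assert prefix_tokens("hello") == ["h", "he", "hel", "hell", "hello"]
```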
See [Full Text match](../filtering/#full-text-match) for examples of querying with a full-text index.

## Vector Index

A vector index is a data structure built over the vectors using a specific mathematical model. Through the vector index, we can efficiently query vectors similar to a target vector.

Qdrant currently only uses HNSW as a dense vector index.

[HNSW](https://arxiv.org/abs/1603.09320) (Hierarchical Navigable Small World Graph) is a graph-based indexing algorithm. It builds a multi-layer navigation structure over the stored vectors according to certain rules. In this structure, the upper layers are sparser and the distances between nodes are larger, while the lower layers are denser and the distances between nodes are smaller. The search starts from the uppermost layer, finds the node closest to the target in that layer, and then descends to the next layer to continue the search. After multiple iterations, it quickly approaches the target position.

To improve performance, HNSW limits the maximum degree of the nodes on each layer of the graph to `m`. In addition, you can use `ef_construct` (when building the index) or `ef` (when searching) to specify the size of the search range.

The corresponding parameters can be configured in the configuration file:

```yaml
storage:
  # Default parameters of HNSW Index. Could be overridden for each collection or named vector individually
  hnsw_index:
    # Number of edges per node in the index graph.
    # Larger the value - more accurate the search, more space required.
    m: 16
    # Number of neighbours to consider during the index building.
    # Larger the value - more accurate the search, more time required to build index.
    ef_construct: 100
    # Minimal size (in KiloBytes) of vectors for additional payload-based indexing.
    # If payload chunk is smaller than `full_scan_threshold_kb` additional indexing won't be used -
    # in this case full-scan search should be preferred by query planner and additional indexing is not required.
    # Note: 1Kb = 1 vector of size 256
    full_scan_threshold: 10000
```

The same parameters can also be set when creating a [collection](../collections). The `ef` parameter is configured during [the search](../search) and by default is equal to `ef_construct`.

HNSW was chosen for several reasons. First, HNSW is well compatible with the modification that allows Qdrant to use filters during a search. Second, it is one of the most accurate and fastest algorithms, according to [public benchmarks](https://github.com/erikbern/ann-benchmarks).

*Available as of v1.1.1*

The HNSW parameters can also be configured on a collection and named vector level by setting [`hnsw_config`](../indexing/#vector-index) to fine-tune search performance.

## Sparse Vector Index

*Available as of v1.7.0*

### Key Features of Sparse Vector Index

- **Support for Sparse Vectors:** Qdrant supports sparse vectors, characterized by a high proportion of zeroes.
- **Efficient Indexing:** Utilizes an inverted index structure that stores, for each non-zero dimension, the vectors containing it, optimizing memory and search speed.

### Search Mechanism

- **Index Usage:** During a search, the index identifies vectors with non-zero values in the query dimensions.
- **Scoring Method:** Vectors are scored using the dot product.

### Optimizations

- **Reducing Vectors to Score:** Implementations are in place to minimize the number of vectors scored, especially for dimensions with numerous vectors.

### Filtering and Configuration

- **Filtering Support:** Similar to dense vectors, supports filtering by payload fields.
- **`full_scan_threshold` Configuration:** Allows control over when to switch the search from the payload index, to minimize the number of scored vectors.
- **Threshold for Sparse Vectors:** Specifies the threshold in terms of the number of matching vectors found by the query planner.

### Index Storage and Management

- **Memory-Based Index:** The index resides in memory for appendable segments, ensuring fast search and update operations.
- **Handling Immutable Segments:** For immutable segments, the sparse index can either stay in memory or be mapped to disk with the `on_disk` flag.

**Example Configuration:** To enable on-disk storage for immutable segments, and to use a full scan for queries inspecting fewer than 5000 vectors:

```http
PUT /collections/{collection_name}
{
    "sparse_vectors": {
        "text": {
            "index": {
                "on_disk": true,
                "full_scan_threshold": 5000
            }
        }
    }
}
```

## Filtrable Index

On their own, a payload index and a vector index cannot fully solve the problem of filtered search. With weak filters, you can use the HNSW index as is. With stringent filters, you can use the payload index and a full rescore. However, for cases in the middle, this approach does not work well. On the one hand, we cannot apply a full scan to too many vectors. On the other hand, the HNSW graph starts to fall apart when the filters are too strict.

![HNSW fail](/docs/precision_by_m.png)

![hnsw graph](/docs/graph.gif)

You can find more information on why this happens in our [blog post](https://blog.vasnetsov.com/posts/categorical-hnsw/).

Qdrant solves this problem by extending the HNSW graph with additional edges based on the stored payload values. The extra edges allow you to efficiently search for nearby vectors using the HNSW index while applying filters as you traverse the graph. This approach minimizes the overhead of condition checks, since you only need to evaluate the conditions for a small fraction of the points involved in the search.
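The sparse vector index described in the previous section can be pictured as a plain inverted index over the non-zero dimensions. A minimal sketch in plain Python (illustrative only; Qdrant's implementation adds many optimizations on top):

```python
from collections import defaultdict

# Sparse vectors: point id -> {dimension: weight}, zeros omitted.
vectors = {
    1: {0: 1.0, 3: 2.0},
    2: {1: 0.5, 3: 1.0},
    3: {2: 4.0},
}

# Inverted index: dimension -> [(point id, weight), ...]
index = defaultdict(list)
for point_id, vec in vectors.items():
    for dim, weight in vec.items():
        index[dim].append((point_id, weight))

def search(query: dict[int, float]) -> list[tuple[int, float]]:
    # Only points sharing a non-zero dimension with the query get scored.
    scores = defaultdict(float)
    for dim, q_weight in query.items():
        for point_id, weight in index[dim]:
            scores[point_id] += q_weight * weight  # dot product
    return sorted(scores.items(), key=lambda kv: -kv[1])

# The query touches dimensions 0 and 3, so point 3 is never scored.
assert search({0: 1.0, 3: 1.0}) == [(1, 3.0), (2, 1.0)]
```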
---
title: Points
weight: 40
aliases:
  - ../points
---

# Points

Points are the central entity that Qdrant operates with. A point is a record consisting of a vector and an optional [payload](../payload).

You can search among the points grouped in one [collection](../collections) based on vector similarity. This procedure is described in more detail in the [search](../search) and [filtering](../filtering) sections.

This section explains how to create and manage points. Any point modification operation is asynchronous and takes place in two steps. In the first step, the operation is written to the write-ahead log. From this moment on, the service will not lose the data, even if the machine loses its power supply.

## Awaiting result

If the API is called with the `&wait=false` parameter, or if it is not explicitly specified, the client will receive an acknowledgment that the data was received:

```json
{
    "result": {
        "operation_id": 123,
        "status": "acknowledged"
    },
    "status": "ok",
    "time": 0.000206061
}
```

This response does not mean that the data is available for retrieval yet. This uses a form of eventual consistency. It may take a short amount of time before the request is actually processed, as updating the collection happens in the background. In fact, it is possible that such a request eventually fails. If you insert a lot of vectors, we also recommend using asynchronous requests to take advantage of pipelining.

If the logic of your application requires a guarantee that the vector will be available for searching immediately after the API responds, then use the flag `?wait=true`. In this case, the API will return the result only after the operation is finished:

```json
{
    "result": {
        "operation_id": 0,
        "status": "completed"
    },
    "status": "ok",
    "time": 0.000206061
}
```

## Point IDs

Qdrant supports using both 64-bit unsigned integers and UUIDs as identifiers for points.
Examples of UUID string representations: * simple: `936DA01F9ABD4d9d80C702AF85C822A8` * hyphenated: `550e8400-e29b-41d4-a716-446655440000` * urn: `urn:uuid:F9168C5E-CEB2-4faa-B6BF-329BF39FA1E4` That means that in every request UUID string could be used instead of numerical id. Example: ```http PUT /collections/{collection_name}/points { "points": [ { "id": "5c56c793-69f3-4fbf-87e6-c4bf54c28c26", "payload": {"color": "red"}, "vector": [0.9, 0.1, 0.1] } ] } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.upsert( collection_name="{collection_name}", points=[ models.PointStruct( id="5c56c793-69f3-4fbf-87e6-c4bf54c28c26", payload={ "color": "red", }, vector=[0.9, 0.1, 0.1], ), ], ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.upsert("{collection_name}", { points: [ { id: "5c56c793-69f3-4fbf-87e6-c4bf54c28c26", payload: { color: "red", }, vector: [0.9, 0.1, 0.1], }, ], }); ``` ```rust use qdrant_client::{client::QdrantClient, qdrant::PointStruct}; use serde_json::json; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .upsert_points_blocking( "{collection_name}".to_string(), None, vec![PointStruct::new( "5c56c793-69f3-4fbf-87e6-c4bf54c28c26".to_string(), vec![0.05, 0.61, 0.76, 0.74], json!( {"color": "Red"} ) .try_into() .unwrap(), )], None, ) .await?; ``` ```java import java.util.List; import java.util.Map; import java.util.UUID; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; import static io.qdrant.client.VectorsFactory.vectors; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.PointStruct; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .upsertAsync( 
"{collection_name}",
        List.of(
            PointStruct.newBuilder()
                .setId(id(UUID.fromString("5c56c793-69f3-4fbf-87e6-c4bf54c28c26")))
                .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f))
                .putAllPayload(Map.of("color", value("Red")))
                .build()))
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.UpsertAsync(
    collectionName: "{collection_name}",
    points: new List<PointStruct>
    {
        new()
        {
            Id = Guid.Parse("5c56c793-69f3-4fbf-87e6-c4bf54c28c26"),
            Vectors = new[] { 0.05f, 0.61f, 0.76f, 0.74f },
            Payload = { ["color"] = "red" }
        }
    }
);
```

and

```http
PUT /collections/{collection_name}/points
{
    "points": [
        {
            "id": 1,
            "payload": {"color": "red"},
            "vector": [0.9, 0.1, 0.1]
        }
    ]
}
```

```python
client.upsert(
    collection_name="{collection_name}",
    points=[
        models.PointStruct(
            id=1,
            payload={
                "color": "red",
            },
            vector=[0.9, 0.1, 0.1],
        ),
    ],
)
```

```typescript
client.upsert("{collection_name}", {
  points: [
    {
      id: 1,
      payload: {
        color: "red",
      },
      vector: [0.9, 0.1, 0.1],
    },
  ],
});
```

```rust
use qdrant_client::qdrant::PointStruct;
use serde_json::json;

client
    .upsert_points_blocking(
        "{collection_name}".to_string(),
        None,
        vec![PointStruct::new(
            1,
            vec![0.05, 0.61, 0.76, 0.74],
            json!(
                {"color": "Red"}
            )
            .try_into()
            .unwrap(),
        )],
        None,
    )
    .await?;
```

```java
import java.util.List;
import java.util.Map;

import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.ValueFactory.value;
import static io.qdrant.client.VectorsFactory.vectors;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.PointStruct;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .upsertAsync(
        "{collection_name}",
        List.of(
            PointStruct.newBuilder()
                .setId(id(1))
                .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f))
                .putAllPayload(Map.of("color", value("Red")))
                .build()))
    .get();
```

```csharp
using
Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.UpsertAsync( collectionName: "{collection_name}", points: new List<PointStruct> { new() { Id = 1, Vectors = new[] { 0.05f, 0.61f, 0.76f, 0.74f }, Payload = { ["city"] = "red" } } } ); ``` are both possible. ## Upload points To optimize performance, Qdrant supports batch loading of points. I.e., you can load several points into the service in one API call. Batching allows you to minimize the overhead of creating a network connection. The Qdrant API supports two ways of creating batches - record-oriented and column-oriented. Internally, these options do not differ and are made only for the convenience of interaction. Create points with batch: ```http PUT /collections/{collection_name}/points { "batch": { "ids": [1, 2, 3], "payloads": [ {"color": "red"}, {"color": "green"}, {"color": "blue"} ], "vectors": [ [0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9] ] } } ``` ```python client.upsert( collection_name="{collection_name}", points=models.Batch( ids=[1, 2, 3], payloads=[ {"color": "red"}, {"color": "green"}, {"color": "blue"}, ], vectors=[ [0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9], ], ), ) ``` ```typescript client.upsert("{collection_name}", { batch: { ids: [1, 2, 3], payloads: [{ color: "red" }, { color: "green" }, { color: "blue" }], vectors: [ [0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9], ], }, }); ``` or record-oriented equivalent: ```http PUT /collections/{collection_name}/points { "points": [ { "id": 1, "payload": {"color": "red"}, "vector": [0.9, 0.1, 0.1] }, { "id": 2, "payload": {"color": "green"}, "vector": [0.1, 0.9, 0.1] }, { "id": 3, "payload": {"color": "blue"}, "vector": [0.1, 0.1, 0.9] } ] } ``` ```python client.upsert( collection_name="{collection_name}", points=[ models.PointStruct( id=1, payload={ "color": "red", }, vector=[0.9, 0.1, 0.1], ), models.PointStruct( id=2, payload={ "color": "green", }, vector=[0.1, 0.9, 0.1], ), 
models.PointStruct( id=3, payload={ "color": "blue", }, vector=[0.1, 0.1, 0.9], ), ], ) ``` ```typescript client.upsert("{collection_name}", { points: [ { id: 1, payload: { color: "red" }, vector: [0.9, 0.1, 0.1], }, { id: 2, payload: { color: "green" }, vector: [0.1, 0.9, 0.1], }, { id: 3, payload: { color: "blue" }, vector: [0.1, 0.1, 0.9], }, ], }); ``` ```rust use qdrant_client::qdrant::PointStruct; use serde_json::json; client .upsert_points_batch_blocking( "{collection_name}".to_string(), None, vec![ PointStruct::new( 1, vec![0.9, 0.1, 0.1], json!( {"color": "red"} ) .try_into() .unwrap(), ), PointStruct::new( 2, vec![0.1, 0.9, 0.1], json!( {"color": "green"} ) .try_into() .unwrap(), ), PointStruct::new( 3, vec![0.1, 0.1, 0.9], json!( {"color": "blue"} ) .try_into() .unwrap(), ), ], None, 100, ) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; import static io.qdrant.client.VectorsFactory.vectors; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.PointStruct; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .upsertAsync( "{collection_name}", List.of( PointStruct.newBuilder() .setId(id(1)) .setVectors(vectors(0.9f, 0.1f, 0.1f)) .putAllPayload(Map.of("color", value("red"))) .build(), PointStruct.newBuilder() .setId(id(2)) .setVectors(vectors(0.1f, 0.9f, 0.1f)) .putAllPayload(Map.of("color", value("green"))) .build(), PointStruct.newBuilder() .setId(id(3)) .setVectors(vectors(0.1f, 0.1f, 0.9f)) .putAllPayload(Map.of("color", value("blue"))) .build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.UpsertAsync( collectionName: "{collection_name}", points: new List<PointStruct> { new() { Id = 1, Vectors = new[] { 0.9f, 0.1f, 0.1f }, Payload 
= { ["color"] = "red" } },
        new()
        {
            Id = 2,
            Vectors = new[] { 0.1f, 0.9f, 0.1f },
            Payload = { ["color"] = "green" }
        },
        new()
        {
            Id = 3,
            Vectors = new[] { 0.1f, 0.1f, 0.9f },
            Payload = { ["color"] = "blue" }
        }
    }
);
```

The Python client has additional features for loading points, which include:

- Parallelization
- A retry mechanism
- Lazy batching support

For example, you can read your data directly from hard drives, to avoid storing all data in RAM. You can use these features with the `upload_collection` and `upload_points` methods. Similar to the basic upsert API, these methods support both record-oriented and column-oriented formats.

<aside role="status">
<code>upload_points</code> is available as of v1.7.1. It has replaced <code>upload_records</code> which is now deprecated.
</aside>

Column-oriented format:

```python
client.upload_collection(
    collection_name="{collection_name}",
    ids=[1, 2],
    payload=[
        {"color": "red"},
        {"color": "green"},
    ],
    vectors=[
        [0.9, 0.1, 0.1],
        [0.1, 0.9, 0.1],
    ],
    parallel=4,
    max_retries=3,
)
```

<aside role="status">If <code>ids</code> are not provided, they will be generated automatically as UUIDs.</aside>

Record-oriented format:

```python
client.upload_points(
    collection_name="{collection_name}",
    points=[
        models.PointStruct(
            id=1,
            payload={
                "color": "red",
            },
            vector=[0.9, 0.1, 0.1],
        ),
        models.PointStruct(
            id=2,
            payload={
                "color": "green",
            },
            vector=[0.1, 0.9, 0.1],
        ),
    ],
    parallel=4,
    max_retries=3,
)
```

All APIs in Qdrant, including point loading, are idempotent. This means that executing the same method several times in a row is equivalent to a single execution. In this case, it means that points with the same id will be overwritten when re-uploaded.

The idempotence property is useful if you use, for example, a message queue that doesn't provide an exactly-once guarantee. Even with such a system, Qdrant ensures data consistency.
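The overwrite-on-same-id behavior can be pictured with a small model (plain Python dictionaries, not the Qdrant client API; the names here are purely illustrative):

```python
# Toy model of idempotent upserts (plain Python, NOT the Qdrant client API):
# the point store is keyed by id, so re-uploading the same point -- e.g. after
# a message-queue redelivery -- overwrites it instead of duplicating it.
def upsert(store: dict, points: list) -> None:
    for point in points:
        store[point["id"]] = point  # same id -> overwrite, never duplicate

store: dict = {}
batch = [{"id": 1, "vector": [0.9, 0.1, 0.1], "payload": {"color": "red"}}]
upsert(store, batch)
upsert(store, batch)  # the redelivered batch, applied a second time
assert len(store) == 1  # still a single point
```

This is why retrying a failed or duplicated upload is always safe.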
[*Available as of v0.10.0*](#create-vector-name) If the collection was created with multiple vectors, each vector data can be provided using the vector's name: ```http PUT /collections/{collection_name}/points { "points": [ { "id": 1, "vector": { "image": [0.9, 0.1, 0.1, 0.2], "text": [0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2] } }, { "id": 2, "vector": { "image": [0.2, 0.1, 0.3, 0.9], "text": [0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9] } } ] } ``` ```python client.upsert( collection_name="{collection_name}", points=[ models.PointStruct( id=1, vector={ "image": [0.9, 0.1, 0.1, 0.2], "text": [0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2], }, ), models.PointStruct( id=2, vector={ "image": [0.2, 0.1, 0.3, 0.9], "text": [0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9], }, ), ], ) ``` ```typescript client.upsert("{collection_name}", { points: [ { id: 1, vector: { image: [0.9, 0.1, 0.1, 0.2], text: [0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2], }, }, { id: 2, vector: { image: [0.2, 0.1, 0.3, 0.9], text: [0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9], }, }, ], }); ``` ```rust use qdrant_client::qdrant::PointStruct; use std::collections::HashMap; client .upsert_points_blocking( "{collection_name}".to_string(), None, vec![ PointStruct::new( 1, HashMap::from([ ("image".to_string(), vec![0.9, 0.1, 0.1, 0.2]), ( "text".to_string(), vec![0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2], ), ]), HashMap::new().into(), ), PointStruct::new( 2, HashMap::from([ ("image".to_string(), vec![0.2, 0.1, 0.3, 0.9]), ( "text".to_string(), vec![0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9], ), ]), HashMap::new().into(), ), ], None, ) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.VectorFactory.vector; import static io.qdrant.client.VectorsFactory.namedVectors; import io.qdrant.client.grpc.Points.PointStruct; client .upsertAsync( "{collection_name}", List.of( PointStruct.newBuilder() .setId(id(1)) .setVectors( namedVectors( Map.of( "image", 
vector(List.of(0.9f, 0.1f, 0.1f, 0.2f)),
                        "text",
                        vector(List.of(0.4f, 0.7f, 0.1f, 0.8f, 0.1f, 0.1f, 0.9f, 0.2f)))))
                .build(),
            PointStruct.newBuilder()
                .setId(id(2))
                .setVectors(
                    namedVectors(
                        Map.of(
                            "image",
                            vector(List.of(0.2f, 0.1f, 0.3f, 0.9f)),
                            "text",
                            vector(List.of(0.5f, 0.2f, 0.7f, 0.4f, 0.7f, 0.2f, 0.3f, 0.9f)))))
                .build()))
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.UpsertAsync(
    collectionName: "{collection_name}",
    points: new List<PointStruct>
    {
        new()
        {
            Id = 1,
            Vectors = new Dictionary<string, float[]>
            {
                ["image"] = [0.9f, 0.1f, 0.1f, 0.2f],
                ["text"] = [0.4f, 0.7f, 0.1f, 0.8f, 0.1f, 0.1f, 0.9f, 0.2f]
            }
        },
        new()
        {
            Id = 2,
            Vectors = new Dictionary<string, float[]>
            {
                ["image"] = [0.2f, 0.1f, 0.3f, 0.9f],
                ["text"] = [0.5f, 0.2f, 0.7f, 0.4f, 0.7f, 0.2f, 0.3f, 0.9f]
            }
        }
    }
);
```

*Available as of v1.2.0*

Named vectors are optional. When uploading points, some vectors may be omitted. For example, you can upload one point with only the `image` vector and a second one with only the `text` vector.

When uploading a point with an existing ID, the existing point is deleted first, then it is inserted with just the specified vectors. In other words, the entire point is replaced, and any unspecified vectors are set to null. To keep existing vectors unchanged and only update specified vectors, see [update vectors](#update-vectors).

*Available as of v1.7.0*

Points can contain dense and sparse vectors.

A sparse vector is an array in which most of the elements have a value of zero. This property can be exploited to store sparse vectors in an optimized representation, which is why they have a different shape than dense vectors.

They are represented as a list of `(index, value)` pairs, where `index` is an integer and `value` is a floating point number. The `index` is the position of the non-zero value in the vector, and the `value` is that non-zero element.
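The conversion from a dense array to this representation can be sketched in a few lines of plain Python (an illustrative helper, not part of the client API):

```python
# Illustrative helper (NOT part of the Qdrant client API): build the sparse
# (indices, values) representation by keeping only the non-zero elements.
def to_sparse(dense: list) -> dict:
    indices = [i for i, v in enumerate(dense) if v != 0.0]
    return {"indices": indices, "values": [dense[i] for i in indices]}

print(to_sparse([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 0.0, 0.0]))
# -> {'indices': [6, 7], 'values': [1.0, 2.0]}
```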
For example, the following vector:

```
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 0.0, 0.0]
```

can be represented as a sparse vector:

```
[(6, 1.0), (7, 2.0)]
```

Qdrant uses the following JSON representation throughout its APIs.

```json
{
  "indices": [6, 7],
  "values": [1.0, 2.0]
}
```

The `indices` and `values` arrays must have the same length, and the `indices` must be unique. If the `indices` are not sorted, Qdrant will sort them internally, so you should not rely on the order of the elements.

Sparse vectors must be named and can be uploaded in the same way as dense vectors.

```http
PUT /collections/{collection_name}/points

{
    "points": [
        {
            "id": 1,
            "vector": {
                "text": {
                    "indices": [6, 7],
                    "values": [1.0, 2.0]
                }
            }
        },
        {
            "id": 2,
            "vector": {
                "text": {
                    "indices": [1, 2, 3, 4, 5],
                    "values": [0.1, 0.2, 0.3, 0.4, 0.5]
                }
            }
        }
    ]
}
```

```python
client.upsert(
    collection_name="{collection_name}",
    points=[
        models.PointStruct(
            id=1,
            vector={
                "text": models.SparseVector(
                    indices=[6, 7],
                    values=[1.0, 2.0],
                )
            },
        ),
        models.PointStruct(
            id=2,
            vector={
                "text": models.SparseVector(
                    indices=[1, 2, 3, 4, 5],
                    values=[0.1, 0.2, 0.3, 0.4, 0.5],
                )
            },
        ),
    ],
)
```

```typescript
client.upsert("{collection_name}", {
  points: [
    {
      id: 1,
      vector: {
        text: {
          indices: [6, 7],
          values: [1.0, 2.0],
        },
      },
    },
    {
      id: 2,
      vector: {
        text: {
          indices: [1, 2, 3, 4, 5],
          values: [0.1, 0.2, 0.3, 0.4, 0.5],
        },
      },
    },
  ],
});
```

```rust
use qdrant_client::qdrant::{PointStruct, Vector};
use std::collections::HashMap;

client
    .upsert_points_blocking(
        "{collection_name}".to_string(),
        None,
        vec![
            PointStruct::new(
                1,
                HashMap::from([(
                    "text".to_string(),
                    Vector::from((vec![6, 7], vec![1.0, 2.0])),
                )]),
                HashMap::new().into(),
            ),
            PointStruct::new(
                2,
                HashMap::from([(
                    "text".to_string(),
                    Vector::from((vec![1, 2, 3, 4, 5], vec![0.1, 0.2, 0.3, 0.4, 0.5])),
                )]),
                HashMap::new().into(),
            ),
        ],
        None,
    )
    .await?;
```

```java
import java.util.List;
import java.util.Map;

import static io.qdrant.client.PointIdFactory.id;
import static
io.qdrant.client.VectorFactory.vector; import io.qdrant.client.grpc.Points.NamedVectors; import io.qdrant.client.grpc.Points.PointStruct; import io.qdrant.client.grpc.Points.Vectors; client .upsertAsync( "{collection_name}", List.of( PointStruct.newBuilder() .setId(id(1)) .setVectors( Vectors.newBuilder() .setVectors( NamedVectors.newBuilder() .putAllVectors( Map.of( "text", vector(List.of(1.0f, 2.0f), List.of(6, 7)))) .build()) .build()) .build(), PointStruct.newBuilder() .setId(id(2)) .setVectors( Vectors.newBuilder() .setVectors( NamedVectors.newBuilder() .putAllVectors( Map.of( "text", vector( List.of(0.1f, 0.2f, 0.3f, 0.4f, 0.5f), List.of(1, 2, 3, 4, 5)))) .build()) .build()) .build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.UpsertAsync( collectionName: "{collection_name}", points: new List<PointStruct> { new() { Id = 1, Vectors = new Dictionary<string, Vector> { ["text"] = ([1.0f, 2.0f], [6, 7]) } }, new() { Id = 2, Vectors = new Dictionary<string, Vector> { ["text"] = ([0.1f, 0.2f, 0.3f, 0.4f, 0.5f], [1, 2, 3, 4, 5]) } } } ); ``` ## Modify points To change a point, you can modify its vectors or its payload. There are several ways to do this. ### Update vectors *Available as of v1.2.0* This method updates the specified vectors on the given points. Unspecified vectors are kept unchanged. All given points must exist. 
REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/update_vectors)): ```http PUT /collections/{collection_name}/points/vectors { "points": [ { "id": 1, "vector": { "image": [0.1, 0.2, 0.3, 0.4] } }, { "id": 2, "vector": { "text": [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2] } } ] } ``` ```python client.update_vectors( collection_name="{collection_name}", points=[ models.PointVectors( id=1, vector={ "image": [0.1, 0.2, 0.3, 0.4], }, ), models.PointVectors( id=2, vector={ "text": [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2], }, ), ], ) ``` ```typescript client.updateVectors("{collection_name}", { points: [ { id: 1, vector: { image: [0.1, 0.2, 0.3, 0.4], }, }, { id: 2, vector: { text: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2], }, }, ], }); ``` ```rust use qdrant_client::qdrant::PointVectors; use std::collections::HashMap; client .update_vectors_blocking( "{collection_name}", None, &[ PointVectors { id: Some(1.into()), vectors: Some( HashMap::from([("image".to_string(), vec![0.1, 0.2, 0.3, 0.4])]).into(), ), }, PointVectors { id: Some(2.into()), vectors: Some( HashMap::from([( "text".to_string(), vec![0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2], )]) .into(), ), }, ], None, ) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.VectorFactory.vector; import static io.qdrant.client.VectorsFactory.namedVectors; client .updateVectorsAsync( "{collection_name}", List.of( PointVectors.newBuilder() .setId(id(1)) .setVectors(namedVectors(Map.of("image", vector(List.of(0.1f, 0.2f, 0.3f, 0.4f))))) .build(), PointVectors.newBuilder() .setId(id(2)) .setVectors( namedVectors( Map.of( "text", vector(List.of(0.9f, 0.8f, 0.7f, 0.6f, 0.5f, 0.4f, 0.3f, 0.2f))))) .build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.UpdateVectorsAsync( collectionName: "{collection_name}", points: new 
List<PointVectors>
    {
        new() { Id = 1, Vectors = ("image", new float[] { 0.1f, 0.2f, 0.3f, 0.4f }) },
        new()
        {
            Id = 2,
            Vectors = ("text", new float[] { 0.9f, 0.8f, 0.7f, 0.6f, 0.5f, 0.4f, 0.3f, 0.2f })
        }
    }
);
```

To update points and replace all of their vectors, see [uploading points](#upload-points).

### Delete vectors

*Available as of v1.2.0*

This method deletes just the specified vectors from the given points. Other vectors are kept unchanged. Points are never deleted.

REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/deleted_vectors)):

```http
POST /collections/{collection_name}/points/vectors/delete

{
    "points": [0, 3, 100],
    "vectors": ["text", "image"]
}
```

```python
client.delete_vectors(
    collection_name="{collection_name}",
    points_selector=models.PointIdsList(
        points=[0, 3, 100],
    ),
    vectors=["text", "image"],
)
```

```typescript
client.deleteVectors("{collection_name}", {
  points: [0, 3, 100],
  vectors: ["text", "image"],
});
```

```rust
use qdrant_client::qdrant::{
    points_selector::PointsSelectorOneOf, PointsIdsList, PointsSelector, VectorsSelector,
};

client
    .delete_vectors_blocking(
        "{collection_name}",
        None,
        &PointsSelector {
            points_selector_one_of: Some(PointsSelectorOneOf::Points(PointsIdsList {
                ids: vec![0.into(), 3.into(), 100.into()],
            })),
        },
        &VectorsSelector {
            names: vec!["text".into(), "image".into()],
        },
        None,
    )
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.PointIdFactory.id;

client
    .deleteVectorsAsync(
        "{collection_name}",
        List.of("text", "image"),
        List.of(id(0), id(3), id(100)))
    .get();
```

To delete entire points, see [deleting points](#delete-points).

### Update payload

Learn how to modify the payload of a point in the [Payload](../payload/#update-payload) section.
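The modification semantics in this section can be summed up with a toy model (plain Python dictionaries, not the client API): updating vectors merges into the existing point, while a full upsert replaces it.

```python
# Toy model (plain Python, NOT the Qdrant client API) of the two semantics.
# A stored point with two named vectors:
point = {"image": [0.9, 0.1, 0.1, 0.2], "text": [0.4, 0.7, 0.1, 0.8]}

# Update vectors: merge -- unspecified vectors are kept unchanged.
updated = {**point, "image": [0.1, 0.2, 0.3, 0.4]}
assert updated["text"] == [0.4, 0.7, 0.1, 0.8]

# Full upsert: replace -- the whole point is rewritten, so any
# unspecified vectors are gone.
replaced = {"image": [0.1, 0.2, 0.3, 0.4]}
assert "text" not in replaced
```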
## Delete points

REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/delete_points)):

```http
POST /collections/{collection_name}/points/delete

{
    "points": [0, 3, 100]
}
```

```python
client.delete(
    collection_name="{collection_name}",
    points_selector=models.PointIdsList(
        points=[0, 3, 100],
    ),
)
```

```typescript
client.delete("{collection_name}", {
  points: [0, 3, 100],
});
```

```rust
use qdrant_client::qdrant::{
    points_selector::PointsSelectorOneOf, PointsIdsList, PointsSelector,
};

client
    .delete_points_blocking(
        "{collection_name}",
        None,
        &PointsSelector {
            points_selector_one_of: Some(PointsSelectorOneOf::Points(PointsIdsList {
                ids: vec![0.into(), 3.into(), 100.into()],
            })),
        },
        None,
    )
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.PointIdFactory.id;

client.deleteAsync("{collection_name}", List.of(id(0), id(3), id(100)));
```

```csharp
using Qdrant.Client;

var client = new QdrantClient("localhost", 6334);

await client.DeleteAsync(collectionName: "{collection_name}", ids: [0, 3, 100]);
```

An alternative way to specify which points to remove is to use a filter.
```http POST /collections/{collection_name}/points/delete { "filter": { "must": [ { "key": "color", "match": { "value": "red" } } ] } } ``` ```python client.delete( collection_name="{collection_name}", points_selector=models.FilterSelector( filter=models.Filter( must=[ models.FieldCondition( key="color", match=models.MatchValue(value="red"), ), ], ) ), ) ``` ```typescript client.delete("{collection_name}", { filter: { must: [ { key: "color", match: { value: "red", }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{ points_selector::PointsSelectorOneOf, Condition, Filter, PointsSelector, }; client .delete_points_blocking( "{collection_name}", None, &PointsSelector { points_selector_one_of: Some(PointsSelectorOneOf::Filter(Filter::must([ Condition::matches("color", "red".to_string()), ]))), }, None, ) .await?; ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; client .deleteAsync( "{collection_name}", Filter.newBuilder().addMust(matchKeyword("color", "red")).build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.DeleteAsync(collectionName: "{collection_name}", filter: MatchKeyword("color", "red")); ``` This example removes all points with `{ "color": "red" }` from the collection. ## Retrieve points There is a method for retrieving points by their ids. 
REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/get_points)):

```http
POST /collections/{collection_name}/points

{
    "ids": [0, 3, 100]
}
```

```python
client.retrieve(
    collection_name="{collection_name}",
    ids=[0, 3, 100],
)
```

```typescript
client.retrieve("{collection_name}", {
  ids: [0, 3, 100],
});
```

```rust
client
    .get_points(
        "{collection_name}",
        None,
        &[0.into(), 3.into(), 100.into()],
        Some(false),
        Some(false),
        None,
    )
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.PointIdFactory.id;

client
    .retrieveAsync("{collection_name}", List.of(id(0), id(3), id(100)), false, false, null)
    .get();
```

```csharp
using Qdrant.Client;

var client = new QdrantClient("localhost", 6334);

await client.RetrieveAsync(
    collectionName: "{collection_name}",
    ids: [0, 3, 100],
    withPayload: false,
    withVectors: false
);
```

This method has additional parameters `with_vectors` and `with_payload`. Using these parameters, you can select which parts of the point you want in the result. Excluding unneeded data helps you avoid wasting traffic on transmitting it.

A single point can also be retrieved via the API:

REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/get_point)):

```http
GET /collections/{collection_name}/points/{point_id}
```

<!-- 
Python client:

```python
```
 -->

## Scroll points

Sometimes it might be necessary to get all stored points without knowing their ids, or to iterate over points that correspond to a filter.
REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/scroll_points)):

```http
POST /collections/{collection_name}/points/scroll

{
    "filter": {
        "must": [
            {
                "key": "color",
                "match": {
                    "value": "red"
                }
            }
        ]
    },
    "limit": 1,
    "with_payload": true,
    "with_vector": false
}
```

```python
client.scroll(
    collection_name="{collection_name}",
    scroll_filter=models.Filter(
        must=[
            models.FieldCondition(key="color", match=models.MatchValue(value="red")),
        ]
    ),
    limit=1,
    with_payload=True,
    with_vectors=False,
)
```

```typescript
client.scroll("{collection_name}", {
  filter: {
    must: [
      {
        key: "color",
        match: {
          value: "red",
        },
      },
    ],
  },
  limit: 1,
  with_payload: true,
  with_vector: false,
});
```

```rust
use qdrant_client::qdrant::{Condition, Filter, ScrollPoints};

client
    .scroll(&ScrollPoints {
        collection_name: "{collection_name}".to_string(),
        filter: Some(Filter::must([Condition::matches(
            "color",
            "red".to_string(),
        )])),
        limit: Some(1),
        with_payload: Some(true.into()),
        with_vectors: Some(false.into()),
        ..Default::default()
    })
    .await?;
```

```java
import static io.qdrant.client.ConditionFactory.matchKeyword;
import static io.qdrant.client.WithPayloadSelectorFactory.enable;

import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;

client
    .scrollAsync(
        ScrollPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .setFilter(Filter.newBuilder().addMust(matchKeyword("color", "red")).build())
            .setLimit(1)
            .setWithPayload(enable(true))
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;

var client = new QdrantClient("localhost", 6334);

await client.ScrollAsync(
    collectionName: "{collection_name}",
    filter: MatchKeyword("color", "red"),
    limit: 1,
    payloadSelector: true
);
```

Returns all points with `color` = `red`.
```json
{
  "result": {
    "next_page_offset": 1,
    "points": [
      {
        "id": 0,
        "payload": {
          "color": "red"
        }
      }
    ]
  },
  "status": "ok",
  "time": 0.0001
}
```

The Scroll API will return all points that match the filter in a page-by-page manner.

All resulting points are sorted by ID. To query the next page, specify the largest seen ID in the `offset` field. For convenience, this ID is also returned in the field `next_page_offset`. If the value of the `next_page_offset` field is `null`, the last page has been reached.

<!-- 
Python client:

```python
```
 -->

## Counting points

*Available as of v0.8.4*

Sometimes it can be useful to know how many points fit the filter conditions without doing a real search. For example, this helps with:

* Evaluating the result-set size for faceted search
* Determining the number of pages for pagination
* Debugging the query execution speed

REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#tag/points/operation/count_points)):

```http
POST /collections/{collection_name}/points/count

{
    "filter": {
        "must": [
            {
                "key": "color",
                "match": {
                    "value": "red"
                }
            }
        ]
    },
    "exact": true
}
```

```python
client.count(
    collection_name="{collection_name}",
    count_filter=models.Filter(
        must=[
            models.FieldCondition(key="color", match=models.MatchValue(value="red")),
        ]
    ),
    exact=True,
)
```

```typescript
client.count("{collection_name}", {
  filter: {
    must: [
      {
        key: "color",
        match: {
          value: "red",
        },
      },
    ],
  },
  exact: true,
});
```

```rust
use qdrant_client::qdrant::{Condition, CountPoints, Filter};

client
    .count(&CountPoints {
        collection_name: "{collection_name}".to_string(),
        filter: Some(Filter::must([Condition::matches(
            "color",
            "red".to_string(),
        )])),
        exact: Some(true),
    })
    .await?;
```

```java
import static io.qdrant.client.ConditionFactory.matchKeyword;

import io.qdrant.client.grpc.Points.Filter;

client
    .countAsync(
        "{collection_name}",
        Filter.newBuilder().addMust(matchKeyword("color", "red")).build(),
        true)
    .get();
```

```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;

var client = new QdrantClient("localhost", 6334);

await client.CountAsync(
    collectionName: "{collection_name}",
    filter: MatchKeyword("color", "red"),
    exact: true
);
```

Returns the number of points matching the given filter conditions:

```json
{
  "count": 3811
}
```

## Batch update

*Available as of v1.5.0*

You can batch multiple point update operations. This includes inserting, updating and deleting points, vectors and payload.

A batch update request consists of a list of operations. These are executed in order. These operations can be batched:

- [Upsert points](#upload-points): `upsert` or `UpsertOperation`
- [Delete points](#delete-points): `delete_points` or `DeleteOperation`
- [Update vectors](#update-vectors): `update_vectors` or `UpdateVectorsOperation`
- [Delete vectors](#delete-vectors): `delete_vectors` or `DeleteVectorsOperation`
- [Set payload](#set-payload): `set_payload` or `SetPayloadOperation`
- [Overwrite payload](#overwrite-payload): `overwrite_payload` or `OverwritePayloadOperation`
- [Delete payload](#delete-payload-keys): `delete_payload` or `DeletePayloadOperation`
- [Clear payload](#clear-payload): `clear_payload` or `ClearPayloadOperation`

The following example snippet makes use of all operations.
REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#tag/points/operation/batch_update)):

```http
POST /collections/{collection_name}/points/batch

{
    "operations": [
        {
            "upsert": {
                "points": [
                    {
                        "id": 1,
                        "vector": [1.0, 2.0, 3.0, 4.0],
                        "payload": {}
                    }
                ]
            }
        },
        {
            "update_vectors": {
                "points": [
                    {
                        "id": 1,
                        "vector": [1.0, 2.0, 3.0, 4.0]
                    }
                ]
            }
        },
        {
            "delete_vectors": {
                "points": [1],
                "vector": [""]
            }
        },
        {
            "overwrite_payload": {
                "payload": {
                    "test_payload": "1"
                },
                "points": [1]
            }
        },
        {
            "set_payload": {
                "payload": {
                    "test_payload_2": "2",
                    "test_payload_3": "3"
                },
                "points": [1]
            }
        },
        {
            "delete_payload": {
                "keys": ["test_payload_2"],
                "points": [1]
            }
        },
        {
            "clear_payload": {
                "points": [1]
            }
        },
        {"delete": {"points": [1]}}
    ]
}
```

```python
client.batch_update_points(
    collection_name="{collection_name}",
    update_operations=[
        models.UpsertOperation(
            upsert=models.PointsList(
                points=[
                    models.PointStruct(
                        id=1,
                        vector=[1.0, 2.0, 3.0, 4.0],
                        payload={},
                    ),
                ]
            )
        ),
        models.UpdateVectorsOperation(
            update_vectors=models.UpdateVectors(
                points=[
                    models.PointVectors(
                        id=1,
                        vector=[1.0, 2.0, 3.0, 4.0],
                    )
                ]
            )
        ),
        models.DeleteVectorsOperation(
            delete_vectors=models.DeleteVectors(points=[1], vector=[""])
        ),
        models.OverwritePayloadOperation(
            overwrite_payload=models.SetPayload(
                payload={"test_payload": 1},
                points=[1],
            )
        ),
        models.SetPayloadOperation(
            set_payload=models.SetPayload(
                payload={
                    "test_payload_2": 2,
                    "test_payload_3": 3,
                },
                points=[1],
            )
        ),
        models.DeletePayloadOperation(
            delete_payload=models.DeletePayload(keys=["test_payload_2"], points=[1])
        ),
        models.ClearPayloadOperation(clear_payload=models.PointIdsList(points=[1])),
        models.DeleteOperation(delete=models.PointIdsList(points=[1])),
    ],
)
```

```typescript
client.batchUpdate("{collection_name}", {
  operations: [
    {
      upsert: {
        points: [
          {
            id: 1,
            vector: [1.0, 2.0, 3.0, 4.0],
            payload: {},
          },
        ],
      },
    },
    {
      update_vectors: {
        points: [
          {
            id: 1,
            vector: [1.0, 2.0, 3.0, 4.0],
          },
        ],
      },
    },
    {
      delete_vectors: {
        points: [1],
        vector: [""],
      },
    },
    {
overwrite_payload: { payload: { test_payload: 1, }, points: [1], }, }, { set_payload: { payload: { test_payload_2: 2, test_payload_3: 3, }, points: [1], }, }, { delete_payload: { keys: ["test_payload_2"], points: [1], }, }, { clear_payload: { points: [1], }, }, { delete: { points: [1], }, }, ], }); ``` ```rust use qdrant_client::qdrant::{ points_selector::PointsSelectorOneOf, points_update_operation::{ DeletePayload, DeleteVectors, Operation, PointStructList, SetPayload, UpdateVectors, }, PointStruct, PointVectors, PointsIdsList, PointsSelector, PointsUpdateOperation, VectorsSelector, }; use serde_json::json; use std::collections::HashMap; client .batch_updates_blocking( "{collection_name}", &[ PointsUpdateOperation { operation: Some(Operation::Upsert(PointStructList { points: vec![PointStruct::new( 1, vec![1.0, 2.0, 3.0, 4.0], json!({}).try_into().unwrap(), )], })), }, PointsUpdateOperation { operation: Some(Operation::UpdateVectors(UpdateVectors { points: vec![PointVectors { id: Some(1.into()), vectors: Some(vec![1.0, 2.0, 3.0, 4.0].into()), }], })), }, PointsUpdateOperation { operation: Some(Operation::DeleteVectors(DeleteVectors { points_selector: Some(PointsSelector { points_selector_one_of: Some(PointsSelectorOneOf::Points( PointsIdsList { ids: vec![1.into()], }, )), }), vectors: Some(VectorsSelector { names: vec!["".into()], }), })), }, PointsUpdateOperation { operation: Some(Operation::OverwritePayload(SetPayload { points_selector: Some(PointsSelector { points_selector_one_of: Some(PointsSelectorOneOf::Points( PointsIdsList { ids: vec![1.into()], }, )), }), payload: HashMap::from([("test_payload".to_string(), 1.into())]), })), }, PointsUpdateOperation { operation: Some(Operation::SetPayload(SetPayload { points_selector: Some(PointsSelector { points_selector_one_of: Some(PointsSelectorOneOf::Points( PointsIdsList { ids: vec![1.into()], }, )), }), payload: HashMap::from([ ("test_payload_2".to_string(), 2.into()), ("test_payload_3".to_string(), 3.into()), ]), 
})), }, PointsUpdateOperation { operation: Some(Operation::DeletePayload(DeletePayload { points_selector: Some(PointsSelector { points_selector_one_of: Some(PointsSelectorOneOf::Points( PointsIdsList { ids: vec![1.into()], }, )), }), keys: vec!["test_payload_2".to_string()], })), }, PointsUpdateOperation { operation: Some(Operation::ClearPayload(PointsSelector { points_selector_one_of: Some(PointsSelectorOneOf::Points(PointsIdsList { ids: vec![1.into()], })), })), }, PointsUpdateOperation { operation: Some(Operation::Delete(PointsSelector { points_selector_one_of: Some(PointsSelectorOneOf::Points(PointsIdsList { ids: vec![1.into()], })), })), }, ], None, ) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; import static io.qdrant.client.VectorsFactory.vectors; import io.qdrant.client.grpc.Points.PointStruct; import io.qdrant.client.grpc.Points.PointVectors; import io.qdrant.client.grpc.Points.PointsIdsList; import io.qdrant.client.grpc.Points.PointsSelector; import io.qdrant.client.grpc.Points.PointsUpdateOperation; import io.qdrant.client.grpc.Points.PointsUpdateOperation.ClearPayload; import io.qdrant.client.grpc.Points.PointsUpdateOperation.DeletePayload; import io.qdrant.client.grpc.Points.PointsUpdateOperation.DeletePoints; import io.qdrant.client.grpc.Points.PointsUpdateOperation.DeleteVectors; import io.qdrant.client.grpc.Points.PointsUpdateOperation.PointStructList; import io.qdrant.client.grpc.Points.PointsUpdateOperation.SetPayload; import io.qdrant.client.grpc.Points.PointsUpdateOperation.UpdateVectors; import io.qdrant.client.grpc.Points.VectorsSelector; client .batchUpdateAsync( "{collection_name}", List.of( PointsUpdateOperation.newBuilder() .setUpsert( PointStructList.newBuilder() .addPoints( PointStruct.newBuilder() .setId(id(1)) .setVectors(vectors(1.0f, 2.0f, 3.0f, 4.0f)) .build()) .build()) .build(), 
PointsUpdateOperation.newBuilder() .setUpdateVectors( UpdateVectors.newBuilder() .addPoints( PointVectors.newBuilder() .setId(id(1)) .setVectors(vectors(1.0f, 2.0f, 3.0f, 4.0f)) .build()) .build()) .build(), PointsUpdateOperation.newBuilder() .setDeleteVectors( DeleteVectors.newBuilder() .setPointsSelector( PointsSelector.newBuilder() .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build()) .build()) .setVectors(VectorsSelector.newBuilder().addNames("").build()) .build()) .build(), PointsUpdateOperation.newBuilder() .setOverwritePayload( SetPayload.newBuilder() .setPointsSelector( PointsSelector.newBuilder() .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build()) .build()) .putAllPayload(Map.of("test_payload", value(1))) .build()) .build(), PointsUpdateOperation.newBuilder() .setSetPayload( SetPayload.newBuilder() .setPointsSelector( PointsSelector.newBuilder() .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build()) .build()) .putAllPayload( Map.of("test_payload_2", value(2), "test_payload_3", value(3))) .build()) .build(), PointsUpdateOperation.newBuilder() .setDeletePayload( DeletePayload.newBuilder() .setPointsSelector( PointsSelector.newBuilder() .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build()) .build()) .addKeys("test_payload_2") .build()) .build(), PointsUpdateOperation.newBuilder() .setClearPayload( ClearPayload.newBuilder() .setPoints( PointsSelector.newBuilder() .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build()) .build()) .build()) .build(), PointsUpdateOperation.newBuilder() .setDeletePoints( DeletePoints.newBuilder() .setPoints( PointsSelector.newBuilder() .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build()) .build()) .build()) .build())) .get(); ``` To batch many points with a single operation type, please use batching functionality in that operation directly.
---
title: Snapshots
weight: 110
aliases:
  - ../snapshots
---

# Snapshots

*Available as of v0.8.4*

Snapshots are `tar` archive files that contain data and configuration of a specific collection on a specific node at a specific time. In a distributed setup, when you have multiple nodes in your cluster, you must create snapshots for each node separately when dealing with a single collection.

This feature can be used to archive data or easily replicate an existing deployment. For disaster recovery, Qdrant Cloud users may prefer to use [Backups](/documentation/cloud/backups/) instead, which are physical disk-level copies of your data.

For a step-by-step guide on how to use snapshots, see our [tutorial](/documentation/tutorials/create-snapshot/).

## Store snapshots

The target directory used to store generated snapshots is controlled through the [configuration](../../guides/configuration) or using the ENV variable: `QDRANT__STORAGE__SNAPSHOTS_PATH=./snapshots`.

You can set the snapshots storage directory from the [config.yaml](https://github.com/qdrant/qdrant/blob/master/config/config.yaml) file. If no value is given, the default is `./snapshots`.

```yaml
storage:
  # Specify where you want to store snapshots.
  snapshots_path: ./snapshots
```

*Available as of v1.3.0*

While a snapshot is being created, temporary files are by default placed in the configured storage directory. This location may have limited capacity or be on a slow network-attached disk. You may specify a separate location for temporary files:

```yaml
storage:
  # Where to store temporary files
  temp_path: /tmp
```

## Create snapshot

<aside role="status">If you work with a distributed deployment, you have to create snapshots for each node separately.
A single snapshot will contain only the data stored on the node on which the snapshot was created.</aside>

To create a new snapshot for an existing collection:

```http
POST /collections/{collection_name}/snapshots
```

```python
from qdrant_client import QdrantClient

client = QdrantClient("localhost", port=6333)

client.create_snapshot(collection_name="{collection_name}")
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.createSnapshot("{collection_name}");
```

```rust
use qdrant_client::client::QdrantClient;

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client.create_snapshot("{collection_name}").await?;
```

```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client.createSnapshotAsync("{collection_name}").get();
```

```csharp
using Qdrant.Client;

var client = new QdrantClient("localhost", 6334);

await client.CreateSnapshotAsync("{collection_name}");
```

This is a synchronous operation for which a `tar` archive file will be generated into the configured `snapshots_path`.
### Delete snapshot

*Available as of v1.0.0*

```http
DELETE /collections/{collection_name}/snapshots/{snapshot_name}
```

```python
from qdrant_client import QdrantClient

client = QdrantClient("localhost", port=6333)

client.delete_snapshot(
    collection_name="{collection_name}", snapshot_name="{snapshot_name}"
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.deleteSnapshot("{collection_name}", "{snapshot_name}");
```

```rust
use qdrant_client::client::QdrantClient;

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client.delete_snapshot("{collection_name}", "{snapshot_name}").await?;
```

```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client.deleteSnapshotAsync("{collection_name}", "{snapshot_name}").get();
```

```csharp
using Qdrant.Client;

var client = new QdrantClient("localhost", 6334);

await client.DeleteSnapshotAsync(collectionName: "{collection_name}", snapshotName: "{snapshot_name}");
```

## List snapshots

To list the snapshots of a collection:

```http
GET /collections/{collection_name}/snapshots
```

```python
from qdrant_client import QdrantClient

client = QdrantClient("localhost", port=6333)

client.list_snapshots(collection_name="{collection_name}")
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.listSnapshots("{collection_name}");
```

```rust
use qdrant_client::client::QdrantClient;

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client.list_snapshots("{collection_name}").await?;
```

```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());
client.listSnapshotAsync("{collection_name}").get();
```

```csharp
using Qdrant.Client;

var client = new QdrantClient("localhost", 6334);

await client.ListSnapshotsAsync("{collection_name}");
```

## Retrieve snapshot

<aside role="status">Only available through the REST API for the time being.</aside>

To download a specified snapshot from a collection as a file:

```http
GET /collections/{collection_name}/snapshots/{snapshot_name}
```

```shell
curl 'http://{qdrant-url}:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot' \
    -H 'api-key: ********' \
    --output 'filename.snapshot'
```

## Restore snapshot

<aside role="status">Snapshots generated in one Qdrant cluster can only be restored to other Qdrant clusters that share the same minor version. For instance, a snapshot captured from a v1.4.1 cluster can only be restored to clusters running version v1.4.x, where x is equal to or greater than 1.</aside>

Snapshots can be restored in three possible ways:

1. [Recovering from a URL or local file](#recover-from-a-url-or-local-file) (useful for restoring a snapshot file that is on a remote server or already stored on the node)
2. [Recovering from an uploaded file](#recover-from-an-uploaded-file) (useful for migrating data to a new cluster)
3. [Recovering during start-up](#recover-during-start-up) (useful when running a self-hosted single-node Qdrant instance)

Regardless of the method used, Qdrant will extract the shard data from the snapshot and properly register shards in the cluster. If there are other active replicas of the recovered shards in the cluster, Qdrant will replicate them to the newly recovered node by default to maintain data consistency.

### Recover from a URL or local file

*Available as of v0.11.3*

This method of recovery requires the snapshot file to be downloadable from a URL or to exist as a local file on the node (e.g. if you [created the snapshot](#create-snapshot) on this node previously).
If instead you need to upload a snapshot file, see the next section.

To recover from a URL or local file, use the [snapshot recovery endpoint](https://qdrant.github.io/qdrant/redoc/index.html#tag/collections/operation/recover_from_snapshot). This endpoint accepts either a URL like `https://example.com` or a [file URI](https://en.wikipedia.org/wiki/File_URI_scheme) like `file:///tmp/snapshot-2022-10-10.snapshot`. If the target collection does not exist, it will be created.

```http
PUT /collections/{collection_name}/snapshots/recover
{
  "location": "http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot"
}
```

```python
from qdrant_client import QdrantClient

client = QdrantClient("qdrant-node-2", port=6333)

client.recover_snapshot(
    "{collection_name}",
    "http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot",
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.recoverSnapshot("{collection_name}", {
  location:
    "http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot",
});
```

<aside role="status">When recovering from a URL, the URL must be reachable by the Qdrant node that you are restoring. In Qdrant Cloud, restoring via URL is not supported since all outbound traffic is blocked for security purposes. You may still restore via file URI or via an uploaded file.</aside>

### Recover from an uploaded file

The snapshot file can also be uploaded as a file and restored using the [recover from uploaded snapshot](https://qdrant.github.io/qdrant/redoc/index.html#tag/collections/operation/recover_from_uploaded_snapshot) endpoint. This endpoint accepts the raw snapshot data in the request body. If the target collection does not exist, it will be created.
```bash
curl -X POST 'http://{qdrant-url}:6333/collections/{collection_name}/snapshots/upload?priority=snapshot' \
    -H 'api-key: ********' \
    -H 'Content-Type:multipart/form-data' \
    -F 'snapshot=@/path/to/snapshot-2022-10-10.snapshot'
```

This method is typically used to migrate data from one cluster to another, so we recommend setting the [priority](#snapshot-priority) to "snapshot" for that use-case.

### Recover during start-up

<aside role="alert">This method cannot be used in a multi-node deployment and cannot be used in Qdrant Cloud.</aside>

If you have a single-node deployment, you can recover any collection at start-up and it will be immediately available. Restoring snapshots is done through the Qdrant CLI at start-up time via the `--snapshot` argument, which accepts a list of pairs such as `<snapshot_file_path>:<target_collection_name>`.

For example:

```bash
./qdrant --snapshot /snapshots/test-collection-archive.snapshot:test-collection --snapshot /snapshots/test-collection-archive.snapshot:test-copy-collection
```

The target collection **must** be absent, otherwise the program will exit with an error.

If you wish instead to overwrite an existing collection, use the `--force_snapshot` flag with caution.

### Snapshot priority

When recovering a snapshot to a non-empty node, there may be conflicts between the snapshot data and the existing data. The "priority" setting controls how Qdrant handles these conflicts. The priority setting is important because different priorities can give very different end results. The default priority may not be best for all situations.

The available snapshot recovery priorities are:

- `replica`: _(default)_ prefer existing data over the snapshot.
- `snapshot`: prefer snapshot data over existing data.
- `no_sync`: restore snapshot without any additional synchronization.

To recover a new collection from a snapshot, you need to set the priority to `snapshot`.
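The effect of each priority can be sketched as a toy merge function. This is purely illustrative — Qdrant's actual recovery operates on shard data, not on a simple list of points:

```python
def recover(existing_points: list, snapshot_points: list, priority: str = "replica") -> list:
    """Toy illustration of snapshot recovery priorities (not the real algorithm)."""
    if priority == "snapshot":
        # Snapshot data wins over whatever the node already holds.
        return snapshot_points
    if priority == "no_sync":
        # Snapshot is restored as-is, with no further synchronization afterwards.
        return snapshot_points
    # "replica" (default): the existing data on the cluster is preferred.
    return existing_points

# Recovering into an empty cluster with the default priority keeps it empty:
print(recover([], [{"id": 1}]))                       # []
print(recover([], [{"id": 1}], priority="snapshot"))  # [{'id': 1}]
```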
With `snapshot` priority, all data from the snapshot will be recovered onto the cluster. With `replica` priority _(default)_, you'd end up with an empty collection because the collection on the cluster did not contain any points and that source was preferred.

`no_sync` is for specialized use cases and is not commonly used. It allows managing shards and transferring shards between clusters manually without any additional synchronization. Using it incorrectly will leave your cluster in a broken state.

To recover from a URL, you specify an additional parameter in the request body:

```http
PUT /collections/{collection_name}/snapshots/recover
{
  "location": "http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot",
  "priority": "snapshot"
}
```

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("qdrant-node-2", port=6333)

client.recover_snapshot(
    "{collection_name}",
    "http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot",
    priority=models.SnapshotPriority.SNAPSHOT,
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.recoverSnapshot("{collection_name}", {
  location:
    "http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot",
  priority: "snapshot",
});
```

```bash
curl -X POST 'http://qdrant-node-1:6333/collections/{collection_name}/snapshots/upload?priority=snapshot' \
    -H 'api-key: ********' \
    -H 'Content-Type:multipart/form-data' \
    -F 'snapshot=@/path/to/snapshot-2022-10-10.snapshot'
```

## Snapshots for the whole storage

*Available as of v0.8.5*

Sometimes it might be handy to create a snapshot not just for a single collection, but for the whole storage, including collection aliases. Qdrant provides a dedicated API for that as well. It is similar to collection-level snapshots, but does not require `collection_name`.
<aside role="status">Whole storage snapshots can be created and downloaded from Qdrant Cloud, but you cannot restore a Qdrant Cloud cluster from a whole storage snapshot since that requires use of the Qdrant CLI. You can use <a href="/documentation/cloud/backups/">Backups</a> instead.</aside> ### Create full storage snapshot ```http POST /snapshots ``` ```python from qdrant_client import QdrantClient client = QdrantClient("localhost", port=6333) client.create_full_snapshot() ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createFullSnapshot(); ``` ```rust use qdrant_client::client::QdrantClient; let client = QdrantClient::from_url("http://localhost:6334").build()?; client.create_full_snapshot().await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client.createFullSnapshotAsync().get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.CreateFullSnapshotAsync(); ``` ### Delete full storage snapshot *Available as of v1.0.0* ```http DELETE /snapshots/{snapshot_name} ``` ```python from qdrant_client import QdrantClient client = QdrantClient("localhost", port=6333) client.delete_full_snapshot(snapshot_name="{snapshot_name}") ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.deleteFullSnapshot("{snapshot_name}"); ``` ```rust use qdrant_client::client::QdrantClient; let client = QdrantClient::from_url("http://localhost:6334").build()?; client.delete_full_snapshot("{snapshot_name}").await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); 
client.deleteFullSnapshotAsync("{snapshot_name}").get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.DeleteFullSnapshotAsync("{snapshot_name}"); ``` ### List full storage snapshots ```http GET /snapshots ``` ```python from qdrant_client import QdrantClient client = QdrantClient("localhost", port=6333) client.list_full_snapshots() ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.listFullSnapshots(); ``` ```rust use qdrant_client::client::QdrantClient; let client = QdrantClient::from_url("http://localhost:6334").build()?; client.list_full_snapshots().await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client.listFullSnapshotAsync().get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.ListFullSnapshotsAsync(); ``` ### Download full storage snapshot <aside role="status">Only available through the REST API for the time being.</aside> ```http GET /snapshots/{snapshot_name} ``` ## Restore full storage snapshot Restoring snapshots can only be done through the Qdrant CLI at startup time. For example: ```bash ./qdrant --storage-snapshot /snapshots/full-snapshot-2022-07-18-11-20-51.snapshot ```
--- title: Filtering weight: 60 aliases: - ../filtering --- # Filtering With Qdrant, you can set conditions when searching or retrieving points. For example, you can impose conditions on both the [payload](../payload) and the `id` of the point. Setting additional conditions is important when it is impossible to express all the features of the object in the embedding. Examples include a variety of business requirements: stock availability, user location, or desired price range. ## Filtering clauses Qdrant allows you to combine conditions in clauses. Clauses are different logical operations, such as `OR`, `AND`, and `NOT`. Clauses can be recursively nested into each other so that you can reproduce an arbitrary boolean expression. Let's take a look at the clauses implemented in Qdrant. Suppose we have a set of points with the following payload: ```json [ { "id": 1, "city": "London", "color": "green" }, { "id": 2, "city": "London", "color": "red" }, { "id": 3, "city": "London", "color": "blue" }, { "id": 4, "city": "Berlin", "color": "red" }, { "id": 5, "city": "Moscow", "color": "green" }, { "id": 6, "city": "Moscow", "color": "blue" } ] ``` ### Must Example: ```http POST /collections/{collection_name}/points/scroll { "filter": { "must": [ { "key": "city", "match": { "value": "London" } }, { "key": "color", "match": { "value": "red" } } ] } ... 
}
```

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient(host="localhost", port=6333)

client.scroll(
    collection_name="{collection_name}",
    scroll_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="city",
                match=models.MatchValue(value="London"),
            ),
            models.FieldCondition(
                key="color",
                match=models.MatchValue(value="red"),
            ),
        ]
    ),
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.scroll("{collection_name}", {
  filter: {
    must: [
      {
        key: "city",
        match: { value: "London" },
      },
      {
        key: "color",
        match: { value: "red" },
      },
    ],
  },
});
```

```rust
use qdrant_client::{
    client::QdrantClient,
    qdrant::{Condition, Filter, ScrollPoints},
};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .scroll(&ScrollPoints {
        collection_name: "{collection_name}".to_string(),
        filter: Some(Filter::must([
            Condition::matches("city", "London".to_string()),
            Condition::matches("color", "red".to_string()),
        ])),
        ..Default::default()
    })
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.ConditionFactory.matchKeyword;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .scrollAsync(
        ScrollPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .setFilter(
                Filter.newBuilder()
                    .addAllMust(
                        List.of(matchKeyword("city", "London"), matchKeyword("color", "red")))
                    .build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;

var client = new QdrantClient("localhost", 6334);

// & operator combines two conditions in an AND conjunction(must)
await client.ScrollAsync(
    collectionName: "{collection_name}",
    filter: MatchKeyword("city",
"London") & MatchKeyword("color", "red")
);
```

Filtered points would be:

```json
[{ "id": 2, "city": "London", "color": "red" }]
```

When using `must`, the clause becomes `true` only if every condition listed inside `must` is satisfied. In this sense, `must` is equivalent to the operator `AND`.

### Should

Example:

```http
POST /collections/{collection_name}/points/scroll
{
  "filter": {
    "should": [
      { "key": "city", "match": { "value": "London" } },
      { "key": "color", "match": { "value": "red" } }
    ]
  }
}
```

```python
client.scroll(
    collection_name="{collection_name}",
    scroll_filter=models.Filter(
        should=[
            models.FieldCondition(
                key="city",
                match=models.MatchValue(value="London"),
            ),
            models.FieldCondition(
                key="color",
                match=models.MatchValue(value="red"),
            ),
        ]
    ),
)
```

```typescript
client.scroll("{collection_name}", {
  filter: {
    should: [
      {
        key: "city",
        match: { value: "London" },
      },
      {
        key: "color",
        match: { value: "red" },
      },
    ],
  },
});
```

```rust
use qdrant_client::qdrant::{Condition, Filter, ScrollPoints};

client
    .scroll(&ScrollPoints {
        collection_name: "{collection_name}".to_string(),
        filter: Some(Filter::should([
            Condition::matches("city", "London".to_string()),
            Condition::matches("color", "red".to_string()),
        ])),
        ..Default::default()
    })
    .await?;
```

```java
import static io.qdrant.client.ConditionFactory.matchKeyword;

import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;
import java.util.List;

client
    .scrollAsync(
        ScrollPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .setFilter(
                Filter.newBuilder()
                    .addAllShould(
                        List.of(matchKeyword("city", "London"), matchKeyword("color", "red")))
                    .build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;

var client = new QdrantClient("localhost", 6334);

// | operator combines two conditions in an OR disjunction(should)
await client.ScrollAsync(
    collectionName: "{collection_name}",
    filter: MatchKeyword("city", "London") |
MatchKeyword("color", "red")
);
```

Filtered points would be:

```json
[
  { "id": 1, "city": "London", "color": "green" },
  { "id": 2, "city": "London", "color": "red" },
  { "id": 3, "city": "London", "color": "blue" },
  { "id": 4, "city": "Berlin", "color": "red" }
]
```

When using `should`, the clause becomes `true` if at least one condition listed inside `should` is satisfied. In this sense, `should` is equivalent to the operator `OR`.

### Must Not

Example:

```http
POST /collections/{collection_name}/points/scroll
{
  "filter": {
    "must_not": [
      { "key": "city", "match": { "value": "London" } },
      { "key": "color", "match": { "value": "red" } }
    ]
  }
}
```

```python
client.scroll(
    collection_name="{collection_name}",
    scroll_filter=models.Filter(
        must_not=[
            models.FieldCondition(key="city", match=models.MatchValue(value="London")),
            models.FieldCondition(key="color", match=models.MatchValue(value="red")),
        ]
    ),
)
```

```typescript
client.scroll("{collection_name}", {
  filter: {
    must_not: [
      {
        key: "city",
        match: { value: "London" },
      },
      {
        key: "color",
        match: { value: "red" },
      },
    ],
  },
});
```

```rust
use qdrant_client::qdrant::{Condition, Filter, ScrollPoints};

client
    .scroll(&ScrollPoints {
        collection_name: "{collection_name}".to_string(),
        filter: Some(Filter::must_not([
            Condition::matches("city", "London".to_string()),
            Condition::matches("color", "red".to_string()),
        ])),
        ..Default::default()
    })
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.ConditionFactory.matchKeyword;

import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;

client
    .scrollAsync(
        ScrollPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .setFilter(
                Filter.newBuilder()
                    .addAllMustNot(
                        List.of(matchKeyword("city", "London"), matchKeyword("color", "red")))
                    .build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;

var client = new QdrantClient("localhost", 6334);

// The !
operator negates the condition(must not)
await client.ScrollAsync(
    collectionName: "{collection_name}",
    filter: !(MatchKeyword("city", "London") & MatchKeyword("color", "red"))
);
```

Filtered points would be:

```json
[
  { "id": 5, "city": "Moscow", "color": "green" },
  { "id": 6, "city": "Moscow", "color": "blue" }
]
```

When using `must_not`, the clause becomes `true` if none of the conditions listed inside `must_not` is satisfied. In this sense, `must_not` is equivalent to the expression `(NOT A) AND (NOT B) AND (NOT C)`.

### Clauses combination

It is also possible to use several clauses simultaneously:

```http
POST /collections/{collection_name}/points/scroll
{
  "filter": {
    "must": [
      { "key": "city", "match": { "value": "London" } }
    ],
    "must_not": [
      { "key": "color", "match": { "value": "red" } }
    ]
  }
}
```

```python
client.scroll(
    collection_name="{collection_name}",
    scroll_filter=models.Filter(
        must=[
            models.FieldCondition(key="city", match=models.MatchValue(value="London")),
        ],
        must_not=[
            models.FieldCondition(key="color", match=models.MatchValue(value="red")),
        ],
    ),
)
```

```typescript
client.scroll("{collection_name}", {
  filter: {
    must: [
      {
        key: "city",
        match: { value: "London" },
      },
    ],
    must_not: [
      {
        key: "color",
        match: { value: "red" },
      },
    ],
  },
});
```

```rust
use qdrant_client::qdrant::{Condition, Filter, ScrollPoints};

client
    .scroll(&ScrollPoints {
        collection_name: "{collection_name}".to_string(),
        filter: Some(Filter {
            must: vec![Condition::matches("city", "London".to_string())],
            must_not: vec![Condition::matches("color", "red".to_string())],
            ..Default::default()
        }),
        ..Default::default()
    })
    .await?;
```

```java
import static io.qdrant.client.ConditionFactory.matchKeyword;

import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;

client
    .scrollAsync(
        ScrollPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .setFilter(
                Filter.newBuilder()
                    .addMust(matchKeyword("city", "London"))
                    .addMustNot(matchKeyword("color",
"red")) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.ScrollAsync( collectionName: "{collection_name}", filter: MatchKeyword("city", "London") & !MatchKeyword("color", "red") ); ``` Filtered points would be: ```json [ { "id": 1, "city": "London", "color": "green" }, { "id": 3, "city": "London", "color": "blue" } ] ``` In this case, the conditions are combined by `AND`. Also, the conditions could be recursively nested. Example: ```http POST /collections/{collection_name}/points/scroll { "filter": { "must_not": [ { "must": [ { "key": "city", "match": { "value": "London" } }, { "key": "color", "match": { "value": "red" } } ] } ] } } ``` ```python client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( must_not=[ models.Filter( must=[ models.FieldCondition( key="city", match=models.MatchValue(value="London") ), models.FieldCondition( key="color", match=models.MatchValue(value="red") ), ], ), ], ), ) ``` ```typescript client.scroll("{collection_name}", { filter: { must_not: [ { must: [ { key: "city", match: { value: "London" }, }, { key: "color", match: { value: "red" }, }, ], }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPoints}; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter::must_not([Filter::must([ Condition::matches("city", "London".to_string()), Condition::matches("color", "red".to_string()), ]) .into()])), ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.filter; import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addMustNot( filter( 
Filter.newBuilder() .addAllMust( List.of( matchKeyword("city", "London"), matchKeyword("color", "red"))) .build())) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.ScrollAsync( collectionName: "{collection_name}", filter: new Filter { MustNot = { MatchKeyword("city", "London") & MatchKeyword("color", "red") } } ); ``` Filtered points would be: ```json [ { "id": 1, "city": "London", "color": "green" }, { "id": 3, "city": "London", "color": "blue" }, { "id": 4, "city": "Berlin", "color": "red" }, { "id": 5, "city": "Moscow", "color": "green" }, { "id": 6, "city": "Moscow", "color": "blue" } ] ``` ## Filtering conditions Different types of values in payload correspond to different kinds of queries that we can apply to them. Let's look at the existing condition variants and what types of data they apply to. ### Match ```json { "key": "color", "match": { "value": "red" } } ``` ```python models.FieldCondition( key="color", match=models.MatchValue(value="red"), ) ``` ```typescript { key: 'color', match: {value: 'red'} } ``` ```rust Condition::matches("color", "red".to_string()) ``` ```java matchKeyword("color", "red"); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; MatchKeyword("color", "red"); ``` For the other types, the match condition will look exactly the same, except for the type used: ```json { "key": "count", "match": { "value": 0 } } ``` ```python models.FieldCondition( key="count", match=models.MatchValue(value=0), ) ``` ```typescript { key: 'count', match: {value: 0} } ``` ```rust Condition::matches("count", 0) ``` ```java import static io.qdrant.client.ConditionFactory.match; match("count", 0); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; Match("count", 0); ``` The simplest kind of condition is one that checks if the stored value equals the given one. 
If several values are stored, at least one of them should match the condition. You can apply it to [keyword](../payload/#keyword), [integer](../payload/#integer) and [bool](../payload/#bool) payloads.

### Match Any

*Available as of v1.1.0*

In case you want to check if the stored value is one of multiple values, you can use the Match Any condition. Match Any works as a logical OR for the given values. It can also be described as an `IN` operator.

You can apply it to [keyword](../payload/#keyword) and [integer](../payload/#integer) payloads.

Example:

```json
{
  "key": "color",
  "match": {
    "any": ["black", "yellow"]
  }
}
```

```python
models.FieldCondition(
    key="color",
    match=models.MatchAny(any=["black", "yellow"]),
)
```

```typescript
{
  key: 'color',
  match: {any: ['black', 'yellow']}
}
```

```rust
Condition::matches("color", vec!["black".to_string(), "yellow".to_string()])
```

```java
import static io.qdrant.client.ConditionFactory.matchKeywords;

matchKeywords("color", List.of("black", "yellow"));
```

```csharp
using static Qdrant.Client.Grpc.Conditions;

Match("color", ["black", "yellow"]);
```

In this example, the condition will be satisfied if the stored value is either `black` or `yellow`.

If the stored value is an array, it should have at least one value matching any of the given values. E.g. if the stored value is `["black", "green"]`, the condition will be satisfied, because `"black"` is in `["black", "yellow"]`.

### Match Except

*Available as of v1.2.0*

In case you want to check if the stored value is not one of multiple values, you can use the Match Except condition. Match Except works as a logical NOR for the given values. It can also be described as a `NOT IN` operator.

You can apply it to [keyword](../payload/#keyword) and [integer](../payload/#integer) payloads.
Example:

```json
{
  "key": "color",
  "match": {
    "except": ["black", "yellow"]
  }
}
```

```python
models.FieldCondition(
    key="color",
    match=models.MatchExcept(**{"except": ["black", "yellow"]}),
)
```

```typescript
{
  key: 'color',
  match: { except: ['black', 'yellow'] }
}
```

```rust
Condition::matches(
    "color",
    !MatchValue::from(vec!["black".to_string(), "yellow".to_string()]),
)
```

```java
import static io.qdrant.client.ConditionFactory.matchExceptKeywords;

matchExceptKeywords("color", List.of("black", "yellow"));
```

```csharp
using static Qdrant.Client.Grpc.Conditions;

MatchExcept("color", ["black", "yellow"]);
```

In this example, the condition will be satisfied if the stored value is neither `black` nor `yellow`.

If the stored value is an array, it should have at least one value not matching any of the given values. E.g. if the stored value is `["black", "green"]`, the condition will be satisfied, because `"green"` matches neither `"black"` nor `"yellow"`.

### Nested key

*Available as of v1.1.0*

Since payloads are arbitrary JSON objects, it is likely that you will need to filter on a nested field. For convenience, we use a syntax similar to what can be found in the [Jq](https://stedolan.github.io/jq/manual/#Basicfilters) project.

Suppose we have a set of points with the following payload:

```json
[
  {
    "id": 1,
    "country": {
      "name": "Germany",
      "cities": [
        {
          "name": "Berlin",
          "population": 3.7,
          "sightseeing": ["Brandenburg Gate", "Reichstag"]
        },
        {
          "name": "Munich",
          "population": 1.5,
          "sightseeing": ["Marienplatz", "Olympiapark"]
        }
      ]
    }
  },
  {
    "id": 2,
    "country": {
      "name": "Japan",
      "cities": [
        {
          "name": "Tokyo",
          "population": 9.3,
          "sightseeing": ["Tokyo Tower", "Tokyo Skytree"]
        },
        {
          "name": "Osaka",
          "population": 2.7,
          "sightseeing": ["Osaka Castle", "Universal Studios Japan"]
        }
      ]
    }
  }
]
```

You can search on a nested field using a dot notation.
```http POST /collections/{collection_name}/points/scroll { "filter": { "should": [ { "key": "country.name", "match": { "value": "Germany" } } ] } } ``` ```python client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( should=[ models.FieldCondition( key="country.name", match=models.MatchValue(value="Germany") ), ], ), ) ``` ```typescript client.scroll("{collection_name}", { filter: { should: [ { key: "country.name", match: { value: "Germany" }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPoints}; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter::should([Condition::matches( "country.name", "Germany".to_string(), )])), ..Default::default() }) .await?; ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addShould(matchKeyword("country.name", "Germany")) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.ScrollAsync(collectionName: "{collection_name}", filter: MatchKeyword("country.name", "Germany")); ``` You can also search through arrays by projecting inner values using the `[]` syntax. 
```http
POST /collections/{collection_name}/points/scroll

{
  "filter": {
    "should": [
      {
        "key": "country.cities[].population",
        "range": {
          "gte": 9.0
        }
      }
    ]
  }
}
```

```python
client.scroll(
    collection_name="{collection_name}",
    scroll_filter=models.Filter(
        should=[
            models.FieldCondition(
                key="country.cities[].population",
                range=models.Range(
                    gt=None,
                    gte=9.0,
                    lt=None,
                    lte=None,
                ),
            ),
        ],
    ),
)
```

```typescript
client.scroll("{collection_name}", {
  filter: {
    should: [
      {
        key: "country.cities[].population",
        range: {
          gt: null,
          gte: 9.0,
          lt: null,
          lte: null,
        },
      },
    ],
  },
});
```

```rust
use qdrant_client::qdrant::{Condition, Filter, Range, ScrollPoints};

client
    .scroll(&ScrollPoints {
        collection_name: "{collection_name}".to_string(),
        filter: Some(Filter::should([Condition::range(
            "country.cities[].population",
            Range {
                gte: Some(9.0),
                ..Default::default()
            },
        )])),
        ..Default::default()
    })
    .await?;
```

```java
import static io.qdrant.client.ConditionFactory.range;

import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.Range;
import io.qdrant.client.grpc.Points.ScrollPoints;

client
    .scrollAsync(
        ScrollPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .setFilter(
                Filter.newBuilder()
                    .addShould(
                        range(
                            "country.cities[].population",
                            Range.newBuilder().setGte(9.0).build()))
                    .build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;

var client = new QdrantClient("localhost", 6334);

await client.ScrollAsync(
    collectionName: "{collection_name}",
    filter: Range("country.cities[].population", new Qdrant.Client.Grpc.Range { Gte = 9.0 })
);
```

This query would only output the point with id 2, as only Japan has a city with a population greater than 9.0.

And the leaf nested field can also be an array.
```http
POST /collections/{collection_name}/points/scroll

{
  "filter": {
    "should": [
      {
        "key": "country.cities[].sightseeing",
        "match": {
          "value": "Osaka Castle"
        }
      }
    ]
  }
}
```

```python
client.scroll(
    collection_name="{collection_name}",
    scroll_filter=models.Filter(
        should=[
            models.FieldCondition(
                key="country.cities[].sightseeing",
                match=models.MatchValue(value="Osaka Castle"),
            ),
        ],
    ),
)
```

```typescript
client.scroll("{collection_name}", {
  filter: {
    should: [
      {
        key: "country.cities[].sightseeing",
        match: { value: "Osaka Castle" },
      },
    ],
  },
});
```

```rust
use qdrant_client::qdrant::{Condition, Filter, ScrollPoints};

client
    .scroll(&ScrollPoints {
        collection_name: "{collection_name}".to_string(),
        filter: Some(Filter::should([Condition::matches(
            "country.cities[].sightseeing",
            "Osaka Castle".to_string(),
        )])),
        ..Default::default()
    })
    .await?;
```

```java
import static io.qdrant.client.ConditionFactory.matchKeyword;

import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;

client
    .scrollAsync(
        ScrollPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .setFilter(
                Filter.newBuilder()
                    .addShould(matchKeyword("country.cities[].sightseeing", "Osaka Castle"))
                    .build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;

var client = new QdrantClient("localhost", 6334);

await client.ScrollAsync(
    collectionName: "{collection_name}",
    filter: MatchKeyword("country.cities[].sightseeing", "Osaka Castle")
);
```

This query would only output the point with id 2, as only Japan has a city with "Osaka Castle" among its sightseeing.

### Nested object filter

*Available as of v1.2.0*

By default, conditions take into account the entire payload of a point.
For instance, given two points with the following payload: ```json [ { "id": 1, "dinosaur": "t-rex", "diet": [ { "food": "leaves", "likes": false}, { "food": "meat", "likes": true} ] }, { "id": 2, "dinosaur": "diplodocus", "diet": [ { "food": "leaves", "likes": true}, { "food": "meat", "likes": false} ] } ] ``` The following query would match both points: ```http POST /collections/{collection_name}/points/scroll { "filter": { "must": [ { "key": "diet[].food", "match": { "value": "meat" } }, { "key": "diet[].likes", "match": { "value": true } } ] } } ``` ```python client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( must=[ models.FieldCondition( key="diet[].food", match=models.MatchValue(value="meat") ), models.FieldCondition( key="diet[].likes", match=models.MatchValue(value=True) ), ], ), ) ``` ```typescript client.scroll("{collection_name}", { filter: { must: [ { key: "diet[].food", match: { value: "meat" }, }, { key: "diet[].likes", match: { value: true }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPoints}; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter::must([ Condition::matches("diet[].food", "meat".to_string()), Condition::matches("diet[].likes", true), ])), ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.match; import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addAllMust( List.of(matchKeyword("diet[].food", "meat"), match("diet[].likes", true))) .build()) .build()) .get(); 
```

```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;

var client = new QdrantClient("localhost", 6334);

await client.ScrollAsync(
    collectionName: "{collection_name}",
    filter: MatchKeyword("diet[].food", "meat") & Match("diet[].likes", true)
);
```

This happens because both points match the two conditions:

- the "t-rex" matches food=meat on `diet[1].food` and likes=true on `diet[1].likes`
- the "diplodocus" matches food=meat on `diet[1].food` and likes=true on `diet[0].likes`

To retrieve only the points that match the conditions on a per-element basis (the point with id 1 in this example), you need to use a nested object filter.

Nested object filters allow arrays of objects to be queried independently of each other.

This is achieved by using the `nested` condition type, formed by a payload key to focus on and a filter to apply.

The key should point to an array of objects and can be used with or without the bracket notation ("data" or "data[]").
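Before looking at the API, the per-element matching semantics can be sketched in plain Python. This is a simplified model for illustration only, not Qdrant's actual implementation:

```python
def nested_match(point, key, conditions):
    """A point matches when at least one element of the array under `key`
    satisfies *all* conditions at once (unlike plain `key[].field`
    conditions, which may be satisfied by different array elements)."""
    return any(all(cond(el) for cond in conditions) for el in point.get(key, []))

points = [
    {"id": 1, "dinosaur": "t-rex",
     "diet": [{"food": "leaves", "likes": False}, {"food": "meat", "likes": True}]},
    {"id": 2, "dinosaur": "diplodocus",
     "diet": [{"food": "leaves", "likes": True}, {"food": "meat", "likes": False}]},
]

conditions = [lambda el: el["food"] == "meat", lambda el: el["likes"] is True]
hits = [p["id"] for p in points if nested_match(p, "diet", conditions)]
print(hits)  # [1]: only the t-rex has a single diet entry matching both conditions
```

The key point is that both conditions must hold on the same array element, which is exactly what distinguishes the nested filter from two independent `diet[].…` conditions.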
```http POST /collections/{collection_name}/points/scroll { "filter": { "must": [{ "nested": { "key": "diet", "filter":{ "must": [ { "key": "food", "match": { "value": "meat" } }, { "key": "likes", "match": { "value": true } } ] } } }] } } ``` ```python client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( must=[ models.NestedCondition( nested=models.Nested( key="diet", filter=models.Filter( must=[ models.FieldCondition( key="food", match=models.MatchValue(value="meat") ), models.FieldCondition( key="likes", match=models.MatchValue(value=True) ), ] ), ) ) ], ), ) ``` ```typescript client.scroll("{collection_name}", { filter: { must: [ { nested: { key: "diet", filter: { must: [ { key: "food", match: { value: "meat" }, }, { key: "likes", match: { value: true }, }, ], }, }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, NestedCondition, ScrollPoints}; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter::must([NestedCondition { key: "diet".to_string(), filter: Some(Filter::must([ Condition::matches("food", "meat".to_string()), Condition::matches("likes", true), ])), } .into()])), ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.match; import static io.qdrant.client.ConditionFactory.matchKeyword; import static io.qdrant.client.ConditionFactory.nested; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addMust( nested( "diet", Filter.newBuilder() .addAllMust( List.of( matchKeyword("food", "meat"), match("likes", true))) .build())) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.ScrollAsync( collectionName: "{collection_name}", 
filter: Nested("diet", MatchKeyword("food", "meat") & Match("likes", true))
);
```

The matching logic is modified to be applied at the level of an array element within the payload. Nested filters work in the same way as if the nested filter was applied to a single element of the array at a time. The parent document is considered to match the condition if at least one element of the array matches the nested filter.

**Limitations**

The `has_id` condition is not supported within the nested object filter. If you need it, place it in an adjacent `must` clause.

```http
POST /collections/{collection_name}/points/scroll

{
  "filter": {
    "must": [
      {
        "nested": {
          "key": "diet",
          "filter": {
            "must": [
              {
                "key": "food",
                "match": { "value": "meat" }
              },
              {
                "key": "likes",
                "match": { "value": true }
              }
            ]
          }
        }
      },
      { "has_id": [1] }
    ]
  }
}
```

```python
client.scroll(
    collection_name="{collection_name}",
    scroll_filter=models.Filter(
        must=[
            models.NestedCondition(
                nested=models.Nested(
                    key="diet",
                    filter=models.Filter(
                        must=[
                            models.FieldCondition(
                                key="food", match=models.MatchValue(value="meat")
                            ),
                            models.FieldCondition(
                                key="likes", match=models.MatchValue(value=True)
                            ),
                        ]
                    ),
                )
            ),
            models.HasIdCondition(has_id=[1]),
        ],
    ),
)
```

```typescript
client.scroll("{collection_name}", {
  filter: {
    must: [
      {
        nested: {
          key: "diet",
          filter: {
            must: [
              {
                key: "food",
                match: { value: "meat" },
              },
              {
                key: "likes",
                match: { value: true },
              },
            ],
          },
        },
      },
      {
        has_id: [1],
      },
    ],
  },
});
```

```rust
use qdrant_client::qdrant::{Condition, Filter, NestedCondition, ScrollPoints};

client
    .scroll(&ScrollPoints {
        collection_name: "{collection_name}".to_string(),
        filter: Some(Filter::must([
            NestedCondition {
                key: "diet".to_string(),
                filter: Some(Filter::must([
                    Condition::matches("food", "meat".to_string()),
                    Condition::matches("likes", true),
                ])),
            }
            .into(),
            Condition::has_id([1]),
        ])),
        ..Default::default()
    })
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.ConditionFactory.hasId;
import static
io.qdrant.client.ConditionFactory.match;
import static io.qdrant.client.ConditionFactory.matchKeyword;
import static io.qdrant.client.ConditionFactory.nested;
import static io.qdrant.client.PointIdFactory.id;

import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ScrollPoints;

client
    .scrollAsync(
        ScrollPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .setFilter(
                Filter.newBuilder()
                    .addMust(
                        nested(
                            "diet",
                            Filter.newBuilder()
                                .addAllMust(
                                    List.of(
                                        matchKeyword("food", "meat"),
                                        match("likes", true)))
                                .build()))
                    .addMust(hasId(id(1)))
                    .build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using static Qdrant.Client.Grpc.Conditions;

var client = new QdrantClient("localhost", 6334);

await client.ScrollAsync(
    collectionName: "{collection_name}",
    filter: Nested("diet", MatchKeyword("food", "meat") & Match("likes", true)) & HasId(1)
);
```

### Full Text Match

*Available as of v0.10.0*

A special case of the `match` condition is the `text` match condition. It allows you to search for a specific substring, token or phrase within the text field.

Which exact texts match the condition depends on the full-text index configuration. The configuration is defined during index creation and is described in [full-text index](../indexing/#full-text-index).

If there is no full-text index for the field, the condition works as an exact substring match.

```json
{
  "key": "description",
  "match": {
    "text": "good cheap"
  }
}
```

```python
models.FieldCondition(
    key="description",
    match=models.MatchText(text="good cheap"),
)
```

```typescript
{
  key: 'description',
  match: { text: 'good cheap' }
}
```

```rust
// If the match string contains a white-space, full text match is performed.
// Otherwise a keyword match is performed.
Condition::matches("description", "good cheap".to_string()) ``` ```java import static io.qdrant.client.ConditionFactory.matchText; matchText("description", "good cheap"); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; MatchText("description", "good cheap"); ``` If the query has several words, then the condition will be satisfied only if all of them are present in the text. ### Range ```json { "key": "price", "range": { "gt": null, "gte": 100.0, "lt": null, "lte": 450.0 } } ``` ```python models.FieldCondition( key="price", range=models.Range( gt=None, gte=100.0, lt=None, lte=450.0, ), ) ``` ```typescript { key: 'price', range: { gt: null, gte: 100.0, lt: null, lte: 450.0 } } ``` ```rust Condition::range( "price", Range { gt: None, gte: Some(100.0), lt: None, lte: Some(450.0), }, ) ``` ```java import static io.qdrant.client.ConditionFactory.range; import io.qdrant.client.grpc.Points.Range; range("price", Range.newBuilder().setGte(100.0).setLte(450).build()); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; Range("price", new Qdrant.Client.Grpc.Range { Gte = 100.0, Lte = 450 }); ``` The `range` condition sets the range of possible values for stored payload values. If several values are stored, at least one of them should match the condition. Comparisons that can be used: - `gt` - greater than - `gte` - greater than or equal - `lt` - less than - `lte` - less than or equal Can be applied to [float](../payload/#float) and [integer](../payload/#integer) payloads. 
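To make the bound semantics concrete, here is a plain-Python sketch of how a `range` condition evaluates, including the array rule above. This is a simplified model for illustration, independent of Qdrant itself:

```python
def in_range(value, gt=None, gte=None, lt=None, lte=None):
    """Every bound that is set must hold; unset bounds are ignored."""
    if gt is not None and not value > gt:
        return False
    if gte is not None and not value >= gte:
        return False
    if lt is not None and not value < lt:
        return False
    if lte is not None and not value <= lte:
        return False
    return True

def matches_range(stored, **bounds):
    """If several values are stored, at least one must fall in range."""
    values = stored if isinstance(stored, list) else [stored]
    return any(in_range(v, **bounds) for v in values)

print(matches_range(100.0, gte=100.0, lte=450.0))          # True: gte/lte are inclusive
print(matches_range([50.0, 500.0], gte=100.0, lte=450.0))  # False: no value in range
print(matches_range([50.0, 300.0], gte=100.0, lte=450.0))  # True: 300.0 matches
```

Note how `gte`/`lte` are inclusive while `gt`/`lt` are exclusive, and how a single in-range element is enough for an array payload.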
### Geo

#### Geo Bounding Box

```json
{
  "key": "location",
  "geo_bounding_box": {
    "bottom_right": {
      "lon": 13.455868,
      "lat": 52.495862
    },
    "top_left": {
      "lon": 13.403683,
      "lat": 52.520711
    }
  }
}
```

```python
models.FieldCondition(
    key="location",
    geo_bounding_box=models.GeoBoundingBox(
        bottom_right=models.GeoPoint(
            lon=13.455868,
            lat=52.495862,
        ),
        top_left=models.GeoPoint(
            lon=13.403683,
            lat=52.520711,
        ),
    ),
)
```

```typescript
{
  key: 'location',
  geo_bounding_box: {
    bottom_right: {
      lon: 13.455868,
      lat: 52.495862
    },
    top_left: {
      lon: 13.403683,
      lat: 52.520711
    }
  }
}
```

```rust
Condition::geo_bounding_box(
    "location",
    GeoBoundingBox {
        bottom_right: Some(GeoPoint {
            lon: 13.455868,
            lat: 52.495862,
        }),
        top_left: Some(GeoPoint {
            lon: 13.403683,
            lat: 52.520711,
        }),
    },
)
```

```java
import static io.qdrant.client.ConditionFactory.geoBoundingBox;

geoBoundingBox("location", 52.520711, 13.403683, 52.495862, 13.455868);
```

```csharp
using static Qdrant.Client.Grpc.Conditions;

GeoBoundingBox("location", 52.520711, 13.403683, 52.495862, 13.455868);
```

It matches with `location`s inside a rectangle with the coordinates of the upper left corner in `top_left` and the coordinates of the lower right corner in `bottom_right`.
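The rectangle test itself is a pair of coordinate comparisons. Here is a plain-Python sketch of the idea, using the Berlin coordinates from the example above; it is a simplification that ignores bounding boxes crossing the antimeridian:

```python
def in_geo_bounding_box(location, top_left, bottom_right):
    """Latitude must lie between the corners' latitudes and
    longitude between the corners' longitudes."""
    return (
        bottom_right["lat"] <= location["lat"] <= top_left["lat"]
        and top_left["lon"] <= location["lon"] <= bottom_right["lon"]
    )

top_left = {"lon": 13.403683, "lat": 52.520711}
bottom_right = {"lon": 13.455868, "lat": 52.495862}

print(in_geo_bounding_box({"lon": 13.43, "lat": 52.51}, top_left, bottom_right))  # True
print(in_geo_bounding_box({"lon": 13.50, "lat": 52.51}, top_left, bottom_right))  # False
```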
#### Geo Radius

```json
{
  "key": "location",
  "geo_radius": {
    "center": {
      "lon": 13.403683,
      "lat": 52.520711
    },
    "radius": 1000.0
  }
}
```

```python
models.FieldCondition(
    key="location",
    geo_radius=models.GeoRadius(
        center=models.GeoPoint(
            lon=13.403683,
            lat=52.520711,
        ),
        radius=1000.0,
    ),
)
```

```typescript
{
  key: 'location',
  geo_radius: {
    center: {
      lon: 13.403683,
      lat: 52.520711
    },
    radius: 1000.0
  }
}
```

```rust
Condition::geo_radius(
    "location",
    GeoRadius {
        center: Some(GeoPoint {
            lon: 13.403683,
            lat: 52.520711,
        }),
        radius: 1000.0,
    },
)
```

```java
import static io.qdrant.client.ConditionFactory.geoRadius;

geoRadius("location", 52.520711, 13.403683, 1000.0f);
```

```csharp
using static Qdrant.Client.Grpc.Conditions;

GeoRadius("location", 52.520711, 13.403683, 1000.0f);
```

It matches with `location`s inside a circle with the `center` at the center and a radius of `radius` meters.

If several values are stored, at least one of them should match the condition. These conditions can only be applied to payloads that match the [geo-data format](../payload/#geo).

#### Geo Polygon

Geo Polygon search is useful when you want to find points inside an irregularly shaped area, for example a country boundary or a forest boundary. A polygon always has an exterior ring and may optionally include interior rings. A lake with an island would be an example of an interior ring. If you wanted to find points in the water but not on the island, you would make an interior ring for the island.

When defining a ring, you must pick either a clockwise or counterclockwise ordering for your points. The first and last point of the polygon must be the same.

Currently, we only support unprojected global coordinates (decimal degrees longitude and latitude) and we are datum agnostic.
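Conceptually, the exterior/interior ring logic can be sketched with even-odd ray casting in plain Python. This is a simplified illustration only (boundary points are treated approximately, and Qdrant's actual implementation may differ):

```python
def point_in_ring(lon, lat, ring):
    """Even-odd ray casting over a lon/lat ring whose first point is
    repeated at the end, as required for polygon definitions."""
    inside = False
    for (lon1, lat1), (lon2, lat2) in zip(ring, ring[1:]):
        # Count edges whose latitude span straddles the query latitude
        # and that lie to the right of the query point.
        if (lat1 > lat) != (lat2 > lat):
            cross_lon = (lon2 - lon1) * (lat - lat1) / (lat2 - lat1) + lon1
            if lon < cross_lon:
                inside = not inside
    return inside

def in_geo_polygon(lon, lat, exterior, interiors=()):
    """Match if inside the exterior ring and outside every interior ring."""
    return point_in_ring(lon, lat, exterior) and not any(
        point_in_ring(lon, lat, hole) for hole in interiors
    )

exterior = [(-70.0, -70.0), (60.0, -70.0), (60.0, 60.0), (-70.0, 60.0), (-70.0, -70.0)]
hole = [(-65.0, -65.0), (0.0, -65.0), (0.0, 0.0), (-65.0, 0.0), (-65.0, -65.0)]

print(in_geo_polygon(30.0, 30.0, exterior, [hole]))    # True: inside, not in the hole
print(in_geo_polygon(-30.0, -30.0, exterior, [hole]))  # False: inside the interior ring
```

The rings here mirror the exterior and interior of the example below, so a point "on the island" (inside the hole) is rejected even though it lies within the exterior ring.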
```json { "key": "location", "geo_polygon": { "exterior": { "points": [ { "lon": -70.0, "lat": -70.0 }, { "lon": 60.0, "lat": -70.0 }, { "lon": 60.0, "lat": 60.0 }, { "lon": -70.0, "lat": 60.0 }, { "lon": -70.0, "lat": -70.0 } ] }, "interiors": [ { "points": [ { "lon": -65.0, "lat": -65.0 }, { "lon": 0.0, "lat": -65.0 }, { "lon": 0.0, "lat": 0.0 }, { "lon": -65.0, "lat": 0.0 }, { "lon": -65.0, "lat": -65.0 } ] } ] } } ``` ```python models.FieldCondition( key="location", geo_polygon=models.GeoPolygon( exterior=models.GeoLineString( points=[ models.GeoPoint( lon=-70.0, lat=-70.0, ), models.GeoPoint( lon=60.0, lat=-70.0, ), models.GeoPoint( lon=60.0, lat=60.0, ), models.GeoPoint( lon=-70.0, lat=60.0, ), models.GeoPoint( lon=-70.0, lat=-70.0, ), ] ), interiors=[ models.GeoLineString( points=[ models.GeoPoint( lon=-65.0, lat=-65.0, ), models.GeoPoint( lon=0.0, lat=-65.0, ), models.GeoPoint( lon=0.0, lat=0.0, ), models.GeoPoint( lon=-65.0, lat=0.0, ), models.GeoPoint( lon=-65.0, lat=-65.0, ), ] ) ], ), ) ``` ```typescript { key: 'location', geo_polygon: { exterior: { points: [ { lon: -70.0, lat: -70.0 }, { lon: 60.0, lat: -70.0 }, { lon: 60.0, lat: 60.0 }, { lon: -70.0, lat: 60.0 }, { lon: -70.0, lat: -70.0 } ] }, interiors: { points: [ { lon: -65.0, lat: -65.0 }, { lon: 0.0, lat: -65.0 }, { lon: 0.0, lat: 0.0 }, { lon: -65.0, lat: 0.0 }, { lon: -65.0, lat: -65.0 } ] } } } ``` ```rust Condition::geo_polygon( "location", GeoPolygon { exterior: Some(GeoLineString { points: vec![ GeoPoint { lon: -70.0, lat: -70.0, }, GeoPoint { lon: 60.0, lat: -70.0, }, GeoPoint { lon: 60.0, lat: 60.0, }, GeoPoint { lon: -70.0, lat: 60.0, }, GeoPoint { lon: -70.0, lat: -70.0, }, ], }), interiors: vec![GeoLineString { points: vec![ GeoPoint { lon: -65.0, lat: -65.0, }, GeoPoint { lon: 0.0, lat: -65.0, }, GeoPoint { lon: 0.0, lat: 0.0 }, GeoPoint { lon: -65.0, lat: 0.0, }, GeoPoint { lon: -65.0, lat: -65.0, }, ], }], }, ) ``` ```java import static io.qdrant.client.ConditionFactory.geoPolygon; 
import io.qdrant.client.grpc.Points.GeoLineString; import io.qdrant.client.grpc.Points.GeoPoint; geoPolygon( "location", GeoLineString.newBuilder() .addAllPoints( List.of( GeoPoint.newBuilder().setLon(-70.0).setLat(-70.0).build(), GeoPoint.newBuilder().setLon(60.0).setLat(-70.0).build(), GeoPoint.newBuilder().setLon(60.0).setLat(60.0).build(), GeoPoint.newBuilder().setLon(-70.0).setLat(60.0).build(), GeoPoint.newBuilder().setLon(-70.0).setLat(-70.0).build())) .build(), List.of( GeoLineString.newBuilder() .addAllPoints( List.of( GeoPoint.newBuilder().setLon(-65.0).setLat(-65.0).build(), GeoPoint.newBuilder().setLon(0.0).setLat(-65.0).build(), GeoPoint.newBuilder().setLon(0.0).setLat(0.0).build(), GeoPoint.newBuilder().setLon(-65.0).setLat(0.0).build(), GeoPoint.newBuilder().setLon(-65.0).setLat(-65.0).build())) .build())); ``` ```csharp using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; GeoPolygon( field: "location", exterior: new GeoLineString { Points = { new GeoPoint { Lat = -70.0, Lon = -70.0 }, new GeoPoint { Lat = 60.0, Lon = -70.0 }, new GeoPoint { Lat = 60.0, Lon = 60.0 }, new GeoPoint { Lat = -70.0, Lon = 60.0 }, new GeoPoint { Lat = -70.0, Lon = -70.0 } } }, interiors: [ new() { Points = { new GeoPoint { Lat = -65.0, Lon = -65.0 }, new GeoPoint { Lat = 0.0, Lon = -65.0 }, new GeoPoint { Lat = 0.0, Lon = 0.0 }, new GeoPoint { Lat = -65.0, Lon = 0.0 }, new GeoPoint { Lat = -65.0, Lon = -65.0 } } } ] ); ``` A match is considered any point location inside or on the boundaries of the given polygon's exterior but not inside any interiors. If several location values are stored for a point, then any of them matching will include that point as a candidate in the resultset. These conditions can only be applied to payloads that match the [geo-data format](../payload/#geo). ### Values count In addition to the direct value comparison, it is also possible to filter by the amount of values. 
For example, given the data:

```json
[
  { "id": 1, "name": "product A", "comments": ["Very good!", "Excellent"] },
  { "id": 2, "name": "product B", "comments": ["meh", "expected more", "ok"] }
]
```

We can perform the search only among the items with more than two comments:

```json
{
  "key": "comments",
  "values_count": {
    "gt": 2
  }
}
```

```python
models.FieldCondition(
    key="comments",
    values_count=models.ValuesCount(gt=2),
)
```

```typescript
{
  key: 'comments',
  values_count: { gt: 2 }
}
```

```rust
Condition::values_count(
    "comments",
    ValuesCount {
        gt: Some(2),
        ..Default::default()
    },
)
```

```java
import static io.qdrant.client.ConditionFactory.valuesCount;

import io.qdrant.client.grpc.Points.ValuesCount;

valuesCount("comments", ValuesCount.newBuilder().setGt(2).build());
```

```csharp
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;

ValuesCount("comments", new ValuesCount { Gt = 2 });
```

The result would be:

```json
[{ "id": 2, "name": "product B", "comments": ["meh", "expected more", "ok"] }]
```

If the stored value is not an array, it is assumed that the number of values is 1.

### Is Empty

Sometimes it is also useful to filter out records that are missing some value. The `IsEmpty` condition may help you with that:

```json
{
  "is_empty": {
    "key": "reports"
  }
}
```

```python
models.IsEmptyCondition(
    is_empty=models.PayloadField(key="reports"),
)
```

```typescript
{
  is_empty: {
    key: "reports"
  }
}
```

```rust
Condition::is_empty("reports")
```

```java
import static io.qdrant.client.ConditionFactory.isEmpty;

isEmpty("reports");
```

```csharp
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;

IsEmpty("reports");
```

This condition will match all records where the field `reports` either does not exist, or has `null` or `[]` value.

<aside role="status">The <b>IsEmpty</b> condition is often useful together with the logical negation <b>must_not</b>.
In this case all non-empty values will be selected.</aside>

### Is Null

It is not possible to test for `NULL` values with the `match` condition. We have to use the `IsNull` condition instead:

```json
{
  "is_null": {
    "key": "reports"
  }
}
```

```python
models.IsNullCondition(
    is_null=models.PayloadField(key="reports"),
)
```

```typescript
{
  is_null: {
    key: "reports"
  }
}
```

```rust
Condition::is_null("reports")
```

```java
import static io.qdrant.client.ConditionFactory.isNull;

isNull("reports");
```

```csharp
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;

IsNull("reports");
```

This condition will match all records where the field `reports` exists and has a `NULL` value.

### Has id

This type of query is not related to payload, but can be very useful in some situations. For example, the user could mark some specific search results as irrelevant, or we want to search only among the specified points.

```http
POST /collections/{collection_name}/points/scroll

{
  "filter": {
    "must": [
      { "has_id": [1, 3, 5, 7, 9, 11] }
    ]
  }
} ``` ```python client.scroll( collection_name="{collection_name}", scroll_filter=models.Filter( must=[ models.HasIdCondition(has_id=[1, 3, 5, 7, 9, 11]), ], ), ) ``` ```typescript client.scroll("{collection_name}", { filter: { must: [ { has_id: [1, 3, 5, 7, 9, 11], }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPoints}; client .scroll(&ScrollPoints { collection_name: "{collection_name}".to_string(), filter: Some(Filter::must([Condition::has_id([1, 3, 5, 7, 9, 11])])), ..Default::default() }) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.hasId; import static io.qdrant.client.PointIdFactory.id; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder() .addMust(hasId(List.of(id(1), id(3), id(5), id(7), id(9), id(11)))) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.ScrollAsync(collectionName: "{collection_name}", filter: HasId([1, 3, 5, 7, 9, 11])); ``` Filtered points would be: ```json [ { "id": 1, "city": "London", "color": "green" }, { "id": 3, "city": "London", "color": "blue" }, { "id": 5, "city": "Moscow", "color": "green" } ] ```
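As a recap, the way `must`, `should`, and `must_not` clauses combine for each point can be sketched in plain Python. This is a simplified model of the boolean semantics, not Qdrant's actual query planner:

```python
def evaluate(point, must=(), should=(), must_not=()):
    """All `must` conditions hold, at least one `should` holds
    (when any are given), and no `must_not` condition holds."""
    return (
        all(cond(point) for cond in must)
        and (not should or any(cond(point) for cond in should))
        and not any(cond(point) for cond in must_not)
    )

points = [
    {"id": 1, "city": "London", "color": "green"},
    {"id": 2, "city": "London", "color": "red"},
    {"id": 4, "city": "Berlin", "color": "red"},
]

# A must_not clause over the combined condition [city=London AND color=red]:
def is_london_red(p):
    return p["city"] == "London" and p["color"] == "red"

kept = [p["id"] for p in points if evaluate(p, must_not=[is_london_red])]
print(kept)  # [1, 4]
```

Each condition described in this section (match, range, geo, values count, and so on) plays the role of one such predicate inside these clauses.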
documentation/concepts/filtering.md
---
title: Concepts
weight: 21
# If the index.md file is empty, the link to the section will be hidden from the sidebar
---

# Concepts

Think of these concepts as a glossary. Each of these concepts includes a link to detailed information, usually with examples. If you're new to AI, these concepts can help you learn more about AI and the Qdrant approach.

## Collections

[Collections](/documentation/concepts/collections/) define a named set of points that you can use for your search.

## Payload

A [Payload](/documentation/concepts/payload/) describes information that you can store with vectors.

## Points

[Points](/documentation/concepts/points/) are records consisting of a vector and an optional payload.

## Search

[Search](/documentation/concepts/search/) describes _similarity search_, which retrieves objects that are close to each other in vector space.

## Explore

[Explore](/documentation/concepts/explore/) includes several APIs for exploring data in your collections.

## Filtering

[Filtering](/documentation/concepts/filtering/) defines various database-style clauses, conditions, and more.

## Optimizer

[Optimizer](/documentation/concepts/optimizer/) describes options to rebuild database structures for faster search. They include a vacuum, a merge, and an indexing optimizer.

## Storage

[Storage](/documentation/concepts/storage/) describes the configuration of storage in segments, which include indexes and an ID mapper.

## Indexing

[Indexing](/documentation/concepts/indexing/) lists and describes available indexes. They include payload, vector, sparse vector, and a filterable index.

## Snapshots

[Snapshots](/documentation/concepts/snapshots/) describe the backup/restore process (and more) for each node at specific times.
documentation/concepts/_index.md
--- title: HowTos weight: 100 draft: true --- <!--- ## Implement search-as-you-type functionality --> <!--- ## Move data between clusters -->
documentation/tutorials/how-to.md
---
title: "Inference with Mighty"
short_description: "Mighty offers a speedy scalable embedding, a perfect fit for the speedy scalable Qdrant search. Let's combine them!"
description: "We combine Mighty and Qdrant to create a semantic search service in Rust with just a few lines of code."
weight: 17
author: Andre Bogus
author_link: https://llogiq.github.io
date: 2023-06-01T11:24:20+01:00
keywords:
  - vector search
  - embeddings
  - mighty
  - rust
  - semantic search
---

# Semantic Search with Mighty and Qdrant

Much like Qdrant, the [Mighty](https://max.io/) inference server is written in Rust and promises to offer low latency and high scalability. This brief demo combines Mighty and Qdrant into a simple semantic search service that is efficient, affordable and easy to set up. We will use [Rust](https://rust-lang.org) and our [qdrant\_client crate](https://docs.rs/qdrant_client) for this integration.

## Initial setup

For Mighty, start up a [docker container](https://hub.docker.com/layers/maxdotio/mighty-sentence-transformers/0.9.9/images/sha256-0d92a89fbdc2c211d927f193c2d0d34470ecd963e8179798d8d391a4053f6caf?context=explore) with an open port 5050. Opening that port in a browser shows the following:

```json
{
  "name": "sentence-transformers/all-MiniLM-L6-v2",
  "architectures": [
    "BertModel"
  ],
  "model_type": "bert",
  "max_position_embeddings": 512,
  "labels": null,
  "named_entities": null,
  "image_size": null,
  "source": "https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2"
}
```

Note that this uses the `MiniLM-L6-v2` model from Hugging Face. As per their website, the model "maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search". The distance measure to use is cosine similarity.

Verify that Mighty works by calling `curl https://<address>:5050/sentence-transformer?q=hello+mighty`.
This will give you a result like (formatted via `jq`): ```json { "outputs": [ [ -0.05019686743617058, 0.051746174693107605, 0.048117730766534805, ... (381 values skipped) ] ], "shape": [ 1, 384 ], "texts": [ "Hello mighty" ], "took": 77 } ``` For Qdrant, follow our [cloud documentation](../../cloud/cloud-quick-start/) to spin up a [free tier](https://cloud.qdrant.io/). Make sure to retrieve an API key. ## Implement model API For mighty, you will need a way to emit HTTP(S) requests. This version uses the [reqwest](https://docs.rs/reqwest) crate, so add the following to your `Cargo.toml`'s dependencies section: ```toml [dependencies] reqwest = { version = "0.11.18", default-features = false, features = ["json", "rustls-tls"] } ``` Mighty offers a variety of model APIs which will download and cache the model on first use. For semantic search, use the `sentence-transformer` API (as in the above `curl` command). The Rust code to make the call is: ```rust use anyhow::anyhow; use reqwest::Client; use serde::Deserialize; #[derive(Deserialize)] struct EmbeddingsResponse { pub outputs: Vec<Vec<f32>>, } pub async fn get_mighty_embedding( client: &Client, url: &str, text: &str ) -> anyhow::Result<Vec<f32>> { let response = client.get(url).query(&[("text", text)]).send().await?; if !response.status().is_success() { return Err(anyhow!( "Mighty API returned status code {}", response.status() )); } let embeddings: EmbeddingsResponse = response.json().await?; // take the first embedding; ignore multiple embeddings at the moment embeddings.outputs.into_iter().next().ok_or_else(|| anyhow!("mighty returned empty embedding")) } ``` Note that mighty can return multiple embeddings (if the input is too long to fit the model, it is automatically split). ## Create embeddings and run a query Use this code to create embeddings both for insertion and search. 
On the Qdrant side, take the embedding and run a query: ```rust use anyhow::anyhow; use qdrant_client::prelude::*; pub const SEARCH_LIMIT: u64 = 5; const COLLECTION_NAME: &str = "mighty"; pub async fn qdrant_search_embeddings( qdrant_client: &QdrantClient, vector: Vec<f32>, ) -> anyhow::Result<Vec<ScoredPoint>> { qdrant_client .search_points(&SearchPoints { collection_name: COLLECTION_NAME.to_string(), vector, limit: SEARCH_LIMIT, with_payload: Some(true.into()), ..Default::default() }) .await .map_err(|err| anyhow!("Failed to search Qdrant: {}", err)) } ``` You can convert the [`ScoredPoint`](https://docs.rs/qdrant-client/latest/qdrant_client/qdrant/struct.ScoredPoint.html)s to fit your desired output format.
documentation/tutorials/mighty.md
--- title: Bulk Upload Vectors weight: 13 --- # Bulk upload a large number of vectors Uploading a large-scale dataset fast might be a challenge, but Qdrant has a few tricks to help you with that. The first important detail about data uploading is that the bottleneck is usually located on the client side, not on the server side. This means that if you are uploading a large dataset, you should prefer a high-performance client library. We recommend using our [Rust client library](https://github.com/qdrant/rust-client) for this purpose, as it is the fastest client library available for Qdrant. If you are not using Rust, you might want to consider parallelizing your upload process. ## Disable indexing during upload In case you are doing an initial upload of a large dataset, you might want to disable indexing during upload. This avoids unnecessary indexing of vectors that would be overwritten by the next batch anyway. To disable indexing during upload, set `indexing_threshold` to `0`: ```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine" }, "optimizers_config": { "indexing_threshold": 0 } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), optimizers_config=models.OptimizersConfigDiff( indexing_threshold=0, ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", }, optimizers_config: { indexing_threshold: 0, }, }); ``` After upload is done, you can enable indexing by setting `indexing_threshold` to a desired value (default is 20000): ```http PATCH /collections/{collection_name} { "optimizers_config": { "indexing_threshold": 20000 } } ``` ```python from 
qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.update_collection( collection_name="{collection_name}", optimizer_config=models.OptimizersConfigDiff(indexing_threshold=20000), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.updateCollection("{collection_name}", { optimizers_config: { indexing_threshold: 20000, }, }); ``` ## Upload directly to disk When the vectors you upload do not all fit in RAM, you likely want to use [memmap](../../concepts/storage/#configuring-memmap-storage) support. During collection [creation](../../concepts/collections/#create-collection), memmaps may be enabled on a per-vector basis using the `on_disk` parameter. This will store vector data directly on disk at all times. It is suitable for ingesting a large amount of data, essential for the billion scale benchmark. Using `memmap_threshold_kb` is not recommended in this case. It would require the [optimizer](../../concepts/optimizer/) to constantly transform in-memory segments into memmap segments on disk. This process is slower, and the optimizer can be a bottleneck when ingesting a large amount of data. Read more about this in [Configuring Memmap Storage](../../concepts/storage/#configuring-memmap-storage). ## Parallel upload into multiple shards In Qdrant, each collection is split into shards. Each shard has a separate Write-Ahead-Log (WAL), which is responsible for ordering operations. By creating multiple shards, you can parallelize upload of a large dataset. From 2 to 4 shards per machine is a reasonable number. 
```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine" }, "shard_number": 2 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), shard_number=2, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", }, shard_number: 2, }); ```
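Multiple shards pair naturally with client-side parallelism. Below is an illustrative sketch only: `upload_batch` is a hypothetical stand-in for a real client call such as `client.upsert`, and the batching helper is not part of the Qdrant client.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import islice

def batched(iterable, n):
    # Yield successive lists of up to n items, so the whole
    # dataset never has to fit in memory at once.
    it = iter(iterable)
    while batch := list(islice(it, n)):
        yield batch

def upload_batch(batch):
    # Hypothetical stand-in for a real client call, e.g. client.upsert(...)
    return len(batch)

points = range(1000)  # placeholder for your (id, vector, payload) tuples

with ThreadPoolExecutor(max_workers=4) as pool:
    uploaded = sum(pool.map(upload_batch, batched(points, 256)))

print(uploaded)  # 1000
```

If you are using the Python client, `upload_collection` already implements this pattern for you via its `batch_size` and `parallel` parameters.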
documentation/tutorials/bulk-upload.md
--- title: Aleph Alpha Search weight: 16 --- # Multimodal Semantic Search with Aleph Alpha | Time: 30 min | Level: Beginner | | | | --- | ----------- | ----------- |----------- | This tutorial shows you how to run a proper multimodal semantic search system with a few lines of code, without the need to annotate the data or train your networks. In most cases, semantic search is limited to homogenous data types for both documents and queries (text-text, image-image, audio-audio, etc.). With the recent growth of multimodal architectures, it is now possible to encode different data types into the same latent space. That opens up some great possibilities, as you can finally explore non-textual data, for example visual, with text queries. In the past, this would require labelling every image with a description of what it presents. Right now, you can rely on vector embeddings, which can represent all the inputs in the same space. *Figure 1: Two examples of text-image pairs presenting a similar object, encoded by a multimodal network into the same 2D latent space. Both texts are examples of English [pangrams](https://en.wikipedia.org/wiki/Pangram). https://deepai.org generated the images with pangrams used as input prompts.* ![](/docs/integrations/aleph-alpha/2d_text_image_embeddings.png) ## Sample dataset You will be using [COCO](https://cocodataset.org/), a large-scale object detection, segmentation, and captioning dataset. It provides various splits, 330,000 images in total. For demonstration purposes, this tutorial uses the [2017 validation split](http://images.cocodataset.org/zips/val2017.zip) that contains 5000 images from different categories, with a total size of about 1GB. ```terminal wget http://images.cocodataset.org/zips/val2017.zip ``` ## Prerequisites There is no need to curate your datasets and train the models. [Aleph Alpha](https://www.aleph-alpha.com/) already has multimodality and multilinguality built-in. 
There is an [official Python client](https://github.com/Aleph-Alpha/aleph-alpha-client) that simplifies the integration. In order to enable the search capabilities, you need to build the search index to query on. For this example, you are going to vectorize the images and store their embeddings along with the filenames. You can then return the most similar files for a given query. There are a few things you need to set up before you start: 1. You need to have a Qdrant instance running. If you want to launch it locally, [Docker is the fastest way to do that](https://qdrant.tech/documentation/quick_start/#installation). 2. You need to have a registered [Aleph Alpha account](https://app.aleph-alpha.com/). 3. Upon registration, create an API key (see: [API Tokens](https://app.aleph-alpha.com/profile)). Now you can store the Aleph Alpha API key in a variable and choose the model you are going to use. ```python aa_token = "<< your_token >>" model = "luminous-base" ``` ## Vectorize the dataset In this example, images have been extracted and are stored in the `val2017` directory: ```python from aleph_alpha_client import ( Prompt, AsyncClient, SemanticEmbeddingRequest, SemanticRepresentation, Image, ) from glob import glob ids, vectors, payloads = [], [], [] async with AsyncClient(token=aa_token) as client: for i, image_path in enumerate(glob("./val2017/*.jpg")): # Convert the JPEG file into the embedding by calling # Aleph Alpha API prompt = Image.from_file(image_path) prompt = Prompt.from_image(prompt) query_params = { "prompt": prompt, "representation": SemanticRepresentation.Symmetric, "compress_to_size": 128, } query_request = SemanticEmbeddingRequest(**query_params) query_response = await client.semantic_embed(request=query_request, model=model) # Finally store the id, vector and the payload ids.append(i) vectors.append(query_response.embedding) payloads.append({"filename": image_path}) ``` ## Load embeddings into Qdrant Add all created embeddings, along with their ids 
and payloads into the `COCO` collection. ```python import qdrant_client from qdrant_client.http.models import Batch, VectorParams, Distance qdrant_client = qdrant_client.QdrantClient() qdrant_client.recreate_collection( collection_name="COCO", vectors_config=VectorParams( size=len(vectors[0]), distance=Distance.COSINE, ), ) qdrant_client.upsert( collection_name="COCO", points=Batch( ids=ids, vectors=vectors, payloads=payloads, ), ) ``` ## Query the database The `luminous-base` model can provide vectors for both texts and images, which means you can run both text queries and reverse image search. Assume you want to find images similar to the one below: ![An image used to query the database](/docs/integrations/aleph-alpha/visual_search_query.png) The following code snippet creates its vector embedding and then performs the lookup in Qdrant: ```python async with AsyncClient(token=aa_token) as client: prompt = Image.from_file("query.jpg") prompt = Prompt.from_image(prompt) query_params = { "prompt": prompt, "representation": SemanticRepresentation.Symmetric, "compress_to_size": 128, } query_request = SemanticEmbeddingRequest(**query_params) query_response = await client.semantic_embed(request=query_request, model=model) results = qdrant_client.search( collection_name="COCO", query_vector=query_response.embedding, limit=3, ) print(results) ``` Here are the results: ![Visual search results](/docs/integrations/aleph-alpha/visual_search_results.png) **Note:** AlephAlpha models can provide embeddings for English, French, German, Italian and Spanish. Your search is not only multimodal, but also multilingual, without any need for translations. 
```python text = "Surfing" async with AsyncClient(token=aa_token) as client: query_params = { "prompt": Prompt.from_text(text), "representation": SemanticRepresentation.Symmetric, "compress_to_size": 128, } query_request = SemanticEmbeddingRequest(**query_params) query_response = await client.semantic_embed(request=query_request, model=model) results = qdrant_client.search( collection_name="COCO", query_vector=query_response.embedding, limit=3, ) print(results) ``` Here are the top 3 results for “Surfing”: ![Text search results](/docs/integrations/aleph-alpha/text_search_results.png)
documentation/tutorials/aleph-alpha-search.md
--- title: Measure retrieval quality weight: 21 --- # Measure retrieval quality | Time: 30 min | Level: Intermediate | | | |--------------|---------------------|--|----| Semantic search pipelines are as good as the embeddings they use. If your model cannot properly represent input data, similar objects might be far away from each other in the vector space. No surprise that the search results will be poor in this case. There is, however, another component of the process which can also degrade the quality of the search results. It is the ANN algorithm itself. In this tutorial, we will show how to measure the quality of the semantic retrieval and how to tune the parameters of the HNSW, the ANN algorithm used in Qdrant, to obtain the best results. ## Embeddings quality The quality of the embeddings is a topic for a separate tutorial. In a nutshell, it is usually measured and compared by benchmarks, such as [Massive Text Embedding Benchmark (MTEB)](https://huggingface.co/spaces/mteb/leaderboard). The evaluation process itself is pretty straightforward and is based on a ground truth dataset built by humans. We have a set of queries and a set of the documents we would expect to receive for each of them. In the evaluation process, we take a query, find the most similar documents in the vector space and compare them with the ground truth. In that setup, **finding the most similar documents is implemented as full kNN search, without any approximation**. As a result, we can measure the quality of the embeddings themselves, without the influence of the ANN algorithm. ## Retrieval quality Embeddings quality is indeed the most important factor in the semantic search quality. However, vector search engines, such as Qdrant, do not perform pure kNN search. Instead, they use **Approximate Nearest Neighbors** (ANN) algorithms, which are much faster than the exact search, but can return suboptimal results. 
We can also **measure the retrieval quality of that approximation** which also contributes to the overall search quality. ### Quality metrics There are various ways to quantify the quality of semantic search. Some of them, such as [Precision@k](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Precision_at_k), are based on the number of relevant documents in the top-k search results. Others, such as [Mean Reciprocal Rank (MRR)](https://en.wikipedia.org/wiki/Mean_reciprocal_rank), take into account the position of the first relevant document in the search results. [DCG and NDCG](https://en.wikipedia.org/wiki/Discounted_cumulative_gain) metrics are, in turn, based on the relevance score of the documents. If we treat the search pipeline as a whole, we could use them all. The same is true for the embeddings quality evaluation. However, for the ANN algorithm itself, anything based on the relevance score or ranking is not applicable. Ranking in vector search relies on the distance between the query and the document in the vector space. However, the distance is not going to change due to the approximation, as the distance function itself stays the same. Therefore, it only makes sense to measure the quality of the ANN algorithm by the number of relevant documents in the top-k search results, such as `precision@k`. It is calculated as the number of relevant documents in the top-k search results divided by `k`. In case of testing just the ANN algorithm, we can use the exact kNN search as a ground truth, with `k` being fixed. It will be a measure of **how well the ANN algorithm approximates the exact search**. ## Measure the quality of the search results Let's build a quality evaluation of the ANN algorithm in Qdrant. We will, first, call the search endpoint in a standard way to obtain the approximate search results. Then, we will call the exact search endpoint to obtain the exact matches, and finally compare both results in terms of precision. 
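Since the ground truth here is the exact kNN result for the same fixed `k`, `precision@k` reduces to a simple set intersection. A minimal sketch with made-up document ids, just to make the metric concrete:

```python
def precision_at_k(ann_ids, exact_ids, k):
    # Fraction of the exact top-k results that the ANN search also returned
    return len(set(ann_ids) & set(exact_ids)) / k

ann_result = [101, 205, 42, 77, 9]     # ids returned by the ANN search
exact_result = [101, 205, 42, 13, 56]  # ids returned by the exact kNN search

print(precision_at_k(ann_result, exact_result, k=5))  # 0.6
```

The same computation will be applied to real search results below, averaged over a whole test set.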
Before we start, let's create a collection, fill it with some data and then start our evaluation. We will use the same dataset as in the [Loading a dataset from Hugging Face hub](/documentation/tutorials/huggingface-datasets/) tutorial, `Qdrant/arxiv-titles-instructorxl-embeddings` from the [Hugging Face hub](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings). Let's download it in a streaming mode, as we are only going to use part of it. ```python from datasets import load_dataset dataset = load_dataset( "Qdrant/arxiv-titles-instructorxl-embeddings", split="train", streaming=True ) ``` We need some data to be indexed and another set for the testing purposes. Let's get the first 60000 items for the training and the next 1000 for the testing. ```python dataset_iterator = iter(dataset) train_dataset = [next(dataset_iterator) for _ in range(60000)] test_dataset = [next(dataset_iterator) for _ in range(1000)] ``` Now, let's create a collection and index the training data. This collection will be created with the default configuration. Please be aware that it might be different from your collection settings, and it's always important to test exactly the same configuration you are going to use later in production. <aside role="status"> Distance function is another parameter that may impact the retrieval quality. If the embedding model was not trained to minimize cosine distance, you can get suboptimal search results by using it. Please test different distance functions to find the best one for your embeddings, if you don't know the specifics of the model training. </aside> ```python from qdrant_client import QdrantClient, models client = QdrantClient("http://localhost:6333") client.create_collection( collection_name="arxiv-titles-instructorxl-embeddings", vectors_config=models.VectorParams( size=768, # Size of the embeddings generated by InstructorXL model distance=models.Distance.COSINE, ), ) ``` We are now ready to index the training data. 
Uploading the records is going to trigger the indexing process, which will build the HNSW graph. The indexing process may take some time, depending on the size of the dataset, but your data is going to be available for search immediately after receiving the response from the `upsert` endpoint. **As long as the indexing is not finished, and HNSW not built, Qdrant will perform the exact search**. We have to wait until the indexing is finished to be sure that the approximate search is performed. ```python client.upload_records( collection_name="arxiv-titles-instructorxl-embeddings", records=[ models.Record( id=item["id"], vector=item["vector"], payload=item, ) for item in train_dataset ] ) while True: collection_info = client.get_collection(collection_name="arxiv-titles-instructorxl-embeddings") if collection_info.status == models.CollectionStatus.GREEN: # Collection status is green, which means the indexing is finished break ``` ## Standard mode vs exact search Qdrant has a built-in exact search mode, which can be used to measure the quality of the search results. In this mode, Qdrant performs a full kNN search for each query, without any approximation. It is not suitable for production use with high load, but it is perfect for the evaluation of the ANN algorithm and its parameters. It might be triggered by setting the `exact` parameter to `True` in the search request. We are simply going to use all the examples from the test dataset as queries and compare the results of the approximate search with the results of the exact search. Let's create a helper function with `k` being a parameter, so we can calculate the `precision@k` for different values of `k`. 
```python def avg_precision_at_k(k: int): precisions = [] for item in test_dataset: ann_result = client.search( collection_name="arxiv-titles-instructorxl-embeddings", query_vector=item["vector"], limit=k, ) knn_result = client.search( collection_name="arxiv-titles-instructorxl-embeddings", query_vector=item["vector"], limit=k, search_params=models.SearchParams( exact=True, # Turns on the exact search mode ), ) # We can calculate the precision@k by comparing the ids of the search results ann_ids = set(item.id for item in ann_result) knn_ids = set(item.id for item in knn_result) precision = len(ann_ids.intersection(knn_ids)) / k precisions.append(precision) return sum(precisions) / len(precisions) ``` Calculating the `precision@5` is as simple as calling the function with the corresponding parameter: ```python print(f"avg(precision@5) = {avg_precision_at_k(k=5)}") ``` Response: ```text avg(precision@5) = 0.9935999999999995 ``` As we can see, the precision of the approximate search vs exact search is pretty high. There are, however, some scenarios when we need higher precision and can accept higher latency. HNSW is pretty tunable, and we can increase the precision by changing its parameters. ## Tweaking the HNSW parameters HNSW is a hierarchical graph, where each node has a set of links to other nodes. The number of edges per node is called the `m` parameter. The larger the value, the higher the search precision, but the more space is required. The `ef_construct` parameter is the number of neighbours to consider during the index building. Again, the larger the value, the higher the precision, but the longer the indexing time. The default values of these parameters are `m=16` and `ef_construct=100`. Let's try to increase them to `m=32` and `ef_construct=200` and see how it affects the precision. Of course, we need to wait until the indexing is finished before we can perform the search. 
```python client.update_collection( collection_name="arxiv-titles-instructorxl-embeddings", hnsw_config=models.HnswConfigDiff( m=32, # Increase the number of edges per node from the default 16 to 32 ef_construct=200, # Increase the number of neighbours from the default 100 to 200 ) ) while True: collection_info = client.get_collection(collection_name="arxiv-titles-instructorxl-embeddings") if collection_info.status == models.CollectionStatus.GREEN: # Collection status is green, which means the indexing is finished break ``` The same function can be used to calculate the average `precision@5`: ```python print(f"avg(precision@5) = {avg_precision_at_k(k=5)}") ``` Response: ```text avg(precision@5) = 0.9969999999999998 ``` The precision has obviously increased, and we know how to control it. However, there is a trade-off between the precision and the search latency and memory requirements. In some specific cases, we may want to increase the precision as much as possible, so now we know how to do it. ## Wrapping up Assessing the quality of retrieval is a critical aspect of evaluating semantic search performance. It is imperative to measure retrieval quality when aiming for optimal quality of your search results. Qdrant provides a built-in exact search mode, which can be used to measure the quality of the ANN algorithm itself, even in an automated way, as part of your CI/CD pipeline. Again, **the quality of the embeddings is the most important factor**. HNSW does a pretty good job in terms of precision, and it is parameterizable and tunable, when required. There are some other ANN algorithms available out there, such as [IVF*](https://github.com/facebookresearch/faiss/wiki/Faiss-indexes#cell-probe-methods-indexivf-indexes), but they usually [perform worse than HNSW in terms of quality and performance](https://nirantk.com/writing/pgvector-vs-qdrant/#correctness).
documentation/tutorials/retrieval-quality.md
--- title: Neural Search Service weight: 1 --- # Create a Simple Neural Search Service | Time: 30 min | Level: Beginner | Output: [GitHub](https://github.com/qdrant/qdrant_demo/tree/sentense-transformers) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) | | --- | ----------- | ----------- |----------- | This tutorial shows you how to build and deploy your own neural search service to look through descriptions of companies from [startups-list.com](https://www.startups-list.com/) and pick the most similar ones to your query. The website contains the company names, descriptions, locations, and a picture for each entry. A neural search service uses artificial neural networks to improve the accuracy and relevance of search results. Besides offering simple keyword results, this system can retrieve results by meaning. It can understand and interpret complex search queries and provide more contextually relevant output, effectively enhancing the user's search experience. <aside role="status"> There is a version of this tutorial that uses <a href="https://github.com/qdrant/fastembed">Fastembed</a> model inference engine instead of Sentence Transformers. Check it out <a href="/documentation/tutorials/neural-search-fastembed">here</a>. </aside> ## Workflow To create a neural search service, you will need to transform your raw data and then create a search function to manipulate it. First, you will 1) download and prepare a sample dataset using a modified version of the BERT ML model. Then, you will 2) load the data into Qdrant, 3) create a neural search API and 4) serve it using FastAPI. 
![Neural Search Workflow](/docs/workflow-neural-search.png) > **Note**: The code for this tutorial can be found here: | [Step 1: Data Preparation Process](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) | [Step 2: Full Code for Neural Search](https://github.com/qdrant/qdrant_demo/tree/sentense-transformers). | ## Prerequisites To complete this tutorial, you will need: - Docker - The easiest way to use Qdrant is to run a pre-built Docker image. - [Raw parsed data](https://storage.googleapis.com/generall-shared-data/startups_demo.json) from startups-list.com. - Python version >=3.8 ## Prepare sample dataset To conduct a neural search on startup descriptions, you must first encode the description data into vectors. To process text, you can use pre-trained models like [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)) or sentence transformers. The [sentence-transformers](https://github.com/UKPLab/sentence-transformers) library lets you conveniently download and use many pre-trained models, such as DistilBERT, MPNet, etc. 1. First you need to download the dataset. ```bash wget https://storage.googleapis.com/generall-shared-data/startups_demo.json ``` 2. Install the SentenceTransformer library as well as other relevant packages. ```bash pip install sentence-transformers numpy pandas tqdm ``` 3. Import all relevant modules. ```python from sentence_transformers import SentenceTransformer import numpy as np import json import pandas as pd from tqdm.notebook import tqdm ``` You will be using a pre-trained model called `all-MiniLM-L6-v2`. This is a performance-optimized sentence embedding model and you can read more about it and other available models [here](https://www.sbert.net/docs/pretrained_models.html). 4. Download and create a pre-trained sentence encoder. ```python model = SentenceTransformer( "all-MiniLM-L6-v2", device="cuda" ) # or device="cpu" if you don't have a GPU ``` 5. Read the raw data file. 
```python df = pd.read_json("./startups_demo.json", lines=True) ``` 6. Encode all startup descriptions to create an embedding vector for each. Internally, the `encode` function will split the input into batches, which will significantly speed up the process. ```python vectors = model.encode( [row.alt + ". " + row.description for row in df.itertuples()], show_progress_bar=True, ) ``` All of the descriptions are now converted into vectors. There are 40474 vectors of 384 dimensions. The output layer of the model has this dimensionality: ```python vectors.shape # > (40474, 384) ``` 7. Save the created vectors into a new file named `startup_vectors.npy`. ```python np.save("startup_vectors.npy", vectors, allow_pickle=False) ``` ## Run Qdrant in Docker Next, you need to manage all of your data using a vector engine. Qdrant lets you store, update or delete created vectors. Most importantly, it lets you search for the nearest vectors via a convenient API. > **Note:** Before you begin, create a project directory and a virtual python environment in it. 1. Download the Qdrant image from DockerHub. ```bash docker pull qdrant/qdrant ``` 2. Start Qdrant inside of Docker. ```bash docker run -p 6333:6333 \ -v $(pwd)/qdrant_storage:/qdrant/storage \ qdrant/qdrant ``` You should see output like this: ```text ... [2021-02-05T00:08:51Z INFO actix_server::builder] Starting 12 workers [2021-02-05T00:08:51Z INFO actix_server::builder] Starting "actix-web-service-0.0.0.0:6333" service on 0.0.0.0:6333 ``` Test the service by going to [http://localhost:6333/](http://localhost:6333/). You should see the Qdrant version info in your browser. All data uploaded to Qdrant is saved inside the `./qdrant_storage` directory and will be persisted even if you recreate the container. ## Upload data to Qdrant 1. Install the official Python client to best interact with Qdrant. 
```bash pip install qdrant-client ``` At this point, you should have startup records in the `startups_demo.json` file, encoded vectors in `startup_vectors.npy` and Qdrant running on a local machine. Now you need to write a script to upload all startup data and vectors into the search engine. 2. Create a client object for Qdrant. ```python # Import client library from qdrant_client import QdrantClient from qdrant_client.models import VectorParams, Distance qdrant_client = QdrantClient("http://localhost:6333") ``` 3. Related vectors need to be added to a collection. Create a new collection for your startup vectors. ```python qdrant_client.recreate_collection( collection_name="startups", vectors_config=VectorParams(size=384, distance=Distance.COSINE), ) ``` <aside role="status"> - Use `recreate_collection` if you are experimenting and running the script several times. This function will first try to remove an existing collection with the same name. - The `vector_size` parameter defines the size of the vectors for a specific collection. If their size is different, it is impossible to calculate the distance between them. `384` is the encoder output dimensionality. You can also use `model.get_sentence_embedding_dimension()` to get the dimensionality of the model you are using. - The `distance` parameter lets you specify the function used to measure the distance between two points. </aside> 4. Create an iterator over the startup data and vectors. The Qdrant client library defines a special function that allows you to load datasets into the service. However, since there may be too much data to fit in a single computer's memory, the function takes an iterator over the data as input. ```python fd = open("./startups_demo.json") # payload is now an iterator over startup data payload = map(json.loads, fd) # Load all vectors into memory; a numpy array is itself an iterable 
# Another option would be to use mmap, if you don't want to load all data into RAM
vectors = np.load("./startup_vectors.npy")
```

5. Upload the data.

```python
qdrant_client.upload_collection(
    collection_name="startups",
    vectors=vectors,
    payload=payload,
    ids=None,  # Vector ids will be assigned automatically
    batch_size=256,  # How many vectors will be uploaded in a single request?
)
```

Vectors are now uploaded to Qdrant.

## Build the search API

Now that all the preparations are complete, let's start building a neural search class.

In order to process incoming requests, neural search will need 2 things: 1) a model to convert the query into a vector and 2) the Qdrant client to perform search queries.

1. Create a file named `neural_searcher.py` and specify the following.

```python
from qdrant_client import QdrantClient
from sentence_transformers import SentenceTransformer


class NeuralSearcher:
    def __init__(self, collection_name):
        self.collection_name = collection_name
        # Initialize encoder model
        self.model = SentenceTransformer("all-MiniLM-L6-v2", device="cpu")
        # Initialize Qdrant client
        self.qdrant_client = QdrantClient("http://localhost:6333")
```

2. Write the search function.

```python
def search(self, text: str):
    # Convert text query into vector
    vector = self.model.encode(text).tolist()

    # Use `vector` to search for the closest vectors in the collection
    search_result = self.qdrant_client.search(
        collection_name=self.collection_name,
        query_vector=vector,
        query_filter=None,  # If you don't want any filters for now
        limit=5,  # the 5 closest results are enough
    )
    # `search_result` contains found vector ids with similarity scores
    # along with the stored payload
    # In this function you are interested in payload only
    payloads = [hit.payload for hit in search_result]
    return payloads
```

3. Add search filters. With Qdrant it is also possible to add some conditions to the search.
For example, if you wanted to search for startups in a certain city, the search query could look like this: ```python from qdrant_client.models import Filter ... city_of_interest = "Berlin" # Define a filter for cities city_filter = Filter(**{ "must": [{ "key": "city", # Store city information in a field of the same name "match": { # This condition checks if payload field has the requested value "value": city_of_interest } }] }) search_result = self.qdrant_client.search( collection_name=self.collection_name, query_vector=vector, query_filter=city_filter, limit=5 ) ... ``` You have now created a class for neural search queries. Now wrap it up into a service. ## Deploy the search with FastAPI To build the service you will use the FastAPI framework. 1. Install FastAPI. To install it, use the command ```bash pip install fastapi uvicorn ``` 2. Implement the service. Create a file named `service.py` and specify the following. The service will have only one API endpoint and will look like this: ```python from fastapi import FastAPI # The file where NeuralSearcher is stored from neural_searcher import NeuralSearcher app = FastAPI() # Create a neural searcher instance neural_searcher = NeuralSearcher(collection_name="startups") @app.get("/api/search") def search_startup(q: str): return {"result": neural_searcher.search(text=q)} if __name__ == "__main__": import uvicorn uvicorn.run(app, host="0.0.0.0", port=8000) ``` 3. Run the service. ```bash python service.py ``` 4. Open your browser at [http://localhost:8000/docs](http://localhost:8000/docs). You should be able to see a debug interface for your service. ![FastAPI Swagger interface](/docs/fastapi_neural_search.png) Feel free to play around with it, make queries regarding the companies in our corpus, and check out the results. ## Next steps The code from this tutorial has been used to develop a [live online demo](https://qdrant.to/semantic-search-demo). 
You can try it to get an intuition for cases when the neural search is useful. The demo contains a switch that selects between neural and full-text searches. You can turn the neural search on and off to compare your results with a regular full-text search.

> **Note**: The code for this tutorial can be found here: | [Step 1: Data Preparation Process](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) | [Step 2: Full Code for Neural Search](https://github.com/qdrant/qdrant_demo/tree/sentense-transformers) |

Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, and publish other examples of neural networks and neural search applications.
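As a closing bit of intuition: the score returned for each hit is the cosine similarity between the query vector and a stored vector, which is what the `Distance.COSINE` setting selects when the collection is created. A minimal pure-Python sketch of that computation (toy 3-dimensional vectors here, standing in for the real 384-dimensional embeddings):

```python
import math


def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|): 1.0 for identical directions, ~0.0 for orthogonal
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


query = [0.9, 0.1, 0.0]
stored = {
    "startup_a": [0.8, 0.2, 0.1],  # points in almost the same direction as the query
    "startup_b": [0.0, 0.9, 0.4],  # points in a very different direction
}

# Rank stored vectors by similarity to the query, best match first
ranked = sorted(stored, key=lambda name: cosine_similarity(query, stored[name]), reverse=True)
print(ranked)  # ['startup_a', 'startup_b']
```

Qdrant's HNSW index avoids the exhaustive comparison this sketch performs, but the ranking criterion is the same.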
documentation/tutorials/neural-search.md
--- title: Semantic Search 101 weight: -100 --- # Semantic Search for Beginners | Time: 5 - 15 min | Level: Beginner | | | | --- | ----------- | ----------- |----------- | <p align="center"><iframe width="560" height="315" src="https://www.youtube.com/embed/AASiqmtKo54" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></p> ## Overview If you are new to vector databases, this tutorial is for you. In 5 minutes you will build a semantic search engine for science fiction books. After you set it up, you will ask the engine about an impending alien threat. Your creation will recommend books as preparation for a potential space attack. Before you begin, you need to have a [recent version of Python](https://www.python.org/downloads/) installed. If you don't know how to run this code in a virtual environment, follow Python documentation for [Creating Virtual Environments](https://docs.python.org/3/tutorial/venv.html#creating-virtual-environments) first. This tutorial assumes you're in the bash shell. Use the Python documentation to activate a virtual environment, with commands such as: ```bash source tutorial-env/bin/activate ``` ## 1. Installation You need to process your data so that the search engine can work with it. The [Sentence Transformers](https://www.sbert.net/) framework gives you access to common Large Language Models that turn raw data into embeddings. ```bash pip install -U sentence-transformers ``` Once encoded, this data needs to be kept somewhere. Qdrant lets you store data as embeddings. You can also use Qdrant to run search queries against this data. This means that you can ask the engine to give you relevant answers that go way beyond keyword matching. ```bash pip install -U qdrant-client ``` <aside role="status"> This tutorial requires qdrant-client version 1.7.1 or higher. 
</aside> ### Import the models Once the two main frameworks are defined, you need to specify the exact models this engine will use. Before you do, activate the Python prompt (`>>>`) with the `python` command. ```python from qdrant_client import models, QdrantClient from sentence_transformers import SentenceTransformer ``` The [Sentence Transformers](https://www.sbert.net/index.html) framework contains many embedding models. However, [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) is the fastest encoder for this tutorial. ```python encoder = SentenceTransformer("all-MiniLM-L6-v2") ``` ## 2. Add the dataset [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) will encode the data you provide. Here you will list all the science fiction books in your library. Each book has metadata, a name, author, publication year and a short description. ```python documents = [ { "name": "The Time Machine", "description": "A man travels through time and witnesses the evolution of humanity.", "author": "H.G. 
Wells", "year": 1895, }, { "name": "Ender's Game", "description": "A young boy is trained to become a military leader in a war against an alien race.", "author": "Orson Scott Card", "year": 1985, }, { "name": "Brave New World", "description": "A dystopian society where people are genetically engineered and conditioned to conform to a strict social hierarchy.", "author": "Aldous Huxley", "year": 1932, }, { "name": "The Hitchhiker's Guide to the Galaxy", "description": "A comedic science fiction series following the misadventures of an unwitting human and his alien friend.", "author": "Douglas Adams", "year": 1979, }, { "name": "Dune", "description": "A desert planet is the site of political intrigue and power struggles.", "author": "Frank Herbert", "year": 1965, }, { "name": "Foundation", "description": "A mathematician develops a science to predict the future of humanity and works to save civilization from collapse.", "author": "Isaac Asimov", "year": 1951, }, { "name": "Snow Crash", "description": "A futuristic world where the internet has evolved into a virtual reality metaverse.", "author": "Neal Stephenson", "year": 1992, }, { "name": "Neuromancer", "description": "A hacker is hired to pull off a near-impossible hack and gets pulled into a web of intrigue.", "author": "William Gibson", "year": 1984, }, { "name": "The War of the Worlds", "description": "A Martian invasion of Earth throws humanity into chaos.", "author": "H.G. 
Wells", "year": 1898, }, { "name": "The Hunger Games", "description": "A dystopian society where teenagers are forced to fight to the death in a televised spectacle.", "author": "Suzanne Collins", "year": 2008, }, { "name": "The Andromeda Strain", "description": "A deadly virus from outer space threatens to wipe out humanity.", "author": "Michael Crichton", "year": 1969, }, { "name": "The Left Hand of Darkness", "description": "A human ambassador is sent to a planet where the inhabitants are genderless and can change gender at will.", "author": "Ursula K. Le Guin", "year": 1969, }, { "name": "The Three-Body Problem", "description": "Humans encounter an alien civilization that lives in a dying system.", "author": "Liu Cixin", "year": 2008, }, ] ``` ## 3. Define storage location You need to tell Qdrant where to store embeddings. This is a basic demo, so your local computer will use its memory as temporary storage. ```python qdrant = QdrantClient(":memory:") ``` ## 4. Create a collection All data in Qdrant is organized by collections. In this case, you are storing books, so we are calling it `my_books`. ```python qdrant.recreate_collection( collection_name="my_books", vectors_config=models.VectorParams( size=encoder.get_sentence_embedding_dimension(), # Vector size is defined by used model distance=models.Distance.COSINE, ), ) ``` - Use `recreate_collection` if you are experimenting and running the script several times. This function will first try to remove an existing collection with the same name. - The `vector_size` parameter defines the size of the vectors for a specific collection. If their size is different, it is impossible to calculate the distance between them. 384 is the encoder output dimensionality. You can also use model.get_sentence_embedding_dimension() to get the dimensionality of the model you are using. - The `distance` parameter lets you specify the function used to measure the distance between two points. ## 5. 
Upload data to collection Tell the database to upload `documents` to the `my_books` collection. This will give each record an id and a payload. The payload is just the metadata from the dataset. ```python qdrant.upload_points( collection_name="my_books", points=[ models.PointStruct( id=idx, vector=encoder.encode(doc["description"]).tolist(), payload=doc ) for idx, doc in enumerate(documents) ], ) ``` ## 6. Ask the engine a question Now that the data is stored in Qdrant, you can ask it questions and receive semantically relevant results. ```python hits = qdrant.search( collection_name="my_books", query_vector=encoder.encode("alien invasion").tolist(), limit=3, ) for hit in hits: print(hit.payload, "score:", hit.score) ``` **Response:** The search engine shows three of the most likely responses that have to do with the alien invasion. Each of the responses is assigned a score to show how close the response is to the original inquiry. ```text {'name': 'The War of the Worlds', 'description': 'A Martian invasion of Earth throws humanity into chaos.', 'author': 'H.G. Wells', 'year': 1898} score: 0.570093257022374 {'name': "The Hitchhiker's Guide to the Galaxy", 'description': 'A comedic science fiction series following the misadventures of an unwitting human and his alien friend.', 'author': 'Douglas Adams', 'year': 1979} score: 0.5040468703143637 {'name': 'The Three-Body Problem', 'description': 'Humans encounter an alien civilization that lives in a dying system.', 'author': 'Liu Cixin', 'year': 2008} score: 0.45902943411768216 ``` ### Narrow down the query How about the most recent book from the early 2000s? ```python hits = qdrant.search( collection_name="my_books", query_vector=encoder.encode("alien invasion").tolist(), query_filter=models.Filter( must=[models.FieldCondition(key="year", range=models.Range(gte=2000))] ), limit=1, ) for hit in hits: print(hit.payload, "score:", hit.score) ``` **Response:** The query has been narrowed down to one result from 2008. 
```text {'name': 'The Three-Body Problem', 'description': 'Humans encounter an alien civilization that lives in a dying system.', 'author': 'Liu Cixin', 'year': 2008} score: 0.45902943411768216 ``` ## Next Steps Congratulations, you have just created your very first search engine! Trust us, the rest of Qdrant is not that complicated, either. For your next tutorial you should try building an actual [Neural Search Service with a complete API and a dataset](../../tutorials/neural-search/). ## Return to the bash shell To return to the bash prompt: 1. Press Ctrl+D to exit the Python prompt (`>>>`). 1. Enter the `deactivate` command to deactivate the virtual environment.
documentation/tutorials/search-beginners.md
---
title: Load Hugging Face dataset
weight: 19
---

# Loading a dataset from Hugging Face hub

[Hugging Face](https://huggingface.co/) provides a platform for sharing and using ML models and datasets. [Qdrant](https://huggingface.co/Qdrant) also publishes datasets along with the embeddings that you can use to practice with Qdrant and build your applications based on semantic search. **Please [let us know](https://qdrant.to/discord) if you'd like to see a specific dataset!**

## arxiv-titles-instructorxl-embeddings

[This dataset](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) contains embeddings generated from the paper titles only. Each vector has a payload with the title used to create it, along with the DOI (Digital Object Identifier).

```json
{
    "title": "Nash Social Welfare for Indivisible Items under Separable, Piecewise-Linear Concave Utilities",
    "DOI": "1612.05191"
}
```

You can find a detailed description of the dataset in the [Practice Datasets](/documentation/datasets/#journal-article-titles) section. If you prefer loading the dataset from a Qdrant snapshot, it is also linked there.

Loading the dataset is as simple as using the `load_dataset` function from the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("Qdrant/arxiv-titles-instructorxl-embeddings")
```

<aside role="status">The dataset is over 16 GB, so it might take a while to download.</aside>

The dataset contains 2,250,000 vectors. This is how you can check the list of the features in the dataset:

```python
dataset.features
```

### Streaming the dataset

Dataset streaming lets you work with a dataset without downloading it. The data is streamed as you iterate over the dataset. You can read more about it in the [Hugging Face documentation](https://huggingface.co/docs/datasets/stream).
```python
from datasets import load_dataset

dataset = load_dataset(
    "Qdrant/arxiv-titles-instructorxl-embeddings", split="train", streaming=True
)
```

### Loading the dataset into Qdrant

You can load the dataset into Qdrant using the [Python SDK](https://github.com/qdrant/qdrant-client). The embeddings are already precomputed, so you can store them in a collection, which we're going to create in a second:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("http://localhost:6333")

client.create_collection(
    collection_name="arxiv-titles-instructorxl-embeddings",
    vectors_config=models.VectorParams(
        size=768,
        distance=models.Distance.COSINE,
    ),
)
```

It is always a good idea to use batching while loading a large dataset, so let's do that. We are going to need a helper function to split the dataset into batches:

```python
from itertools import islice


def batched(iterable, n):
    iterator = iter(iterable)
    while batch := list(islice(iterator, n)):
        yield batch
```

If you are a happy user of Python 3.12+, you can use the [`batched` function from the `itertools`](https://docs.python.org/3/library/itertools.html#itertools.batched) package instead.

No matter what Python version you are using, you can use the `upsert` method to load the dataset, batch by batch, into Qdrant:

```python
batch_size = 100

for batch in batched(dataset, batch_size):
    ids = [point.pop("id") for point in batch]
    vectors = [point.pop("vector") for point in batch]

    client.upsert(
        collection_name="arxiv-titles-instructorxl-embeddings",
        points=models.Batch(
            ids=ids,
            vectors=vectors,
            payloads=batch,
        ),
    )
```

Your collection is ready to be used for search! Please [let us know using Discord](https://qdrant.to/discord) if you would like to see more datasets published on Hugging Face hub.
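For reference, here is how the `batched` helper behaves: every chunk holds up to `n` items, and the final chunk simply contains whatever is left over. The helper is re-defined below so the snippet runs standalone:

```python
from itertools import islice


def batched(iterable, n):
    # Yield successive lists of up to n items from any iterable,
    # including lazy ones such as a streamed dataset
    iterator = iter(iterable)
    while batch := list(islice(iterator, n)):
        yield batch


chunks = list(batched(range(7), 3))
print(chunks)  # [[0, 1, 2], [3, 4, 5], [6]]
```

Because it consumes the iterator lazily, only one batch of points needs to be held in memory at a time, which is exactly what makes it a good fit for the streamed dataset above.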
documentation/tutorials/huggingface-datasets.md
--- title: Neural Search with Fastembed weight: 2 --- # Create a Neural Search Service with Fastembed | Time: 20 min | Level: Beginner | Output: [GitHub](https://github.com/qdrant/qdrant_demo/) | | --- | ----------- | ----------- |----------- | This tutorial shows you how to build and deploy your own neural search service to look through descriptions of companies from [startups-list.com](https://www.startups-list.com/) and pick the most similar ones to your query. The website contains the company names, descriptions, locations, and a picture for each entry. Alternatively, you can use datasources such as [Crunchbase](https://www.crunchbase.com/), but that would require obtaining an API key from them. Our neural search service will use [Fastembed](https://github.com/qdrant/fastembed) package to generate embeddings of text descriptions and [FastAPI](https://fastapi.tiangolo.com/) to serve the search API. Fastembed natively integrates with Qdrant client, so you can easily upload the data into Qdrant and perform search queries. <aside role="status"> There is a version of this tutorial that uses <a href="https://www.sbert.net/">SentenceTransformers</a> model inference engine instead of Fastembed. Check it out <a href="/documentation/tutorials/neural-search">here</a>. </aside> ## Workflow To create a neural search service, you will need to transform your raw data and then create a search function to manipulate it. First, you will 1) download and prepare a sample dataset using a modified version of the BERT ML model. Then, you will 2) load the data into Qdrant, 3) create a neural search API and 4) serve it using FastAPI. ![Neural Search Workflow](/docs/workflow-neural-search.png) > **Note**: The code for this tutorial can be found here: [Step 2: Full Code for Neural Search](https://github.com/qdrant/qdrant_demo/). ## Prerequisites To complete this tutorial, you will need: - Docker - The easiest way to use Qdrant is to run a pre-built Docker image. 
- [Raw parsed data](https://storage.googleapis.com/generall-shared-data/startups_demo.json) from startups-list.com.
- Python version >=3.8

## Prepare sample dataset

To conduct a neural search on startup descriptions, you must first encode the description data into vectors. The Fastembed integration in the Qdrant client combines encoding and uploading into a single step. It also takes care of batching and parallelization, so you don't have to worry about it.

Let's start by downloading the data and installing the necessary packages.

1. First you need to download the dataset.

```bash
wget https://storage.googleapis.com/generall-shared-data/startups_demo.json
```

## Run Qdrant in Docker

Next, you need to manage all of your data using a vector engine. Qdrant lets you store, update or delete created vectors. Most importantly, it lets you search for the nearest vectors via a convenient API.

> **Note:** Before you begin, create a project directory and a virtual Python environment in it.

1. Download the Qdrant image from DockerHub.

```bash
docker pull qdrant/qdrant
```

2. Start Qdrant inside of Docker.

```bash
docker run -p 6333:6333 \
    -v $(pwd)/qdrant_storage:/qdrant/storage \
    qdrant/qdrant
```

You should see output like this:

```text
...
[2021-02-05T00:08:51Z INFO  actix_server::builder] Starting 12 workers
[2021-02-05T00:08:51Z INFO  actix_server::builder] Starting "actix-web-service-0.0.0.0:6333" service on 0.0.0.0:6333
```

Test the service by going to [http://localhost:6333/](http://localhost:6333/). You should see the Qdrant version info in your browser.

All data uploaded to Qdrant is saved inside the `./qdrant_storage` directory and will be persisted even if you recreate the container.

## Upload data to Qdrant

1. Install the official Python client to best interact with Qdrant.

```bash
pip install "qdrant-client[fastembed]"
```

Note that you need to install the `fastembed` extra to enable the Fastembed integration.
At this point, you should have startup records in the `startups_demo.json` file and Qdrant running on a local machine.

Now you need to write a script to upload all startup data and vectors into the search engine.

2. Create a client object for Qdrant.

```python
# Import client library
from qdrant_client import QdrantClient

qdrant_client = QdrantClient("http://localhost:6333")
```

3. Select a model to encode your data. You will be using a pre-trained model called `sentence-transformers/all-MiniLM-L6-v2`.

```python
qdrant_client.set_model("sentence-transformers/all-MiniLM-L6-v2")
```

4. Related vectors need to be added to a collection. Create a new collection for your startup vectors.

```python
qdrant_client.recreate_collection(
    collection_name="startups",
    vectors_config=qdrant_client.get_fastembed_vector_params(),
)
```

Note that we use `get_fastembed_vector_params` to get the vector size and distance function from the model. This method automatically generates a configuration compatible with the model you are using. Without the fastembed integration, you would need to specify the vector size and distance function manually. Read more about it [here](/documentation/tutorials/neural-search).

Additionally, you can specify extended configuration for your vectors, like `quantization_config` or `hnsw_config`.

5. Read data from the file.

```python
import json
import os

DATA_DIR = "."  # the directory where startups_demo.json was downloaded

payload_path = os.path.join(DATA_DIR, "startups_demo.json")
metadata = []
documents = []

with open(payload_path) as fd:
    for line in fd:
        obj = json.loads(line)
        documents.append(obj.pop("description"))
        metadata.append(obj)
```

In this block of code, we read data from the `startups_demo.json` file and split it into 2 lists: `documents` and `metadata`. Documents are the raw text descriptions of startups. Metadata is the payload associated with each startup, such as the name, location, and picture. We will use `documents` to encode the data into vectors.

6. Encode and upload data.
```python
qdrant_client.add(
    collection_name="startups",
    documents=documents,
    metadata=metadata,
    parallel=0,  # Use all available CPU cores to encode data
)
```

The `add` method will encode all documents and upload them to Qdrant. This is one of two fastembed-specific methods that combine encoding and uploading into a single step. The `parallel` parameter controls the number of CPU cores used to encode data.

Additionally, you can specify ids for each document, if you want to use them later to update or delete documents. If you don't specify ids, they will be generated automatically and returned as a result of the `add` method.

You can monitor the progress of the encoding by passing a tqdm progress bar to the `add` method.

```python
from tqdm import tqdm

qdrant_client.add(
    collection_name="startups",
    documents=documents,
    metadata=metadata,
    ids=tqdm(range(len(documents))),
)
```

> **Note**: See the full code for this step [here](https://github.com/qdrant/qdrant_demo/blob/master/qdrant_demo/init_collection_startups.py).

## Build the search API

Now that all the preparations are complete, let's start building a neural search class.

In order to process incoming requests, neural search will need 2 things: 1) a model to convert the query into a vector and 2) the Qdrant client to perform search queries. The fastembed integration in the Qdrant client combines both into a single method call.

1. Create a file named `neural_searcher.py` and specify the following.

```python
from qdrant_client import QdrantClient


class NeuralSearcher:
    def __init__(self, collection_name):
        self.collection_name = collection_name
        # Initialize Qdrant client
        self.qdrant_client = QdrantClient("http://localhost:6333")
        self.qdrant_client.set_model("sentence-transformers/all-MiniLM-L6-v2")
```

2. Write the search function.
```python
def search(self, text: str):
    search_result = self.qdrant_client.query(
        collection_name=self.collection_name,
        query_text=text,
        query_filter=None,  # If you don't want any filters for now
        limit=5,  # the 5 closest results are enough
    )
    # `search_result` contains found vector ids with similarity scores
    # along with the stored payload
    # In this function you are interested in payload only
    metadata = [hit.metadata for hit in search_result]
    return metadata
```

3. Add search filters.

With Qdrant it is also possible to add some conditions to the search. For example, if you wanted to search for startups in a certain city, the search query could look like this:

```python
from qdrant_client.models import Filter

...

city_of_interest = "Berlin"

# Define a filter for cities
city_filter = Filter(**{
    "must": [{
        "key": "city",  # Store city information in a field of the same name
        "match": {  # This condition checks if payload field has the requested value
            "value": city_of_interest
        }
    }]
})

search_result = self.qdrant_client.query(
    collection_name=self.collection_name,
    query_text=text,
    query_filter=city_filter,
    limit=5
)

...
```

You have now created a class for neural search queries. Now wrap it up into a service.

## Deploy the search with FastAPI

To build the service you will use the FastAPI framework.

1. Install FastAPI.

To install it, use the command:

```bash
pip install fastapi uvicorn
```

2. Implement the service.

Create a file named `service.py` and specify the following.

The service will have only one API endpoint and will look like this:

```python
from fastapi import FastAPI

# The file where NeuralSearcher is stored
from neural_searcher import NeuralSearcher

app = FastAPI()

# Create a neural searcher instance
neural_searcher = NeuralSearcher(collection_name="startups")


@app.get("/api/search")
def search_startup(q: str):
    return {"result": neural_searcher.search(text=q)}


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
```

3.
Run the service.

```bash
python service.py
```

4. Open your browser at [http://localhost:8000/docs](http://localhost:8000/docs).

You should be able to see a debug interface for your service.

![FastAPI Swagger interface](/docs/fastapi_neural_search.png)

Feel free to play around with it, make queries regarding the companies in our corpus, and check out the results.

## Next steps

The code from this tutorial has been used to develop a [live online demo](https://qdrant.to/semantic-search-demo). You can try it to get an intuition for cases when the neural search is useful. The demo contains a switch that selects between neural and full-text searches. You can turn the neural search on and off to compare your results with a regular full-text search.

> **Note**: The code for this tutorial can be found here: [Full Code for Neural Search](https://github.com/qdrant/qdrant_demo/).

Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, and publish other examples of neural networks and neural search applications.
documentation/tutorials/neural-search-fastembed.md
---
title: Asynchronous API
weight: 14
---

# Using Qdrant asynchronously

Asynchronous programming is being broadly adopted in the Python ecosystem. Tools such as FastAPI [have embraced this new paradigm](https://fastapi.tiangolo.com/async/), but it is also becoming a standard for ML models served as SaaS. For example, the Cohere SDK [provides an async client](https://cohere-sdk.readthedocs.io/en/latest/cohere.html#asyncclient) next to its synchronous counterpart.

Databases are often launched as separate services and are accessed via a network. All the interactions with them are IO-bound and can be performed asynchronously so as not to waste time actively waiting for a server response. In Python, this is achieved by using [`async/await`](https://docs.python.org/3/library/asyncio-task.html) syntax. That lets the interpreter switch to another task while waiting for a response from the server.

## When to use async API

There is no need to use async API if the application you are writing will never support multiple users at once (e.g., it is a script that runs once per day).

However, if you are writing a web service that multiple users will use simultaneously, you shouldn't be blocking the threads of the web server, as that limits the number of concurrent requests it can handle. In this case, you should use the async API.

Modern web frameworks like [FastAPI](https://fastapi.tiangolo.com/) and [Quart](https://quart.palletsprojects.com/en/latest/) support async API out of the box.

Mixing asynchronous code with an existing synchronous codebase might be a challenge. The `async/await` syntax cannot be used in synchronous functions. On the other hand, calling an IO-bound operation synchronously in async code is considered an antipattern. Therefore, if you build an async web service, exposed through an [ASGI](https://asgi.readthedocs.io/en/latest/) server, you should use the async API for all the interactions with Qdrant.
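The payoff is easy to demonstrate with plain `asyncio`, no Qdrant required. Below, `asyncio.sleep` stands in for an IO-bound database call; two such calls awaited concurrently finish in roughly the time of one, rather than the sum of both:

```python
import asyncio
import time


async def fake_io_call(delay: float) -> str:
    # Stands in for an IO-bound request, e.g. a query sent over the network
    await asyncio.sleep(delay)
    return f"done after {delay}s"


async def main() -> float:
    start = time.perf_counter()
    # Both "requests" wait concurrently, so the total is ~0.2s, not ~0.4s
    results = await asyncio.gather(fake_io_call(0.2), fake_io_call(0.2))
    elapsed = time.perf_counter() - start
    print(results, f"elapsed: {elapsed:.2f}s")
    return elapsed


elapsed = asyncio.run(main())
```

The same effect applies to real Qdrant requests made through the async client: while one request is in flight, the event loop is free to serve other users.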
<aside role="status">
All the async code has to be launched in an async context. Usually, it means you have to use <code>asyncio.run</code> or <code>asyncio.create_task</code> to run them. Please refer to the <a href="https://docs.python.org/3/library/asyncio.html">asyncio documentation</a> for more details.
</aside>

### Using Qdrant asynchronously

The simplest way of running asynchronous code is to define an `async` function and run it with `asyncio.run`, in the following way:

```python
from qdrant_client import models

import qdrant_client
import asyncio


async def main():
    client = qdrant_client.AsyncQdrantClient("localhost")

    # Create a collection
    await client.create_collection(
        collection_name="my_collection",
        vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
    )

    # Insert a vector
    await client.upsert(
        collection_name="my_collection",
        points=[
            models.PointStruct(
                id="5c56c793-69f3-4fbf-87e6-c4bf54c28c26",
                payload={
                    "color": "red",
                },
                vector=[0.9, 0.1, 0.1, 0.5],
            ),
        ],
    )

    # Search for nearest neighbors
    points = await client.search(
        collection_name="my_collection",
        query_vector=[0.9, 0.1, 0.1, 0.5],
        limit=2,
    )

    # Your async code using AsyncQdrantClient might be put here
    # ...


asyncio.run(main())
```

The `AsyncQdrantClient` provides the same methods as the synchronous counterpart `QdrantClient`. If you already have a synchronous codebase, switching to async API is as simple as replacing `QdrantClient` with `AsyncQdrantClient` and adding `await` before each method call.

<aside role="status">
The asynchronous client was introduced in <code>qdrant-client</code> version 1.6.1. If you are using an older version, you need to use autogenerated async clients directly.
</aside>

## Supported Python libraries

Qdrant integrates with numerous Python libraries. Until recently, only [Langchain](https://python.langchain.com) provided async Python API support. Qdrant is the only vector database with full coverage of async API in Langchain.
Their documentation [describes how to use it](https://python.langchain.com/docs/modules/data_connection/vectorstores/#asynchronous-operations).
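As a closing note on the antipattern mentioned earlier — calling blocking IO inside async code — here is a stdlib-only sketch showing how a single blocking call serializes the entire event loop, while a proper awaitable lets tasks overlap. The coroutine names are illustrative only:

```python
import asyncio
import time


async def blocking_wait():
    time.sleep(0.05)  # blocks the whole event loop - the antipattern


async def async_wait():
    await asyncio.sleep(0.05)  # yields control while waiting


async def run_four(factory):
    start = time.perf_counter()
    await asyncio.gather(*(factory() for _ in range(4)))
    return time.perf_counter() - start


blocking_time = asyncio.run(run_four(blocking_wait))  # ~4 x 0.05 s, serialized
async_time = asyncio.run(run_four(async_wait))        # ~0.05 s, overlapped
```

This is why a synchronous `QdrantClient` call inside an async endpoint degrades the whole service: every other request waits behind it.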
documentation/tutorials/async-api.md
---
title: Create and restore from snapshot
weight: 14
---

# Create and restore collections from snapshot

| Time: 20 min | Level: Beginner | | |
|--------------|-----------------|--|----|

A collection is a basic unit of data storage in Qdrant. It contains vectors, their IDs, and payloads. However, keeping the search efficient requires additional data structures to be built on top of the data. Building these data structures may take a while, especially for large collections. That's why using snapshots is the best way to export and import Qdrant collections, as they contain all the bits and pieces required to restore the entire collection efficiently.

This tutorial will show you how to create a snapshot of a collection and restore it. Since working with snapshots in a distributed environment might seem a bit more complex, we will use a 3-node Qdrant cluster. However, the same approach applies to a single-node setup.

<aside role="status">Snapshots cannot be created in local mode of Python SDK. You need to spin up a Qdrant Docker container or use Qdrant Cloud.</aside>

## Prerequisites

Let's assume you already have a running Qdrant instance or a cluster. If not, you can follow the [installation guide](/documentation/guides/installation) to set up a local Qdrant instance or use [Qdrant Cloud](https://cloud.qdrant.io/) to create a cluster in a few clicks.

Once the cluster is running, let's install the required dependencies:

```shell
pip install qdrant-client datasets
```

### Establish a connection to Qdrant

We are going to use the Python SDK and raw HTTP calls to interact with Qdrant. Since we are going to use a 3-node cluster, we need to know the URLs of all the nodes.
For simplicity, let's keep them in constants, along with the API key, so we can refer to them later:

```python
QDRANT_MAIN_URL = "https://my-cluster.com:6333"
QDRANT_NODES = (
    "https://node-0.my-cluster.com:6333",
    "https://node-1.my-cluster.com:6333",
    "https://node-2.my-cluster.com:6333",
)
QDRANT_API_KEY = "my-api-key"
```

<aside role="status">If you are using Qdrant Cloud, you can find the URL and API key in the <a href="https://cloud.qdrant.io/">Qdrant Cloud dashboard</a>.</aside>

We can now create a client instance:

```python
from qdrant_client import QdrantClient

client = QdrantClient(QDRANT_MAIN_URL, api_key=QDRANT_API_KEY)
```

First of all, we are going to create a collection from a precomputed dataset. If you already have a collection, you can skip this step and start by [creating a snapshot](#create-and-download-snapshots).

<details>
<summary>(Optional) Create collection and import data</summary>

### Load the dataset

We are going to use a dataset with precomputed embeddings, available on Hugging Face Hub. The dataset is called [Qdrant/arxiv-titles-instructorxl-embeddings](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) and was created using the [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) model. It contains 2.25M embeddings for the titles of the papers from the [arXiv](https://arxiv.org/) dataset.

Loading the dataset is as simple as:

```python
from datasets import load_dataset

dataset = load_dataset(
    "Qdrant/arxiv-titles-instructorxl-embeddings", split="train", streaming=True
)
```

We used the streaming mode, so the dataset is not loaded into memory.
Instead, we can iterate through it and extract the id and vector embedding:

```python
for payload in dataset:
    id = payload.pop("id")
    vector = payload.pop("vector")
    print(id, vector, payload)
```

A single payload looks like this:

```json
{
    "title": "Dynamics of partially localized brane systems",
    "DOI": "1109.1415"
}
```

### Create a collection

First things first, we need to create our collection. We are not going to tweak its configuration, but it makes sense to create the collection right away, as the configuration is also a part of the collection snapshot.

```python
from qdrant_client import models

client.recreate_collection(
    collection_name="test_collection",
    vectors_config=models.VectorParams(
        size=768,  # Size of the embedding vector generated by the InstructorXL model
        distance=models.Distance.COSINE
    ),
)
```

### Upload the dataset

Calculating the embeddings is usually a bottleneck of vector search pipelines, but we are happy to have them in place already. Since the goal of this tutorial is to show how to create a snapshot, **we are going to upload only a small part of the dataset**.

```python
ids, vectors, payloads = [], [], []
for payload in dataset:
    id = payload.pop("id")
    vector = payload.pop("vector")

    ids.append(id)
    vectors.append(vector)
    payloads.append(payload)

    # We are going to upload only 1000 vectors
    if len(ids) == 1000:
        break

client.upsert(
    collection_name="test_collection",
    points=models.Batch(
        ids=ids,
        vectors=vectors,
        payloads=payloads,
    ),
)
```

Our collection is now ready to be used for search. Let's create a snapshot of it.

</details>

If you already have a collection, you can skip the previous step and start by [creating a snapshot](#create-and-download-snapshots).

## Create and download snapshots

Qdrant exposes an HTTP endpoint to request creating a snapshot, but we can also call it with the Python SDK. Our setup consists of 3 nodes, so we need to call the endpoint **on each of them** and create a snapshot on each node.
When using the Python SDK, that means creating a separate client instance for each node.

<aside role="status">You may get a timeout error if the collection is large. You can trigger the snapshot process in the background, without waiting for the result, by using the <code>wait=false</code> parameter. You can always <a href="/documentation/concepts/snapshots/#list-snapshot">list all the snapshots through the API</a> later on.</aside>

```python
snapshot_urls = []
for node_url in QDRANT_NODES:
    node_client = QdrantClient(node_url, api_key=QDRANT_API_KEY)
    snapshot_info = node_client.create_snapshot(collection_name="test_collection")

    snapshot_url = f"{node_url}/collections/test_collection/snapshots/{snapshot_info.name}"
    snapshot_urls.append(snapshot_url)
```

```http
// for `https://node-0.my-cluster.com:6333`
POST /collections/test_collection/snapshots

// for `https://node-1.my-cluster.com:6333`
POST /collections/test_collection/snapshots

// for `https://node-2.my-cluster.com:6333`
POST /collections/test_collection/snapshots
```

<details>
<summary>Response</summary>

```json
{
    "result": {
        "name": "test_collection-559032209313046-2024-01-03-13-20-11.snapshot",
        "creation_time": "2024-01-03T13:20:11",
        "size": 18956800
    },
    "status": "ok",
    "time": 0.307644965
}
```

</details>

Once we have the snapshot URLs, we can download them. Please make sure to include the API key in the request headers. Downloading the snapshot **can be done only through the HTTP API**, so we are going to use the `requests` library.
```python
import requests
import os

# Create a directory to store snapshots
os.makedirs("snapshots", exist_ok=True)

local_snapshot_paths = []
for snapshot_url in snapshot_urls:
    snapshot_name = os.path.basename(snapshot_url)
    local_snapshot_path = os.path.join("snapshots", snapshot_name)

    response = requests.get(
        snapshot_url, headers={"api-key": QDRANT_API_KEY}
    )
    response.raise_for_status()
    with open(local_snapshot_path, "wb") as f:
        f.write(response.content)

    local_snapshot_paths.append(local_snapshot_path)
```

Alternatively, you can use the `wget` command:

```bash
wget https://node-0.my-cluster.com:6333/collections/test_collection/snapshots/test_collection-559032209313046-2024-01-03-13-20-11.snapshot \
    --header="api-key: ${QDRANT_API_KEY}" \
    -O node-0-snapshot.snapshot

wget https://node-1.my-cluster.com:6333/collections/test_collection/snapshots/test_collection-559032209313047-2024-01-03-13-20-12.snapshot \
    --header="api-key: ${QDRANT_API_KEY}" \
    -O node-1-snapshot.snapshot

wget https://node-2.my-cluster.com:6333/collections/test_collection/snapshots/test_collection-559032209313048-2024-01-03-13-20-13.snapshot \
    --header="api-key: ${QDRANT_API_KEY}" \
    -O node-2-snapshot.snapshot
```

The snapshots are now stored locally. We can use them to restore the collection to a different Qdrant instance, or treat them as a backup. We will create another collection using the same data on the same cluster.

## Restore from snapshot

Our brand-new snapshot is ready to be restored. Typically, it is used to move a collection to a different Qdrant instance, but we are going to use it to create a new collection on the same cluster. It is just going to have a different name, `test_collection_import`. We do not need to create the collection first, as it is going to be created automatically.

Restoring a collection is also done separately on each node, but our Python SDK does not support it yet. We are going to use the HTTP API instead, and send a request to each node using the `requests` library.

```python
for node_url, snapshot_path in zip(QDRANT_NODES, local_snapshot_paths):
    snapshot_name = os.path.basename(snapshot_path)
    requests.post(
        f"{node_url}/collections/test_collection_import/snapshots/upload?priority=snapshot",
        headers={
            "api-key": QDRANT_API_KEY,
        },
        files={"snapshot": (snapshot_name, open(snapshot_path, "rb"))},
    )
```

Alternatively, you can use the `curl` command:

```bash
curl -X POST 'https://node-0.my-cluster.com:6333/collections/test_collection_import/snapshots/upload?priority=snapshot' \
    -H "api-key: ${QDRANT_API_KEY}" \
    -H 'Content-Type:multipart/form-data' \
    -F 'snapshot=@node-0-snapshot.snapshot'

curl -X POST 'https://node-1.my-cluster.com:6333/collections/test_collection_import/snapshots/upload?priority=snapshot' \
    -H "api-key: ${QDRANT_API_KEY}" \
    -H 'Content-Type:multipart/form-data' \
    -F 'snapshot=@node-1-snapshot.snapshot'

curl -X POST 'https://node-2.my-cluster.com:6333/collections/test_collection_import/snapshots/upload?priority=snapshot' \
    -H "api-key: ${QDRANT_API_KEY}" \
    -H 'Content-Type:multipart/form-data' \
    -F 'snapshot=@node-2-snapshot.snapshot'
```

**Important:** We selected `priority=snapshot` to make sure that the snapshot is preferred over the data stored on the node. You can read more about the priority in the [documentation](/documentation/concepts/snapshots/#snapshot-priority).
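Before triggering the restore, it can be worth verifying that every downloaded snapshot file actually exists and is non-empty — an interrupted download would otherwise only surface later as a confusing restore error. A minimal stdlib-only sketch (the `validate_snapshot_files` helper is ours, not part of the Qdrant SDK):

```python
import os
import tempfile


def validate_snapshot_files(paths):
    # Raise early if any snapshot file is missing or empty
    for path in paths:
        if not os.path.isfile(path):
            raise FileNotFoundError(f"Snapshot not found: {path}")
        if os.path.getsize(path) == 0:
            raise ValueError(f"Snapshot is empty: {path}")


# Demo with a temporary directory standing in for the local `snapshots` folder
with tempfile.TemporaryDirectory() as tmp:
    good = os.path.join(tmp, "node-0-snapshot.snapshot")
    with open(good, "wb") as f:
        f.write(b"\x00" * 16)  # fake snapshot content
    validate_snapshot_files([good])  # passes silently
```

In the tutorial above, you would call this on `local_snapshot_paths` right after the download loop.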
documentation/tutorials/create-snapshot.md
---
title: Multitenancy with LlamaIndex
weight: 18
---

# Multitenancy with LlamaIndex

If you are building a service that serves vectors for many independent users, and you want to isolate their data, the best practice is to use a single collection with payload-based partitioning. This approach is called **multitenancy**. Our guide on [Separate Partitions](/documentation/guides/multiple-partitions/) describes how to set it up in general, but if you use [LlamaIndex](/documentation/integrations/llama-index/) as a backend, you may prefer more specific instructions. So here they are!

## Prerequisites

This tutorial assumes that you have already installed Qdrant and LlamaIndex. If you haven't, please run the following commands:

```bash
pip install qdrant-client llama-index
```

We are going to use a local Docker-based instance of Qdrant. If you want to use a remote instance, please adjust the code accordingly. Here is how we can start a local instance:

```bash
docker run -d --name qdrant -p 6333:6333 -p 6334:6334 qdrant/qdrant:latest
```

## Setting up LlamaIndex pipeline

We are going to implement an end-to-end example of a multitenant application using LlamaIndex. We'll be indexing the documentation of different Python libraries, and we definitely don't want any users to see the results coming from a library they are not interested in. In real-world scenarios, this is even more dangerous, as the documents may contain sensitive information.

### Creating vector store

[QdrantVectorStore](https://docs.llamaindex.ai/en/stable/examples/vector_stores/QdrantIndexDemo.html) is a wrapper around Qdrant that provides all the necessary methods to work with your vector database in LlamaIndex. Let's create a vector store for our collection. It requires setting a collection name and passing an instance of `QdrantClient`.
```python
from qdrant_client import QdrantClient
from llama_index.vector_stores import QdrantVectorStore


client = QdrantClient("http://localhost:6333")
vector_store = QdrantVectorStore(
    collection_name="my_collection",
    client=client,
)
```

### Defining chunking strategy and embedding model

Any semantic search application requires a way to convert text queries into vectors - an embedding model. `ServiceContext` is a bundle of commonly used resources used during the indexing and querying stage in any LlamaIndex application. We can also use it to set up an embedding model - in our case, a local [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5).

```python
from llama_index import ServiceContext

service_context = ServiceContext.from_defaults(
    embed_model="local:BAAI/bge-small-en-v1.5",
)
```

We can also control how our documents are split into chunks, or nodes, using LlamaIndex's terminology. The `SimpleNodeParser` splits documents into fixed-length chunks with an overlap. The defaults are reasonable, but we can also adjust them if we want to. Both values are defined in tokens.

```python
from llama_index.node_parser import SimpleNodeParser

node_parser = SimpleNodeParser.from_defaults(chunk_size=512, chunk_overlap=32)
```

Now we also need to inform the `ServiceContext` about our choices:

```python
service_context = ServiceContext.from_defaults(
    embed_model="local:BAAI/bge-small-en-v1.5",
    node_parser=node_parser,
)
```

Both the embedding model and the selected node parser will be implicitly used during indexing and querying.

### Combining everything together

The last missing piece, before we can start indexing, is the `VectorStoreIndex`. It is a wrapper around `VectorStore` that provides a convenient interface for indexing and querying. It also requires a `ServiceContext` to be initialized.
```python
from llama_index import VectorStoreIndex

index = VectorStoreIndex.from_vector_store(
    vector_store=vector_store, service_context=service_context
)
```

## Indexing documents

No matter how our documents are generated, LlamaIndex will automatically split them into nodes if required, encode them using the selected embedding model, and then store them in the vector store. Let's define some documents manually and insert them into the Qdrant collection. Our documents are going to have a single metadata attribute - a library name they belong to.

```python
from llama_index.schema import Document

documents = [
    Document(
        text="LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models.",
        metadata={
            "library": "llama-index",
        },
    ),
    Document(
        text="Qdrant is a vector database & vector similarity search engine.",
        metadata={
            "library": "qdrant",
        },
    ),
]
```

Now we can index them using our `VectorStoreIndex`:

```python
for document in documents:
    index.insert(document)
```

### Performance considerations

Our documents have been split into nodes, encoded using the embedding model, and stored in the vector store. However, we don't want to allow our users to search across all the documents in the collection, but only across the documents that belong to a library they are interested in. For that reason, we need to set up a Qdrant [payload index](/documentation/concepts/indexing/#payload-index), so the search is more efficient.

```python
from qdrant_client import models

client.create_payload_index(
    collection_name="my_collection",
    field_name="metadata.library",
    field_type=models.PayloadSchemaType.KEYWORD,
)
```

The payload index is not the only thing we want to change. Since none of the search queries will be executed on the whole collection, we can also change its configuration, so the HNSW graph is not built globally. This is also done for [performance reasons](/documentation/guides/multiple-partitions/#calibrate-performance).
**You should not change these parameters if you expect global search operations to be performed on this collection.**

```python
client.update_collection(
    collection_name="my_collection",
    hnsw_config=models.HnswConfigDiff(payload_m=16, m=0),
)
```

Once both operations are completed, we can start searching for our documents.

<aside role="status">These steps are done just once, when you index your first documents!</aside>

## Querying documents with constraints

Let's assume we are searching for some information about large language models, but are only allowed to use the Qdrant documentation. LlamaIndex has a concept of retrievers, responsible for finding the most relevant nodes for a given query. Our `VectorStoreIndex` can be used as a retriever, with some additional constraints - in our case, the value of the `library` metadata attribute.

```python
from llama_index.vector_stores.types import MetadataFilters, ExactMatchFilter

qdrant_retriever = index.as_retriever(
    filters=MetadataFilters(
        filters=[
            ExactMatchFilter(
                key="library",
                value="qdrant",
            )
        ]
    )
)

nodes_with_scores = qdrant_retriever.retrieve("large language models")
for node in nodes_with_scores:
    print(node.text, node.score)
# Output: Qdrant is a vector database & vector similarity search engine. 0.60551536
```

The description of Qdrant was the best match, even though it didn't mention large language models at all. However, it was the only document that belonged to the `qdrant` library, so there was no other choice. Let's try to search for something that is not present in the collection.
Let's define another retriever, this time for the `llama-index` library:

```python
llama_index_retriever = index.as_retriever(
    filters=MetadataFilters(
        filters=[
            ExactMatchFilter(
                key="library",
                value="llama-index",
            )
        ]
    )
)

nodes_with_scores = llama_index_retriever.retrieve("large language models")
for node in nodes_with_scores:
    print(node.text, node.score)
# Output: LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models. 0.63576734
```

The results returned by both retrievers are different, due to the different constraints, so we have implemented a real multitenant search application!
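The guarantee behind this setup can be reduced to a simple invariant: every stored node carries a tenant tag, and every query applies that tag as a filter before any similarity scoring happens. Here is a stdlib-only toy illustrating that invariant (the data and the `retrieve` function are illustrative, not LlamaIndex or Qdrant APIs):

```python
# Each "node" carries the tenant tag, mirroring the `library` metadata above
nodes = [
    {"text": "LlamaIndex is a simple, flexible data framework ...", "library": "llama-index"},
    {"text": "Qdrant is a vector database & vector similarity search engine.", "library": "qdrant"},
]


def retrieve(query: str, library: str) -> list:
    # A real retriever ranks candidates by vector similarity; the point here
    # is only that the tenant filter is applied unconditionally, so one
    # tenant can never see another tenant's nodes.
    return [n for n in nodes if n["library"] == library]


results = retrieve("large language models", "qdrant")
assert all(n["library"] == "qdrant" for n in results)
```

Because the filter is applied server-side in Qdrant (and, with the payload index in place, cheaply), the isolation does not depend on any post-processing in the application code.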
documentation/tutorials/llama-index-multitenancy.md
---
title: Tutorials
weight: 23
# If the index.md file is empty, the link to the section will be hidden from the sidebar
is_empty: false
aliases:
  - how-to
  - tutorials
---

# Tutorials

These tutorials demonstrate different ways you can build vector search into your applications.

| Tutorial | Description | Stack |
|------------------------------------------------------------------------|-------------------------------------------------------------------|----------------------------|
| [Configure Optimal Use](../tutorials/optimize/) | Configure Qdrant collections for best resource use. | Qdrant |
| [Separate Partitions](../tutorials/multiple-partitions/) | Serve vectors for many independent users. | Qdrant |
| [Bulk Upload Vectors](../tutorials/bulk-upload/) | Upload a large scale dataset. | Qdrant |
| [Create Dataset Snapshots](../tutorials/create-snapshot/) | Turn a dataset into a snapshot by exporting it from a collection. | Qdrant |
| [Semantic Search for Beginners](../tutorials/search-beginners/) | Create a simple search engine locally in minutes. | Qdrant |
| [Simple Neural Search](../tutorials/neural-search/) | Build and deploy a neural search that browses startup data. | Qdrant, BERT, FastAPI |
| [Aleph Alpha Search](../tutorials/aleph-alpha-search/) | Build a multimodal search that combines text and image data. | Qdrant, Aleph Alpha |
| [Mighty Semantic Search](../tutorials/mighty/) | Build a simple semantic search with an on-demand NLP service. | Qdrant, Mighty |
| [Asynchronous API](../tutorials/async-api/) | Communicate with Qdrant server asynchronously with the Python SDK. | Qdrant, Python |
| [Multitenancy with LlamaIndex](../tutorials/llama-index-multitenancy/) | Handle data coming from multiple users in LlamaIndex. | Qdrant, Python, LlamaIndex |
| [HuggingFace datasets](../tutorials/huggingface-datasets/) | Load a Hugging Face dataset to Qdrant. | Qdrant, Python, datasets |
| [Measure retrieval quality](../tutorials/retrieval-quality/) | Measure and fine-tune the retrieval quality. | Qdrant, Python, datasets |
| [Troubleshooting](../tutorials/common-errors/) | Solutions to common errors and fixes. | Qdrant |
documentation/tutorials/_index.md
---
title: Airbyte
weight: 1000
aliases: [ ../integrations/airbyte/ ]
---

# Airbyte

[Airbyte](https://airbyte.com/) is an open-source data integration platform that helps you replicate your data between different systems. It has a [growing list of connectors](https://docs.airbyte.io/integrations) that can be used to ingest data from multiple sources. Building data pipelines is also crucial for managing the data in Qdrant, and Airbyte is a great tool for this purpose. Airbyte can take care of the data ingestion from a selected source, while Qdrant will help you build a search engine on top of it.

There are three supported modes of how the data can be ingested into Qdrant:

* **Full Refresh Sync**
* **Incremental - Append Sync**
* **Incremental - Append + Deduped**

You can read more about these modes in the [Airbyte documentation](https://docs.airbyte.io/integrations/destinations/qdrant).

## Prerequisites

Before you start, make sure you have the following:

1. Airbyte instance, either [Open Source](https://airbyte.com/solutions/airbyte-open-source), [Self-Managed](https://airbyte.com/solutions/airbyte-enterprise), or [Cloud](https://airbyte.com/solutions/airbyte-cloud).
2. Running instance of Qdrant. It has to be accessible by URL from the machine where Airbyte is running. You can follow the [installation guide](/documentation/guides/installation/) to set up Qdrant.

## Setting up Qdrant as a destination

Once you have a running instance of Airbyte, you can set up Qdrant as a destination directly in the UI. Airbyte's Qdrant destination is connected with a single collection in Qdrant.

![Airbyte Qdrant destination](/documentation/frameworks/airbyte/qdrant-destination.png)

### Text processing

Airbyte has some built-in mechanisms to transform your texts into embeddings. You can choose how you want to chunk your fields into pieces before calculating the embeddings, and also which fields should be used to create the point payload.
![Processing settings](/documentation/frameworks/airbyte/processing.png)

### Embeddings

You can choose the model that will be used to calculate the embeddings. Currently, Airbyte supports multiple models, including OpenAI and Cohere.

![Embeddings settings](/documentation/frameworks/airbyte/embedding.png)

Using some precomputed embeddings from your data source is also possible. In this case, you can pass the field name containing the embeddings and their dimensionality.

![Precomputed embeddings settings](/documentation/frameworks/airbyte/precomputed-embedding.png)

### Qdrant connection details

Finally, we can configure the target Qdrant instance and collection. In case you use the built-in authentication mechanism, here is where you can pass the token.

![Qdrant connection details](/documentation/frameworks/airbyte/qdrant-config.png)

Once you confirm creating the destination, Airbyte will test whether the specified Qdrant cluster is accessible and can be used as a destination.

## Setting up connection

Airbyte combines sources and destinations into a single entity called a connection. Once you have a source and a destination configured, you can create a connection between them. It doesn't matter what source you use, as long as Airbyte supports it. The process is pretty straightforward, but depends on the source you use.

![Airbyte connection](/documentation/frameworks/airbyte/connection.png)

More information about creating connections can be found in the [Airbyte documentation](https://docs.airbyte.com/understanding-airbyte/connections/).
documentation/frameworks/airbyte.md
---
title: Stanford DSPy
weight: 1500
aliases: [ ../integrations/dspy/ ]
---

# Stanford DSPy

[DSPy](https://github.com/stanfordnlp/dspy) is a framework for solving advanced tasks with language models (LMs) and retrieval models (RMs). It unifies techniques for prompting and fine-tuning LMs — and approaches for reasoning, self-improvement, and augmentation with retrieval and tools.

- Provides composable and declarative modules for instructing LMs in a familiar Pythonic syntax.
- Introduces an automatic compiler that teaches LMs how to conduct the declarative steps in your program.

Qdrant can be used as a retrieval mechanism in the DSPy flow.

## Installation

For the Qdrant retrieval integration, include `dspy-ai` with the `qdrant` extra:

```bash
pip install "dspy-ai[qdrant]"
```

## Usage

We can configure `DSPy` settings to use the Qdrant retriever model like so:

```python
import dspy
from dspy.retrieve.qdrant_rm import QdrantRM

from qdrant_client import QdrantClient

turbo = dspy.OpenAI(model="gpt-3.5-turbo")
qdrant_client = QdrantClient()  # Defaults to a local instance at http://localhost:6333/
qdrant_retriever_model = QdrantRM("collection-name", qdrant_client, k=3)

dspy.settings.configure(lm=turbo, rm=qdrant_retriever_model)
```

Using the retriever is pretty simple. The `dspy.Retrieve(k)` module will search for the top-k passages that match a given query.

```python
retrieve = dspy.Retrieve(k=3)
question = "Some question about my data"
topK_passages = retrieve(question).passages

print(f"Top {retrieve.k} passages for question: {question} \n", "\n")

for idx, passage in enumerate(topK_passages):
    print(f"{idx+1}]", passage, "\n")
```

With Qdrant configured as the retriever for contexts, you can set up a DSPy module like so:

```python
class RAG(dspy.Module):
    def __init__(self, num_passages=3):
        super().__init__()
        self.retrieve = dspy.Retrieve(k=num_passages)
        ...

    def forward(self, question):
        context = self.retrieve(question).passages
        ...
```

With the generic RAG blueprint now in place, you can build on it with the many techniques DSPy offers, using context retrieval powered by Qdrant.

## Next steps

Find DSPy usage docs and examples [here](https://github.com/stanfordnlp/dspy#4-documentation--tutorials).
documentation/frameworks/dspy.md
---
title: Apache Spark
weight: 1400
aliases: [ ../integrations/spark/ ]
---

# Apache Spark

[Spark](https://spark.apache.org/) is a leading distributed computing framework that empowers you to work with massive datasets efficiently. When it comes to leveraging the power of Spark for your data processing needs, the [Qdrant-Spark Connector](https://github.com/qdrant/qdrant-spark) is worth considering. This connector enables Qdrant to serve as a storage destination in Spark, offering a seamless bridge between the two.

## Installation

You can set up the Qdrant-Spark Connector in a few different ways, depending on your preferences and requirements.

### GitHub Releases

The simplest way to get started is by downloading pre-packaged JAR file releases from the [Qdrant-Spark GitHub releases page](https://github.com/qdrant/qdrant-spark/releases). These JAR files come with all the necessary dependencies to get you going.

### Building from Source

If you prefer to build the JAR from source, you'll need [JDK 8](https://www.azul.com/downloads/#zulu) and [Maven](https://maven.apache.org/) installed on your system. Once you have the prerequisites in place, navigate to the project's root directory and run the following command:

```bash
mvn package
```

This command will compile the source code and generate a fat JAR, which will be stored in the `target` directory by default.

### Maven Central

For Java and Scala projects, you can also obtain the Qdrant-Spark Connector from [Maven Central](https://central.sonatype.com/artifact/io.qdrant/spark).

```xml
<dependency>
    <groupId>io.qdrant</groupId>
    <artifactId>spark</artifactId>
    <version>2.0.0</version>
</dependency>
```

## Getting Started

After successfully installing the Qdrant-Spark Connector, you can start integrating Qdrant with your Spark applications. Below, we'll walk through the basic steps of creating a Spark session with Qdrant support and loading data into Qdrant.
### Creating a single-node Spark session with Qdrant Support To begin, import the necessary libraries and create a Spark session with Qdrant support. Here's how: ```python from pyspark.sql import SparkSession spark = SparkSession.builder.config( "spark.jars", "spark-2.0.jar", # Specify the downloaded JAR file ) .master("local[*]") .appName("qdrant") .getOrCreate() ``` ```scala import org.apache.spark.sql.SparkSession val spark = SparkSession.builder .config("spark.jars", "spark-2.0.jar") // Specify the downloaded JAR file .master("local[*]") .appName("qdrant") .getOrCreate() ``` ```java import org.apache.spark.sql.SparkSession; public class QdrantSparkJavaExample { public static void main(String[] args) { SparkSession spark = SparkSession.builder() .config("spark.jars", "spark-2.0.jar") // Specify the downloaded JAR file .master("local[*]") .appName("qdrant") .getOrCreate(); ... } } ``` ### Loading Data into Qdrant <aside role="status">To load data into Qdrant, you'll need to create a collection with the appropriate vector dimensions and configurations in advance.</aside> Here's how you can use the Qdrant-Spark Connector to upsert data: ```python <YourDataFrame> .write .format("io.qdrant.spark.Qdrant") .option("qdrant_url", <QDRANT_GRPC_URL>) # REST URL of the Qdrant instance .option("collection_name", <QDRANT_COLLECTION_NAME>) # Name of the collection to write data into .option("embedding_field", <EMBEDDING_FIELD_NAME>) # Name of the field holding the embeddings .option("schema", <YourDataFrame>.schema.json()) # JSON string of the dataframe schema .mode("append") .save() ``` ```scala <YourDataFrame> .write .format("io.qdrant.spark.Qdrant") .option("qdrant_url", QDRANT_GRPC_URL) // REST URL of the Qdrant instance .option("collection_name", QDRANT_COLLECTION_NAME) // Name of the collection to write data into .option("embedding_field", EMBEDDING_FIELD_NAME) // Name of the field holding the embeddings .option("schema", <YourDataFrame>.schema.json()) // JSON string of 
the dataframe schema .mode("append") .save() ``` ```java <YourDataFrame> .write() .format("io.qdrant.spark.Qdrant") .option("qdrant_url", QDRANT_GRPC_URL) // REST URL of the Qdrant instance .option("collection_name", QDRANT_COLLECTION_NAME) // Name of the collection to write data into .option("embedding_field", EMBEDDING_FIELD_NAME) // Name of the field holding the embeddings .option("schema", <YourDataFrame>.schema().json()) // JSON string of the dataframe schema .mode("append") .save(); ``` ## Databricks You can use the `qdrant-spark` connector as a library in [Databricks](https://www.databricks.com/) to ingest data into Qdrant. - Go to the `Libraries` section in your cluster dashboard. - Select `Install New` to open the library installation modal. - Search for `io.qdrant:spark:2.0.0` in the Maven packages and click `Install`. ![Databricks](/documentation/frameworks/spark/databricks.png) ## Datatype Support Qdrant supports all the Spark data types, and the appropriate data types are mapped based on the provided schema. ## Options and Spark Types The Qdrant-Spark Connector provides a range of options to fine-tune your data integration process. Here's a quick reference: | Option | Description | DataType | Required | | :---------------- | :------------------------------------------------------------------------ | :--------------------- | :------- | | `qdrant_url` | GRPC URL of the Qdrant instance. Eg: <http://localhost:6334> | `StringType` | ✅ | | `collection_name` | Name of the collection to write data into | `StringType` | ✅ | | `embedding_field` | Name of the field holding the embeddings | `ArrayType(FloatType)` | ✅ | | `schema` | JSON string of the dataframe schema | `StringType` | ✅ | | `id_field` | Name of the field holding the point IDs. Default: Generates a random UUId | `StringType` | ❌ | | `batch_size` | Max size of the upload batch. Default: 100 | `IntType` | ❌ | | `retries` | Number of upload retries. 
Default: 3 | `IntType` | ❌ | | `api_key` | Qdrant API key to be sent in the header. Default: null | `StringType` | ❌ | | `vector_name` | Name of the vector in the collection. Default: null | `StringType` | ❌ | For more information, be sure to check out the [Qdrant-Spark GitHub repository](https://github.com/qdrant/qdrant-spark). The Apache Spark guide is available [here](https://spark.apache.org/docs/latest/quick-start.html). Happy data processing!
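The option table above maps one-to-one onto `DataFrameWriter.option()` calls. As a quick sanity-check sketch (plain Python with hypothetical helper names, not part of the connector), the required options and documented defaults can be modelled like this:

```python
# Sketch: assemble and validate Qdrant-Spark write options before applying
# them via DataFrameWriter.option(). Helper names here are illustrative,
# not part of the connector itself.

REQUIRED = {"qdrant_url", "collection_name", "embedding_field", "schema"}
OPTIONAL_DEFAULTS = {"batch_size": 100, "retries": 3}  # defaults from the table

def build_qdrant_options(**opts):
    """Return a complete option mapping, filling in documented defaults."""
    missing = REQUIRED - opts.keys()
    if missing:
        raise ValueError(f"missing required options: {sorted(missing)}")
    return {**OPTIONAL_DEFAULTS, **opts}

options = build_qdrant_options(
    qdrant_url="http://localhost:6334",         # GRPC URL of the instance
    collection_name="my_collection",
    embedding_field="embedding",
    schema='{"type": "struct", "fields": []}',  # df.schema.json() in practice
)
# Applied as: writer = df.write.format("io.qdrant.spark.Qdrant")
#             for key, value in options.items(): writer = writer.option(key, str(value))
```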
documentation/frameworks/spark.md
--- title: Make.com weight: 1800 --- # Make.com [Make](https://www.make.com/) is a platform for anyone to design, build, and automate anything—from tasks and workflows to apps and systems without code. Find the comprehensive list of available Make apps [here](https://www.make.com/en/integrations). Qdrant is available as an [app](https://www.make.com/en/integrations/qdrant) within Make to add to your scenarios. ![Qdrant Make hero](/documentation/frameworks/make/hero-page.png) ## Prerequisites Before you start, make sure you have the following: 1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/). 2. An account at Make.com. You can register yourself [here](https://www.make.com/en/register). ## Setting up a connection Navigate to your scenario on the Make dashboard and select a Qdrant app module to start a connection. ![Qdrant Make connection](/documentation/frameworks/make/connection.png) You can now establish a connection to Qdrant using your [instance credentials](https://qdrant.tech/documentation/cloud/authentication/). ![Qdrant Make form](/documentation/frameworks/make/connection-form.png) ## Modules Modules represent actions that Make performs with an app. The Qdrant Make app enables you to trigger the following app modules. ![Qdrant Make modules](/documentation/frameworks/make/modules.png) The modules support mapping to connect the data retrieved by one module to another module to perform the desired action. You can read more about the data processing options available for the modules in the [Make reference](https://www.make.com/en/help/modules). ## Next steps - Find a list of Make workflow templates to connect with Qdrant [here](https://www.make.com/en/templates). - Make scenario reference docs can be found [here](https://www.make.com/en/help/scenarios).
documentation/frameworks/make.md
--- title: FiftyOne weight: 600 aliases: [ ../integrations/fifty-one ] --- # FiftyOne [FiftyOne](https://voxel51.com/) is an open-source toolkit designed to enhance computer vision workflows by optimizing dataset quality and providing valuable insights about your models. FiftyOne 0.20 includes a native integration with Qdrant, supporting workflows like [image similarity search](https://docs.voxel51.com/user_guide/brain.html#image-similarity) and [text search](https://docs.voxel51.com/user_guide/brain.html#text-similarity). Qdrant helps FiftyOne find the most similar images in the dataset using vector embeddings. FiftyOne is available as a Python package that can be installed as follows: ```bash pip install fiftyone ``` For more details, check out the FiftyOne documentation on the [Qdrant integration](https://docs.voxel51.com/integrations/qdrant.html).
documentation/frameworks/fifty-one.md
--- title: Langchain Go weight: 120 --- # Langchain Go [Langchain Go](https://tmc.github.io/langchaingo/docs/) is a framework for developing data-aware applications powered by language models in Go. You can use Qdrant as a vector store in Langchain Go. ## Setup Install the `langchaingo` project dependency: ```bash go get -u github.com/tmc/langchaingo ``` ## Usage Before you use the following code sample, customize the following values for your configuration: - `YOUR_QDRANT_REST_URL`: If you've set up Qdrant using the [Quick Start](/documentation/quick-start/) guide, set this value to `http://localhost:6333`. - `YOUR_COLLECTION_NAME`: Use our [Collections](/documentation/concepts/collections) guide to create or list collections. ```go import ( "log" "net/url" "github.com/tmc/langchaingo/embeddings" "github.com/tmc/langchaingo/llms/openai" "github.com/tmc/langchaingo/vectorstores/qdrant" ) llm, err := openai.New() if err != nil { log.Fatal(err) } e, err := embeddings.NewEmbedder(llm) if err != nil { log.Fatal(err) } url, err := url.Parse("YOUR_QDRANT_REST_URL") if err != nil { log.Fatal(err) } store, err := qdrant.New( qdrant.WithURL(*url), qdrant.WithCollectionName("YOUR_COLLECTION_NAME"), qdrant.WithEmbedder(e), ) if err != nil { log.Fatal(err) } ``` ## Further Reading - You can find usage examples of Langchain Go [here](https://github.com/tmc/langchaingo/tree/main/examples).
documentation/frameworks/langchain-go.md
--- title: Langchain4J weight: 110 --- # LangChain for Java LangChain for Java, also known as [Langchain4J](https://github.com/langchain4j/langchain4j), is a community port of [Langchain](https://www.langchain.com/) for building context-aware AI applications in Java. You can use Qdrant as a vector store in Langchain4J through the [`langchain4j-qdrant`](https://central.sonatype.com/artifact/dev.langchain4j/langchain4j-qdrant) module. ## Setup Add the `langchain4j-qdrant` dependency to your project. ```xml <dependency> <groupId>dev.langchain4j</groupId> <artifactId>langchain4j-qdrant</artifactId> <version>VERSION</version> </dependency> ``` ## Usage Before you use the following code sample, customize the following values for your configuration: - `YOUR_COLLECTION_NAME`: Use our [Collections](/documentation/concepts/collections) guide to create or list collections. - `YOUR_HOST_URL`: Use the GRPC URL for your system. If you used the [Quick Start](/documentation/quick-start/) guide, it may be http://localhost:6334. If you've deployed in the [Qdrant Cloud](/documentation/cloud/), you may have a longer URL such as `https://example.location.cloud.qdrant.io:6334`. - `YOUR_API_KEY`: Substitute the API key associated with your configuration. ```java import dev.langchain4j.data.segment.TextSegment; import dev.langchain4j.store.embedding.EmbeddingStore; import dev.langchain4j.store.embedding.qdrant.QdrantEmbeddingStore; EmbeddingStore<TextSegment> embeddingStore = QdrantEmbeddingStore.builder() // Ensure the collection is configured with the appropriate dimensions // of the embedding model. // Reference https://qdrant.tech/documentation/concepts/collections/ .collectionName("YOUR_COLLECTION_NAME") .host("YOUR_HOST_URL") // GRPC port of the Qdrant server .port(6334) .apiKey("YOUR_API_KEY") .build(); ``` `QdrantEmbeddingStore` supports all the semantic features of Langchain4J. ## Further Reading - You can refer to the [Langchain4J examples](https://github.com/langchain4j/langchain4j-examples/) to get started.
documentation/frameworks/langchain4j.md
--- title: OpenLLMetry weight: 2300 --- # OpenLLMetry OpenLLMetry from [Traceloop](https://www.traceloop.com/) is a set of extensions built on top of [OpenTelemetry](https://opentelemetry.io/) that gives you complete observability over your LLM application. OpenLLMetry supports instrumenting the `qdrant_client` Python library and exporting the traces to various observability platforms, as described in their [Integrations catalog](https://www.traceloop.com/docs/openllmetry/integrations/introduction#the-integrations-catalog). This page assumes you're using `qdrant-client` version 1.7.3 or above. ## Usage To set up OpenLLMetry, follow these steps: 1. Install the SDK: ```console pip install traceloop-sdk ``` 1. Instantiate the SDK: ```python from traceloop.sdk import Traceloop Traceloop.init() ``` You're now tracing your `qdrant_client` usage with OpenLLMetry! ## Without the SDK Since Traceloop provides standard OpenTelemetry instrumentations, you can use them as standalone packages. To do so, follow these steps: 1. Install the package: ```console pip install opentelemetry-instrumentation-qdrant ``` 1. Instantiate the `QdrantInstrumentor`. ```python from opentelemetry.instrumentation.qdrant import QdrantInstrumentor QdrantInstrumentor().instrument() ``` ## Further Reading - 📚 OpenLLMetry [API reference](https://www.traceloop.com/docs/api-reference/introduction)
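Under the hood, an instrumentor like `QdrantInstrumentor` works by wrapping the client's methods so that every call is recorded as a trace span before being forwarded. The following pure-Python sketch mimics that pattern with a toy client and an in-memory trace list; it illustrates the technique only and is not the OpenTelemetry API:

```python
# Conceptual sketch of method instrumentation: wrap a class method so each
# call appends a "span" record, then delegate to the original method.
# Toy code for illustration; real instrumentors emit OpenTelemetry spans.
import functools

TRACE = []  # collected "spans"

def instrument(cls, method_name):
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        TRACE.append({"span": f"{cls.__name__}.{method_name}"})
        return original(self, *args, **kwargs)

    setattr(cls, method_name, wrapper)

class FakeClient:
    """Stand-in for a vector DB client."""
    def search(self, collection, vector):
        return ["hit-1", "hit-2"]

instrument(FakeClient, "search")
hits = FakeClient().search("docs", [0.1, 0.2])  # call is traced transparently
```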
documentation/frameworks/openllmetry.md
--- title: LangChain weight: 100 aliases: [ ../integrations/langchain/ ] --- # LangChain LangChain is a library that makes developing applications based on Large Language Models much easier. It unifies the interfaces to different libraries, including major embedding providers and Qdrant. Using LangChain, you can focus on the business value instead of writing the boilerplate. LangChain comes with the Qdrant integration by default. It can be installed with pip: ```bash pip install langchain ``` Qdrant acts as a vector index that may store the embeddings with the documents used to generate them. There are various ways to use it, but calling `Qdrant.from_texts` is probably the most straightforward way to get started: ```python from langchain.vectorstores import Qdrant from langchain.embeddings import HuggingFaceEmbeddings embeddings = HuggingFaceEmbeddings( model_name="sentence-transformers/all-mpnet-base-v2" ) doc_store = Qdrant.from_texts( texts, embeddings, url="<qdrant-url>", api_key="<qdrant-api-key>", collection_name="texts" ) ``` Calling `Qdrant.from_documents` or `Qdrant.from_texts` will always recreate the collection and remove all the existing points. That's fine for some experiments, but in a real-world scenario you'll prefer not to start from scratch every single time. If you prefer reusing an existing collection, you can create an instance of Qdrant on your own: ```python import qdrant_client embeddings = HuggingFaceEmbeddings( model_name="sentence-transformers/all-mpnet-base-v2" ) client = qdrant_client.QdrantClient( "<qdrant-url>", api_key="<qdrant-api-key>", # For Qdrant Cloud, None for local instance ) doc_store = Qdrant( client=client, collection_name="texts", embeddings=embeddings, ) ``` ## Local mode The Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things out and debugging, or if you plan to store just a small amount of vectors. 
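For small datasets, local mode boils down to exact nearest-neighbour scoring over vectors held by the client. A toy pure-Python illustration of that ranking, using cosine similarity (a sketch of the idea, not the LangChain or Qdrant API):

```python
# Toy illustration of the ranking a vector store performs: score every stored
# vector against the query embedding and return the top-k document IDs.
# Real stores use approximate indexes (e.g. HNSW) instead of a full scan.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

store = {  # document id -> embedding (toy 3-dimensional vectors)
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.7, 0.7, 0.0],
    "doc-c": [0.0, 1.0, 0.0],
}

def similarity_search(query_vec, k=2):
    ranked = sorted(store, key=lambda d: cosine(store[d], query_vec), reverse=True)
    return ranked[:k]

result = similarity_search([1.0, 0.1, 0.0])
```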
The embeddings might be fully kept in memory or persisted on disk. ### In-memory For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook. ```python qdrant = Qdrant.from_documents( docs, embeddings, location=":memory:", # Local mode with in-memory storage only collection_name="my_documents", ) ``` ### On-disk storage Local mode, without using the Qdrant server, may also store your vectors on disk so they’re persisted between runs. ```python qdrant = Qdrant.from_documents( docs, embeddings, path="/tmp/local_qdrant", collection_name="my_documents", ) ``` ### On-premise server deployment No matter if you choose to launch Qdrant locally with [a Docker container](/documentation/guides/installation/), or select a Kubernetes deployment with [the official Helm chart](https://github.com/qdrant/qdrant-helm), the way you're going to connect to such an instance will be identical. You'll need to provide a URL pointing to the service. ```python url = "<---qdrant url here --->" qdrant = Qdrant.from_documents( docs, embeddings, url, prefer_grpc=True, collection_name="my_documents", ) ``` ## Next steps If you'd like to know more about running Qdrant in a LangChain-based application, please read our article [Question Answering with LangChain and Qdrant without boilerplate](/articles/langchain-integration/). Some more information might also be found in the [LangChain documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant).
documentation/frameworks/langchain.md
--- title: LlamaIndex weight: 200 aliases: [ ../integrations/llama-index/ ] --- # LlamaIndex (GPT Index) LlamaIndex (formerly GPT Index) acts as an interface between your external data and Large Language Models, so you can bring your private data and augment LLMs with it. LlamaIndex simplifies data ingestion and indexing, integrating Qdrant as a vector index. Installing LlamaIndex is straightforward if we use pip as a package manager. Qdrant is not installed by default, so we need to install it separately: ```bash pip install llama-index qdrant-client ``` LlamaIndex requires providing an instance of `QdrantClient`, so it can interact with the Qdrant server. ```python from llama_index.core import VectorStoreIndex  # llama-index >= 0.10 from llama_index.vector_stores.qdrant import QdrantVectorStore import qdrant_client client = qdrant_client.QdrantClient( "<qdrant-url>", api_key="<qdrant-api-key>", # For Qdrant Cloud, None for local instance ) vector_store = QdrantVectorStore(client=client, collection_name="documents") index = VectorStoreIndex.from_vector_store(vector_store=vector_store) ``` The library [comes with a notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/examples/vector_stores/QdrantIndexDemo.ipynb) that shows an end-to-end example of how to use Qdrant within LlamaIndex.
documentation/frameworks/llama-index.md
--- title: DLT weight: 1300 aliases: [ ../integrations/dlt/ ] --- # DLT (Data Load Tool) [DLT](https://dlthub.com/) is an open-source library that you can add to your Python scripts to load data from various and often messy data sources into well-structured, live datasets. With the DLT-Qdrant integration, you can now select Qdrant as a DLT destination to load data into. **DLT Enables** - Automated maintenance - with schema inference, alerts and short declarative code, maintenance becomes simple. - Run it where Python runs - on Airflow, serverless functions, notebooks. Scales on micro and large infrastructure alike. - User-friendly, declarative interface that removes knowledge obstacles for beginners while empowering senior professionals. ## Usage To get started, install `dlt` with the `qdrant` extra. ```bash pip install "dlt[qdrant]" ``` Configure the destination in the DLT secrets file. The file is located at `~/.dlt/secrets.toml` by default. Add the following section to the secrets file. ```toml [destination.qdrant.credentials] location = "https://your-qdrant-url" api_key = "your-qdrant-api-key" ``` If omitted, `location` defaults to `http://localhost:6333` with no `api_key`, which matches a local Qdrant instance. Find more information about DLT configurations [here](https://dlthub.com/docs/general-usage/credentials). Define the source of the data. ```python import dlt from dlt.destinations.qdrant import qdrant_adapter movies = [ { "title": "Blade Runner", "year": 1982, "description": "The film is about a dystopian vision of the future that combines noir elements with sci-fi imagery." }, { "title": "Ghost in the Shell", "year": 1995, "description": "The film is about a cyborg policewoman and her partner who set out to find the main culprit behind brain hacking, the Puppet Master." 
}, { "title": "The Matrix", "year": 1999, "description": "The movie is set in the 22nd century and tells the story of a computer hacker who joins an underground group fighting the powerful computers that rule the earth." } ] ``` <aside role="status"> A more comprehensive pipeline would load data from some API or use one of <a href="https://dlthub.com/docs/dlt-ecosystem/verified-sources">DLT's verified sources</a>. </aside> Define the pipeline. ```python pipeline = dlt.pipeline( pipeline_name="movies", destination="qdrant", dataset_name="movies_dataset", ) ``` Run the pipeline. ```python info = pipeline.run( qdrant_adapter( movies, embed=["title", "description"] ) ) ``` The data is now loaded into Qdrant. To use vector search after the data has been loaded, you must specify which fields Qdrant needs to generate embeddings for. You do that by wrapping the data (or [DLT resource](https://dlthub.com/docs/general-usage/resource)) with the `qdrant_adapter` function. ## Write disposition A DLT [write disposition](https://dlthub.com/docs/dlt-ecosystem/destinations/qdrant/#write-disposition) defines how the data should be written to the destination. All write dispositions are supported by the Qdrant destination. ## DLT Sync Qdrant destination supports syncing of the [`DLT` state](https://dlthub.com/docs/general-usage/state#syncing-state-with-destination). ## Next steps - The comprehensive Qdrant DLT destination documentation can be found [here](https://dlthub.com/docs/dlt-ecosystem/destinations/qdrant/).
documentation/frameworks/dlt.md
--- title: Apache Airflow weight: 2100 --- # Apache Airflow [Apache Airflow](https://airflow.apache.org/) is an open-source platform for authoring, scheduling and monitoring data and computing workflows. Airflow uses Python to create workflows that can be easily scheduled and monitored. Qdrant is available as a [provider](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/index.html) in Airflow to interface with the database. ## Prerequisites Before configuring Airflow, you need: 1. A Qdrant instance to connect to. You can set one up in our [installation guide](https://qdrant.tech/documentation/guides/installation). 2. A running Airflow instance. You can use their [Quick Start Guide](https://airflow.apache.org/docs/apache-airflow/stable/start.html). ## Setting up a connection Open the `Admin-> Connections` section of the Airflow UI. Click the `Create` link to create a new [Qdrant connection](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/connections.html). ![Qdrant connection](/documentation/frameworks/airflow/connection.png) You can also set up a connection using [environment variables](https://airflow.apache.org/docs/apache-airflow/stable/howto/connection.html#environment-variables-connections) or an [external secret backend](https://airflow.apache.org/docs/apache-airflow/stable/security/secrets/secrets-backend/index.html). ## Qdrant hook An Airflow hook is an abstraction of a specific API that allows Airflow to interact with an external system. ```python from airflow.providers.qdrant.hooks.qdrant import QdrantHook hook = QdrantHook(conn_id="qdrant_connection") hook.verify_connection() ``` A [`qdrant_client#QdrantClient`](https://pypi.org/project/qdrant-client/) instance is available via `@property conn` of the `QdrantHook` instance for use within your Airflow workflows. 
```python from qdrant_client import models hook.conn.count("<COLLECTION_NAME>") hook.conn.upsert( "<COLLECTION_NAME>", points=[ models.PointStruct(id=32, vector=[0.32, 0.12, 0.123], payload={"color": "red"}) ], ) ``` ## Qdrant Ingest Operator The Qdrant provider also includes a convenience operator for uploading data to a Qdrant collection that internally uses the Qdrant hook. ```python from airflow.providers.qdrant.operators.qdrant import QdrantIngestOperator vectors = [ [0.11, 0.22, 0.33, 0.44], [0.55, 0.66, 0.77, 0.88], [0.88, 0.11, 0.12, 0.13], ] ids = [32, 21, "b626f6a9-b14d-4af9-b7c3-43d8deb719a6"] payload = [{"meta": "data"}, {"meta": "data_2"}, {"meta": "data_3", "extra": "data"}] QdrantIngestOperator( conn_id="qdrant_connection", task_id="qdrant_ingest", collection_name="<COLLECTION_NAME>", vectors=vectors, ids=ids, payload=payload, ) ``` ## Reference - 📦 [Provider package PyPI](https://pypi.org/project/apache-airflow-providers-qdrant/) - 📚 [Provider docs](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/index.html)
documentation/frameworks/airflow.md
--- title: PrivateGPT weight: 1600 aliases: [ ../integrations/privategpt/ ] --- # PrivateGPT [PrivateGPT](https://docs.privategpt.dev/) is a production-ready AI project that allows you to inquire about your documents using Large Language Models (LLMs) with offline support. PrivateGPT uses Qdrant as the default vectorstore for ingesting and retrieving documents. ## Configuration Qdrant settings can be configured by setting values for the `qdrant` property in the `settings.yaml` file. By default, Qdrant tries to connect to an instance at `http://localhost:6333`. Example: ```yaml qdrant: url: "https://xyz-example.eu-central.aws.cloud.qdrant.io:6333" api_key: "<your-api-key>" ``` The available [configuration options](https://docs.privategpt.dev/manual/storage/vector-stores#qdrant-configuration) are: | Field | Description | |--------------|-------------| | location | If `:memory:` - use in-memory Qdrant instance.<br>If `str` - use it as a `url` parameter.| | url | Either host or str of `Optional[scheme], host, Optional[port], Optional[prefix]`.<br> Eg. `http://localhost:6333` | | port | Port of the REST API interface. Default: `6333` | | grpc_port | Port of the gRPC interface. Default: `6334` | | prefer_grpc | If `true` - use gRPC interface whenever possible in custom methods. | | https | If `true` - use HTTPS(SSL) protocol.| | api_key | API key for authentication in Qdrant Cloud.| | prefix | If set, add `prefix` to the REST URL path.<br>Example: `service/v1` will result in `http://localhost:6333/service/v1/{qdrant-endpoint}` for REST API.| | timeout | Timeout for REST and gRPC API requests.<br>Default: 5.0 seconds for REST and unlimited for gRPC | | host | Host name of Qdrant service. If url and host are not set, defaults to 'localhost'.| | path | Persistence path for QdrantLocal. Eg. 
`local_data/private_gpt/qdrant`| | force_disable_check_same_thread | Force disable check_same_thread for QdrantLocal sqlite connection.| ## Next steps Find the PrivateGPT docs [here](https://docs.privategpt.dev/).
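For reference, a fully local, serverless setup can use the `path` option from the table above instead of a URL, so PrivateGPT persists vectors on disk via QdrantLocal (the path shown is the example value from the table):

```yaml
qdrant:
  path: local_data/private_gpt/qdrant
```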
documentation/frameworks/privategpt.md
--- title: DocArray weight: 300 aliases: [ ../integrations/docarray/ ] --- # DocArray You can use Qdrant natively in DocArray, where Qdrant serves as a high-performance document store to enable scalable vector search. DocArray is a library from Jina AI for nested, unstructured data in transit, including text, image, audio, video, 3D mesh, etc. It allows deep-learning engineers to efficiently process, embed, search, recommend, store, and transfer the data with a Pythonic API. To install DocArray with Qdrant support, run: ```bash pip install "docarray[qdrant]" ``` More information can be found in [DocArray's documentation](https://docarray.jina.ai/advanced/document-store/qdrant/).
documentation/frameworks/docarray.md
--- title: MindsDB weight: 1100 aliases: [ ../integrations/mindsdb/ ] --- # MindsDB [MindsDB](https://mindsdb.com) is an AI automation platform for building AI/ML powered features and applications. It works by connecting any source of data with any AI/ML model or framework and automating how real-time data flows between them. With the MindsDB-Qdrant integration, you can now select Qdrant as a database to load data into and retrieve from with semantic search and filtering. **MindsDB allows you to easily**: - Connect to any store of data or end-user application. - Pass data to an AI model from any store of data or end-user application. - Plug the output of an AI model into any store of data or end-user application. - Fully automate these workflows to build AI-powered features and applications. ## Usage To get started with Qdrant and MindsDB, use the following syntax. ```sql CREATE DATABASE qdrant_test WITH ENGINE = "qdrant", PARAMETERS = { "location": ":memory:", "collection_config": { "size": 386, "distance": "Cosine" } } ``` The available arguments for instantiating Qdrant can be found [here](https://github.com/mindsdb/mindsdb/blob/23a509cb26bacae9cc22475497b8644e3f3e23c3/mindsdb/integrations/handlers/qdrant_handler/qdrant_handler.py#L408-L468). ## Creating a new table - Qdrant options for creating a collection can be specified as `collection_config` in the `CREATE DATABASE` parameters. - By default, UUIDs are set as point IDs. You can provide your own IDs under the `id` column. ```sql CREATE TABLE qdrant_test.test_table ( SELECT embeddings,'{"source": "bbc"}' as metadata FROM mysql_demo_db.test_embeddings ); ``` ## Querying the database #### Perform a full retrieval using the following syntax. ```sql SELECT * FROM qdrant_test.test_table ``` By default, the `LIMIT` is set to 10 and the `OFFSET` is set to 0. 
#### Perform a similarity search using your embeddings <aside role="status">Qdrant supports <a href="https://qdrant.tech/documentation/concepts/indexing/#payload-index">payload indexing</a> that vastly improves retrieval efficiency with filters and is highly recommended. Please note that this feature currently cannot be configured via MindsDB and must be set up separately if needed.</aside> ```sql SELECT * FROM qdrant_test.test_table WHERE search_vector = (select embeddings from mysql_demo_db.test_embeddings limit 1) ``` #### Perform a search using filters ```sql SELECT * FROM qdrant_test.test_table WHERE `metadata.source` = 'bbc'; ``` #### Delete entries using IDs ```sql DELETE FROM qdrant_test.test_table WHERE id = 2 ``` #### Delete entries using filters ```sql DELETE FROM qdrant_test.test_table WHERE `metadata.source` = 'bbc'; ``` #### Drop a table ```sql DROP TABLE qdrant_test.test_table; ``` ## Next steps You can find more information pertaining to MindsDB and its datasources [here](https://docs.mindsdb.com/).
documentation/frameworks/mindsdb.md
--- title: Autogen weight: 1200 aliases: [ ../integrations/autogen/ ] --- # Microsoft Autogen [AutoGen](https://github.com/microsoft/autogen) is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools. - Multi-agent conversations: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM. - Customization: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ. - Human participation: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed. With the Autogen-Qdrant integration, you can use the `QdrantRetrieveUserProxyAgent` from autogen to build retrieval augmented generation (RAG) services with ease. ## Installation ```bash pip install "pyautogen[retrievechat]" "qdrant_client[fastembed]" ``` ## Usage A demo application that generates code based on context without human feedback. #### Set your API Endpoint The `config_list_from_json` function loads a list of configurations from an environment variable or a JSON file. ```python from autogen import config_list_from_json from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent from autogen.agentchat.contrib.qdrant_retrieve_user_proxy_agent import QdrantRetrieveUserProxyAgent from qdrant_client import QdrantClient config_list = config_list_from_json( env_or_file="OAI_CONFIG_LIST", file_location="." ) ``` It first looks for the environment variable "OAI_CONFIG_LIST" which needs to be a valid JSON string. 
If that variable is not found, it then looks for a JSON file named "OAI_CONFIG_LIST". The file structure sample can be found [here](https://github.com/microsoft/autogen/blob/main/OAI_CONFIG_LIST_sample). #### Construct agents for RetrieveChat We start by initializing the RetrieveAssistantAgent and QdrantRetrieveUserProxyAgent. The system message needs to be set to "You are a helpful assistant." for RetrieveAssistantAgent. The detailed instructions are given in the user message. ```python # Print the generation steps autogen.ChatCompletion.start_logging() # 1. create a RetrieveAssistantAgent instance named "assistant" assistant = RetrieveAssistantAgent( name="assistant", system_message="You are a helpful assistant.", llm_config={ "request_timeout": 600, "seed": 42, "config_list": config_list, }, ) # 2. create a QdrantRetrieveUserProxyAgent instance named "qdrantagent" # By default, the human_input_mode is "ALWAYS", i.e. the agent will ask for human input at every step. # `docs_path` is the path to the docs directory. # `task` indicates the kind of task we're working on. # `chunk_token_size` is the chunk token size for the retrieve chat. # We use an in-memory QdrantClient instance here. Not recommended for production. ragproxyagent = QdrantRetrieveUserProxyAgent( name="qdrantagent", human_input_mode="NEVER", max_consecutive_auto_reply=10, retrieve_config={ "task": "code", "docs_path": "./path/to/docs", "chunk_token_size": 2000, "model": config_list[0]["model"], "client": QdrantClient(":memory:"), "embedding_model": "BAAI/bge-small-en-v1.5", }, ) ``` #### Run the retriever service ```python # Always reset the assistant before starting a new conversation. assistant.reset() # We use the ragproxyagent to generate a prompt to be sent to the assistant as the initial message. # The assistant receives the message and generates a response. The response will be sent back to the ragproxyagent for processing. 
# The conversation continues until a termination condition is met. In RetrieveChat, when no human is in the loop, the termination condition is that no code block is detected in the response. # The query used below is for demonstration. It should usually be related to the docs made available to the agent. code_problem = "How can I use FLAML to perform a classification task?" ragproxyagent.initiate_chat(assistant, problem=code_problem) ``` ## Next steps Check out more Autogen [examples](https://microsoft.github.io/autogen/docs/Examples). You can find detailed documentation about AutoGen [here](https://microsoft.github.io/autogen/).
documentation/frameworks/autogen.md
--- title: Unstructured weight: 1900 --- # Unstructured [Unstructured](https://unstructured.io/) is a library designed to help preprocess and structure unstructured text documents for downstream machine learning tasks. Qdrant can be used as an ingestion destination in Unstructured. ## Setup Install Unstructured with the `qdrant` extra. ```bash pip install "unstructured[qdrant]" ``` ## Usage Depending on your use case, you can use the command line or the Python API within your application. ### CLI ```bash EMBEDDING_PROVIDER=${EMBEDDING_PROVIDER:-"langchain-huggingface"} unstructured-ingest \ local \ --input-path example-docs/book-war-and-peace-1225p.txt \ --output-dir local-output-to-qdrant \ --strategy fast \ --chunk-elements \ --embedding-provider "$EMBEDDING_PROVIDER" \ --num-processes 2 \ --verbose \ qdrant \ --collection-name "test" \ --location "http://localhost:6333" \ --batch-size 80 ``` For a full list of the options the CLI accepts, run `unstructured-ingest <upstream connector> qdrant --help` ### Programmatic usage ```python from unstructured.ingest.connector.local import SimpleLocalConfig from unstructured.ingest.connector.qdrant import ( QdrantWriteConfig, SimpleQdrantConfig, ) from unstructured.ingest.interfaces import ( ChunkingConfig, EmbeddingConfig, PartitionConfig, ProcessorConfig, ReadConfig, ) from unstructured.ingest.runner import LocalRunner from unstructured.ingest.runner.writers.base_writer import Writer from unstructured.ingest.runner.writers.qdrant import QdrantWriter def get_writer() -> Writer: return QdrantWriter( connector_config=SimpleQdrantConfig( location="http://localhost:6333", collection_name="test", ), write_config=QdrantWriteConfig(batch_size=80), ) if __name__ == "__main__": writer = get_writer() runner = LocalRunner( processor_config=ProcessorConfig( verbose=True, output_dir="local-output-to-qdrant", num_processes=2, ), connector_config=SimpleLocalConfig( input_path="example-docs/book-war-and-peace-1225p.txt", ), 
read_config=ReadConfig(), partition_config=PartitionConfig(), chunking_config=ChunkingConfig(chunk_elements=True), embedding_config=EmbeddingConfig(provider="langchain-huggingface"), writer=writer, writer_kwargs={}, ) runner.run() ``` ## Next steps - Unstructured API [reference](https://unstructured-io.github.io/unstructured/api.html). - Qdrant ingestion destination [reference](https://unstructured-io.github.io/unstructured/ingest/destination_connectors/qdrant.html).
documentation/frameworks/unstructured.md
---
title: txtai
weight: 500
aliases: [ ../integrations/txtai/ ]
---

# txtai

Qdrant can also be used as an embedding backend in [txtai](https://neuml.github.io/txtai/) semantic applications. txtai simplifies building AI-powered semantic search applications using Transformers. It leverages neural embeddings and their properties to encode high-dimensional data in a lower-dimensional space, allowing you to find similar objects based on the proximity of their embeddings.

Qdrant is not a built-in txtai backend and requires installing an additional dependency:

```bash
pip install qdrant-txtai
```

Examples and more information can be found in the [qdrant-txtai repository](https://github.com/qdrant/qdrant-txtai).
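As a sketch of how the backend is selected, the configuration below follows the qdrant-txtai README at the time of writing; treat the exact key names (`backend`, `qdrant`, and the module path) as assumptions and verify them against the repository:

```python
from txtai.embeddings import Embeddings

# Configuration sketch: "backend" points txtai at the Qdrant ANN
# implementation shipped with qdrant-txtai, and the "qdrant" section
# carries the connection details of a locally running instance.
# The key names are assumptions based on the qdrant-txtai README.
embeddings = Embeddings(
    {
        "path": "sentence-transformers/all-MiniLM-L6-v2",
        "backend": "qdrant_txtai.ann.qdrant.Qdrant",
        "qdrant": {"host": "localhost", "port": 6333},
    }
)
```

Once configured, `embeddings.index(...)` and `embeddings.search(...)` behave the same as with any other txtai backend.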
documentation/frameworks/txtai.md
---
title: Frameworks
weight: 33
# If the index.md file is empty, the link to the section will be hidden from the sidebar
is_empty: true
---

| Frameworks |
|---|
| [AirByte](./airbyte/) |
| [AutoGen](./autogen/) |
| [Cheshire Cat](./cheshire-cat/) |
| [DLT](./dlt/) |
| [DocArray](./docarray/) |
| [DSPy](./dspy/) |
| [Fifty One](./fifty-one/) |
| [Fondant](./fondant/) |
| [Haystack](./haystack/) |
| [Langchain](./langchain/) |
| [Llama Index](./llama-index/) |
| [Minds DB](./mindsdb/) |
| [PrivateGPT](./privategpt/) |
| [Spark](./spark/) |
| [txtai](./txtai/) |
documentation/frameworks/_index.md
--- title: N8N weight: 2000 --- # N8N [N8N](https://n8n.io/) is an automation platform that allows you to build flexible workflows focused on deep data integration. Qdrant is available as a vectorstore node in N8N for building AI-powered functionality within your workflows. ## Prerequisites 1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/). 2. A running N8N instance. You can learn more about using the N8N cloud or self-hosting [here](https://docs.n8n.io/choose-n8n/). ## Setting up the vectorstore Select the Qdrant vectorstore from the list of nodes in your workflow editor. ![Qdrant n8n node](/documentation/frameworks/n8n/node.png) You can now configure the vectorstore node according to your workflow requirements. The configuration options reference can be found [here](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/#node-parameters). ![Qdrant Config](/documentation/frameworks/n8n/config.png) Create a connection to Qdrant using your [instance credentials](https://qdrant.tech/documentation/cloud/authentication/). ![Qdrant Credentials](/documentation/frameworks/n8n/credentials.png) The vectorstore supports the following operations: - Get Many - Get the top-ranked documents for a query. - Insert documents - Add documents to the vectorstore. - Retrieve documents - Retrieve documents for use with AI nodes. ## Further Reading - N8N vectorstore [reference](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/). - N8N AI-based workflows [reference](https://n8n.io/integrations/basic-llm-chain/).
documentation/frameworks/n8n.md
---
title: Haystack
weight: 400
aliases: [ ../integrations/haystack/ ]
---

# Haystack

[Haystack](https://haystack.deepset.ai/) serves as a comprehensive NLP framework, offering a modular methodology for constructing cutting-edge generative AI, QA, and semantic knowledge base search systems. A critical element in contemporary NLP systems is an efficient database for storing and retrieving extensive text data. Vector databases excel in this role, as they house vector representations of text and implement effective methods for swift retrieval. Thus, we are happy to announce the integration with Haystack - `QdrantDocumentStore`. This document store is unique, as it is maintained externally by the Qdrant team.

The new document store comes as a separate package and can be updated independently of Haystack:

```bash
pip install qdrant-haystack
```

`QdrantDocumentStore` supports [all the configuration properties](/documentation/collections/#create-collection) available in the Qdrant Python client. If you want to customize the default configuration of the collection used under the hood, you can provide those settings when you create an instance of the `QdrantDocumentStore`. For example, if you'd like to enable Scalar Quantization, you can do it in the following way:

```python
from qdrant_haystack.document_stores import QdrantDocumentStore
from qdrant_client.http import models

document_store = QdrantDocumentStore(
    ":memory:",
    index="Document",
    embedding_dim=512,
    recreate_index=True,
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(
            type=models.ScalarType.INT8,
            quantile=0.99,
            always_ram=True,
        ),
    ),
)
```
documentation/frameworks/haystack.md
---
title: Fondant
weight: 1700
aliases: [ ../integrations/fondant/ ]
---

# Fondant

[Fondant](https://fondant.ai/en/stable/) is an open-source framework that aims to simplify and speed up large-scale data processing by making containerized components reusable across pipelines and execution environments. Benefit from built-in features such as autoscaling, data lineage, and pipeline caching, and deploy to (managed) platforms such as Vertex AI, Sagemaker, and Kubeflow Pipelines.

Fondant comes with a library of reusable components that you can leverage to compose your own pipeline, including a Qdrant component for writing embeddings to Qdrant.

## Usage

<aside role="status">
A Qdrant collection has to be <a href="/documentation/concepts/collections/">created in advance</a>
</aside>

**A data load pipeline for RAG using Qdrant**.

A simple ingestion pipeline could look like the following:

```python
import pyarrow as pa
from fondant.pipeline import Pipeline

indexing_pipeline = Pipeline(
    name="ingestion-pipeline",
    description="Pipeline to prepare and process data for building a RAG solution",
    base_path="./fondant-artifacts",
)

# A custom implementation of a read component.
text = indexing_pipeline.read(
    "path/to/data-source-component",
    arguments={
        # your custom arguments
    },
)

chunks = text.apply(
    "chunk_text",
    arguments={
        "chunk_size": 512,
        "chunk_overlap": 32,
    },
)

embeddings = chunks.apply(
    "embed_text",
    arguments={
        "model_provider": "huggingface",
        "model": "all-MiniLM-L6-v2",
    },
)

embeddings.write(
    "index_qdrant",
    arguments={
        "url": "http://localhost:6333",
        "collection_name": "some-collection-name",
    },
    cache=False,
)
```

Once you have a pipeline, you can easily run it using the built-in CLI. Fondant allows you to run the pipeline in production across different clouds.

The first component is a custom read module that needs to be implemented and cannot be used off the shelf.
A detailed tutorial on how to rebuild this pipeline [is provided on GitHub](https://github.com/ml6team/fondant-usecase-RAG/tree/main). ## Next steps More information about creating your own pipelines and components can be found in the [Fondant documentation](https://fondant.ai/en/stable/).
documentation/frameworks/fondant.md
---
title: Cheshire Cat
weight: 600
aliases: [ ../integrations/cheshire-cat/ ]
---

# Cheshire Cat

[Cheshire Cat](https://cheshirecat.ai/) is an open-source framework that allows you to develop intelligent agents on top of many Large Language Models (LLMs). You can develop your custom AI architecture to assist you in a wide range of tasks.

![Cheshire cat](/documentation/frameworks/cheshire-cat/cat.jpg)

## Cheshire Cat and Qdrant

Cheshire Cat uses Qdrant as the default [Vector Memory](https://cheshire-cat-ai.github.io/docs/conceptual/memory/vector_memory/) for ingesting and retrieving documents.

```
# Decide host and port for your Cat. Default will be localhost:1865
CORE_HOST=localhost
CORE_PORT=1865

# Qdrant server
# QDRANT_HOST=localhost
# QDRANT_PORT=6333
```

Cheshire Cat takes great advantage of the following features of Qdrant:

* [Collection Aliases](../../concepts/collections/#collection-aliases) to manage the change from one embedder to another.
* [Quantization](../../guides/quantization/) to obtain a good balance between speed, memory usage and quality of the results.
* [Snapshots](../../concepts/snapshots/) to not miss any information.
* [Community](https://discord.com/invite/tdtYvXjC4h)

![RAG Pipeline](/documentation/frameworks/cheshire-cat/stregatto.jpg)

## How to use the Cheshire Cat

### Requirements

To run the Cheshire Cat, you need to have [Docker](https://docs.docker.com/engine/install/) and [docker-compose](https://docs.docker.com/compose/install/) already installed on your system.

```shell
docker run --rm -it -p 1865:80 ghcr.io/cheshire-cat-ai/core:latest
```

* Chat with the Cheshire Cat on [localhost:1865/admin](http://localhost:1865/admin).
* You can also interact via REST API and try out the endpoints on [localhost:1865/docs](http://localhost:1865/docs)

Check the [instructions on github](https://github.com/cheshire-cat-ai/core/blob/main/README.md) for a more comprehensive quick start.
### First configuration of the LLM * Open the Admin Portal in your browser at [localhost:1865/admin](http://localhost:1865/admin). * Configure the LLM in the `Settings` tab. * If you don't explicitly choose it using `Settings` tab, the Embedder follows the LLM. ## Next steps For more information, refer to the Cheshire Cat [documentation](https://cheshire-cat-ai.github.io/docs/) and [blog](https://cheshirecat.ai/blog/). * [Getting started](https://cheshirecat.ai/hello-world/) * [How the Cat works](https://cheshirecat.ai/how-the-cat-works/) * [Write Your First Plugin](https://cheshirecat.ai/write-your-first-plugin/) * [Cheshire Cat's use of Qdrant - Vector Space](https://cheshirecat.ai/dont-get-lost-in-vector-space/) * [Cheshire Cat's use of Qdrant - Aliases](https://cheshirecat.ai/the-drunken-cat-effect/) * [Discord Community](https://discord.com/invite/bHX5sNFCYU)
documentation/frameworks/cheshire-cat.md
---
title: Vector Search Basics
weight: 1
social_preview_image: /docs/gettingstarted/vector-social.png
---

# Vector Search Basics

If you are still trying to figure out how vector search works, please read ahead. This document describes how vector search is used, covers Qdrant's place in the larger ecosystem, and outlines how you can use Qdrant to augment your existing projects.

For those who want to start writing code right away, visit our [Complete Beginners tutorial](/documentation/tutorials/search-beginners) to build a search engine in 5-15 minutes.

## A Brief History of Search

Human memory is unreliable. Thus, as long as we have been trying to collect ‘knowledge’ in written form, we had to figure out how to search for relevant content without rereading the same books repeatedly. That’s why some brilliant minds introduced the inverted index. In its simplest form, it’s an appendix to a book, typically put at its end, with a list of the essential terms and links to the pages they occur on. Terms are put in alphabetical order. Back in the day, that was a manually crafted list requiring lots of effort to prepare. Once digitization started, it became a lot easier, but we still kept the same general principles.

That worked, and it still does. If you are looking for a specific topic in a particular book, you can try to find a related phrase and quickly get to the correct page. Of course, assuming you know the proper term. If you don’t, you must try and fail several times or find somebody else to help you form the correct query.

{{< figure src=/docs/gettingstarted/inverted-index.png caption="A simplified version of the inverted index." >}}

Time passed, and not much changed in that area for quite a long time. But our textual data collections started to grow at a greater pace. So we also started building up many processes around those inverted indexes. For example, we allowed our users to provide many words and started splitting them into pieces.
That allowed us to find documents that do not necessarily contain all the query words, but possibly only part of them. We also started converting words into their root forms to cover more cases, removing stopwords, etc. Effectively, the search was becoming more and more user-friendly. Still, the idea behind the whole process is derived from the most straightforward keyword-based search known since the Middle Ages, with some tweaks.

{{< figure src=/docs/gettingstarted/tokenization.png caption="The process of tokenization, with additional stopword removal and conversion of words to their root form." >}}

Technically speaking, we encode the documents and queries into so-called sparse vectors, where each position has a corresponding word from the whole dictionary. If the input text contains a specific word, it gets a non-zero value at that position. But in reality, none of the texts will contain more than hundreds of different words. So the majority of vectors will have thousands of zeros and a few non-zero values. That’s why we call them sparse. They can already be used to calculate some word-based similarity by finding the documents with the biggest overlap.

{{< figure src=/docs/gettingstarted/query.png caption="An example of a query vectorized to sparse format." >}}

Sparse vectors have relatively **high dimensionality**, equal to the size of the dictionary. The dictionary is obtained automatically from the input data, so if we have a vector, we are able to partially reconstruct the words used in the text that created it.

## The Tower of Babel

Every once in a while, when we discover new problems with inverted indexes, we come up with a new heuristic to tackle them, at least to some extent. Once we realized that people might describe the same concept with different words, we started building lists of synonyms to convert the query to a normalized form. But that won’t work for the cases we didn’t foresee.
Still, we need to craft and maintain our dictionaries manually, so that they keep up with a language that changes over time. Another difficult issue comes to light with multilingual scenarios. Old methods require setting up separate pipelines and keeping humans in the loop to maintain the quality.

{{< figure src=/docs/gettingstarted/babel.jpg caption="The Tower of Babel, Pieter Bruegel." >}}

## The Representation Revolution

The latest research in Machine Learning for NLP is heavily focused on training Deep Language Models. In this process, the neural network takes a large corpus of text as input and creates a mathematical representation of the words in the form of vectors. These vectors are created in such a way that words with similar meanings and occurring in similar contexts are grouped together and represented by similar vectors. We can also take, for example, an average of all the word vectors to create the vector for a whole text (e.g., a query, sentence, or paragraph).

![deep neural](/docs/gettingstarted/deep-neural.png)

We can take those **dense vectors** produced by the network and use them as a **different data representation**. They are dense because neural networks will rarely produce zeros at any position. In contrast to sparse ones, they have a relatively low dimensionality: hundreds or a few thousand dimensions only. Unfortunately, we can no longer look at a vector and understand the content of the document it represents. Dimensions no longer represent the presence of specific words. Dense vectors can capture the meaning, not the words used in a text. That being said, **Large Language Models can automatically handle synonyms**. Moreover, since those neural networks might have been trained with multilingual corpora, they translate the same sentence, written in different languages, to similar vector representations, also called **embeddings**.
We can compare embeddings to find similar pieces of text by calculating the distance to other vectors in our database.

{{< figure src=/docs/gettingstarted/input.png caption="Input queries contain different words, but they are still converted into similar vector representations, because the neural encoder can capture the meaning of the sentences. That feature can capture synonyms but also different languages." >}}

**Vector search** is a process of finding similar objects based on their embeddings similarity. The good thing is, you don’t have to design and train your neural network on your own. Many pre-trained models are available, either on **HuggingFace** or by using libraries like [SentenceTransformers](https://www.sbert.net/?ref=hackernoon.com). If you, however, prefer not to get your hands dirty with neural models, you can also create the embeddings with SaaS tools, like [co.embed API](https://docs.cohere.com/reference/embed?ref=hackernoon.com).

## Why Qdrant?

The challenge with vector search arises when we need to find similar documents in a big set of objects. If we want to find the closest examples, the naive approach would require calculating the distance to every document. That might work with dozens or even hundreds of examples but may become a bottleneck if we have more than that. When we work with relational data, we set up database indexes to speed things up and avoid full table scans. And the same is true for vector search. Qdrant is a fully-fledged vector database that speeds up the search process by using a graph-like structure to find the closest objects in sublinear time. So you don’t calculate the distance to every object from the database, but only to some candidates.

{{< figure src=/docs/gettingstarted/vector-search.png caption="Vector search with Qdrant. Thanks to the HNSW graph, we are able to compare the distance to some of the objects from the database, not to all of them."
>}}

Semantic search, which is what we sometimes call vector search done on texts, requires a specialized tool to work effectively at scale. Qdrant is such a tool.

## Next Steps

Vector search is an exciting alternative to sparse methods. It solves the issues we had with keyword-based search without the need to maintain lots of heuristics manually. It requires an additional component, a neural encoder, to convert text into vectors.

[**Tutorial 1 - Qdrant for Complete Beginners**](../../tutorials/search-beginners)

Despite its complicated background, vector search is extraordinarily simple to set up. With Qdrant, you can have a search engine up and running in five minutes. Our [Complete Beginners tutorial](../../tutorials/search-beginners) will show you how.

[**Tutorial 2 - Question and Answer System**](../../../articles/qa-with-cohere-and-qdrant)

You can also choose SaaS tools to generate embeddings and avoid building your own model. Setting up a vector search project with Qdrant Cloud and the Cohere co.embed API is fairly easy if you follow the [Question and Answer system tutorial](../../../articles/qa-with-cohere-and-qdrant).

There is another exciting thing about vector search. You can search for any kind of data as long as there is a neural network that can vectorize your data type. Are you thinking about reverse image search? That's also possible with vector embeddings.
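To make the naive approach from the Why Qdrant section concrete, here is a minimal, dependency-free sketch of brute-force vector search with cosine similarity. This is the O(n) scan over every stored vector that Qdrant's HNSW index lets you avoid; the tiny three-dimensional vectors are purely illustrative.

```python
import math


def cosine_similarity(a, b):
    # Cosine similarity: the dot product of the vectors,
    # normalized by the product of their lengths.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def brute_force_search(query, vectors, top_k=2):
    # Score every stored vector against the query: O(n) in the
    # collection size, which becomes the bottleneck at scale.
    scored = sorted(
        ((cosine_similarity(query, v), key) for key, v in vectors.items()),
        reverse=True,
    )
    return [key for _, key in scored[:top_k]]


# Toy "database" of embeddings keyed by document id.
vectors = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}

print(brute_force_search([1.0, 0.05, 0.0], vectors))  # ['doc_a', 'doc_b']
```

A vector database replaces this linear scan with an approximate index, so only a fraction of the candidates is ever compared against the query.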
documentation/overview/vector-search.md
--- title: Qdrant vs. Alternatives weight: 2 --- # Comparing Qdrant with alternatives If you are currently using other vector databases, we recommend you read this short guide. It breaks down the key differences between Qdrant and other similar products. This document should help you decide which product has the features and support you need. Unfortunately, since Pinecone is not an open source product, we can't include it in our [benchmarks](/benchmarks/). However, we still recommend you use the [benchmark tool](/benchmarks/) while exploring Qdrant. ## Feature comparison | Feature | Pinecone | Qdrant | Comments | |-------------------------------------|-------------------------------|----------------------------------------------|----------------------------------------------------------| | **Deployment Modes** | SaaS-only | Local, on-premise, Cloud | Qdrant offers more flexibility in deployment modes | | **Supported Technologies** | Python, JavaScript/TypeScript | Python, JavaScript/TypeScript, Rust, Go | Qdrant supports a broader range of programming languages | | **Performance** (e.g., query speed) | TnC Prohibit Benchmarking | [Benchmark result](/benchmarks/) | Compare performance metrics | | **Pricing** | Starts at $70/mo | Free and Open Source, Cloud starts at $25/mo | Pricing as of May 2023 | ## Prototyping options Qdrant offers multiple ways of deployment, including local mode, on-premise, and [Qdrant Cloud](https://cloud.qdrant.io/). You can [get started with local mode quickly](/documentation/quick-start/) and without signing up for SaaS. With Pinecone you will have to connect your development environment to the cloud service just to test the product. When it comes to SaaS, both Pinecone and [Qdrant Cloud](https://cloud.qdrant.io/) offer a free cloud tier to check out the services, and you don't have to give credit card details for either. 
Qdrant's free tier should be enough to keep around 1M 768-dimensional vectors, but it may vary depending on the additional attributes stored with vectors. Pinecone's starter plan supports approximately 200k 768-dimensional embeddings and metadata, stored within a single index. With Qdrant Cloud, however, you can experiment with different models, as you may create several collections or keep multiple vectors per point. That means Qdrant Cloud allows you to build several small demos, even on a free tier.

## Terminology

Although both tools serve similar purposes, there are some differences in the terms used. This dictionary may come in handy during the transition.

| Pinecone | Qdrant | Comments |
|----------------|-----------------------------------------------|----------------------------------------------------------|
| **Index** | [**Collection**](../../concepts/collections/) | Pinecone's index is an organizational unit for storing and managing vectors of the same size. The index is tightly coupled with hardware (pods). Qdrant uses the collection to describe a similar concept, however, a single instance may handle multiple collections at once. |
| **Collection** | [**Snapshots**](../../concepts/snapshots/) | A collection in Pinecone is a static copy of an *index* that you cannot query, mostly used as some sort of backup. There is no direct analogy in Qdrant, but if you want to back your collection up, you may always create a more flexible [snapshot](../../concepts/snapshots/).
|
| **Namespace** | [**Payload-based isolation**](../../guides/multiple-partitions/) / [**User-defined sharding**](../../guides/distributed_deployment/#user-defined-sharding) | Namespaces allow the partitioning of the vectors in an index into subsets. Qdrant provides multiple tools to ensure efficient data isolation within a collection. For fine-grained data segregation, you can use the payload-based approach to multitenancy, and custom sharding at a bigger scale. |
| **Metadata** | [**Payload**](../../concepts/payload/) | Additional attributes describing a particular object, other than the embedding vector. Both engines support various data types, but Pinecone metadata is key-value, while Qdrant supports any JSON-like objects. |
| **Query** | [**Search**](../../concepts/search/) | Name of the method used to find the nearest neighbors for a given vector, possibly with some additional filters applied on top. |
| N/A | [**Scroll**](../../concepts/points/#scroll-points) | Pinecone does not offer a way to iterate through all the vectors in a particular index. Qdrant has a `scroll` method to get them all without using search. |

## Known limitations

1. Pinecone does not support arbitrary JSON metadata, but a flat structure with strings, numbers, booleans, or lists of strings used as values. Qdrant accepts any JSON object as a payload, even nested structures.
2. NULL values are not supported in Pinecone metadata but are handled properly by Qdrant.
3. The maximum size of Pinecone metadata is 40 KB per vector.
4. Pinecone, unlike Qdrant, does not support geolocation and filtering based on geographical criteria.
5. Qdrant allows storing multiple vectors per point, and those might be of a different dimensionality. Pinecone doesn't support anything similar.
6. Vectors in Pinecone are mandatory for each point. Qdrant supports optional vectors.

It is worth mentioning that **Pinecone will automatically create metadata indexes for all the fields**.
Qdrant assumes you know your data and your future queries best, so it's up to you to choose the fields to be indexed. Thus, **you need to explicitly define the payload indexes while using Qdrant**.

## Supported technologies

Both tools support various programming languages providing official SDKs.

|                           | Pinecone | Qdrant |
|---------------------------|----------|--------|
| **Python**                | ✅       | ✅     |
| **JavaScript/TypeScript** | ✅       | ✅     |
| **Rust**                  | ❌       | ✅     |
| **Go**                    | ❌       | ✅     |

There are also various community-driven projects aimed at providing support for other languages, but those are not officially maintained, and thus not mentioned here. However, it is still possible to interact with both engines through the HTTP REST or gRPC API. That makes it easy to integrate with any technology of your choice.

If you are a Python user, then both tools are well-integrated with the most popular libraries like [LangChain](../integrations/langchain/), [LlamaIndex](../integrations/llama-index/), [Haystack](../integrations/haystack/), and more. Using any of those libraries makes it easier to experiment with different vector databases, as the transition should be seamless.

## Planning to migrate?

> We strongly recommend you use [Qdrant Tools](https://github.com/NirantK/qdrant_tools) to migrate from Pinecone to Qdrant.

Migrating from Pinecone to Qdrant involves a series of well-planned steps to ensure that the transition is smooth and disruption-free. Here is a suggested migration plan:

1. Understanding Qdrant: It's important to first get a solid grasp of Qdrant, its functions, and its APIs. Take time to understand how to establish collections, add points, and query these collections.
2.
Migration strategy: Create a comprehensive migration strategy, incorporating data migration (copying your vectors and associated metadata from Pinecone to Qdrant), feature migration (verifying the availability and setting up of features currently in use with Pinecone in Qdrant), and a contingency plan (should there be any unexpected issues).
3. Establishing a parallel Qdrant system: Set up a Qdrant system to run concurrently with your current Pinecone system. This step will let you begin testing Qdrant without disturbing your ongoing operations on Pinecone.
4. Data migration: Shift your vectors and metadata from Pinecone to Qdrant. The timeline for this step could vary, depending on the size of your data and Pinecone API's rate limitations.
5. Testing and transition: Following the data migration, thoroughly test the Qdrant system. Once you're assured of the Qdrant system's stability and performance, you can make the switch.
6. Monitoring and fine-tuning: After transitioning to Qdrant, maintain a close watch on its performance. It's key to continue refining the system for optimal results as needed.

## Next steps

1. If you aren't ready yet, [try out Qdrant locally](/documentation/quick-start/) or sign up for [Qdrant Cloud](https://cloud.qdrant.io/).
2. For more basic information on Qdrant, read our [Overview](/documentation/overview/) section or learn more about Qdrant Cloud's [Free Tier](/documentation/cloud/).
3. If you are ready to migrate, please consult our [Comprehensive Guide](https://github.com/NirantK/qdrant_tools) for further details on the migration steps.
documentation/overview/qdrant-alternatives.md