--- draft: false title: Food Discovery short_description: Qdrant Food Discovery Demo recommends more similar meals based on how they look description: This demo uses data from Delivery Service. Users may like or dislike the photo of a dish, and the app will recommend more similar meals based on how they look. It's also possible to choose to view results from the restaurants within the delivery radius. preview_image: /demo/food-discovery-demo.png link: https://food-discovery.qdrant.tech/ weight: 2 sitemapExclude: True ---
demo/demo-2.md
--- draft: false title: E-commerce products categorization short_description: E-commerce products categorization demo from Qdrant vector database description: This demo shows how you can use a vector database in e-commerce. Enter the name of a product and the application will understand which category it belongs to, based on a multilingual model. The dots represent clusters of products. preview_image: /demo/products_categorization_demo.jpg link: https://qdrant.to/extreme-classification-demo weight: 3 sitemapExclude: True ---
demo/demo-3.md
--- draft: false title: Startup Search short_description: Qdrant Startup Search. This demo uses short descriptions of startups to perform a semantic search description: This demo uses short descriptions of startups to perform a semantic search. Each startup description is converted into a vector using a pre-trained SentenceTransformer model and uploaded to the Qdrant vector search engine. The demo service processes text input with the same model and uses its output to query Qdrant for similar vectors. You can turn neural search on and off to compare the results with regular full-text search. preview_image: /demo/startup_search_demo.jpg link: https://qdrant.to/semantic-search-demo weight: 1 sitemapExclude: True ---
demo/demo-1.md
--- page_title: Vector Search Demos and Examples description: Interactive examples and demos of vector search based applications developed with Qdrant vector search engine. title: Vector Search Demos section_title: Interactive Live Examples ---
demo/_index.md
--- title: Examples weight: 25 # If the index.md file is empty, the link to the section will be hidden from the sidebar is_empty: false --- # Sample Use Cases Our notebooks offer complete instructions that are supported with thorough explanations. Follow along by trying out the code and get the most out of each example. | Example | Description | Stack | |---|---|---| | [Intro to Semantic Search and Recommendations Systems](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_getting_started/getting_started.ipynb) | Learn how to get started building semantic search and recommendation systems. | Qdrant | | [Search and Recommend Newspaper Articles](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_text_data/qdrant_and_text_data.ipynb) | Work with text data to develop a semantic search and a recommendation engine for news articles. | Qdrant | | [Recommendation System for Songs](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_audio_data/03_qdrant_101_audio.ipynb) | Use Qdrant to develop a music recommendation engine based on audio embeddings. | Qdrant | | [Image Comparison System for Skin Conditions](https://colab.research.google.com/github/qdrant/examples/blob/master/qdrant_101_image_data/04_qdrant_101_cv.ipynb) | Use Qdrant to compare challenging images with labels representing different skin diseases. 
| Qdrant | | [Question and Answer System with LlamaIndex](https://githubtocolab.com/qdrant/examples/blob/master/llama_index_recency/Qdrant%20and%20LlamaIndex%20%E2%80%94%20A%20new%20way%20to%20keep%20your%20Q%26A%20systems%20up-to-date.ipynb) | Combine Qdrant and LlamaIndex to create a self-updating Q&A system. | Qdrant, LlamaIndex, Cohere | | [Extractive QA System](https://githubtocolab.com/qdrant/examples/blob/master/extractive_qa/extractive-question-answering.ipynb) | Extract answers directly from context to generate highly relevant answers. | Qdrant | | [Ecommerce Reverse Image Search](https://githubtocolab.com/qdrant/examples/blob/master/ecommerce_reverse_image_search/ecommerce-reverse-image-search.ipynb) | Accept images as search queries to receive semantically appropriate answers. | Qdrant | | [Basic RAG](https://githubtocolab.com/qdrant/examples/blob/master/rag-openai-qdrant/rag-openai-qdrant.ipynb) | Basic RAG pipeline with Qdrant and OpenAI SDKs | OpenAI, Qdrant, FastEmbed |
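The Basic RAG recipe in the last row boils down to three steps: embed the question, retrieve the closest stored passages, and pass them to a generator as context. A dependency-free outline of that flow — the embedding function and the generator are deliberately toy stand-ins, not the OpenAI or FastEmbed APIs the notebook uses:

```python
# Toy retrieve-then-generate outline. A real pipeline would use an embedding
# model and an LLM; both are stubbed here so only the control flow is visible.
def embed(text):
    # Stand-in embedding: normalized character-frequency vector.
    counts = [text.lower().count(c) for c in "abcdefghijklmnopqrstuvwxyz"]
    norm = sum(v * v for v in counts) ** 0.5 or 1.0
    return [v / norm for v in counts]

def retrieve(question, passages, top_k=2):
    # Rank passages by similarity to the question and keep the best top_k.
    q = embed(question)
    scored = sorted(
        passages,
        key=lambda p: sum(a * b for a, b in zip(q, embed(p))),
        reverse=True,
    )
    return scored[:top_k]

def generate(question, context):
    # Stub for an LLM call: real code would send question + context to a model.
    return f"Answering {question!r} using {len(context)} retrieved passages."

passages = ["Qdrant is a vector database.", "Bananas are yellow."]
context = retrieve("What is Qdrant?", passages, top_k=1)
print(generate("What is Qdrant?", context))
```

In the notebook, `retrieve` is replaced by a Qdrant search call and `generate` by a chat-completion request; the shape of the pipeline stays the same.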
documentation/examples.md
--- title: Release notes weight: 42 type: external-link external_url: https://github.com/qdrant/qdrant/releases sitemapExclude: True ---
documentation/release-notes.md
--- title: Benchmarks weight: 33 draft: true ---
documentation/benchmarks.md
--- title: Community links weight: 42 --- # Community Contributions Though we do not officially maintain this content, we still feel that it is valuable and we thank our dedicated contributors. | Link | Description | Stack | |---|---|---| | [Pinecone to Qdrant Migration](https://github.com/NirantK/qdrant_tools) | Complete Python toolset that supports migration between the two products. | Qdrant, Pinecone | | [LlamaIndex Support for Qdrant](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/QdrantIndexDemo.html) | Documentation on common integrations with LlamaIndex. | Qdrant, LlamaIndex | | [Geo.Rocks Semantic Search Tutorial](https://geo.rocks/post/qdrant-transformers-js-semantic-search/) | Create a fully working semantic search stack with a built-in search API and a minimal stack. | Qdrant, HuggingFace, SentenceTransformers, transformers.js |
documentation/community-links.md
--- title: Quickstart weight: 11 aliases: - quick_start --- # Quickstart In this short example, you will use the Python client to create a collection, load data into it, and run a basic search query. <aside role="status">Before you start, please make sure Docker is installed and running on your system.</aside> ## Download and run First, download the latest Qdrant image from Docker Hub: ```bash docker pull qdrant/qdrant ``` Then, run the service: ```bash docker run -p 6333:6333 -p 6334:6334 \ -v $(pwd)/qdrant_storage:/qdrant/storage:z \ qdrant/qdrant ``` Under the default configuration, all data will be stored in the `./qdrant_storage` directory. This is also the only directory that both the container and the host machine can see. Qdrant is now accessible: - REST API: [localhost:6333](http://localhost:6333) - Web UI: [localhost:6333/dashboard](http://localhost:6333/dashboard) - gRPC API: [localhost:6334](http://localhost:6334) ## Initialize the client ```python from qdrant_client import QdrantClient client = QdrantClient("localhost", port=6333) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); ``` ```rust use qdrant_client::client::QdrantClient; // The Rust client uses Qdrant's gRPC interface let client = QdrantClient::from_url("http://localhost:6334").build()?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; // The Java client uses Qdrant's gRPC interface QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); ``` ```csharp using Qdrant.Client; // The C# client uses Qdrant's gRPC interface var client = new QdrantClient("localhost", 6334); ``` <aside role="status">By default, Qdrant starts with no encryption or authentication. This means anyone with network access to your machine can access your Qdrant container instance. 
Please read <a href="https://qdrant.tech/documentation/security/">Security</a> carefully for details on how to secure your instance.</aside> ## Create a collection You will be storing all of your vector data in a Qdrant collection. Let's call it `test_collection`. This collection uses the dot product distance metric to compare vectors. ```python from qdrant_client.http.models import Distance, VectorParams client.create_collection( collection_name="test_collection", vectors_config=VectorParams(size=4, distance=Distance.DOT), ) ``` ```typescript await client.createCollection("test_collection", { vectors: { size: 4, distance: "Dot" }, }); ``` ```rust use qdrant_client::qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig}; client .create_collection(&CreateCollection { collection_name: "test_collection".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 4, distance: Distance::Dot.into(), ..Default::default() })), }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; client.createCollectionAsync("test_collection", VectorParams.newBuilder().setDistance(Distance.Dot).setSize(4).build()).get(); ``` ```csharp using Qdrant.Client.Grpc; await client.CreateCollectionAsync( collectionName: "test_collection", vectorsConfig: new VectorParams { Size = 4, Distance = Distance.Dot } ); ``` <aside role="status">The TypeScript and Rust examples use async/await syntax, so they should be run inside an async block.</aside> <aside role="status">The Java examples should be enclosed within a try/catch block.</aside> ## Add vectors Let's now add a few vectors with a payload. 
Payloads are other data you want to associate with the vector: ```python from qdrant_client.http.models import PointStruct operation_info = client.upsert( collection_name="test_collection", wait=True, points=[ PointStruct(id=1, vector=[0.05, 0.61, 0.76, 0.74], payload={"city": "Berlin"}), PointStruct(id=2, vector=[0.19, 0.81, 0.75, 0.11], payload={"city": "London"}), PointStruct(id=3, vector=[0.36, 0.55, 0.47, 0.94], payload={"city": "Moscow"}), PointStruct(id=4, vector=[0.18, 0.01, 0.85, 0.80], payload={"city": "New York"}), PointStruct(id=5, vector=[0.24, 0.18, 0.22, 0.44], payload={"city": "Beijing"}), PointStruct(id=6, vector=[0.35, 0.08, 0.11, 0.44], payload={"city": "Mumbai"}), ], ) print(operation_info) ``` ```typescript const operationInfo = await client.upsert("test_collection", { wait: true, points: [ { id: 1, vector: [0.05, 0.61, 0.76, 0.74], payload: { city: "Berlin" } }, { id: 2, vector: [0.19, 0.81, 0.75, 0.11], payload: { city: "London" } }, { id: 3, vector: [0.36, 0.55, 0.47, 0.94], payload: { city: "Moscow" } }, { id: 4, vector: [0.18, 0.01, 0.85, 0.80], payload: { city: "New York" } }, { id: 5, vector: [0.24, 0.18, 0.22, 0.44], payload: { city: "Beijing" } }, { id: 6, vector: [0.35, 0.08, 0.11, 0.44], payload: { city: "Mumbai" } }, ], }); console.debug(operationInfo); ``` ```rust use qdrant_client::qdrant::PointStruct; use serde_json::json; let points = vec![ PointStruct::new( 1, vec![0.05, 0.61, 0.76, 0.74], json!( {"city": "Berlin"} ) .try_into() .unwrap(), ), PointStruct::new( 2, vec![0.19, 0.81, 0.75, 0.11], json!( {"city": "London"} ) .try_into() .unwrap(), ), // ..truncated ]; let operation_info = client .upsert_points_blocking("test_collection".to_string(), None, points, None) .await?; dbg!(operation_info); ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; import static io.qdrant.client.VectorsFactory.vectors; import 
io.qdrant.client.grpc.Points.PointStruct; import io.qdrant.client.grpc.Points.UpdateResult; UpdateResult operationInfo = client .upsertAsync( "test_collection", List.of( PointStruct.newBuilder() .setId(id(1)) .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f)) .putAllPayload(Map.of("city", value("Berlin"))) .build(), PointStruct.newBuilder() .setId(id(2)) .setVectors(vectors(0.19f, 0.81f, 0.75f, 0.11f)) .putAllPayload(Map.of("city", value("London"))) .build(), PointStruct.newBuilder() .setId(id(3)) .setVectors(vectors(0.36f, 0.55f, 0.47f, 0.94f)) .putAllPayload(Map.of("city", value("Moscow"))) .build())) // Truncated .get(); System.out.println(operationInfo); ``` ```csharp using Qdrant.Client.Grpc; var operationInfo = await client.UpsertAsync( collectionName: "test_collection", points: new List<PointStruct> { new() { Id = 1, Vectors = new float[] { 0.05f, 0.61f, 0.76f, 0.74f }, Payload = { ["city"] = "Berlin" } }, new() { Id = 2, Vectors = new float[] { 0.19f, 0.81f, 0.75f, 0.11f }, Payload = { ["city"] = "London" } }, new() { Id = 3, Vectors = new float[] { 0.36f, 0.55f, 0.47f, 0.94f }, Payload = { ["city"] = "Moscow" } }, // Truncated } ); Console.WriteLine(operationInfo); ``` **Response:** ```python operation_id=0 status=<UpdateStatus.COMPLETED: 'completed'> ``` ```typescript { operation_id: 0, status: 'completed' } ``` ```rust PointsOperationResponse { result: Some(UpdateResult { operation_id: 0, status: Completed, }), time: 0.006347708, } ``` ```java operation_id: 0 status: Completed ``` ```csharp { "operationId": "0", "status": "Completed" } ``` ## Run a query Let's ask a basic question - Which of our stored vectors are most similar to the query vector `[0.2, 0.1, 0.9, 0.7]`? 
```python search_result = client.search( collection_name="test_collection", query_vector=[0.2, 0.1, 0.9, 0.7], limit=3 ) print(search_result) ``` ```typescript let searchResult = await client.search("test_collection", { vector: [0.2, 0.1, 0.9, 0.7], limit: 3, }); console.debug(searchResult); ``` ```rust use qdrant_client::qdrant::SearchPoints; let search_result = client .search_points(&SearchPoints { collection_name: "test_collection".to_string(), vector: vec![0.2, 0.1, 0.9, 0.7], limit: 3, with_payload: Some(true.into()), ..Default::default() }) .await?; dbg!(search_result); ``` ```java import java.util.List; import io.qdrant.client.grpc.Points.ScoredPoint; import io.qdrant.client.grpc.Points.SearchPoints; import static io.qdrant.client.WithPayloadSelectorFactory.enable; List<ScoredPoint> searchResult = client .searchAsync( SearchPoints.newBuilder() .setCollectionName("test_collection") .setLimit(3) .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f)) .setWithPayload(enable(true)) .build()) .get(); System.out.println(searchResult); ``` ```csharp var searchResult = await client.SearchAsync( collectionName: "test_collection", vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, limit: 3, payloadSelector: true ); Console.WriteLine(searchResult); ``` **Response:** ```python ScoredPoint(id=4, version=0, score=1.362, payload={"city": "New York"}, vector=None), ScoredPoint(id=1, version=0, score=1.273, payload={"city": "Berlin"}, vector=None), ScoredPoint(id=3, version=0, score=1.208, payload={"city": "Moscow"}, vector=None) ``` ```typescript [ { id: 4, version: 0, score: 1.362, payload: null, vector: null, }, { id: 1, version: 0, score: 1.273, payload: null, vector: null, }, { id: 3, version: 0, score: 1.208, payload: null, vector: null, }, ]; ``` ```rust SearchResponse { result: [ ScoredPoint { id: Some(PointId { point_id_options: Some(Num(4)), }), payload: {}, score: 1.362, version: 0, vectors: None, }, ScoredPoint { id: Some(PointId { point_id_options: Some(Num(1)), }), 
payload: {}, score: 1.273, version: 0, vectors: None, }, ScoredPoint { id: Some(PointId { point_id_options: Some(Num(3)), }), payload: {}, score: 1.208, version: 0, vectors: None, }, ], time: 0.003635125, } ``` ```java [id { num: 4 } payload { key: "city" value { string_value: "New York" } } score: 1.362 version: 1 , id { num: 1 } payload { key: "city" value { string_value: "Berlin" } } score: 1.273 version: 1 , id { num: 3 } payload { key: "city" value { string_value: "Moscow" } } score: 1.208 version: 1 ] ``` ```csharp [ { "id": { "num": "4" }, "payload": { "city": { "stringValue": "New York" } }, "score": 1.362, "version": "7" }, { "id": { "num": "1" }, "payload": { "city": { "stringValue": "Berlin" } }, "score": 1.273, "version": "7" }, { "id": { "num": "3" }, "payload": { "city": { "stringValue": "Moscow" } }, "score": 1.208, "version": "7" } ] ``` The results are returned in decreasing similarity order. Note that payload and vector data is missing in these results by default. See [payload and vector in the result](../concepts/search#payload-and-vector-in-the-result) on how to enable it. ## Add a filter We can narrow down the results further by filtering by payload. Let's find the closest results that include "London". 
```python from qdrant_client.http.models import Filter, FieldCondition, MatchValue search_result = client.search( collection_name="test_collection", query_vector=[0.2, 0.1, 0.9, 0.7], query_filter=Filter( must=[FieldCondition(key="city", match=MatchValue(value="London"))] ), with_payload=True, limit=3, ) print(search_result) ``` ```typescript searchResult = await client.search("test_collection", { vector: [0.2, 0.1, 0.9, 0.7], filter: { must: [{ key: "city", match: { value: "London" } }], }, with_payload: true, limit: 3, }); console.debug(searchResult); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, SearchPoints}; let search_result = client .search_points(&SearchPoints { collection_name: "test_collection".to_string(), vector: vec![0.2, 0.1, 0.9, 0.7], filter: Some(Filter::all([Condition::matches( "city", "London".to_string(), )])), limit: 2, ..Default::default() }) .await?; dbg!(search_result); ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; List<ScoredPoint> searchResult = client .searchAsync( SearchPoints.newBuilder() .setCollectionName("test_collection") .setLimit(3) .setFilter(Filter.newBuilder().addMust(matchKeyword("city", "London"))) .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f)) .setWithPayload(enable(true)) .build()) .get(); System.out.println(searchResult); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; var searchResult = await client.SearchAsync( collectionName: "test_collection", vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, filter: MatchKeyword("city", "London"), limit: 3, payloadSelector: true ); Console.WriteLine(searchResult); ``` **Response:** ```python ScoredPoint(id=2, version=0, score=0.871, payload={"city": "London"}, vector=None) ``` ```typescript [ { id: 2, version: 0, score: 0.871, payload: { city: "London" }, vector: null, }, ]; ``` ```rust SearchResponse { result: [ ScoredPoint { id: Some( PointId { point_id_options: Some( Num( 2, ), ), }, ), payload: { "city": Value { kind: Some( 
StringValue( "London", ), ), }, }, score: 0.871, version: 0, vectors: None, }, ], time: 0.004001083, } ``` ```java [id { num: 2 } payload { key: "city" value { string_value: "London" } } score: 0.871 version: 1 ] ``` ```csharp [ { "id": { "num": "2" }, "payload": { "city": { "stringValue": "London" } }, "score": 0.871, "version": "7" } ] ``` <aside role="status">To make filtered search fast on real datasets, we highly recommend creating <a href="../concepts/indexing/#payload-index">payload indexes</a>!</aside> You have just performed a vector search. You loaded vectors into a database and queried the database with a vector of your own. Qdrant found the closest results and presented you with a similarity score. ## Next steps Now you know how Qdrant works. Getting started with [Qdrant Cloud](../cloud/quickstart-cloud/) is just as easy. [Create an account](https://qdrant.to/cloud) and use our SaaS completely free. We will take care of infrastructure maintenance and software updates. To move on to more complex examples of vector search, read our [Tutorials](../tutorials/) and create your own app with the help of our [Examples](../examples/). **Note:** There is another way of running Qdrant locally. If you are a Python developer, we recommend that you try Local Mode in [Qdrant Client](https://github.com/qdrant/qdrant-client), as it only takes a few moments to get set up.
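As a closing sanity check, the quickstart scores can be reproduced by hand: with the `Dot` distance, the score of each result is simply the dot product between the query and the stored vector. This stand-alone snippet (plain Python, no running Qdrant instance required) recomputes the top-3 ranking from the search above:

```python
# Recompute the quickstart search scores by hand.
# With the Dot distance, score(q, v) is just the dot product of q and v.
points = {
    "Berlin":   [0.05, 0.61, 0.76, 0.74],
    "London":   [0.19, 0.81, 0.75, 0.11],
    "Moscow":   [0.36, 0.55, 0.47, 0.94],
    "New York": [0.18, 0.01, 0.85, 0.80],
    "Beijing":  [0.24, 0.18, 0.22, 0.44],
    "Mumbai":   [0.35, 0.08, 0.11, 0.44],
}
query = [0.2, 0.1, 0.9, 0.7]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Rank all cities by dot product with the query, highest first.
top3 = sorted(points, key=lambda city: dot(points[city], query), reverse=True)[:3]
print(top3)  # ['New York', 'Berlin', 'Moscow'] — matching ids 4, 1, 3 above
```

The filtered search result also checks out: the dot product for "London" is 0.871, exactly the score Qdrant returned.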
documentation/quick-start.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "Getting Started" type: delimiter weight: 8 # Change this weight to change order of sections sitemapExclude: True ---
documentation/0-dl.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "Integrations" type: delimiter weight: 30 # Change this weight to change order of sections sitemapExclude: True ---
documentation/2-dl.md
--- title: Roadmap weight: 32 draft: true --- # Qdrant 2023 Roadmap Goals of the release: * **Maintain easy upgrades** - we plan to keep backward compatibility for at least one major version back. * That means that you can upgrade Qdrant without any downtime and without any changes in your client code within one major version. * Storage should be compatible between any two consecutive versions, so you can upgrade Qdrant with automatic data migration between consecutive versions. * **Make billion-scale serving cheap** - Qdrant can already serve billions of vectors, but we want to make it even more affordable. * **Easy scaling** - our plan is to make it easy to dynamically scale Qdrant, so you could go from 1 to 1B vectors seamlessly. * **Various similarity search scenarios** - we want to support more similarity search scenarios, e.g. sparse search, grouping requests, diverse search, etc. ## Milestones * :atom_symbol: Quantization support * [ ] Scalar quantization f32 -> u8 (4x compression) * [ ] Advanced quantization (8x and 16x compression) * [ ] Support for binary vectors --- * :arrow_double_up: Scalability * [ ] Automatic replication factor adjustment * [ ] Automatic shard distribution on cluster scaling * [ ] Repartitioning support --- * :eyes: Search scenarios * [ ] Diversity search - search for vectors that are different from each other * [ ] Sparse vectors search - search for vectors with a small number of non-zero values * [ ] Grouping requests - search within payload-defined groups * [ ] Different scenarios for recommendation API --- * Additionally * [ ] Extend full-text filtering support * [ ] Support for phrase queries * [ ] Support for logical operators * [ ] Simplify update of collection parameters
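For intuition on the scalar quantization milestone: storing each 32-bit float component as an 8-bit integer is where the 4x compression figure comes from. A toy sketch of the idea follows — this is an illustration of the general technique only, not Qdrant's actual implementation:

```python
def quantize(vector):
    """Map f32 components to u8 codes (4x smaller), keeping the affine transform."""
    lo, hi = min(vector), max(vector)
    scale = (hi - lo) / 255 if hi > lo else 1.0
    codes = [round((x - lo) / scale) for x in vector]  # each code fits in a u8
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Approximately recover the original floats from the u8 codes."""
    return [lo + c * scale for c in codes]

vector = [0.05, 0.61, 0.76, 0.74]
codes, lo, scale = quantize(vector)
approx = dequantize(codes, lo, scale)
# Each recovered component is within half a quantization step of the original.
assert all(abs(a - b) <= scale / 2 + 1e-12 for a, b in zip(vector, approx))
```

The 8x and 16x milestones push the same trade-off further: fewer bits per component in exchange for a larger reconstruction error.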
documentation/roadmap.md
--- title: Interfaces weight: 14 --- # Interfaces Qdrant supports these "official" clients. > **Note:** If you are using a language that is not listed here, you can use the REST API directly or generate a client for your language using [OpenAPI](https://github.com/qdrant/qdrant/blob/master/docs/redoc/master/openapi.json) or [protobuf](https://github.com/qdrant/qdrant/tree/master/lib/api/src/grpc/proto) definitions. ## Client Libraries ||Client Repository|Installation|Version| |-|-|-|-| |[![python](/docs/misc/python.webp)](https://python-client.qdrant.tech/)|**[Python](https://github.com/qdrant/qdrant-client)** + **[(Client Docs)](https://python-client.qdrant.tech/)**|`pip install qdrant-client[fastembed]`|[Latest Release](https://github.com/qdrant/qdrant-client/releases)| |![typescript](/docs/misc/ts.webp)|**[JavaScript / Typescript](https://github.com/qdrant/qdrant-js)**|`npm install @qdrant/js-client-rest`|[Latest Release](https://github.com/qdrant/qdrant-js/releases)| |![rust](/docs/misc/rust.webp)|**[Rust](https://github.com/qdrant/rust-client)**|`cargo add qdrant-client`|[Latest Release](https://github.com/qdrant/rust-client/releases)| |![golang](/docs/misc/go.webp)|**[Go](https://github.com/qdrant/go-client)**|`go get github.com/qdrant/go-client`|[Latest Release](https://github.com/qdrant/go-client)| |![.net](/docs/misc/dotnet.webp)|**[.NET](https://github.com/qdrant/qdrant-dotnet)**|`dotnet add package Qdrant.Client`|[Latest Release](https://github.com/qdrant/qdrant-dotnet/releases)| |![java](/docs/misc/java.webp)|**[Java](https://github.com/qdrant/java-client)**|[Available on Maven Central](https://central.sonatype.com/artifact/io.qdrant/client)|[Latest Release](https://github.com/qdrant/java-client/releases)| ## API Reference All interaction with Qdrant takes place via the REST API. We recommend using REST API if you are using Qdrant for the first time or if you are working on a prototype. 
|API|Documentation| |-|-| | REST API |[OpenAPI Specification](https://qdrant.github.io/qdrant/redoc/index.html)| | gRPC API| [gRPC Documentation](https://github.com/qdrant/qdrant/blob/master/docs/grpc/docs.md)| ### gRPC Interface The gRPC methods follow the same principles as REST. For each REST endpoint, there is a corresponding gRPC method. As per the [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml), the gRPC interface is available on the specified port. ```yaml service: grpc_port: 6334 ``` <aside role="status">If you decide to use gRPC, you must expose the port when starting Qdrant.</aside> Running the service inside of Docker will look like this: ```bash docker run -p 6333:6333 -p 6334:6334 \ -v $(pwd)/qdrant_storage:/qdrant/storage:z \ qdrant/qdrant ``` **When to use gRPC:** The choice between gRPC and the REST API is a trade-off between convenience and speed. gRPC is a binary protocol and can be more challenging to debug. We recommend using gRPC if you are already familiar with Qdrant and are trying to optimize the performance of your application. ## Qdrant Web UI Qdrant's Web UI is an intuitive and efficient graphic interface for your Qdrant Collections, REST API and data points. In the **Console**, you may use the REST API to interact with Qdrant, while in **Collections**, you can manage all the collections and upload Snapshots. ![Qdrant Web UI](/articles_data/qdrant-1.3.x/web-ui.png) ### Accessing the Web UI First, run the Docker container: ```bash docker run -p 6333:6333 -p 6334:6334 \ -v $(pwd)/qdrant_storage:/qdrant/storage:z \ qdrant/qdrant ``` The GUI is available at `http://localhost:6333/dashboard`
documentation/interfaces.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "Support" type: delimiter weight: 40 # Change this weight to change order of sections sitemapExclude: True ---
documentation/3-dl.md
--- title: Practice Datasets weight: 41 --- # Common Datasets in Snapshot Format You may find that creating embeddings from datasets is a very resource-intensive task. If you need a practice dataset, feel free to pick one of the ready-made snapshots on this page. These snapshots contain pre-computed vectors that you can easily import into your Qdrant instance. ## Available datasets Our snapshots are usually generated from publicly available datasets, which are often used for non-commercial or academic purposes. The following datasets are currently available. Please click on a dataset name to see its detailed description. | Dataset | Model | Vector size | Documents | Size | Qdrant snapshot | HF Hub | |--------------------------------------------|-----------------------------------------------------------------------------|-------------|-----------|--------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------| | [Arxiv.org titles](#arxivorg-titles) | [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) | 768 | 2.3M | 7.1 GB | [Download](https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) | | [Arxiv.org abstracts](#arxivorg-abstracts) | [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) | 768 | 2.3M | 8.4 GB | [Download](https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/arxiv-abstracts-instructorxl-embeddings) | | [Wolt food](#wolt-food) | [clip-ViT-B-32](https://huggingface.co/sentence-transformers/clip-ViT-B-32) | 512 | 1.7M | 7.9 GB | [Download](https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot) | 
[Open](https://huggingface.co/datasets/Qdrant/wolt-food-clip-ViT-B-32-embeddings) | Once you download a snapshot, you need to [restore it](/documentation/concepts/snapshots/#restore-snapshot) using the Qdrant CLI upon startup or through the API. ## Qdrant on Hugging Face <p align="center"> <a href="https://huggingface.co/Qdrant"> <img style="width: 500px; max-width: 100%;" src="/content/images/hf-logo-with-title.svg" alt="HuggingFace" title="HuggingFace"> </a> </p> [Hugging Face](https://huggingface.co/) provides a platform for sharing and using ML models and datasets. [Qdrant](https://huggingface.co/Qdrant) is one of the organizations there! We aim to provide you with datasets containing neural embeddings that you can use to practice with Qdrant and build your applications based on semantic search. **Please let us know if you'd like to see a specific dataset!** If you are not familiar with [Hugging Face datasets](https://huggingface.co/docs/datasets/index), or would like to know how to combine it with Qdrant, please refer to the [tutorial](/documentation/tutorials/huggingface-datasets/). ## Arxiv.org [Arxiv.org](https://arxiv.org) is a highly-regarded open-access repository of electronic preprints in multiple fields. Operated by Cornell University, arXiv allows researchers to share their findings with the scientific community and receive feedback before they undergo peer review for formal publication. Its archives host millions of scholarly articles, making it an invaluable resource for those looking to explore the cutting edge of scientific research. With a high frequency of daily submissions from scientists around the world, arXiv forms a comprehensive, evolving dataset that is ripe for mining, analysis, and the development of future innovations. <aside role="status"> Arxiv.org snapshots were created using precomputed embeddings exposed by <a href="https://alex.macrocosm.so/download">the Alexandria Index</a>. 
</aside> ### Arxiv.org titles This dataset contains embeddings generated from the paper titles only. Each vector has a payload with the title used to create it, along with the DOI (Digital Object Identifier). ```json { "title": "Nash Social Welfare for Indivisible Items under Separable, Piecewise-Linear Concave Utilities", "DOI": "1612.05191" } ``` The embeddings were generated with the InstructorXL model using the following instruction: > Represent the Research Paper title for retrieval; Input: The following code snippet shows how to generate embeddings using the InstructorXL model: ```python from InstructorEmbedding import INSTRUCTOR model = INSTRUCTOR("hkunlp/instructor-xl") sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments" instruction = "Represent the Research Paper title for retrieval; Input:" embeddings = model.encode([[instruction, sentence]]) ``` The snapshot of the dataset can be downloaded [here](https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot). #### Importing the dataset The easiest way to use the provided dataset is to recover it via the API by passing the URL as a location. It also works in [Qdrant Cloud](https://cloud.qdrant.io/). The following code snippet shows how to create a new collection and fill it with the snapshot data: ```http request PUT /collections/{collection_name}/snapshots/recover { "location": "https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot" } ``` ### Arxiv.org abstracts This dataset contains embeddings generated from the paper abstracts. Each vector has a payload with the abstract used to create it, along with the DOI (Digital Object Identifier). ```json { "abstract": "Recently Cole and Gkatzelis gave the first constant factor approximation\nalgorithm for the problem of allocating indivisible items to agents, under\nadditive valuations, so as to maximize the Nash Social Welfare. 
We give\nconstant factor algorithms for a substantial generalization of their problem --\nto the case of separable, piecewise-linear concave utility functions. We give\ntwo such algorithms, the first using market equilibria and the second using the\ntheory of stable polynomials.\n In AGT, there is a paucity of methods for the design of mechanisms for the\nallocation of indivisible goods and the result of Cole and Gkatzelis seemed to\nbe taking a major step towards filling this gap. Our result can be seen as\nanother step in this direction.\n", "DOI": "1612.05191" } ``` The embeddings generated with InstructorXL model have been generated using the following instruction: > Represent the Research Paper abstract for retrieval; Input: The following code snippet shows how to generate embeddings using the InstructorXL model: ```python from InstructorEmbedding import INSTRUCTOR model = INSTRUCTOR("hkunlp/instructor-xl") sentence = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train." instruction = "Represent the Research Paper abstract for retrieval; Input:" embeddings = model.encode([[instruction, sentence]]) ``` The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot). #### Importing the dataset The easiest way to use the provided dataset is to recover it via the API by passing the URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). 
The following code snippet shows how to create a new collection and fill it with the snapshot data:

```http request
PUT /collections/{collection_name}/snapshots/recover
{
  "location": "https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot"
}
```

## Wolt food

Our [Food Discovery demo](https://food-discovery.qdrant.tech/) relies on the dataset of food images from the Wolt app. Each point in the collection represents a dish with a single image. The image is represented as a vector of 512 float numbers. There is also a JSON payload attached to each point, which looks similar to this:

```json
{
    "cafe": {
        "address": "VGX7+6R2 Vecchia Napoli, Valletta",
        "categories": ["italian", "pasta", "pizza", "burgers", "mediterranean"],
        "location": {"lat": 35.8980154, "lon": 14.5145106},
        "menu_id": "610936a4ee8ea7a56f4a372a",
        "name": "Vecchia Napoli Is-Suq Tal-Belt",
        "rating": 9,
        "slug": "vecchia-napoli-skyparks-suq-tal-belt"
    },
    "description": "Tomato sauce, mozzarella fior di latte, crispy guanciale, Pecorino Romano cheese and a hint of chilli",
    "image": "https://wolt-menu-images-cdn.wolt.com/menu-images/610936a4ee8ea7a56f4a372a/005dfeb2-e734-11ec-b667-ced7a78a5abd_l_amatriciana_pizza_joel_gueller1.jpeg",
    "name": "L'Amatriciana"
}
```

The embeddings were created with the clip-ViT-B-32 model, using the following code snippet:

```python
from PIL import Image
from sentence_transformers import SentenceTransformer

image_path = "5dbfd216-5cce-11eb-8122-de94874ad1c8_ns_takeaway_seelachs_ei_baguette.jpeg"

model = SentenceTransformer("clip-ViT-B-32")
embedding = model.encode(Image.open(image_path))
```

The snapshot of the dataset can be downloaded [here](https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot).

#### Importing the dataset

The easiest way to use the provided dataset is to recover it via the API by passing the URL as a location. This also works in [Qdrant Cloud](https://cloud.qdrant.io/).
The following code snippet shows how to create a new collection and fill it with the snapshot data: ```http request PUT /collections/{collection_name}/snapshots/recover { "location": "https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot" } ```
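If you prefer to script the recovery, the same call can be issued from Python using only the standard library. A minimal sketch, assuming a local Qdrant instance at `http://localhost:6333` and an illustrative collection name `wolt_food`:

```python
import json
import urllib.request

QDRANT_URL = "http://localhost:6333"  # assumed local deployment
COLLECTION = "wolt_food"              # illustrative collection name
SNAPSHOT = (
    "https://snapshots.qdrant.io/"
    "wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot"
)

# Build the PUT request against the snapshot recovery endpoint.
request = urllib.request.Request(
    url=f"{QDRANT_URL}/collections/{COLLECTION}/snapshots/recover",
    data=json.dumps({"location": SNAPSHOT}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",
)

# Sending it requires a running Qdrant instance:
# with urllib.request.urlopen(request) as response:
#     print(response.status)
```

Only the request is constructed here; uncomment the final lines to actually send it to a running instance.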
documentation/datasets.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "User Manual" type: delimiter weight: 20 # Change this weight to change order of sections sitemapExclude: True ---
documentation/1-dl.md
---
title: Qdrant Documentation
weight: 10
---

# Documentation

**Qdrant (read: quadrant)** is a vector similarity search engine. Use our documentation to develop a production-ready service with a convenient API to store, search, and manage vectors with an additional payload. Qdrant's expanding features allow for all sorts of neural network or semantic-based matching, faceted search, and other applications.

## First-Time Users:

There are three ways to use Qdrant:

1. [**Run a Docker image**](quick-start/) if you don't have a Python development environment. Set up a local Qdrant server and storage in a few moments.
2. [**Get the Python client**](https://github.com/qdrant/qdrant-client) if you're familiar with Python. Just `pip install qdrant-client`. The client also supports an in-memory database.
3. [**Spin up a Qdrant Cloud cluster:**](cloud/) the recommended method to run Qdrant in production. Read the [Quickstart](cloud/quickstart-cloud/) to set up your first instance.

### Recommended Workflow:

![Local mode workflow](https://raw.githubusercontent.com/qdrant/qdrant-client/master/docs/images/try-develop-deploy.png)

First, try Qdrant locally using the [Qdrant Client](https://github.com/qdrant/qdrant-client) and with the help of our [Tutorials](tutorials/) and Guides. Develop a sample app from our [Examples](examples/) list and try it using a [Qdrant Docker](guides/installation/) container. Then, when you are ready for production, deploy to a Free Tier [Qdrant Cloud](cloud/) cluster.

### Try Qdrant with Practice Data:

You may always use our [Practice Datasets](datasets/) to build with Qdrant. This page will be regularly updated with dataset snapshots you can use to bootstrap complete projects.

## Popular Topics:

| Tutorial | Description | Tutorial | Description |
|----------|-------------|----------|-------------|
| [Installation](guides/installation/) | Different ways to install Qdrant. | [Collections](concepts/collections/) | Learn about the central concept behind Qdrant. |
| [Configuration](guides/configuration/) | Update the default configuration. | [Bulk Upload](tutorials/bulk-upload/) | Efficiently upload a large number of vectors. |
| [Optimization](tutorials/optimize/) | Optimize Qdrant's resource usage. | [Multitenancy](tutorials/multiple-partitions/) | Set up Qdrant for multiple independent users. |

## Common Use Cases:

Qdrant is ideal for deploying applications based on the matching of embeddings produced by neural network encoders. Check out the [Examples](examples/) section to learn more about common use cases. Also, you can visit the [Tutorials](tutorials/) page to learn how to work with Qdrant in different ways.

| Use Case | Description | Stack |
|----------|-------------|-------|
| [Semantic Search for Beginners](tutorials/search-beginners/) | Build a search engine locally with our most basic instruction set. | Qdrant |
| [Build a Simple Neural Search](tutorials/neural-search/) | Build and deploy a neural search. [Check out the live demo app.](https://demo.qdrant.tech/#/) | Qdrant, BERT, FastAPI |
| [Build a Search with Aleph Alpha](tutorials/aleph-alpha-search/) | Build a simple semantic search that combines text and image data. | Qdrant, Aleph Alpha |
| [Developing Recommendations Systems](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_getting_started/getting_started.ipynb) | Learn how to get started building semantic search and recommendation systems. | Qdrant |
| [Search and Recommend Newspaper Articles](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_text_data/qdrant_and_text_data.ipynb) | Work with text data to develop a semantic search and a recommendation engine for news articles. | Qdrant |
| [Recommendation System for Songs](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_audio_data/03_qdrant_101_audio.ipynb) | Use Qdrant to develop a music recommendation engine based on audio embeddings. | Qdrant |
| [Image Comparison System for Skin Conditions](https://colab.research.google.com/github/qdrant/examples/blob/master/qdrant_101_image_data/04_qdrant_101_cv.ipynb) | Use Qdrant to compare challenging images with labels representing different skin diseases. | Qdrant |
| [Question and Answer System with LlamaIndex](https://githubtocolab.com/qdrant/examples/blob/master/llama_index_recency/Qdrant%20and%20LlamaIndex%20%E2%80%94%20A%20new%20way%20to%20keep%20your%20Q%26A%20systems%20up-to-date.ipynb) | Combine Qdrant and LlamaIndex to create a self-updating Q&A system. | Qdrant, LlamaIndex, Cohere |
| [Extractive QA System](https://githubtocolab.com/qdrant/examples/blob/master/extractive_qa/extractive-question-answering.ipynb) | Extract answers directly from context to generate highly relevant answers. | Qdrant |
| [Ecommerce Reverse Image Search](https://githubtocolab.com/qdrant/examples/blob/master/ecommerce_reverse_image_search/ecommerce-reverse-image-search.ipynb) | Accept images as search queries to receive semantically appropriate answers. | Qdrant |
documentation/_index.md
---
title: Contribution Guidelines
weight: 35
draft: true
---

# How to contribute

If you are a Qdrant user - Data Scientist, ML Engineer, or MLOps - the best contribution would be feedback on your experience with Qdrant. Let us know whenever you have a problem, face unexpected behavior, or see a lack of documentation. You can do it in any convenient way - create an [issue](https://github.com/qdrant/qdrant/issues), start a [discussion](https://github.com/qdrant/qdrant/discussions), or drop us a [message](https://discord.gg/tdtYvXjC4h). If you use Qdrant or Metric Learning in your projects, we'd love to hear your story! Feel free to share articles and demos in our community.

For those familiar with Rust - check out our [contribution guide](https://github.com/qdrant/qdrant/blob/master/CONTRIBUTING.md). If you have trouble understanding the code or architecture - reach out to us at any time. Feeling confident and want to contribute more? Come [work with us](https://qdrant.join.com/)!
documentation/contribution-guidelines.md
--- title: API Reference weight: 20 type: external-link external_url: https://qdrant.github.io/qdrant/redoc/index.html sitemapExclude: True ---
documentation/api-reference.md
---
title: OpenAI
weight: 800
aliases: [ ../integrations/openai/ ]
---

# OpenAI

Qdrant can also easily work with [OpenAI embeddings](https://platform.openai.com/docs/guides/embeddings/embeddings).

There is an official OpenAI Python package that simplifies obtaining them, and it can be installed with pip:

```bash
pip install openai
```

Once installed, the package exposes a method for retrieving the embedding of a given text. OpenAI requires an API key that has to be provided either as the `OPENAI_API_KEY` environment variable or set directly in the source code, as presented below:

```python
import openai
import qdrant_client

from qdrant_client.http.models import Batch

# Choose one of the available models:
# https://platform.openai.com/docs/models/embeddings
embedding_model = "text-embedding-ada-002"

openai_client = openai.Client(
    api_key="<< your_api_key >>"
)

response = openai_client.embeddings.create(
    input="The best vector database",
    model=embedding_model,
)

qdrant_client = qdrant_client.QdrantClient()
qdrant_client.upsert(
    collection_name="MyCollection",
    points=Batch(
        ids=[1],
        vectors=[response.data[0].embedding],
    ),
)
```
documentation/embeddings/openai.md
--- title: AWS Bedrock weight: 1000 --- # Bedrock Embeddings You can use [AWS Bedrock](https://aws.amazon.com/bedrock/) with Qdrant. AWS Bedrock supports multiple [embedding model providers](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html). You'll need the following information from your AWS account: - Region - Access key ID - Secret key To configure your credentials, review the following AWS article: [How do I create an AWS access key](https://repost.aws/knowledge-center/create-access-key). With the following code sample, you can generate embeddings using the [Titan Embeddings G1 - Text model](https://docs.aws.amazon.com/bedrock/latest/userguide/titan-embedding-models.html) which produces sentence embeddings of size 1536. ```python # Install the required dependencies # pip install boto3 qdrant_client import json import boto3 from qdrant_client import QdrantClient, models session = boto3.Session() bedrock_client = session.client( "bedrock-runtime", region_name="<YOUR_AWS_REGION>", aws_access_key_id="<YOUR_AWS_ACCESS_KEY_ID>", aws_secret_access_key="<YOUR_AWS_SECRET_KEY>", ) qdrant_client = QdrantClient(location="http://localhost:6333") qdrant_client.create_collection( "{collection_name}", vectors_config=models.VectorParams(size=1536, distance=models.Distance.COSINE), ) body = json.dumps({"inputText": "Some text to generate embeddings for"}) response = bedrock_client.invoke_model( body=body, modelId="amazon.titan-embed-text-v1", accept="application/json", contentType="application/json", ) response_body = json.loads(response.get("body").read()) qdrant_client.upsert( "{collection_name}", points=[models.PointStruct(id=1, vector=response_body["embedding"])], ) ``` ```javascript // Install the required dependencies // npm install @aws-sdk/client-bedrock-runtime @qdrant/js-client-rest import { BedrockRuntimeClient, InvokeModelCommand, } from "@aws-sdk/client-bedrock-runtime"; import { QdrantClient } from '@qdrant/js-client-rest'; const main = 
async () => {
    const bedrockClient = new BedrockRuntimeClient({
        region: "<YOUR_AWS_REGION>",
        credentials: {
            accessKeyId: "<YOUR_AWS_ACCESS_KEY_ID>",
            secretAccessKey: "<YOUR_AWS_SECRET_KEY>",
        },
    });

    const qdrantClient = new QdrantClient({ url: 'http://localhost:6333' });

    await qdrantClient.createCollection("{collection_name}", {
        vectors: {
            size: 1536,
            distance: 'Cosine',
        }
    });

    const response = await bedrockClient.send(
        new InvokeModelCommand({
            modelId: "amazon.titan-embed-text-v1",
            body: JSON.stringify({
                inputText: "Some text to generate embeddings for",
            }),
            contentType: "application/json",
            accept: "application/json",
        })
    );

    const body = new TextDecoder().decode(response.body);

    await qdrantClient.upsert("{collection_name}", {
        points: [
            {
                id: 1,
                vector: JSON.parse(body).embedding,
            },
        ],
    });
}

main();
```
documentation/embeddings/bedrock.md
---
title: Aleph Alpha
weight: 900
aliases: [ ../integrations/aleph-alpha/ ]
---

Aleph Alpha is a multimodal and multilingual embeddings provider. Their API allows creating embeddings for both text and images in the same latent space. They maintain an [official Python client](https://github.com/Aleph-Alpha/aleph-alpha-client) that can be installed with pip:

```bash
pip install aleph-alpha-client
```

Both a synchronous and an asynchronous client are available. Obtaining the embedding for an image and storing it in Qdrant can be done in the following way:

```python
import qdrant_client
from aleph_alpha_client import (
    Prompt,
    AsyncClient,
    SemanticEmbeddingRequest,
    SemanticRepresentation,
    ImagePrompt
)
from qdrant_client.http.models import Batch

aa_token = "<< your_token >>"
model = "luminous-base"

qdrant_client = qdrant_client.QdrantClient()
async with AsyncClient(token=aa_token) as client:
    prompt = ImagePrompt.from_file("./path/to/the/image.jpg")
    prompt = Prompt.from_image(prompt)

    query_params = {
        "prompt": prompt,
        "representation": SemanticRepresentation.Symmetric,
        "compress_to_size": 128,
    }
    query_request = SemanticEmbeddingRequest(**query_params)
    query_response = await client.semantic_embed(
        request=query_request, model=model
    )

    qdrant_client.upsert(
        collection_name="MyCollection",
        points=Batch(
            ids=[1],
            vectors=[query_response.embedding],
        )
    )
```

If we wanted to create text embeddings with the same model, we wouldn't use `ImagePrompt.from_file`, but simply provide the input text to the `Prompt.from_text` method.
documentation/embeddings/aleph-alpha.md
---
title: Cohere
weight: 700
aliases: [ ../integrations/cohere/ ]
---

# Cohere

Qdrant is compatible with the Cohere [co.embed API](https://docs.cohere.ai/reference/embed) and its official Python SDK that can be installed like any other package:

```bash
pip install cohere
```

The embeddings returned by the co.embed API can be used directly in the Qdrant client's calls:

```python
import cohere
import qdrant_client

from qdrant_client.http.models import Batch

cohere_client = cohere.Client("<< your_api_key >>")
qdrant_client = qdrant_client.QdrantClient()
qdrant_client.upsert(
    collection_name="MyCollection",
    points=Batch(
        ids=[1],
        vectors=cohere_client.embed(
            model="large",
            texts=["The best vector database"],
        ).embeddings,
    ),
)
```

If you are interested in seeing an end-to-end project created with the co.embed API and Qdrant, please check out the "[Question Answering as a Service with Cohere and Qdrant](https://qdrant.tech/articles/qa-with-cohere-and-qdrant/)" article.

## Embed v3

Embed v3 is a new family of Cohere models, released in November 2023. The new models require passing an additional parameter to the API call: `input_type`. It determines the type of task you want to use the embeddings for.

- `input_type="search_document"` - for documents to store in Qdrant
- `input_type="search_query"` - for search queries to find the most relevant documents
- `input_type="classification"` - for classification tasks
- `input_type="clustering"` - for text clustering

While implementing semantic search applications, such as RAG, you should use `input_type="search_document"` for the indexed documents and `input_type="search_query"` for the search queries.
The following example shows how to index documents with the Embed v3 model:

```python
import cohere
import qdrant_client

from qdrant_client.http.models import Batch

cohere_client = cohere.Client("<< your_api_key >>")
qdrant_client = qdrant_client.QdrantClient()
qdrant_client.upsert(
    collection_name="MyCollection",
    points=Batch(
        ids=[1],
        vectors=cohere_client.embed(
            model="embed-english-v3.0",  # New Embed v3 model
            input_type="search_document",  # Input type for documents
            texts=["Qdrant is a vector database written in Rust"],
        ).embeddings,
    ),
)
```

Once the documents are indexed, you can search for the most relevant documents using the Embed v3 model:

```python
qdrant_client.search(
    collection_name="MyCollection",
    query_vector=cohere_client.embed(
        model="embed-english-v3.0",  # New Embed v3 model
        input_type="search_query",  # Input type for search queries
        texts=["The best vector database"],
    ).embeddings[0],
)
```

<aside role="status">
According to Cohere's documentation, all v3 models can use dot product, cosine similarity, and Euclidean distance as the similarity metric, as all metrics return identical rankings.
</aside>
documentation/embeddings/cohere.md
---
title: "Nomic"
weight: 1100
---

# Nomic

The `nomic-embed-text-v1` model is an open source [8192 context length](https://github.com/nomic-ai/contrastors) text encoder. While you can find it on the [Hugging Face Hub](https://huggingface.co/nomic-ai/nomic-embed-text-v1), you may find it easier to obtain the embeddings through [Nomic Text Embeddings](https://docs.nomic.ai/reference/endpoints/nomic-embed-text), using either the official Python client or direct HTTP requests.

<aside role="status">Using Nomic Text Embeddings requires configuring the Nomic API token</aside>

You can use Nomic embeddings directly in Qdrant client calls. Embeddings are obtained differently for documents and queries: the `task_type` parameter defines which kind you get.

For documents, set the `task_type` to `search_document`:

```python
from qdrant_client import QdrantClient, models
from nomic import embed

output = embed.text(
    texts=["Qdrant is the best vector database!"],
    model="nomic-embed-text-v1",
    task_type="search_document",
)

qdrant_client = QdrantClient()
qdrant_client.upsert(
    collection_name="my-collection",
    points=models.Batch(
        ids=[1],
        vectors=output["embeddings"],
    ),
)
```

To query the collection, set the `task_type` to `search_query`:

```python
output = embed.text(
    texts=["What is the best vector database?"],
    model="nomic-embed-text-v1",
    task_type="search_query",
)

qdrant_client.search(
    collection_name="my-collection",
    query_vector=output["embeddings"][0],
)
```

For more information, see the Nomic documentation on [Text embeddings](https://docs.nomic.ai/reference/endpoints/nomic-embed-text).
documentation/embeddings/nomic.md
---
title: Gemini
weight: 700
---

# Gemini

Qdrant is compatible with the Gemini Embedding Model API and its official Python SDK, which can be installed like any other package.

Gemini is a new family of Google PaLM models, released in December 2023. The new embedding models succeed the previous Gecko Embedding Model.

In the latest models, an additional parameter, `task_type`, can be passed to the API call. It designates the intended purpose of the embeddings. The Embedding Model API supports the following task types:

1. `retrieval_query`: Specifies that the given text is a query in a search/retrieval setting.
2. `retrieval_document`: Specifies that the given text is a document from the corpus being searched.
3. `semantic_similarity`: Specifies that the given text will be used for Semantic Text Similarity.
4. `classification`: Specifies that the given text will be classified.
5. `clustering`: Specifies that the embeddings will be used for clustering.
6. `task_type_unspecified`: Unset value, which will default to one of the other values.

If you're building a semantic search application, such as RAG, you should use `task_type="retrieval_document"` for the indexed documents and `task_type="retrieval_query"` for the search queries. The following example shows how to do this with Qdrant:

## Setup

```bash
pip install google-generativeai
```

Let's see how to use the Embedding Model API to embed a document for retrieval.
The following example shows how to embed a document with the `models/embedding-001` model with the `retrieval_document` task type:

## Embedding a document

```python
import google.generativeai as genai
import qdrant_client

GEMINI_API_KEY = "YOUR GEMINI API KEY"  # add your key here

genai.configure(api_key=GEMINI_API_KEY)

result = genai.embed_content(
    model="models/embedding-001",
    content="Qdrant is the best vector search engine to use with Gemini",
    task_type="retrieval_document",
    title="Qdrant x Gemini",
)
```

The returned result is a dictionary with a single key, `embedding`, whose value is a list of floats representing the embedding of the document.

## Indexing documents with Qdrant

```python
from qdrant_client.http.models import Batch

qdrant_client = qdrant_client.QdrantClient()
qdrant_client.upsert(
    collection_name="GeminiCollection",
    points=Batch(
        ids=[1],
        vectors=[
            genai.embed_content(
                model="models/embedding-001",
                content="Qdrant is the best vector search engine to use with Gemini",
                task_type="retrieval_document",
                title="Qdrant x Gemini",
            )["embedding"]
        ],
    ),
)
```

## Searching for documents with Qdrant

Once the documents are indexed, you can search for the most relevant documents using the same model with the `retrieval_query` task type:

```python
qdrant_client.search(
    collection_name="GeminiCollection",
    query_vector=genai.embed_content(
        model="models/embedding-001",
        content="What is the best vector database to use with Gemini?",
        task_type="retrieval_query",
    )["embedding"],
)
```

## Using Gemini Embedding Models with Binary Quantization

You can use Gemini Embedding Models with [Binary Quantization](/articles/binary-quantization/) - a technique that allows you to reduce the size of the embeddings by 32 times without losing much of the search quality.
In this table, you can see the results of search with the `models/embedding-001` model with Binary Quantization, compared with the original model.

At an oversampling of 3 and a limit of 100, we achieve a 95% recall against the exact nearest neighbors with rescore enabled.

| limit | 1x, rescore=False | 1x, rescore=True | 2x, rescore=False | 2x, rescore=True | 3x, rescore=False | 3x, rescore=True |
|-------|----------|----------|----------|----------|----------|----------|
| 10    | 0.523333 | 0.831111 | 0.523333 | 0.915556 | 0.523333 | 0.950000 |
| 20    | 0.510000 | 0.836667 | 0.510000 | 0.912222 | 0.510000 | 0.937778 |
| 50    | 0.489111 | 0.841556 | 0.489111 | 0.913333 | 0.488444 | 0.947111 |
| 100   | 0.485778 | 0.846556 | 0.485556 | 0.929000 | 0.486000 | **0.956333** |

The `1x`, `2x`, and `3x` column prefixes denote the oversampling factor.

That's it! You can now use Gemini Embedding Models with Qdrant!
documentation/embeddings/gemini.md
---
title: Jina Embeddings
weight: 800
aliases: [ ../integrations/jina-embeddings/ ]
---

# Jina Embeddings

Qdrant can also easily work with [Jina embeddings](https://jina.ai/embeddings/), which allow for model input lengths of up to 8192 tokens. To call their endpoint, all you need is an API key obtainable [here](https://jina.ai/embeddings/). By the way, our friends from **Jina AI** provided us with a code (**QDRANT**) that will grant you a **10% discount** if you plan to use Jina Embeddings in production.

```python
import qdrant_client
import requests

from qdrant_client.http.models import Distance, VectorParams
from qdrant_client.http.models import Batch

# Provide Jina API key and choose one of the available models.
# You can get a free trial key here: https://jina.ai/embeddings/
JINA_API_KEY = "jina_xxxxxxxxxxx"
MODEL = "jina-embeddings-v2-base-en"  # or "jina-embeddings-v2-small-en"
EMBEDDING_SIZE = 768  # 512 for the small variant

# Get embeddings from the API
url = "https://api.jina.ai/v1/embeddings"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {JINA_API_KEY}",
}

data = {
    "input": ["Your text string goes here", "You can send multiple texts"],
    "model": MODEL,
}

response = requests.post(url, headers=headers, json=data)
embeddings = [d["embedding"] for d in response.json()["data"]]

# Index the embeddings into Qdrant
qdrant_client = qdrant_client.QdrantClient(":memory:")
qdrant_client.create_collection(
    collection_name="MyCollection",
    vectors_config=VectorParams(size=EMBEDDING_SIZE, distance=Distance.DOT),
)

qdrant_client.upsert(
    collection_name="MyCollection",
    points=Batch(
        ids=list(range(len(embeddings))),
        vectors=embeddings,
    ),
)
```
documentation/embeddings/jina-embeddings.md
---
title: Embeddings
weight: 33
# If the index.md file is empty, the link to the section will be hidden from the sidebar
is_empty: true
---

| Embedding |
|---|
| [Gemini](./gemini/) |
| [Aleph Alpha](./aleph-alpha/) |
| [AWS Bedrock](./bedrock/) |
| [Cohere](./cohere/) |
| [Jina](./jina-embeddings/) |
| [Nomic](./nomic/) |
| [OpenAI](./openai/) |
documentation/embeddings/_index.md
---
title: Database Optimization
weight: 3
---

## Database Optimization Strategies

### How do I reduce memory usage?

The primary source of memory usage is vector data. There are several ways to address that:

- Configure [Quantization](../../guides/quantization/) to reduce the memory usage of vectors.
- Configure on-disk vector storage.

The choice of approach depends on your requirements. Read more about [configuring the optimal](../../tutorials/optimize/) use of Qdrant.

### How do you choose machine configuration?

There are two main scenarios of Qdrant usage in terms of resource consumption:

- **Performance-optimized** -- when you need to serve vector search as fast as possible and handle as many requests as possible. In this case, you need to keep as much vector data in RAM as possible. Use our [calculator](https://cloud.qdrant.io/calculator) to estimate the required RAM.
- **Storage-optimized** -- when you need to store many vectors and minimize costs by compromising some search speed. In this case, pay attention to the disk speed instead. More about it in the article about [Memory Consumption](../../../articles/memory-consumption/).

### I configured on-disk vector storage, but memory usage is still high. Why?

Firstly, memory usage metrics as reported by `top` or `htop` may be misleading. They do not show the minimal amount of memory required to run the service. If the RSS memory usage is 10 GB, it doesn't mean that it won't work on a machine with 8 GB of RAM.

Qdrant uses many techniques to reduce search latency, including caching disk data in RAM and preloading data from disk to RAM. As a result, the Qdrant process might use more memory than the minimum required to run the service.

> Unused RAM is wasted RAM

If you want to limit the memory usage of the service, we recommend using [limits in Docker](https://docs.docker.com/config/containers/resource_constraints/#memory) or Kubernetes.

### My requests are very slow or time out. What should I do?
There are several possible reasons for that: - **Using filters without payload index** -- If you're performing a search with a filter but you don't have a payload index, Qdrant will have to load whole payload data from disk to check the filtering condition. Ensure you have adequately configured [payload indexes](../../concepts/indexing/#payload-index). - **Usage of on-disk vector storage with slow disks** -- If you're using on-disk vector storage, ensure you have fast enough disks. We recommend using local SSDs with at least 50k IOPS. Read more about the influence of the disk speed on the search latency in the article about [Memory Consumption](../../../articles/memory-consumption/). - **Large limit or non-optimal query parameters** -- A large limit or offset might lead to significant performance degradation. Please pay close attention to the query/collection parameters that significantly diverge from the defaults. They might be the reason for the performance issues.
documentation/faq/database-optimization.md
---
title: Fundamentals
weight: 1
---

## Qdrant Fundamentals

### How many collections can I create?

As many as you want, but be aware that each collection requires additional resources. It is _highly_ recommended not to create many small collections, as this leads to significant resource consumption overhead.

We consider creating a collection for each user/dialog/document an antipattern. Please read more about collections, isolation, and multiple users in our [Multitenancy](../../tutorials/multiple-partitions/) tutorial.

### My search results contain vectors with null values. Why?

By default, Qdrant tries to minimize network traffic and doesn't return vectors in search results. But you can force Qdrant to do so by setting the `with_vector` parameter of the Search/Scroll requests to `true`.

If you're still seeing `"vector": null` in your results, it might be that the vector you're passing is not in the correct format, or there's an issue with how you're calling the upsert method.

### How can I search without a vector?

You are likely looking for the [scroll](../../concepts/points/#scroll-points) method. It allows you to retrieve records based on filters or even iterate over all the records in the collection.

### Does Qdrant support a full-text search or a hybrid search?

Qdrant is a vector search engine in the first place, and we only implement full-text support as long as it doesn't compromise the vector search use case. That includes both the interface and the performance.
What Qdrant can do:

- Search with full-text filters
- Apply full-text filters to the vector search (i.e., perform vector search among the records with specific words or phrases)
- Do prefix search and semantic [search-as-you-type](../../../articles/search-as-you-type/)

What Qdrant plans to introduce in the future:

- Support for sparse vectors, as used in [SPLADE](https://github.com/naver/splade) or similar models

What Qdrant doesn't plan to support:

- BM25 or other non-vector-based retrieval or ranking functions
- Built-in ontologies or knowledge graphs
- Query analyzers and other NLP tools

Of course, you can always combine Qdrant with any specialized tool you need, including full-text search engines. Read more about [our approach](../../../articles/hybrid-search/) to hybrid search.

### How do I upload a large number of vectors into a Qdrant collection?

Read about our recommendations in the [bulk upload](../../tutorials/bulk-upload/) tutorial.

### Can I only store quantized vectors and discard full precision vectors?

No, Qdrant requires full precision vectors for operations like reindexing, rescoring, etc.

## Qdrant Cloud

### Is it possible to scale down a Qdrant Cloud cluster?

In general, no. There's no way to scale down the underlying disk storage. In some cases, we might be able to help you with that through manual intervention, but it's not guaranteed.

## Versioning

### How do I avoid issues when updating to the latest version?

We only guarantee compatibility if you update between consecutive versions. You would need to upgrade versions one at a time: `1.1 -> 1.2`, then `1.2 -> 1.3`, then `1.3 -> 1.4`.

### Do you guarantee compatibility across versions?

If your version is older, we guarantee only compatibility between two consecutive minor versions. While we will assist with break/fix troubleshooting of issues and errors specific to our products, Qdrant is not accountable for reviewing, writing (or rewriting), or debugging custom code.
documentation/faq/qdrant-fundamentals.md
--- title: FAQ weight: 41 is_empty: true ---
documentation/faq/_index.md
---
title: Multitenancy
weight: 12
aliases:
  - ../tutorials/multiple-partitions
---

# Configure Multitenancy

**How many collections should you create?** In most cases, you should only use a single collection with payload-based partitioning. This approach is called multitenancy. It is efficient for most users, but it requires additional configuration. This document will show you how to set it up.

**When should you create multiple collections?** When you have a limited number of users and you need isolation. This approach is flexible, but it may be more costly, since creating numerous collections may result in resource overhead. Also, you need to ensure that they do not affect each other in any way, including performance-wise.

## Partition by payload

When an instance is shared between multiple users, you may need to partition vectors by user. This is done so that each user can only access their own vectors and can't see the vectors of other users.

1. Add a `group_id` field to each vector in the collection.
```http PUT /collections/{collection_name}/points { "points": [ { "id": 1, "payload": {"group_id": "user_1"}, "vector": [0.9, 0.1, 0.1] }, { "id": 2, "payload": {"group_id": "user_1"}, "vector": [0.1, 0.9, 0.1] }, { "id": 3, "payload": {"group_id": "user_2"}, "vector": [0.1, 0.1, 0.9] } ] } ``` ```python client.upsert( collection_name="{collection_name}", points=[ models.PointStruct( id=1, payload={"group_id": "user_1"}, vector=[0.9, 0.1, 0.1], ), models.PointStruct( id=2, payload={"group_id": "user_1"}, vector=[0.1, 0.9, 0.1], ), models.PointStruct( id=3, payload={"group_id": "user_2"}, vector=[0.1, 0.1, 0.9], ), ], ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.upsert("{collection_name}", { points: [ { id: 1, payload: { group_id: "user_1" }, vector: [0.9, 0.1, 0.1], }, { id: 2, payload: { group_id: "user_1" }, vector: [0.1, 0.9, 0.1], }, { id: 3, payload: { group_id: "user_2" }, vector: [0.1, 0.1, 0.9], }, ], }); ``` ```rust use qdrant_client::{client::QdrantClient, qdrant::PointStruct}; use serde_json::json; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .upsert_points_blocking( "{collection_name}".to_string(), None, vec![ PointStruct::new( 1, vec![0.9, 0.1, 0.1], json!( {"group_id": "user_1"} ) .try_into() .unwrap(), ), PointStruct::new( 2, vec![0.1, 0.9, 0.1], json!( {"group_id": "user_1"} ) .try_into() .unwrap(), ), PointStruct::new( 3, vec![0.1, 0.1, 0.9], json!( {"group_id": "user_2"} ) .try_into() .unwrap(), ), ], None, ) .await?; ``` ```java import java.util.List; import java.util.Map; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.PointStruct; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .upsertAsync( "{collection_name}", List.of( PointStruct.newBuilder() .setId(id(1))
.setVectors(vectors(0.9f, 0.1f, 0.1f)) .putAllPayload(Map.of("group_id", value("user_1"))) .build(), PointStruct.newBuilder() .setId(id(2)) .setVectors(vectors(0.1f, 0.9f, 0.1f)) .putAllPayload(Map.of("group_id", value("user_1"))) .build(), PointStruct.newBuilder() .setId(id(3)) .setVectors(vectors(0.1f, 0.1f, 0.9f)) .putAllPayload(Map.of("group_id", value("user_2"))) .build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.UpsertAsync( collectionName: "{collection_name}", points: new List<PointStruct> { new() { Id = 1, Vectors = new[] { 0.9f, 0.1f, 0.1f }, Payload = { ["group_id"] = "user_1" } }, new() { Id = 2, Vectors = new[] { 0.1f, 0.9f, 0.1f }, Payload = { ["group_id"] = "user_1" } }, new() { Id = 3, Vectors = new[] { 0.1f, 0.1f, 0.9f }, Payload = { ["group_id"] = "user_2" } } } ); ``` 2. Use a filter along with `group_id` to filter vectors for each user. ```http POST /collections/{collection_name}/points/search { "filter": { "must": [ { "key": "group_id", "match": { "value": "user_1" } } ] }, "vector": [0.1, 0.1, 0.9], "limit": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.search( collection_name="{collection_name}", query_filter=models.Filter( must=[ models.FieldCondition( key="group_id", match=models.MatchValue( value="user_1", ), ) ] ), query_vector=[0.1, 0.1, 0.9], limit=10, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.search("{collection_name}", { filter: { must: [{ key: "group_id", match: { value: "user_1" } }], }, vector: [0.1, 0.1, 0.9], limit: 10, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{Condition, Filter, SearchPoints}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .search_points(&SearchPoints { collection_name: 
"{collection_name}".to_string(), filter: Some(Filter::must([Condition::matches( "group_id", "user_1".to_string(), )])), vector: vec![0.1, 0.1, 0.9], limit: 10, ..Default::default() }) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.SearchPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .searchAsync( SearchPoints.newBuilder() .setCollectionName("{collection_name}") .setFilter( Filter.newBuilder().addMust(matchKeyword("group_id", "user_1")).build()) .addAllVector(List.of(0.1f, 0.1f, 0.9f)) .setLimit(10) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient("localhost", 6334); await client.SearchAsync( collectionName: "{collection_name}", vector: new float[] { 0.1f, 0.1f, 0.9f }, filter: MatchKeyword("group_id", "user_1"), limit: 10 ); ``` ## Calibrate performance The speed of indexation may become a bottleneck in this case, as each user's vector will be indexed into the same collection. To avoid this bottleneck, consider _bypassing the construction of a global vector index_ for the entire collection and building it only for individual groups instead. By adopting this strategy, Qdrant will index vectors for each user independently, significantly accelerating the process. To implement this approach, you should: 1. Set `payload_m` in the HNSW configuration to a non-zero value, such as 16. 2. Set `m` in hnsw config to 0. This will disable building global index for the whole collection. 
```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine" }, "hnsw_config": { "payload_m": 16, "m": 0 } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), hnsw_config=models.HnswConfigDiff( payload_m=16, m=0, ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", }, hnsw_config: { payload_m: 16, m: 0, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ vectors_config::Config, CreateCollection, Distance, HnswConfigDiff, VectorParams, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), ..Default::default() })), }), hnsw_config: Some(HnswConfigDiff { payload_m: Some(16), m: Some(0), ..Default::default() }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.HnswConfigDiff; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() 
.setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setHnswConfig(HnswConfigDiff.newBuilder().setPayloadM(16).setM(0).build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, hnswConfig: new HnswConfigDiff { PayloadM = 16, M = 0 } ); ``` 3. Create a keyword payload index for the `group_id` field. ```http PUT /collections/{collection_name}/index { "field_name": "group_id", "field_schema": "keyword" } ``` ```python client.create_payload_index( collection_name="{collection_name}", field_name="group_id", field_schema=models.PayloadSchemaType.KEYWORD, ) ``` ```typescript client.createPayloadIndex("{collection_name}", { field_name: "group_id", field_schema: "keyword", }); ``` ```rust use qdrant_client::{client::QdrantClient, qdrant::FieldType}; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_field_index( "{collection_name}", "group_id", FieldType::Keyword, None, None, ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.PayloadSchemaType; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createPayloadIndexAsync( "{collection_name}", "group_id", PayloadSchemaType.Keyword, null, null, null, null) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient("localhost", 6334); await client.CreatePayloadIndexAsync(collectionName: "{collection_name}", fieldName: "group_id"); ``` ## Limitations One downside to this approach is that global requests (without the `group_id` filter) will be slower since they will necessitate scanning all groups to identify the nearest neighbors.
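To build intuition for the isolation guarantee, here is a toy in-memory sketch of payload-partitioned search in plain Python. This is not Qdrant's implementation (Qdrant applies the filter during index traversal rather than as a naive pre-filter), just a model of the behavior:

```python
import math

# Toy "collection": three points with a group_id payload, as in the examples above
points = [
    {"id": 1, "vector": [0.9, 0.1, 0.1], "payload": {"group_id": "user_1"}},
    {"id": 2, "vector": [0.1, 0.9, 0.1], "payload": {"group_id": "user_1"}},
    {"id": 3, "vector": [0.1, 0.1, 0.9], "payload": {"group_id": "user_2"}},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query, group_id, limit=10):
    # Restrict to the tenant's points, then rank by similarity:
    # a tenant never sees another tenant's points
    scoped = [p for p in points if p["payload"]["group_id"] == group_id]
    scoped.sort(key=lambda p: cosine(query, p["vector"]), reverse=True)
    return scoped[:limit]

print([p["id"] for p in search([0.2, 0.9, 0.1], "user_1")])  # [2, 1]
```

Point 3 is never returned for `user_1`, no matter how close it is to the query vector.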
documentation/guides/multiple-partitions.md
---
title: Administration
weight: 10
aliases:
  - ../administration
---

# Administration

Qdrant exposes administration tools which enable you to modify the behavior of a Qdrant instance at runtime without changing its configuration manually.

## Locking

A locking API enables users to restrict the possible operations on a Qdrant process. It is important to mention that:

- The configuration is not persistent, therefore it is necessary to lock again following a restart.
- Locking applies to a single node only. It is necessary to call lock on all the desired nodes in a distributed deployment setup.

Lock request sample:

```http
POST /locks
{
    "error_message": "write is forbidden",
    "write": true
}
```

The `write` flag enables/disables the write lock. If the write lock is set to `true`, Qdrant doesn't allow creating new collections or adding new data to the existing storage. However, deletion operations and updates are not forbidden under the write lock. This feature enables administrators to prevent a Qdrant process from using more disk space while permitting users to search and delete unnecessary data.

You can optionally provide the error message that should be used for error responses to users.

## Recovery mode

*Available as of v1.2.0*

Recovery mode can help in situations where Qdrant fails to start repeatedly. When starting in recovery mode, Qdrant only loads collection metadata to prevent going out of memory. This allows you to resolve out-of-memory situations, for example, by deleting a collection. After resolving the issue, Qdrant can be restarted normally to continue operation.

In recovery mode, collection operations are limited to [deleting](../../concepts/collections/#delete-collection) a collection. That is because only collection metadata is loaded during recovery.

To enable recovery mode with the Qdrant Docker image you must set the environment variable `QDRANT_ALLOW_RECOVERY_MODE=true`.
The container will try to start normally first, and will restart in recovery mode if initialization fails due to an out-of-memory error. This behavior is disabled by default.

If using a Qdrant binary, recovery mode can be enabled by setting a recovery message in an environment variable, such as `QDRANT__STORAGE__RECOVERY_MODE="My recovery message"`.
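As a mental model for the write-lock semantics described in the Locking section above (new data rejected, deletions still allowed), consider this toy sketch. It is plain Python, not Qdrant source code:

```python
class Storage:
    """Toy model of the write-lock semantics: inserts of new data are
    rejected with the configured error message, while deletions keep
    working so users can free disk space under the lock."""

    def __init__(self):
        self.points = {}
        self.write_locked = False
        self.error_message = ""

    def lock_writes(self, error_message: str):
        self.write_locked = True
        self.error_message = error_message

    def upsert(self, point_id, vector):
        if self.write_locked:
            raise PermissionError(self.error_message)
        self.points[point_id] = vector

    def delete(self, point_id):
        # Deletion stays allowed so the instance can only shrink, not grow
        self.points.pop(point_id, None)

storage = Storage()
storage.upsert(1, [0.1, 0.9])
storage.lock_writes("write is forbidden")
storage.delete(1)            # still allowed under the lock
try:
    storage.upsert(2, [0.2, 0.8])
except PermissionError as e:
    print(e)                 # write is forbidden
```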
documentation/guides/administration.md
---
title: Troubleshooting
weight: 170
aliases:
  - ../tutorials/common-errors
---

# Solving common errors

## Too many files open (OS error 24)

Each collection segment needs some files to be open. At some point you may encounter the following errors in your server log:

```text
Error: Too many files open (OS error 24)
```

In such a case you may need to increase the open file limit. You can do this, for example, when you launch the Docker container:

```bash
docker run --ulimit nofile=10000:10000 qdrant/qdrant:latest
```

The command above sets both the soft and the hard limit to `10000`.

If you are not using Docker, the following command will change the limit for the current user session:

```bash
ulimit -n 10000
```

Please note that the command must be executed before you run the Qdrant server.
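If you launch Qdrant from your own supervisor process, you can inspect (and, within the hard limit, raise) the descriptor limit programmatically before starting the server. This sketch uses only the standard library `resource` module and is POSIX-only:

```python
import resource

# Current soft and hard limits for open file descriptors (RLIMIT_NOFILE)
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# An unprivileged process may raise its soft limit up to the hard limit
if soft < hard:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```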
documentation/guides/common-errors.md
--- title: Configuration weight: 160 aliases: - ../configuration --- # Configuration To change or correct Qdrant's behavior, default collection settings, and network interface parameters, you can use configuration files. The default configuration file is located at [config/config.yaml](https://github.com/qdrant/qdrant/blob/master/config/config.yaml). To change the default configuration, add a new configuration file and specify the path with `--config-path path/to/custom_config.yaml`. If running in production mode, you could also choose to overwrite `config/production.yaml`. See [ordering](#order-and-priority) for details on how configurations are loaded. The [Installation](../installation) guide contains examples of how to set up Qdrant with a custom configuration for the different deployment methods. ## Order and priority *Effective as of v1.2.1* Multiple configurations may be loaded on startup. All of them are merged into a single effective configuration that is used by Qdrant. Configurations are loaded in the following order, if present: 1. Embedded base configuration ([source](https://github.com/qdrant/qdrant/blob/master/config/config.yaml)) 2. File `config/config.yaml` 3. File `config/{RUN_MODE}.yaml` (such as `config/production.yaml`) 4. File `config/local.yaml` 5. Config provided with `--config-path PATH` (if set) 6. [Environment variables](#environment-variables) This list is from least to most significant. Properties in later configurations will overwrite those loaded before it. For example, a property set with `--config-path` will overwrite those in other files. Most of these files are included by default in the Docker container. But it is likely that they are absent on your local machine if you run the `qdrant` binary manually. If file 2 or 3 are not found, a warning is shown on startup. If file 5 is provided but not found, an error is shown on startup. Other supported configuration file formats and extensions include: `.toml`, `.json`, `.ini`. 
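The ordering above behaves like a left-to-right merge of nested mappings, where later layers override earlier ones. The following sketch illustrates that semantics with a recursive dict merge; it is an illustration of the precedence rules, not Qdrant's actual loader:

```python
from functools import reduce

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` into `base`; `override` wins on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Later layers (e.g. --config-path, environment variables) override earlier ones
layers = [
    {"service": {"http_port": 6333, "enable_tls": False}},  # embedded base
    {"service": {"http_port": 7000}},                       # config/local.yaml
    {"service": {"enable_tls": True}},                      # environment
]
effective = reduce(deep_merge, layers)
print(effective)  # {'service': {'http_port': 7000, 'enable_tls': True}}
```

Note how each layer only overrides the keys it sets: the final port comes from the middle layer, while TLS comes from the last one.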
## Environment variables It is possible to set configuration properties using environment variables. Environment variables are always the most significant and cannot be overwritten (see [ordering](#order-and-priority)). All environment variables are prefixed with `QDRANT__` and are separated with `__`. These variables: ```bash QDRANT__LOG_LEVEL=INFO QDRANT__SERVICE__HTTP_PORT=6333 QDRANT__SERVICE__ENABLE_TLS=1 QDRANT__TLS__CERT=./tls/cert.pem QDRANT__TLS__CERT_TTL=3600 ``` result in this configuration: ```yaml log_level: INFO service: http_port: 6333 enable_tls: true tls: cert: ./tls/cert.pem cert_ttl: 3600 ``` To run Qdrant locally with a different HTTP port you could use: ```bash QDRANT__SERVICE__HTTP_PORT=1234 ./qdrant ``` ## Configuration file example ```yaml log_level: INFO storage: # Where to store all the data storage_path: ./storage # Where to store snapshots snapshots_path: ./snapshots # Where to store temporary files # If null, temporary snapshot are stored in: storage/snapshots_temp/ temp_path: null # If true - point's payload will not be stored in memory. # It will be read from the disk every time it is requested. # This setting saves RAM by (slightly) increasing the response time. # Note: those payload values that are involved in filtering and are indexed - remain in RAM. on_disk_payload: true # Maximum number of concurrent updates to shard replicas # If `null` - maximum concurrency is used. update_concurrency: null # Write-ahead-log related configuration wal: # Size of a single WAL segment wal_capacity_mb: 32 # Number of WAL segments to create ahead of actual data requirement wal_segments_ahead: 0 # Normal node - receives all updates and answers all queries node_type: "Normal" # Listener node - receives all updates, but does not answer search/read queries # Useful for setting up a dedicated backup node # node_type: "Listener" performance: # Number of parallel threads used for search operations. If 0 - auto selection. 
max_search_threads: 0 # Max total number of threads, which can be used for running optimization processes across all collections. # Note: Each optimization thread will also use `max_indexing_threads` for index building. # So total number of threads used for optimization will be `max_optimization_threads * max_indexing_threads` max_optimization_threads: 1 # Prevent DDoS of too many concurrent updates in distributed mode. # One external update usually triggers multiple internal updates, which breaks internal # timings. For example, the health check timing and consensus timing. # If null - auto selection. update_rate_limit: null optimizers: # The minimal fraction of deleted vectors in a segment, required to perform segment optimization deleted_threshold: 0.2 # The minimal number of vectors in a segment, required to perform segment optimization vacuum_min_vector_number: 1000 # Target amount of segments optimizer will try to keep. # Real amount of segments may vary depending on multiple parameters: # - Amount of stored points # - Current write RPS # # It is recommended to select default number of segments as a factor of the number of search threads, # so that each segment would be handled evenly by one of the threads. # If `default_segment_number = 0`, will be automatically selected by the number of available CPUs default_segment_number: 0 # Do not create segments larger than this size (in KiloBytes). # Large segments might require disproportionately long indexation times, # therefore it makes sense to limit the size of segments. # # If indexation speed has higher priority for you - make this parameter lower. # If search speed is more important - make this parameter higher. # Note: 1Kb = 1 vector of size 256 # If not set, will be automatically selected considering the number of available CPUs. max_segment_size_kb: null # Maximum size (in KiloBytes) of vectors to store in-memory per segment. # Segments larger than this threshold will be stored as read-only memmapped file.
# To enable memmap storage, lower the threshold # Note: 1Kb = 1 vector of size 256 # To explicitly disable mmap optimization, set to `0`. # If not set, will be disabled by default. memmap_threshold_kb: null # Maximum size (in KiloBytes) of vectors allowed for plain index. # Default value based on https://github.com/google-research/google-research/blob/master/scann/docs/algorithms.md # Note: 1Kb = 1 vector of size 256 # To explicitly disable vector indexing, set to `0`. # If not set, the default value will be used. indexing_threshold_kb: 20000 # Interval between forced flushes. flush_interval_sec: 5 # Max number of threads, which can be used for optimization per collection. # Note: Each optimization thread will also use `max_indexing_threads` for index building. # So total number of threads used for optimization will be `max_optimization_threads * max_indexing_threads` # If `max_optimization_threads = 0`, optimization will be disabled. max_optimization_threads: 1 # Default parameters of HNSW Index. Could be overridden for each collection or named vector individually hnsw_index: # Number of edges per node in the index graph. Larger the value - more accurate the search, more space required. m: 16 # Number of neighbours to consider during the index building. Larger the value - more accurate the search, more time required to build index. ef_construct: 100 # Minimal size (in KiloBytes) of vectors for additional payload-based indexing. # If payload chunk is smaller than `full_scan_threshold_kb` additional indexing won't be used - # in this case full-scan search should be preferred by query planner and additional indexing is not required. # Note: 1Kb = 1 vector of size 256 full_scan_threshold_kb: 10000 # Number of parallel threads used for background index building. If 0 - auto selection. max_indexing_threads: 0 # Store HNSW index on disk. If set to false, index will be stored in RAM. Default: false on_disk: false # Custom M param for hnsw graph built for payload index. 
If not set, default M will be used. payload_m: null service: # Maximum size of POST data in a single request in megabytes max_request_size_mb: 32 # Number of parallel workers used for serving the api. If 0 - equal to the number of available cores. # If missing - Same as storage.max_search_threads max_workers: 0 # Host to bind the service on host: 0.0.0.0 # HTTP(S) port to bind the service on http_port: 6333 # gRPC port to bind the service on. # If `null` - gRPC is disabled. Default: null # Comment to disable gRPC: grpc_port: 6334 # Enable CORS headers in REST API. # If enabled, browsers would be allowed to query REST endpoints regardless of query origin. # More info: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS # Default: true enable_cors: true # Enable HTTPS for the REST and gRPC API enable_tls: false # Check user HTTPS client certificate against CA file specified in tls config verify_https_client_certificate: false # Set an api-key. # If set, all requests must include a header with the api-key. # example header: `api-key: <API-KEY>` # # If you enable this you should also enable TLS. # (Either above or via an external service like nginx.) # Sending an api-key over an unencrypted channel is insecure. # # Uncomment to enable. # api_key: your_secret_api_key_here # Set an api-key for read-only operations. # If set, all requests must include a header with the api-key. # example header: `api-key: <API-KEY>` # # If you enable this you should also enable TLS. # (Either above or via an external service like nginx.) # Sending an api-key over an unencrypted channel is insecure. # # Uncomment to enable. 
# read_only_api_key: your_secret_read_only_api_key_here cluster: # Use `enabled: true` to run Qdrant in distributed deployment mode enabled: false # Configuration of the inter-cluster communication p2p: # Port for internal communication between peers port: 6335 # Use TLS for communication between peers enable_tls: false # Configuration related to distributed consensus algorithm consensus: # How frequently peers should ping each other. # Setting this parameter to lower value will allow consensus # to detect disconnected nodes earlier, but too frequent # tick period may create significant network and CPU overhead. # We encourage you NOT to change this parameter unless you know what you are doing. tick_period_ms: 100 # Set to true to prevent service from sending usage statistics to the developers. # Read more: https://qdrant.tech/documentation/guides/telemetry telemetry_disabled: false # TLS configuration. # Required if either service.enable_tls or cluster.p2p.enable_tls is true. tls: # Server certificate chain file cert: ./tls/cert.pem # Server private key file key: ./tls/key.pem # Certificate authority certificate file. # This certificate will be used to validate the certificates # presented by other nodes during inter-cluster communication. # # If verify_https_client_certificate is true, it will verify # HTTPS client certificate # # Required if cluster.p2p.enable_tls is true. ca_cert: ./tls/cacert.pem # TTL in seconds to reload certificate from disk, useful for certificate rotations. # Only works for HTTPS endpoints. Does not support gRPC (and intra-cluster communication). # If `null` - TTL is disabled. cert_ttl: 3600 ``` ## Validation *Available since v1.1.1* The configuration is validated on startup. If a configuration is loaded but validation fails, a warning is logged. 
E.g.:

```text
WARN Settings configuration file has validation errors:
WARN - storage.optimizers.memmap_threshold: value 123 invalid, must be 1000 or larger
WARN - storage.hnsw_index.m: value 1 invalid, must be from 4 to 10000
```

The server will continue to operate, but any validation errors should be fixed as soon as possible to prevent problematic behavior.
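The `QDRANT__` naming scheme described in the environment variables section maps directly onto nested configuration keys. A sketch of that translation (illustrative only; type coercion of values such as ports and booleans is omitted):

```python
def env_to_config(environ: dict) -> dict:
    """Turn QDRANT__-prefixed variables into a nested config dict.
    `__` separates nesting levels; key names are lowercased."""
    config = {}
    for name, value in environ.items():
        if not name.startswith("QDRANT__"):
            continue
        path = name[len("QDRANT__"):].lower().split("__")
        node = config
        for key in path[:-1]:
            node = node.setdefault(key, {})
        node[path[-1]] = value
    return config

print(env_to_config({"QDRANT__SERVICE__HTTP_PORT": "6333"}))
# {'service': {'http_port': '6333'}}
```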
documentation/guides/configuration.md
--- title: Optimize Resources weight: 11 aliases: - ../tutorials/optimize --- # Optimize Qdrant Different use cases have different requirements for balancing between memory, speed, and precision. Qdrant is designed to be flexible and customizable so you can tune it to your needs. ![Tradeoff](/docs/tradeoff.png) Let's look deeper into each of those possible optimization scenarios. ## Prefer low memory footprint with high speed search The main way to achieve high speed search with low memory footprint is to keep vectors on disk while at the same time minimizing the number of disk reads. Vector quantization is one way to achieve this. Quantization converts vectors into a more compact representation, which can be stored in memory and used for search. With smaller vectors you can cache more in RAM and reduce the number of disk reads. To configure in-memory quantization, with on-disk original vectors, you need to create a collection with the following configuration: ```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine" }, "optimizers_config": { "memmap_threshold": 20000 }, "quantization_config": { "scalar": { "type": "int8", "always_ram": true } } } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000), quantization_config=models.ScalarQuantization( scalar=models.ScalarQuantizationConfig( type=models.ScalarType.INT8, always_ram=True, ), ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", }, optimizers_config: { memmap_threshold: 20000, }, quantization_config: {
scalar: { type: "int8", always_ram: true, }, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ quantization_config::Quantization, vectors_config::Config, CreateCollection, Distance, OptimizersConfigDiff, QuantizationConfig, QuantizationType, ScalarQuantization, VectorParams, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), ..Default::default() })), }), optimizers_config: Some(OptimizersConfigDiff { memmap_threshold: Some(20000), ..Default::default() }), quantization_config: Some(QuantizationConfig { quantization: Some(Quantization::Scalar(ScalarQuantization { r#type: QuantizationType::Int8.into(), always_ram: Some(true), ..Default::default() })), }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.QuantizationConfig; import io.qdrant.client.grpc.Collections.QuantizationType; import io.qdrant.client.grpc.Collections.ScalarQuantization; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build()) 
.setQuantizationConfig( QuantizationConfig.newBuilder() .setScalar( ScalarQuantization.newBuilder() .setType(QuantizationType.Int8) .setAlwaysRam(true) .build()) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 }, quantizationConfig: new QuantizationConfig { Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true } } ); ``` `memmap_threshold` will ensure that vectors will be stored on disk, while `always_ram` will ensure that quantized vectors will be stored in RAM. Optionally, you can disable rescoring with search `params`, which will reduce the number of disk reads even further, but potentially slightly decrease the precision. ```http POST /collections/{collection_name}/points/search { "params": { "quantization": { "rescore": false } }, "vector": [0.2, 0.1, 0.9, 0.7], "limit": 10 } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.search( collection_name="{collection_name}", query_vector=[0.2, 0.1, 0.9, 0.7], search_params=models.SearchParams( quantization=models.QuantizationSearchParams(rescore=False) ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.search("{collection_name}", { vector: [0.2, 0.1, 0.9, 0.7], params: { quantization: { rescore: false, }, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{QuantizationSearchParams, SearchParams, SearchPoints}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .search_points(&SearchPoints { collection_name: "{collection_name}".to_string(), vector:
vec![0.2, 0.1, 0.9, 0.7], params: Some(SearchParams { quantization: Some(QuantizationSearchParams { rescore: Some(false), ..Default::default() }), ..Default::default() }), limit: 3, ..Default::default() }) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QuantizationSearchParams; import io.qdrant.client.grpc.Points.SearchParams; import io.qdrant.client.grpc.Points.SearchPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .searchAsync( SearchPoints.newBuilder() .setCollectionName("{collection_name}") .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f)) .setParams( SearchParams.newBuilder() .setQuantization( QuantizationSearchParams.newBuilder().setRescore(false).build()) .build()) .setLimit(3) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.SearchAsync( collectionName: "{collection_name}", vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, searchParams: new SearchParams { Quantization = new QuantizationSearchParams { Rescore = false } }, limit: 3 ); ``` ## Prefer high precision with low memory footprint In case you need high precision, but don't have enough RAM to store vectors in memory, you can enable on-disk vectors and HNSW index. 
```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine" }, "optimizers_config": { "memmap_threshold": 20000 }, "hnsw_config": { "on_disk": true } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000), hnsw_config=models.HnswConfigDiff(on_disk=True), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", }, optimizers_config: { memmap_threshold: 20000, }, hnsw_config: { on_disk: true, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ vectors_config::Config, CreateCollection, Distance, HnswConfigDiff, OptimizersConfigDiff, VectorParams, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), ..Default::default() })), }), optimizers_config: Some(OptimizersConfigDiff { memmap_threshold: Some(20000), ..Default::default() }), hnsw_config: Some(HnswConfigDiff { on_disk: Some(true), ..Default::default() }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.HnswConfigDiff; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.VectorParams; import 
io.qdrant.client.grpc.Collections.VectorsConfig;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .createCollectionAsync(
        CreateCollection.newBuilder()
            .setCollectionName("{collection_name}")
            .setVectorsConfig(
                VectorsConfig.newBuilder()
                    .setParams(
                        VectorParams.newBuilder()
                            .setSize(768)
                            .setDistance(Distance.Cosine)
                            .build())
                    .build())
            .setOptimizersConfig(
                OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build())
            .setHnswConfig(HnswConfigDiff.newBuilder().setOnDisk(true).build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.CreateCollectionAsync(
    collectionName: "{collection_name}",
    vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
    optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 },
    hnswConfig: new HnswConfigDiff { OnDisk = true }
);
```

In this scenario, you can increase the precision of the search even with limited RAM by increasing the `ef_construct` and `m` parameters of the HNSW index.

```json
...
"hnsw_config": {
    "m": 64,
    "ef_construct": 512,
    "on_disk": true
}
...
```

Disk IOPS is a critical factor in this scenario; it determines how fast you can perform searches. You can use [fio](https://gist.github.com/superboum/aaa45d305700a7873a8ebbab1abddf2b) to measure disk IOPS.

## Prefer high precision with high speed search

For high-speed and high-precision search, it is critical to keep as much data in RAM as possible. By default, Qdrant follows this approach, but you can tune it to your needs.

It is possible to achieve high search speed and tunable accuracy by applying quantization with re-scoring.
```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine" }, "optimizers_config": { "memmap_threshold": 20000 }, "quantization_config": { "scalar": { "type": "int8", "always_ram": true } } } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000), quantization_config=models.ScalarQuantization( scalar=models.ScalarQuantizationConfig( type=models.ScalarType.INT8, always_ram=True, ), ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", }, optimizers_config: { memmap_threshold: 20000, }, quantization_config: { scalar: { type: "int8", always_ram: true, }, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ quantization_config::Quantization, vectors_config::Config, CreateCollection, Distance, OptimizersConfigDiff, QuantizationConfig, QuantizationType, ScalarQuantization, VectorParams, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), ..Default::default() })), }), optimizers_config: Some(OptimizersConfigDiff { memmap_threshold: Some(20000), ..Default::default() }), quantization_config: Some(QuantizationConfig { quantization: Some(Quantization::Scalar(ScalarQuantization { r#type: QuantizationType::Int8.into(), always_ram: Some(true), ..Default::default() })), }), ..Default::default() }) .await?; 
``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.QuantizationConfig; import io.qdrant.client.grpc.Collections.QuantizationType; import io.qdrant.client.grpc.Collections.ScalarQuantization; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build()) .setQuantizationConfig( QuantizationConfig.newBuilder() .setScalar( ScalarQuantization.newBuilder() .setType(QuantizationType.Int8) .setAlwaysRam(true) .build()) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 }, quantizationConfig: new QuantizationConfig { Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true } } ); ``` There are also some search-time parameters you can use to tune the search accuracy and speed: ```http POST /collections/{collection_name}/points/search { "params": { "hnsw_ef": 128, "exact": false }, "vector": [0.2, 0.1, 0.9, 0.7], "limit": 3 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", 
port=6333) client.search( collection_name="{collection_name}", search_params=models.SearchParams(hnsw_ef=128, exact=False), query_vector=[0.2, 0.1, 0.9, 0.7], limit=3, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.search("{collection_name}", { vector: [0.2, 0.1, 0.9, 0.7], params: { hnsw_ef: 128, exact: false, }, limit: 3, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{SearchParams, SearchPoints}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .search_points(&SearchPoints { collection_name: "{collection_name}".to_string(), vector: vec![0.2, 0.1, 0.9, 0.7], params: Some(SearchParams { hnsw_ef: Some(128), exact: Some(false), ..Default::default() }), limit: 3, ..Default::default() }) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.SearchParams; import io.qdrant.client.grpc.Points.SearchPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .searchAsync( SearchPoints.newBuilder() .setCollectionName("{collection_name}") .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f)) .setParams(SearchParams.newBuilder().setHnswEf(128).setExact(false).build()) .setLimit(3) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.SearchAsync( collectionName: "{collection_name}", vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, searchParams: new SearchParams { HnswEf = 128, Exact = false }, limit: 3 ); ``` - `hnsw_ef` - controls the number of neighbors to visit during search. The higher the value, the more accurate and slower the search will be. Recommended range is 32-512. - `exact` - if set to `true`, will perform exact search, which will be slower, but more accurate. 
You can use it to compare results of the search with different `hnsw_ef` values versus the ground truth.

## Latency vs Throughput

There are two main approaches to measuring the speed of search:

- latency of the request - the time from the moment a request is submitted to the moment a response is received
- throughput - the number of requests per second the system can handle

Those approaches are not mutually exclusive, but in some cases it might be preferable to optimize for one or another.

To prefer minimizing latency, you can set up Qdrant to use as many cores as possible for a single request.
You can do this by setting the number of segments in the collection to be equal to the number of cores in the system. In this case, each segment will be processed in parallel, and the final result will be obtained faster.

```http
PUT /collections/{collection_name}
{
    "vectors": {
        "size": 768,
        "distance": "Cosine"
    },
    "optimizers_config": {
        "default_segment_number": 16
    }
}
```

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
    optimizers_config=models.OptimizersConfigDiff(default_segment_number=16),
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.createCollection("{collection_name}", {
  vectors: {
    size: 768,
    distance: "Cosine",
  },
  optimizers_config: {
    default_segment_number: 16,
  },
});
```

```rust
use qdrant_client::{
    client::QdrantClient,
    qdrant::{
        vectors_config::Config, CreateCollection, Distance, OptimizersConfigDiff,
        VectorParams, VectorsConfig,
    },
};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .create_collection(&CreateCollection {
        collection_name: "{collection_name}".to_string(),
        vectors_config: Some(VectorsConfig {
            config:
Some(Config::Params(VectorParams {
                size: 768,
                distance: Distance::Cosine.into(),
                ..Default::default()
            })),
        }),
        optimizers_config: Some(OptimizersConfigDiff {
            default_segment_number: Some(16),
            ..Default::default()
        }),
        ..Default::default()
    })
    .await?;
```

```java
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
import io.qdrant.client.grpc.Collections.VectorParams;
import io.qdrant.client.grpc.Collections.VectorsConfig;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .createCollectionAsync(
        CreateCollection.newBuilder()
            .setCollectionName("{collection_name}")
            .setVectorsConfig(
                VectorsConfig.newBuilder()
                    .setParams(
                        VectorParams.newBuilder()
                            .setSize(768)
                            .setDistance(Distance.Cosine)
                            .build())
                    .build())
            .setOptimizersConfig(
                OptimizersConfigDiff.newBuilder().setDefaultSegmentNumber(16).build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.CreateCollectionAsync(
    collectionName: "{collection_name}",
    vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
    optimizersConfig: new OptimizersConfigDiff { DefaultSegmentNumber = 16 }
);
```

To prefer throughput, you can set up Qdrant to use as many cores as possible for processing multiple requests in parallel.
To do that, you can configure Qdrant to use a minimal number of segments, which is usually 2.
Large segments benefit from the size of the index and an overall smaller number of vector comparisons required to find the nearest neighbors, but at the same time they require more time to build the index.
```http PUT /collections/{collection_name} { "vectors": { "size": 768, "distance": "Cosine" }, "optimizers_config": { "default_segment_number": 2 } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), optimizers_config=models.OptimizersConfigDiff(default_segment_number=2), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 768, distance: "Cosine", }, optimizers_config: { default_segment_number: 2, }, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{ vectors_config::Config, CreateCollection, Distance, OptimizersConfigDiff, VectorParams, VectorsConfig, }, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".to_string(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 768, distance: Distance::Cosine.into(), ..Default::default() })), }), optimizers_config: Some(OptimizersConfigDiff { default_segment_number: Some(2), ..Default::default() }), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( 
VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setDefaultSegmentNumber(2).build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, optimizersConfig: new OptimizersConfigDiff { DefaultSegmentNumber = 2 } ); ```
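Whichever segment layout you choose, it helps to measure both metrics on your own workload before committing to it. Below is a small, self-contained measurement harness sketch; the request function is a placeholder for your actual search call (for example, a lambda wrapping `client.search`):

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def measure(request_fn, n_requests=100, concurrency=1):
    """Run `request_fn` n_requests times at the given concurrency and
    report latency percentiles plus overall throughput (requests/second)."""
    latencies = []

    def timed_call(_):
        start = time.perf_counter()
        request_fn()
        latencies.append(time.perf_counter() - start)

    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        # Consume the iterator to wait for all requests to finish.
        list(pool.map(timed_call, range(n_requests)))
    wall = time.perf_counter() - wall_start

    ordered = sorted(latencies)
    return {
        "p50_ms": statistics.median(latencies) * 1000,
        "p95_ms": ordered[int(0.95 * len(ordered))] * 1000,
        "throughput_rps": n_requests / wall,
    }


if __name__ == "__main__":
    # Placeholder workload; replace with a real search request, e.g.
    # measure(lambda: client.search(...), n_requests=1000, concurrency=16)
    print(measure(lambda: time.sleep(0.001), n_requests=100, concurrency=4))
```

Comparing the output at `concurrency=1` (latency-bound) against higher concurrency levels (throughput-bound) shows which segment configuration suits your traffic pattern.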
documentation/guides/optimize.md
---
title: Telemetry
weight: 150
aliases:
  - ../telemetry
---

# Telemetry

Qdrant collects anonymized usage statistics from users in order to improve the engine. You can [deactivate](#deactivate-telemetry) at any time, and any data that has already been collected can be [deleted on request](#request-information-deletion).

## Why do we collect telemetry?

We want to make Qdrant fast and reliable. To do this, we need to understand how it performs in real-world scenarios. We do a lot of benchmarking internally, but it is impossible to cover all possible use cases, hardware, and configurations.

In order to identify bottlenecks and improve Qdrant, we need to collect information about how it is used.

Additionally, Qdrant uses a number of internal heuristics to optimize performance. To better tune the parameters of these heuristics, we need to collect timings and counters of various pieces of code. With this information, we can make Qdrant faster for everyone.

## What information is collected?

There are 3 types of information that we collect:

* System information - general information about the system, such as CPU, RAM, and disk type, as well as the configuration of the Qdrant instance.
* Performance - information about timings and counters of various pieces of code.
* Critical error reports - information about critical errors, such as backtraces, that occurred in Qdrant. This information allows us to identify problems that nobody has reported to us yet.

### We **never** collect the following information:

- User's IP address
- Any data that can be used to identify the user or the user's organization
- Any data stored in the collections
- Any names of the collections
- Any URLs

## How do we anonymize data?

We understand that some users may be concerned about the privacy of their data. That is why we make an extra effort to ensure your privacy.

There are several different techniques that we use to anonymize the data:

- We use a random UUID to identify instances. This UUID is generated on each startup and is not stored anywhere. There are no other ways to distinguish between different instances.
- We round all big numbers, so that the last digits are always 0. For example, if the number is 123456789, we will store 123456000.
- We replace all names with irreversibly hashed values. So no collection or field names will leak into the telemetry.
- All URLs are hashed as well.

You can see the exact version of the anonymized collected data by accessing the [telemetry API](https://qdrant.github.io/qdrant/redoc/index.html#tag/service/operation/telemetry) with the `anonymize=true` parameter.

For example, <http://localhost:6333/telemetry?details_level=6&anonymize=true>

## Deactivate telemetry

You can deactivate telemetry by:

- setting the `QDRANT__TELEMETRY_DISABLED` environment variable to `true`
- setting the config option `telemetry_disabled` to `true` in the `config/production.yaml` or `config/config.yaml` files
- using the cli option `--disable-telemetry`

Any of these options will prevent Qdrant from sending any telemetry data.

If you decide to deactivate telemetry, we kindly ask you to share your feedback with us in the [Discord community](https://qdrant.to/discord) or GitHub [discussions](https://github.com/qdrant/qdrant/discussions)

## Request information deletion

We provide an email address so that users can request the complete removal of their data from all of our tools.

To do so, send an email to privacy@qdrant.com containing the unique identifier generated for your Qdrant installation. You can find this identifier in the telemetry API response (`"id"` field), or in the logs of your Qdrant instance.

Any questions regarding the management of the data we collect can also be sent to this email address.
documentation/guides/telemetry.md
--- title: Distributed Deployment weight: 100 aliases: - ../distributed_deployment --- # Distributed deployment Since version v0.8.0 Qdrant supports a distributed deployment mode. In this mode, multiple Qdrant services communicate with each other to distribute the data across the peers to extend the storage capabilities and increase stability. To enable distributed deployment - enable the cluster mode in the [configuration](../configuration) or using the ENV variable: `QDRANT__CLUSTER__ENABLED=true`. ```yaml cluster: # Use `enabled: true` to run Qdrant in distributed deployment mode enabled: true # Configuration of the inter-cluster communication p2p: # Port for internal communication between peers port: 6335 # Configuration related to distributed consensus algorithm consensus: # How frequently peers should ping each other. # Setting this parameter to lower value will allow consensus # to detect disconnected node earlier, but too frequent # tick period may create significant network and CPU overhead. # We encourage you NOT to change this parameter unless you know what you are doing. tick_period_ms: 100 ``` By default, Qdrant will use port `6335` for its internal communication. All peers should be accessible on this port from within the cluster, but make sure to isolate this port from outside access, as it might be used to perform write operations. Additionally, you must provide the `--uri` flag to the first peer so it can tell other nodes how it should be reached: ```bash ./qdrant --uri 'http://qdrant_node_1:6335' ``` Subsequent peers in a cluster must know at least one node of the existing cluster to synchronize through it with the rest of the cluster. To do this, they need to be provided with a bootstrap URL: ```bash ./qdrant --bootstrap 'http://qdrant_node_1:6335' ``` The URL of the new peers themselves will be calculated automatically from the IP address of their request. But it is also possible to provide them individually using the `--uri` argument. 
```text USAGE: qdrant [OPTIONS] OPTIONS: --bootstrap <URI> Uri of the peer to bootstrap from in case of multi-peer deployment. If not specified - this peer will be considered as a first in a new deployment --uri <URI> Uri of this peer. Other peers should be able to reach it by this uri. This value has to be supplied if this is the first peer in a new deployment. In case this is not the first peer and it bootstraps the value is optional. If not supplied then qdrant will take internal grpc port from config and derive the IP address of this peer on bootstrap peer (receiving side) ``` After a successful synchronization you can observe the state of the cluster through the [REST API](https://qdrant.github.io/qdrant/redoc/index.html?v=master#tag/cluster): ```http GET /cluster ``` Example result: ```json { "result": { "status": "enabled", "peer_id": 11532566549086892000, "peers": { "9834046559507417430": { "uri": "http://172.18.0.3:6335/" }, "11532566549086892528": { "uri": "http://qdrant_node_1:6335/" } }, "raft_info": { "term": 1, "commit": 4, "pending_operations": 1, "leader": 11532566549086892000, "role": "Leader" } }, "status": "ok", "time": 5.731e-06 } ``` ## Raft Qdrant uses the [Raft](https://raft.github.io/) consensus protocol to maintain consistency regarding the cluster topology and the collections structure. Operations on points, on the other hand, do not go through the consensus infrastructure. Qdrant is not intended to have strong transaction guarantees, which allows it to perform point operations with low overhead. In practice, it means that Qdrant does not guarantee atomic distributed updates but allows you to wait until the [operation is complete](../../concepts/points/#awaiting-result) to see the results of your writes. Operations on collections, on the contrary, are part of the consensus which guarantees that all operations are durable and eventually executed by all nodes. 
In practice, it means that a majority of nodes must agree on an operation before the service performs it. So if the cluster is in a transition state - either electing a new leader after a failure or starting up - collection update operations will be denied.

You may use the cluster [REST API](https://qdrant.github.io/qdrant/redoc/index.html?v=master#tag/cluster) to check the state of the consensus.

## Sharding

A Collection in Qdrant is made of one or more shards. A shard is an independent store of points which is able to perform all operations provided by collections.
There are two methods of distributing points across shards:

- **Automatic sharding**: Points are distributed among shards by using a [consistent hashing](https://en.wikipedia.org/wiki/Consistent_hashing) algorithm, so that shards are managing non-intersecting subsets of points. This is the default behavior.

- **User-defined sharding**: _Available as of v1.7.0_ - Each point is uploaded to a specific shard, so that operations can hit only the shard or shards they need. Even with this distribution, shards still ensure having non-intersecting subsets of points. [See more...](#user-defined-sharding)

Each node knows where all parts of the collection are stored through the [consensus protocol](./#raft), so when you send a search request to one Qdrant node, it automatically queries all other nodes to obtain the full search result.

When you create a collection, Qdrant splits the collection into `shard_number` shards.
If left unset, `shard_number` is set to the number of nodes in your cluster: ```http PUT /collections/{collection_name} { "vectors": { "size": 300, "distance": "Cosine" }, "shard_number": 6 } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE), shard_number=6, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 300, distance: "Cosine", }, shard_number: 6, }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".into(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 300, distance: Distance::Cosine.into(), ..Default::default() })), }), shard_number: Some(6), }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(300) .setDistance(Distance.Cosine) .build()) .build()) .setShardNumber(6) .build()) .get(); ``` ```csharp using Qdrant.Client; using 
Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine }, shardNumber: 6 ); ``` We recommend setting the number of shards to be a multiple of the number of nodes you are currently running in your cluster. For example, if you have 3 nodes, 6 shards could be a good option. Shards are evenly distributed across all existing nodes when a collection is first created, but Qdrant does not automatically rebalance shards if your cluster size or replication factor changes (since this is an expensive operation on large clusters). See the next section for how to move shards after scaling operations. ### Moving shards *Available as of v0.9.0* Qdrant allows moving shards between nodes in the cluster and removing nodes from the cluster. This functionality unlocks the ability to dynamically scale the cluster size without downtime. It also allows you to upgrade or migrate nodes without downtime. Qdrant provides the information regarding the current shard distribution in the cluster with the [Collection Cluster info API](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/collection_cluster_info). Use the [Update collection cluster setup API](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/update_collection_cluster) to initiate the shard transfer: ```http POST /collections/{collection_name}/cluster { "move_shard": { "shard_id": 0, "from_peer_id": 381894127, "to_peer_id": 467122995 } } ``` <aside role="status">You likely want to select a specific <a href="#shard-transfer-method">shard transfer method</a> to get desired performance and guarantees.</aside> After the transfer is initiated, the service will process it based on the used [transfer method](#shard-transfer-method) keeping both shards in sync. Once the transfer is completed, the old shard is deleted from the source node. 
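The same shard move can be scripted. A sketch using Python's `requests` library against the REST API (the URL, collection name, and peer IDs are placeholders; in a real script you would first read the current distribution from the collection cluster info endpoint):

```python
import requests

QDRANT_URL = "http://localhost:6333"  # placeholder for your instance


def move_shard_payload(shard_id, from_peer_id, to_peer_id):
    # Request body for POST /collections/{collection_name}/cluster
    return {
        "move_shard": {
            "shard_id": shard_id,
            "from_peer_id": from_peer_id,
            "to_peer_id": to_peer_id,
        }
    }


def move_shard(collection, shard_id, from_peer_id, to_peer_id):
    resp = requests.post(
        f"{QDRANT_URL}/collections/{collection}/cluster",
        json=move_shard_payload(shard_id, from_peer_id, to_peer_id),
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # Inspect the current shard distribution first:
    #   GET /collections/{collection_name}/cluster
    print(move_shard("{collection_name}", 0, 381894127, 467122995))
```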
In case you want to downscale the cluster, you can move all shards away from a peer and then remove the peer using the [remove peer API](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/remove_peer). ```http DELETE /cluster/peer/{peer_id} ``` After that, Qdrant will exclude the node from the consensus, and the instance will be ready for shutdown. ### User-defined sharding *Available as of v1.7.0* Qdrant allows you to specify the shard for each point individually. This feature is useful if you want to control the shard placement of your data, so that operations can hit only the subset of shards they actually need. In big clusters, this can significantly improve the performance of operations that do not require the whole collection to be scanned. A clear use-case for this feature is managing a multi-tenant collection, where each tenant (let it be a user or organization) is assumed to be segregated, so they can have their data stored in separate shards. To enable user-defined sharding, set `sharding_method` to `custom` during collection creation: ```http PUT /collections/{collection_name} { "shard_number": 1, "sharding_method": "custom" // ... other collection parameters } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{collection_name}", shard_number=1, sharding_method=models.ShardingMethod.CUSTOM, # ... other collection parameters ) client.create_shard_key("{collection_name}", "user_1") ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { shard_number: 1, sharding_method: "custom", // ... 
other collection parameters }); ``` ```rust use qdrant_client::{ client::QdrantClient, qdrant::{CreateCollection, ShardingMethod}, }; let client = QdrantClient::from_url("http://localhost:6334").build()?; client .create_collection(&CreateCollection { collection_name: "{collection_name}".into(), shard_number: Some(1), sharding_method: Some(ShardingMethod::Custom), // ... other collection parameters ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.ShardingMethod; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") // ... other collection parameters .setShardNumber(1) .setShardingMethod(ShardingMethod.Custom) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", // ... other collection parameters shardNumber: 1, shardingMethod: ShardingMethod.Custom ); ``` In this mode, the `shard_number` means the number of shards per shard key, where points will be distributed evenly. For example, if you have 10 shard keys and a collection config with these settings: ```json { "shard_number": 1, "sharding_method": "custom", "replication_factor": 2 } ``` Then you will have `1 * 10 * 2 = 20` total physical shards in the collection. 
To specify the shard for each point, you need to provide the `shard_key` field in the upsert request:

```http
PUT /collections/{collection_name}/points
{
    "points": [
        {
            "id": 1111,
            "vector": [0.1, 0.2, 0.3]
        }
    ],
    "shard_key": "user_1"
}
```

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient("localhost", port=6333)

client.upsert(
    collection_name="{collection_name}",
    points=[
        models.PointStruct(
            id=1111,
            vector=[0.1, 0.2, 0.3],
        ),
    ],
    shard_key_selector="user_1",
)
```

```typescript
client.upsertPoints("{collection_name}", {
  points: [
    {
      id: 1111,
      vector: [0.1, 0.2, 0.3],
    },
  ],
  shard_key: "user_1",
});
```

```rust
use qdrant_client::qdrant::{shard_key, PointStruct};

client
    .upsert_points_blocking(
        "{collection_name}",
        Some(vec![shard_key::Key::String("user_1".into())]),
        vec![PointStruct::new(
            1111,
            vec![0.1, 0.2, 0.3],
            Default::default(),
        )],
        None,
    )
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.PointIdFactory.id;
import static io.qdrant.client.ShardKeySelectorFactory.shardKeySelector;
import static io.qdrant.client.VectorsFactory.vectors;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.PointStruct;
import io.qdrant.client.grpc.Points.UpsertPoints;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .upsertAsync(
        UpsertPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .addAllPoints(
                List.of(
                    PointStruct.newBuilder()
                        .setId(id(1111))
                        .setVectors(vectors(0.1f, 0.2f, 0.3f))
                        .build()))
            .setShardKeySelector(shardKeySelector("user_1"))
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.UpsertAsync(
    collectionName: "{collection_name}",
    points: new List<PointStruct>
    {
        new() { Id = 1111, Vectors = new[] { 0.1f, 0.2f, 0.3f } }
    },
    shardKeySelector: new ShardKeySelector { ShardKeys = { new List<ShardKey> { "user_1" } } }
);
```

<aside role="alert"> Using the same point ID across multiple shard keys is <strong>not supported<sup>*</sup></strong> and should be avoided. </aside>

<sup> <strong>*</strong> When using custom sharding, IDs are only enforced to be unique within a shard key. This means that you can have multiple points with the same ID, if they have different shard keys. This is a limitation of the current implementation, and it is an anti-pattern that should be avoided, because it allows points with the same ID to have different contents. In the future, we plan to add a global ID uniqueness check. </sup>

Now you can target operations to specific shard(s) by specifying the `shard_key` on any operation you do. Operations that do not specify the shard key will be executed on __all__ shards.

Another use case is to have shards that track the data chronologically, so that you can implement more complex patterns, such as uploading live data into one shard and archiving it once a certain age has passed.

<img src="/docs/sharding-per-day.png" alt="Sharding per day" width="500" height="600">

### Shard transfer method

*Available as of v1.7.0*

There are different methods for transferring a shard, such as moving or replicating it, to another node. Depending on what performance and guarantees you'd like to have, and how you'd like to manage your cluster, you likely want to choose a specific method. Each method has its own pros and cons. Which is fastest depends on the size and state of a shard.

Available shard transfer methods are:

- `stream_records`: _(default)_ transfer the shard by streaming just its records to the target node in batches.
- `snapshot`: transfer the shard, including its index and quantized data, by utilizing a [snapshot](../../concepts/snapshots) automatically.
Each has pros, cons and specific requirements, which are: | Method: | Stream records | Snapshot | |:---|:---|:---| | **Connection** | <ul><li>Requires internal gRPC API <small>(port 6335)</small></li></ul> | <ul><li>Requires internal gRPC API <small>(port 6335)</small></li><li>Requires REST API <small>(port 6333)</small></li></ul> | | **HNSW index** | <ul><li>Doesn't transfer index</li><li>Will reindex on target node</li></ul> | <ul><li>Index is transferred with a snapshot</li><li>Immediately ready on target node</li></ul> | | **Quantization** | <ul><li>Doesn't transfer quantized data</li><li>Will re-quantize on target node</li></ul> | <ul><li>Quantized data is transferred with a snapshot</li><li>Immediately ready on target node</li></ul> | | **Consistency** | <ul><li>Weak data consistency</li><li>Unordered updates on target node[^unordered]</li></ul> | <ul><li>Strong data consistency</li><li>Ordered updates on target node[^ordered]</li></ul> | | **Disk space** | <ul><li>No extra disk space required</li></ul> | <ul><li>Extra disk space required for snapshot on both nodes</li></ul> | [^unordered]: Weak data consistency and unordered updates: All records are streamed to the target node in order. New updates are received on the target node in parallel, while the transfer of records is still happening. We therefore have `weak` ordering, regardless of what [ordering](#write-ordering) is used for updates. [^ordered]: Strong data consistency and ordered updates: A snapshot of the shard is created, it is transferred and recovered on the target node. That ensures the state of the shard is kept consistent. New updates are queued on the source node, and transferred in order to the target node. Updates therefore have the same [ordering](#write-ordering) as the user selects, making `strong` ordering possible. 
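Whichever method you choose, it can help to inspect the current shard placement of the collection first. The sketch below uses only the Python standard library to call the collection cluster info endpoint (`GET /collections/{collection_name}/cluster`); the base URL is an assumption for a local node on the default REST port:

```python
import json
import urllib.request


def cluster_info_url(base_url: str, collection_name: str) -> str:
    # Collection cluster info endpoint: GET /collections/{collection_name}/cluster
    return f"{base_url}/collections/{collection_name}/cluster"


def collection_cluster_info(collection_name: str, base_url: str = "http://localhost:6333") -> dict:
    # Returns this peer's view of the collection, including shard count and
    # the placement of local and remote shards — useful before planning a transfer.
    with urllib.request.urlopen(cluster_info_url(base_url, collection_name), timeout=10) as resp:
        return json.load(resp)["result"]
```

The returned `result` object lists local and remote shards per peer, which tells you which `from_peer_id`/`to_peer_id` pairs make sense for a transfer.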
To select a shard transfer method, specify the `method` like: ```http POST /collections/{collection_name}/cluster { "move_shard": { "shard_id": 0, "from_peer_id": 381894127, "to_peer_id": 467122995, "method": "snapshot" } } ``` The `stream_records` transfer method is the simplest available. It simply transfers all shard records in batches to the target node until it has transferred all of them, keeping both shards in sync. It will also make sure the transferred shard indexing process is keeping up before performing a final switch. The method has two common disadvantages: 1. It does not transfer index or quantization data, meaning that the shard has to be optimized again on the new node, which can be very expensive. 2. The consistency and ordering guarantees are `weak`[^unordered], which is not suitable for some applications. Because it is so simple, it's also very robust, making it a reliable choice if the above cons are acceptable in your use case. If your cluster is unstable and out of resources, it's probably best to use the `stream_records` transfer method, because it is unlikely to fail. The `snapshot` transfer method utilizes [snapshots](../../concepts/snapshots) to transfer a shard. A snapshot is created automatically. It is then transferred and restored on the target node. After this is done, the snapshot is removed from both nodes. While the snapshot/transfer/restore operation is happening, the source node queues up all new operations. All queued updates are then sent in order to the target shard to bring it into the same state as the source. There are two important benefits: 1. It transfers index and quantization data, so that the shard does not have to be optimized again on the target node, making them immediately available. This way, Qdrant ensures that there will be no degradation in performance at the end of the transfer. Especially on large shards, this can give a huge performance improvement. 2. 
The consistency and ordering guarantees can be `strong`[^ordered], which is required for some applications.

The `stream_records` method is currently used as the default. This may change in the future.

## Replication

*Available as of v0.11.0*

Qdrant allows you to replicate shards between nodes in the cluster.

Shard replication increases the reliability of the cluster by keeping several copies of a shard spread across the cluster. This ensures the availability of the data in case of node failures, unless all replicas are lost.

### Replication factor

When you create a collection, you can control how many shard replicas you'd like to store by changing the `replication_factor`. By default, `replication_factor` is set to `1`, meaning no additional copy is maintained automatically. You can change that by setting the `replication_factor` when you create a collection.

Currently, the replication factor of a collection can only be configured at creation time.

```http
PUT /collections/{collection_name}
{
    "vectors": {
        "size": 300,
        "distance": "Cosine"
    },
    "shard_number": 6,
    "replication_factor": 2
}
```

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient("localhost", port=6333)

client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE),
    shard_number=6,
    replication_factor=2,
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.createCollection("{collection_name}", {
  vectors: {
    size: 300,
    distance: "Cosine",
  },
  shard_number: 6,
  replication_factor: 2,
});
```

```rust
use qdrant_client::{
    client::QdrantClient,
    qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig},
};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .create_collection(&CreateCollection {
        collection_name:
"{collection_name}".into(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 300, distance: Distance::Cosine.into(), ..Default::default() })), }), shard_number: Some(6), replication_factor: Some(2), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(300) .setDistance(Distance.Cosine) .build()) .build()) .setShardNumber(6) .setReplicationFactor(2) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine }, shardNumber: 6, replicationFactor: 2 ); ``` This code sample creates a collection with a total of 6 logical shards backed by a total of 12 physical shards. Since a replication factor of "2" would require twice as much storage space, it is advised to make sure the hardware can host the additional shard replicas beforehand. ### Creating new shard replicas It is possible to create or delete replicas manually on an existing collection using the [Update collection cluster setup API](https://qdrant.github.io/qdrant/redoc/index.html?v=v0.11.0#tag/cluster/operation/update_collection_cluster). A replica can be added on a specific peer by specifying the peer from which to replicate. 
```http
POST /collections/{collection_name}/cluster
{
    "replicate_shard": {
        "shard_id": 0,
        "from_peer_id": 381894127,
        "to_peer_id": 467122995
    }
}
```

<aside role="status">You likely want to select a specific <a href="#shard-transfer-method">shard transfer method</a> to get desired performance and guarantees.</aside>

And a replica can be removed on a specific peer.

```http
POST /collections/{collection_name}/cluster
{
    "drop_replica": {
        "shard_id": 0,
        "peer_id": 381894127
    }
}
```

Keep in mind that a collection must contain at least one active replica of a shard.

### Error handling

Replicas can be in different states:

- Active: healthy and ready to serve traffic
- Dead: unhealthy and not ready to serve traffic
- Partial: currently under resynchronization before activation

A replica is marked as dead if it does not respond to internal healthchecks or if it fails to serve traffic. A dead replica will not receive traffic from other peers and may require manual intervention if it does not recover automatically.

This mechanism ensures data consistency and availability if a subset of the replicas fail during an update operation.

### Node Failure Recovery

Sometimes hardware malfunctions might render some nodes of the Qdrant cluster unrecoverable. No system is immune to this.

But several recovery scenarios allow Qdrant to stay available for requests and even avoid performance degradation. Let's walk through them from best to worst.

**Recover with replicated collection**

If the number of failed nodes is less than the replication factor of the collection, then no data is lost. Your cluster should still be able to perform read, search and update queries.

Now, if the failed node restarts, consensus will trigger the replication process to update the recovering node with the newest updates it has missed.

**Recreate node with replicated collections**

If a node fails and it is impossible to recover it, you should exclude the dead node from the consensus and create an empty node.
To exclude failed nodes from the consensus, use the [remove peer](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/remove_peer) API. Apply the `force` flag if necessary.

When you create a new node, make sure to attach it to the existing cluster by specifying the `--bootstrap` CLI parameter with the URL of any of the running cluster nodes.

Once the new node is ready and synchronized with the cluster, you might want to ensure that the collection shards are replicated enough. Remember that Qdrant will not automatically balance shards, since this is an expensive operation. Use the [Replicate Shard Operation](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/update_collection_cluster) to create another copy of the shard on the newly connected node.

It's worth mentioning that Qdrant only provides the necessary building blocks to create an automated failure recovery. Building a completely automatic process of collection scaling would require control over the cluster machines themselves. Check out our [cloud solution](https://qdrant.to/cloud), where we have done exactly that.

**Recover from snapshot**

If there are no copies of data in the cluster, it is still possible to recover from a snapshot. Follow the same steps to detach the failed node and create a new one in the cluster:

* To exclude failed nodes from the consensus, use the [remove peer](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/remove_peer) API. Apply the `force` flag if necessary.
* Create a new node, making sure to attach it to the existing cluster by specifying the `--bootstrap` CLI parameter with the URL of any of the running cluster nodes.

Snapshot recovery in a cluster deployment is different from the single-node case. Consensus manages all metadata about all collections and does not require snapshots to recover it. But you can use snapshots to recover missing shards of the collections.
Use the [Collection Snapshot Recovery API](../../concepts/snapshots/#recover-in-cluster-deployment) to do it. The service will download the specified snapshot of the collection and recover shards with data from it.

Once all shards of the collection are recovered, the collection will become operational again.

## Consistency guarantees

By default, Qdrant focuses on availability and maximum throughput of search operations. For the majority of use cases, this is a preferable trade-off.

During the normal state of operation, it is possible to search and modify data from any peer in the cluster.

Before responding to the client, the peer handling the request dispatches all operations according to the current topology in order to keep the data synchronized across the cluster.

- Reads use a partial fan-out strategy to optimize latency and availability
- Writes are executed in parallel on all active sharded replicas

![Embeddings](/docs/concurrent-operations-replicas.png)

However, in some cases, it is necessary to ensure additional guarantees during possible hardware instabilities, mass concurrent updates of the same documents, etc.

Qdrant provides a few options to control consistency guarantees:

- `write_consistency_factor` - defines the number of replicas that must acknowledge a write operation before responding to the client. Increasing this value will make write operations tolerant to network partitions in the cluster, but will require a higher number of replicas to be active to perform write operations.
- Read `consistency` param, can be used with search and retrieve operations to ensure that the results obtained from all replicas are the same. If this option is used, Qdrant will perform the read operation on multiple replicas and resolve the result according to the selected strategy. This option is useful to avoid data inconsistency in case of concurrent updates of the same documents.
This option is preferred if the update operations are frequent and the number of replicas is low.
- Write `ordering` param, can be used with update and delete operations to ensure that the operations are executed in the same order on all replicas. If this option is used, Qdrant will route the operation to the leader replica of the shard and wait for the response before responding to the client. This option is useful to avoid data inconsistency in case of concurrent updates of the same documents. This option is preferred if read operations are more frequent than updates and if search performance is critical.

### Write consistency factor

The `write_consistency_factor` represents the number of replicas that must acknowledge a write operation before responding to the client. It is set to one by default. It can be configured at the collection's creation time.

```http
PUT /collections/{collection_name}
{
    "vectors": {
        "size": 300,
        "distance": "Cosine"
    },
    "shard_number": 6,
    "replication_factor": 2,
    "write_consistency_factor": 2
}
```

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient("localhost", port=6333)

client.create_collection(
    collection_name="{collection_name}",
    vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE),
    shard_number=6,
    replication_factor=2,
    write_consistency_factor=2,
)
```

```typescript
import { QdrantClient } from "@qdrant/js-client-rest";

const client = new QdrantClient({ host: "localhost", port: 6333 });

client.createCollection("{collection_name}", {
  vectors: {
    size: 300,
    distance: "Cosine",
  },
  shard_number: 6,
  replication_factor: 2,
  write_consistency_factor: 2,
});
```

```rust
use qdrant_client::{
    client::QdrantClient,
    qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig},
};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .create_collection(&CreateCollection {
        collection_name:
"{collection_name}".into(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 300, distance: Distance::Cosine.into(), ..Default::default() })), }), shard_number: Some(6), replication_factor: Some(2), write_consistency_factor: Some(2), ..Default::default() }) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(300) .setDistance(Distance.Cosine) .build()) .build()) .setShardNumber(6) .setReplicationFactor(2) .setWriteConsistencyFactor(2) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine }, shardNumber: 6, replicationFactor: 2, writeConsistencyFactor: 2 ); ``` Write operations will fail if the number of active replicas is less than the `write_consistency_factor`. ### Read consistency Read `consistency` can be specified for most read requests and will ensure that the returned result is consistent across cluster nodes. 
- `all` will query all nodes and return points that are present on all of them
- `majority` will query all nodes and return points that are present on the majority of them
- `quorum` will query a randomly selected majority of nodes and return points that are present on all of them
- `1`/`2`/`3`/etc - will query the specified number of randomly selected nodes and return points that are present on all of them
- The default `consistency` is `1`

```http
POST /collections/{collection_name}/points/search?consistency=majority
{
    "filter": {
        "must": [
            { "key": "city", "match": { "value": "London" } }
        ]
    },
    "params": {
        "hnsw_ef": 128,
        "exact": false
    },
    "vector": [0.2, 0.1, 0.9, 0.7],
    "limit": 3
}
```

```python
client.search(
    collection_name="{collection_name}",
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="city",
                match=models.MatchValue(
                    value="London",
                ),
            )
        ]
    ),
    search_params=models.SearchParams(hnsw_ef=128, exact=False),
    query_vector=[0.2, 0.1, 0.9, 0.7],
    limit=3,
    consistency="majority",
)
```

```typescript
client.search("{collection_name}", {
  filter: {
    must: [{ key: "city", match: { value: "London" } }],
  },
  params: {
    hnsw_ef: 128,
    exact: false,
  },
  vector: [0.2, 0.1, 0.9, 0.7],
  limit: 3,
  consistency: "majority",
});
```

```rust
use qdrant_client::{
    client::QdrantClient,
    qdrant::{
        read_consistency::Value, Condition, Filter, ReadConsistency, ReadConsistencyType,
        SearchParams, SearchPoints,
    },
};

let client = QdrantClient::from_url("http://localhost:6334").build()?;

client
    .search_points(&SearchPoints {
        collection_name: "{collection_name}".into(),
        filter: Some(Filter::must([Condition::matches(
            "city",
            "London".into(),
        )])),
        params: Some(SearchParams {
            hnsw_ef: Some(128),
            exact: Some(false),
            ..Default::default()
        }),
        vector: vec![0.2, 0.1, 0.9, 0.7],
        limit: 3,
        read_consistency: Some(ReadConsistency {
            value: Some(Value::Type(ReadConsistencyType::Majority.into())),
        }),
        ..Default::default()
    })
    .await?;
```

```java
import java.util.List;

import static io.qdrant.client.ConditionFactory.matchKeyword;

import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
import io.qdrant.client.grpc.Points.Filter;
import io.qdrant.client.grpc.Points.ReadConsistency;
import io.qdrant.client.grpc.Points.ReadConsistencyType;
import io.qdrant.client.grpc.Points.SearchParams;
import io.qdrant.client.grpc.Points.SearchPoints;

QdrantClient client =
    new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build());

client
    .searchAsync(
        SearchPoints.newBuilder()
            .setCollectionName("{collection_name}")
            .setFilter(Filter.newBuilder().addMust(matchKeyword("city", "London")).build())
            .setParams(SearchParams.newBuilder().setHnswEf(128).setExact(false).build())
            .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
            .setLimit(3)
            .setReadConsistency(
                ReadConsistency.newBuilder().setType(ReadConsistencyType.Majority).build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;
using static Qdrant.Client.Grpc.Conditions;

var client = new QdrantClient("localhost", 6334);

await client.SearchAsync(
    collectionName: "{collection_name}",
    vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
    filter: MatchKeyword("city", "London"),
    searchParams: new SearchParams { HnswEf = 128, Exact = false },
    limit: 3,
    readConsistency: new ReadConsistency { Type = ReadConsistencyType.Majority }
);
```

### Write ordering

Write `ordering` can be specified for any write request to serialize it through a single "leader" node, which ensures that all write operations (issued with the same `ordering`) are performed and observed sequentially.

- `weak` _(default)_ ordering does not provide any additional guarantees, so write operations can be freely reordered.
- `medium` ordering serializes all write operations through a dynamically elected leader, which might cause minor inconsistencies in case of leader change.
- `strong` ordering serializes all write operations through the permanent leader, which provides strong consistency, but write operations may be unavailable if the leader is down. ```http PUT /collections/{collection_name}/points?ordering=strong { "batch": { "ids": [1, 2, 3], "payloads": [ {"color": "red"}, {"color": "green"}, {"color": "blue"} ], "vectors": [ [0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9] ] } } ``` ```python client.upsert( collection_name="{collection_name}", points=models.Batch( ids=[1, 2, 3], payloads=[ {"color": "red"}, {"color": "green"}, {"color": "blue"}, ], vectors=[ [0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9], ], ), ordering="strong", ) ``` ```typescript client.upsert("{collection_name}", { batch: { ids: [1, 2, 3], payloads: [{ color: "red" }, { color: "green" }, { color: "blue" }], vectors: [ [0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9], ], }, ordering: "strong", }); ``` ```rust use qdrant_client::qdrant::{PointStruct, WriteOrdering, WriteOrderingType}; use serde_json::json; client .upsert_points_blocking( "{collection_name}", None, vec![ PointStruct::new( 1, vec![0.9, 0.1, 0.1], json!({ "color": "red" }) .try_into() .unwrap(), ), PointStruct::new( 2, vec![0.1, 0.9, 0.1], json!({ "color": "green" }) .try_into() .unwrap(), ), PointStruct::new( 3, vec![0.1, 0.1, 0.9], json!({ "color": "blue" }) .try_into() .unwrap(), ), ], Some(WriteOrdering { r#type: WriteOrderingType::Strong.into(), }), ) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; import static io.qdrant.client.VectorsFactory.vectors; import io.qdrant.client.grpc.Points.PointStruct; import io.qdrant.client.grpc.Points.UpsertPoints; import io.qdrant.client.grpc.Points.WriteOrdering; import io.qdrant.client.grpc.Points.WriteOrderingType; client .upsertAsync( UpsertPoints.newBuilder() .setCollectionName("{collection_name}") .addAllPoints( List.of( 
PointStruct.newBuilder()
                        .setId(id(1))
                        .setVectors(vectors(0.9f, 0.1f, 0.1f))
                        .putAllPayload(Map.of("color", value("red")))
                        .build(),
                    PointStruct.newBuilder()
                        .setId(id(2))
                        .setVectors(vectors(0.1f, 0.9f, 0.1f))
                        .putAllPayload(Map.of("color", value("green")))
                        .build(),
                    PointStruct.newBuilder()
                        .setId(id(3))
                        .setVectors(vectors(0.1f, 0.1f, 0.9f))
                        .putAllPayload(Map.of("color", value("blue")))
                        .build()))
            .setOrdering(WriteOrdering.newBuilder().setType(WriteOrderingType.Strong).build())
            .build())
    .get();
```

```csharp
using Qdrant.Client;
using Qdrant.Client.Grpc;

var client = new QdrantClient("localhost", 6334);

await client.UpsertAsync(
    collectionName: "{collection_name}",
    points: new List<PointStruct>
    {
        new()
        {
            Id = 1,
            Vectors = new[] { 0.9f, 0.1f, 0.1f },
            Payload = { ["color"] = "red" }
        },
        new()
        {
            Id = 2,
            Vectors = new[] { 0.1f, 0.9f, 0.1f },
            Payload = { ["color"] = "green" }
        },
        new()
        {
            Id = 3,
            Vectors = new[] { 0.1f, 0.1f, 0.9f },
            Payload = { ["color"] = "blue" }
        }
    },
    ordering: WriteOrderingType.Strong
);
```

## Listener mode

<aside role="alert">This is an experimental feature, its behavior may change in the future.</aside>

In some cases it might be useful to have a Qdrant node that only accumulates data and does not participate in search operations. There are several scenarios where this can be useful:

- A listener node can be used to store data in a separate node, which can be used for backup purposes or to store data for a long time.
- A listener node can be used to synchronize data into another region, while still performing search operations in the local region.

To enable listener mode, set `node_type` to `Listener` in the config file:

```yaml
storage:
  node_type: "Listener"
```

The listener node will not participate in search operations, but will still accept write operations and will store the data in the local storage. All shards stored on the listener node will be converted to the `Listener` state.
Additionally, all write requests sent to the listener node will be processed with the `wait=false` option, which means that the write operations will be considered successful once they are written to the WAL. This mechanism should help minimize upsert latency in case of parallel snapshotting.

## Consensus Checkpointing

Consensus checkpointing is a technique used in Raft to improve performance and simplify log management by periodically creating a consistent snapshot of the system state. This snapshot represents a point in time where all nodes in the cluster have reached agreement on the state, and it can be used to truncate the log, reducing the amount of data that needs to be stored and transferred between nodes.

For example, if you attach a new node to the cluster, it should replay all the log entries to catch up with the current state. In long-running clusters, this can take a long time, and the log can grow very large. To prevent this, you can use a special checkpointing mechanism that truncates the log and creates a snapshot of the current state.

To use this feature, simply call the `/cluster/recover` API on the required node:

```http
POST /cluster/recover
```

This API can be triggered on any non-leader node; it will send a request to the current consensus leader to create a snapshot. The leader will in turn send the snapshot back to the requesting node for application.

In some cases, this API can be used to recover from an inconsistent cluster state by forcing a snapshot creation.
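The checkpointing call above can be scripted. The sketch below uses only the Python standard library; the node URL is an assumption and would be replaced with the address of any non-leader node in your cluster:

```python
import urllib.request


def recover_request(base_url: str = "http://localhost:6333") -> urllib.request.Request:
    # The endpoint shown above: POST /cluster/recover (no request body required).
    return urllib.request.Request(f"{base_url}/cluster/recover", method="POST")


def trigger_consensus_snapshot(base_url: str = "http://localhost:6333") -> int:
    # Send the request to a non-leader node; it asks the current consensus
    # leader to create a snapshot and truncate the Raft log.
    with urllib.request.urlopen(recover_request(base_url), timeout=10) as resp:
        return resp.status
```

Running this periodically (or after attaching a new node) keeps the consensus log from growing unbounded.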