diff --git "a/documents.csv" "b/documents.csv"
--- "a/documents.csv"
+++ "b/documents.csv"
@@ -1,1122 +1,1239 @@
-text,source
+text,source
"---
-draft: false
+logos:
-title: Food Discovery
+ - /img/customers-logo/discord.svg
-short_description: Qdrant Food Discovery Demo recommends more similar meals based on how they look
+ - /img/customers-logo/johnson-and-johnson.svg
-description: This demo uses data from Delivery Service. Users may like or dislike the photo of a dish, and the app will recommend more similar meals based on how they look. It's also possible to choose to view results from the restaurants within the delivery radius.
+ - /img/customers-logo/perplexity.svg
-preview_image: /demo/food-discovery-demo.png
+ - /img/customers-logo/mozilla.svg
-link: https://food-discovery.qdrant.tech/
+ - /img/customers-logo/voiceflow.svg
-weight: 2
+ - /img/customers-logo/bosch-digital.svg
-sitemapExclude: True
+sitemapExclude: true
----
-",demo/demo-2.md
+---",customers/logo-cards-1.md
"---
-draft: false
+review: “We looked at all the big options out there right now for vector databases, with our focus on ease of use, performance, pricing, and communication. Qdrant came out on top in each category... ultimately, it wasn't much of a contest.”
-title: E-commerce products categorization
+names: Alex Webb
-short_description: E-commerce products categorization demo from Qdrant vector database
+positions: Director of Engineering, CB Insights
-description: This demo shows how you can use vector database in e-commerce. Enter the name of the product and the application will understand which category it belongs to, based on the multi-language model. The dots represent clusters of products.
+avatar:
-preview_image: /demo/products_categorization_demo.jpg
+ src: /img/customers/alex-webb.svg
-link: https://qdrant.to/extreme-classification-demo
+ alt: Alex Webb Avatar
-weight: 3
+logo:
-sitemapExclude: True
+ src: /img/brands/cb-insights.svg
+  alt: Logo
+sitemapExclude: true
---
-",demo/demo-3.md
-"---
-draft: false
-title: Startup Search
+",customers/customers-testimonial1.md
+"---
-short_description: Qdrant Startup Search. This demo uses short descriptions of startups to perform a semantic search
+title: Customers
-description: This demo uses short descriptions of startups to perform a semantic search. Each startup description converted into a vector using a pre-trained SentenceTransformer model and uploaded to the Qdrant vector search engine. Demo service processes text input with the same model and uses its output to query Qdrant for similar vectors. You can turn neural search on and off to compare the result with regular full-text search.
+description: Learn how Qdrant powers thousands of top AI solutions that require vector search with unparalleled efficiency, performance and massive-scale data processing.
-preview_image: /demo/startup_search_demo.jpg
+caseStudy:
-link: https://qdrant.to/semantic-search-demo
+ logo:
-weight: 1
+ src: /img/customers-case-studies/customer-logo.svg
-sitemapExclude: True
+ alt: Logo
----
-",demo/demo-1.md
-"---
+ title: Recommendation Engine with Qdrant Vector Database
-page_title: Vector Search Demos and Examples
+   description: Dailymotion leverages Qdrant to optimize its video recommendation engine, managing over 420 million videos and processing 13 million recommendations daily. With this, Dailymotion was able to reduce content processing times from hours to minutes and increase user interactions and click-through rates by more than 3x.
-description: Interactive examples and demos of vector search based applications developed with Qdrant vector search engine.
+ link:
-title: Vector Search Demos
+ text: Read Case Study
-section_title: Interactive Live Examples
+ url: /blog/case-study-dailymotion/
----",demo/_index.md
-"---
+ image:
-title: Examples
+ src: /img/customers-case-studies/case-study.png
-weight: 25
+ alt: Preview
-# If the index.md file is empty, the link to the section will be hidden from the sidebar
+cases:
-is_empty: false
+- id: 0
----
+ logo:
+ src: /img/customers-case-studies/visua.svg
+ alt: Visua Logo
-# Sample Use Cases
+ image:
+ src: /img/customers-case-studies/case-visua.png
+ alt: The hands of a person in a medical gown holding a tablet against the background of a pharmacy shop
-Our Notebooks offer complex instructions that are supported with a throrough explanation. Follow along by trying out the code and get the most out of each example.
+ title: VISUA improves quality control process for computer vision with anomaly detection by 10x.
+ link:
+ text: Read Story
-| Example | Description | Stack |
+ url: /blog/case-study-visua/
-|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|----------------------------|
+- id: 1
-| [Intro to Semantic Search and Recommendations Systems](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_getting_started/getting_started.ipynb) | Learn how to get started building semantic search and recommendation systems. | Qdrant |
+ logo:
-| [Search and Recommend Newspaper Articles](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_text_data/qdrant_and_text_data.ipynb) | Work with text data to develop a semantic search and a recommendation engine for news articles. | Qdrant |
+ src: /img/customers-case-studies/dust.svg
-| [Recommendation System for Songs](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_audio_data/03_qdrant_101_audio.ipynb) | Use Qdrant to develop a music recommendation engine based on audio embeddings. | Qdrant |
+ alt: Dust Logo
-| [Image Comparison System for Skin Conditions](https://colab.research.google.com/github/qdrant/examples/blob/master/qdrant_101_image_data/04_qdrant_101_cv.ipynb) | Use Qdrant to compare challenging images with labels representing different skin diseases. | Qdrant |
+ image:
-| [Question and Answer System with LlamaIndex](https://githubtocolab.com/qdrant/examples/blob/master/llama_index_recency/Qdrant%20and%20LlamaIndex%20%E2%80%94%20A%20new%20way%20to%20keep%20your%20Q%26A%20systems%20up-to-date.ipynb) | Combine Qdrant and LlamaIndex to create a self-updating Q&A system. | Qdrant, LlamaIndex, Cohere |
+ src: /img/customers-case-studies/case-dust.png
-| [Extractive QA System](https://githubtocolab.com/qdrant/examples/blob/master/extractive_qa/extractive-question-answering.ipynb) | Extract answers directly from context to generate highly relevant answers. | Qdrant |
+ alt: A man in a jeans shirt is holding a smartphone, only his hands are visible. In the foreground, there is an image of a robot surrounded by chat and sound waves.
-| [Ecommerce Reverse Image Search](https://githubtocolab.com/qdrant/examples/blob/master/ecommerce_reverse_image_search/ecommerce-reverse-image-search.ipynb) | Accept images as search queries to receive semantically appropriate answers. | Qdrant |
+ title: Dust uses Qdrant for RAG, achieving millisecond retrieval, reducing costs by 50%, and boosting scalability.
-| [Basic RAG](https://githubtocolab.com/qdrant/examples/blob/master/rag-openai-qdrant/rag-openai-qdrant.ipynb) | Basic RAG pipeline with Qdrant and OpenAI SDKs | OpenAI, Qdrant, FastEmbed |
-",documentation/examples.md
-"---
+ link:
-title: Release notes
+ text: Read Story
-weight: 42
+ url: /blog/dust-and-qdrant/
-type: external-link
+- id: 2
-external_url: https://github.com/qdrant/qdrant/releases
+ logo:
-sitemapExclude: True
+ src: /img/customers-case-studies/iris-agent.svg
----
+ alt: Logo
+ image:
+ src: /img/customers-case-studies/case-iris-agent.png
+ alt: Hands holding a smartphone, styled smartphone interface visualisation in the foreground. First-person view
-",documentation/release-notes.md
-"---
+   title: IrisAgent uses Qdrant for RAG to automate support and improve resolution times, transforming customer service.
-title: Benchmarks
+ link:
-weight: 33
+ text: Read Story
-draft: true
+ url: /blog/iris-agent-qdrant/
+
+sitemapExclude: true
---
-",documentation/benchmarks.md
+",customers/customers-case-studies.md
"---
-title: Community links
+review: “We LOVE Qdrant! The exceptional engineering, strong business value, and outstanding team behind the product drove our choice. Thank you for your great contribution to the technology community!”
-weight: 42
+names: Kyle Tobin
----
+positions: Principal, Cognizant
+avatar:
+ src: /img/customers/kyle-tobin.png
-# Community Contributions
+ alt: Kyle Tobin Avatar
+logo:
+ src: /img/brands/cognizant.svg
-Though we do not officially maintain this content, we still feel that is is valuable and thank our dedicated contributors.
+ alt: Cognizant Logo
+sitemapExclude: true
+---
-| Link | Description | Stack |
-|------|------------------------------|--------|
+",customers/customers-testimonial2.md
+"---
-| [Pinecone to Qdrant Migration](https://github.com/NirantK/qdrant_tools) | Complete python toolset that supports migration between two products. | Qdrant, Pinecone |
+logos:
-| [LlamaIndex Support for Qdrant](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/QdrantIndexDemo.html) | Documentation on common integrations with LlamaIndex. | Qdrant, LlamaIndex |
+ - /img/customers-logo/gitbook.svg
-| [Geo.Rocks Semantic Search Tutorial](https://geo.rocks/post/qdrant-transformers-js-semantic-search/) | Create a fully working semantic search stack with a built in search API and a minimal stack. | Qdrant, HuggingFace, SentenceTransformers, transformers.js |
-",documentation/community-links.md
-"---
+ - /img/customers-logo/deloitte.svg
-title: Quickstart
+ - /img/customers-logo/disney.svg
-weight: 11
+sitemapExclude: true
-aliases:
+---",customers/logo-cards-3.md
+"---
- - quick_start
+title: Vector Space Wall
----
+link:
-# Quickstart
+ url: https://testimonial.to/qdrant/all
+ text: Submit Your Testimonial
+testimonials:
-In this short example, you will use the Python Client to create a Collection, load data into it and run a basic search query.
+- id: 0
+ name: Jonathan Eisenzopf
+ position: Chief Strategy and Research Officer at Talkmap
-
+ avatar:
+ src: /img/customers/jonathan-eisenzopf.svg
+ alt: Avatar
-## Download and run
+ text: “With Qdrant, we found the missing piece to develop our own provider independent multimodal generative AI platform on enterprise scale.”
+- id: 1
+ name: Angel Luis Almaraz Sánchez
-First, download the latest Qdrant image from Dockerhub:
+ position: Full Stack | DevOps
+ avatar:
+ src: /img/customers/angel-luis-almaraz-sanchez.svg
-```bash
+ alt: Avatar
-docker pull qdrant/qdrant
+ text: Thank you, great work, Qdrant is my favorite option for similarity search.
-```
+- id: 2
+ name: Shubham Krishna
+ position: ML Engineer @ ML6
-Then, run the service:
+ avatar:
+ src: /img/customers/shubham-krishna.svg
+ alt: Avatar
-```bash
+ text: Go ahead and checkout Qdrant. I plan to build a movie retrieval search where you can ask anything regarding a movie based on the vector embeddings generated by a LLM. It can also be used for getting recommendations.
-docker run -p 6333:6333 -p 6334:6334 \
+- id: 3
- -v $(pwd)/qdrant_storage:/qdrant/storage:z \
+ name: Kwok Hing LEON
- qdrant/qdrant
+ position: Data Science
-```
+ avatar:
+ src: /img/customers/kwok-hing-leon.svg
+ alt: Avatar
-Under the default configuration all data will be stored in the `./qdrant_storage` directory. This will also be the only directory that both the Container and the host machine can both see.
+ text: Check out qdrant for improving searches. Bye to non-semantic KM engines.
+- id: 4
+ name: Ankur S
-Qdrant is now accessible:
+ position: Building
+ avatar:
+ src: /img/customers/ankur-s.svg
-- REST API: [localhost:6333](http://localhost:6333)
+ alt: Avatar
-- Web UI: [localhost:6333/dashboard](http://localhost:6333/dashboard)
+ text: Quadrant is a great vector database. There is a real sense of thought behind the api!
-- GRPC API: [localhost:6334](http://localhost:6334)
+- id: 5
+  name: Yasin Salimibeni
+ position: AI Evangelist | Generative AI Product Designer | Entrepreneur | Mentor
-## Initialize the client
+ avatar:
+ src: /img/customers/yasin-salimibeni-view-yasin-salimibeni.svg
+ alt: Avatar
-```python
+ text: Great work. I just started testing Qdrant Azure and I was impressed by the efficiency and speed. Being deploy-ready on large cloud providers is a great plus. Way to go!
-from qdrant_client import QdrantClient
+- id: 6
+ name: Marcel Coetzee
+ position: Data and AI Plumber
-client = QdrantClient(""localhost"", port=6333)
+ avatar:
-```
+ src: /img/customers/marcel-coetzee.svg
+ alt: Avatar
+  text: Using Qdrant as a blazing fast vector store for a stealth project of mine. It offers fantastic functionality for semantic search ✨
-```typescript
+- id: 7
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+ name: Andrew Rove
+ position: Principal Software Engineer
+ avatar:
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ src: /img/customers/andrew-rove.svg
-```
+ alt: Avatar
+ text: We have been using Qdrant in production now for over 6 months to store vectors for cosine similarity search and it is way more stable and faster than our old ElasticSearch vector index.
No merging segments, no red indexes at random times. It just works and was super easy to deploy via docker to our cluster.
It’s faster, cheaper to host, and more stable, and open source to boot!
+- id: 8
-```rust
+ name: Josh Lloyd
-use qdrant_client::client::QdrantClient;
+ position: ML Engineer
+ avatar:
+ src: /img/customers/josh-lloyd.svg
-// The Rust client uses Qdrant's GRPC interface
+ alt: Avatar
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ text: I'm using Qdrant to search through thousands of documents to find similar text phrases for question answering. Qdrant's awesome filtering allows me to slice along metadata while I'm at it! 🚀 and it's fast ⏩🔥
-```
+- id: 9
+ name: Leonard Püttmann
+ position: data scientist
-```java
+ avatar:
-import io.qdrant.client.QdrantClient;
+ src: /img/customers/leonard-puttmann.svg
-import io.qdrant.client.QdrantGrpcClient;
+ alt: Avatar
+ text: Amidst the hype around vector databases, Qdrant is by far my favorite one. It's super fast (written in Rust) and open-source! At Kern AI we use Qdrant for fast document retrieval and to do quick similarity search for text data.
+- id: 10
-// The Java client uses Qdrant's GRPC interface
+ name: Stanislas Polu
-QdrantClient client = new QdrantClient(
+ position: Software Engineer & Co-Founder, Dust
- QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ avatar:
-```
+ src: /img/customers/stanislas-polu.svg
+ alt: Avatar
+ text: Qdrant's the best. By. Far.
-```csharp
+- id: 11
-using Qdrant.Client;
+ name: Sivesh Sukumar
+ position: Investor at Balderton
+ avatar:
-// The C# client uses Qdrant's GRPC interface
+ src: /img/customers/sivesh-sukumar.svg
-var client = new QdrantClient(""localhost"", 6334);
+ alt: Avatar
-```
+ text: We're using Qdrant to help segment and source Europe's next wave of extraordinary companies!
+- id: 12
+ name: Saksham Gupta
-
+ position: AI Governance Machine Learning Engineer
+ avatar:
+ src: /img/customers/saksham-gupta.svg
-## Create a collection
+ alt: Avatar
+ text: Looking forward to using Qdrant vector similarity search in the clinical trial space! OpenAI Embeddings + Qdrant = Match made in heaven!
+- id: 13
-You will be storing all of your vector data in a Qdrant collection. Let's call it `test_collection`. This collection will be using a dot product distance metric to compare vectors.
+ name: Rishav Dash
+ position: Data Scientist
+ avatar:
-```python
+ src: /img/customers/rishav-dash.svg
-from qdrant_client.http.models import Distance, VectorParams
+ alt: Avatar
+ text: awesome stuff 🔥
+sitemapExclude: true
-client.create_collection(
+---
+",customers/customers-vector-space-wall.md
+"---
- collection_name=""test_collection"",
+title: Customers
- vectors_config=VectorParams(size=4, distance=Distance.DOT),
+description: Learn how Qdrant powers thousands of top AI solutions that require vector search with unparalleled efficiency, performance and massive-scale data processing.
-)
+sitemapExclude: true
-```
+---
+",customers/customers-hero.md
+"---
-```typescript
+title: Customers
-await client.createCollection(""test_collection"", {
+description: Customers
- vectors: { size: 4, distance: ""Dot"" },
+build:
-});
+ render: always
-```
+cascade:
+- build:
+ list: local
-```rust
+ publishResources: false
-use qdrant_client::qdrant::{vectors_config::Config, VectorParams, VectorsConfig};
+ render: never
+---
+",customers/_index.md
+"---
+logos:
-client
+ - /img/customers-logo/flipkart.svg
- .create_collection(&CreateCollection {
+ - /img/customers-logo/x.svg
- collection_name: ""test_collection"".to_string(),
+ - /img/customers-logo/quora.svg
- vectors_config: Some(VectorsConfig {
+sitemapExclude: true
- config: Some(Config::Params(VectorParams {
+---",customers/logo-cards-2.md
+"---
- size: 4,
+title: Qdrant Demos and Tutorials
- distance: Distance::Dot.into(),
+description: Experience firsthand how Qdrant powers intelligent search, anomaly detection, and personalized recommendations, showcasing the full capabilities of vector search to revolutionize data exploration and insights.
- ..Default::default()
+cards:
- })),
+ - id: 0
- }),
+ title: Semantic Search Demo - Startup Search
- ..Default::default()
+ paragraphs:
- })
+ - id: 0
- .await?;
+ content: This demo leverages a pre-trained SentenceTransformer model to perform semantic searches on startup descriptions, transforming them into vectors for the Qdrant engine.
-```
+ - id: 1
+ content: Enter a query to see how neural search compares to traditional full-text search, with the option to toggle neural search on and off for direct comparison.
+ link:
-```java
+ text: View Demo
-import io.qdrant.client.grpc.Collections.Distance;
+ url: https://qdrant.to/semantic-search-demo
-import io.qdrant.client.grpc.Collections.VectorParams;
+ - id: 1
+ title: Semantic Search and Recommendations Demo - Food Discovery
+ paragraphs:
-client.createCollectionAsync(""test_collection"",
+ - id: 0
- VectorParams.newBuilder().setDistance(Distance.Dot).setSize(4).build()).get();
+ content: Explore personalized meal recommendations with our demo, using Delivery Service data. Like or dislike dish photos to refine suggestions based on visual appeal.
-```
+ - id: 1
+ content: Filter options allow for restaurant selections within your delivery area, tailoring your dining experience to your preferences.
+ link:
-```csharp
+ text: View Demo
-using Qdrant.Client.Grpc;
+ url: https://food-discovery.qdrant.tech/
+ - id: 2
+ title: Categorization Demo - E-Commerce Products
-await client.CreateCollectionAsync(
+ paragraphs:
- collectionName: ""test_collection"",
+ - id: 0
- vectorsConfig: new VectorParams { Size = 4, Distance = Distance.Dot }
+ content: Discover the power of vector databases in e-commerce through our demo. Simply input a product name and watch as our multi-language model intelligently categorizes it. The dots you see represent product clusters, highlighting our system's efficient categorization.
-);
+ link:
-```
+ text: View Demo
+ url: https://qdrant.to/extreme-classification-demo
+ - id: 3
-
+ title: Code Search Demo - Explore Qdrant's Codebase
-
+ paragraphs:
+ - id: 0
+        content: Semantic search isn't just for natural language. By combining results from two models, Qdrant is able to locate relevant code snippets down to the exact line.
-## Add vectors
+ link:
+ text: View Demo
+ url: https://code-search.qdrant.tech/
-Let's now add a few vectors with a payload. Payloads are other data you want to associate with the vector:
+---",demo/_index.md
+"---
+content: Learn more about all features that are supported on Qdrant Cloud.
+link:
-```python
+ text: Qdrant Features
-from qdrant_client.http.models import PointStruct
+ url: /qdrant-vector-database/
+sitemapExclude: true
+---
+",qdrant-cloud/qdrant-cloud-features-link.md
+"---
-operation_info = client.upsert(
+title: Qdrant Cloud
- collection_name=""test_collection"",
+description: Qdrant Cloud provides optimal flexibility and offers a suite of features focused on efficient and scalable vector search - fully managed. Available on AWS, Google Cloud, and Azure.
- wait=True,
+startFree:
- points=[
+ text: Start Free
- PointStruct(id=1, vector=[0.05, 0.61, 0.76, 0.74], payload={""city"": ""Berlin""}),
+ url: https://cloud.qdrant.io/
- PointStruct(id=2, vector=[0.19, 0.81, 0.75, 0.11], payload={""city"": ""London""}),
+contactUs:
- PointStruct(id=3, vector=[0.36, 0.55, 0.47, 0.94], payload={""city"": ""Moscow""}),
+ text: Contact us
- PointStruct(id=4, vector=[0.18, 0.01, 0.85, 0.80], payload={""city"": ""New York""}),
+ url: /contact-us/
- PointStruct(id=5, vector=[0.24, 0.18, 0.22, 0.44], payload={""city"": ""Beijing""}),
+icon:
- PointStruct(id=6, vector=[0.35, 0.08, 0.11, 0.44], payload={""city"": ""Mumbai""}),
+ src: /icons/fill/lightning-purple.svg
- ],
+ alt: Lightning
-)
+content: ""Learn how to get up and running in minutes:""
+#video:
+# src: /
-print(operation_info)
+# button: Watch Demo
-```
+# icon:
+# src: /icons/outline/play-white.svg
+# alt: Play
-```typescript
+# preview: /img/qdrant-cloud-demo.png
-const operationInfo = await client.upsert(""test_collection"", {
+sitemapExclude: true
- wait: true,
+---
- points: [
- { id: 1, vector: [0.05, 0.61, 0.76, 0.74], payload: { city: ""Berlin"" } },
+",qdrant-cloud/qdrant-cloud-hero.md
+"---
- { id: 2, vector: [0.19, 0.81, 0.75, 0.11], payload: { city: ""London"" } },
+items:
- { id: 3, vector: [0.36, 0.55, 0.47, 0.94], payload: { city: ""Moscow"" } },
+- id: 0
- { id: 4, vector: [0.18, 0.01, 0.85, 0.80], payload: { city: ""New York"" } },
+ title: Run Anywhere
- { id: 5, vector: [0.24, 0.18, 0.22, 0.44], payload: { city: ""Beijing"" } },
+ description: Available on AWS, Google Cloud, and Azure regions globally for deployment flexibility and quick data access.
- { id: 6, vector: [0.35, 0.08, 0.11, 0.44], payload: { city: ""Mumbai"" } },
+ image:
- ],
+ src: /img/qdrant-cloud-bento-cards/run-anywhere-graphic.png
-});
+ alt: Run anywhere graphic
+- id: 1
+ title: Simple Setup and Start Free
-console.debug(operationInfo);
+ description: Deploying a cluster via the Qdrant Cloud Console takes only a few seconds and scales up as needed.
-```
+ image:
+ src: /img/qdrant-cloud-bento-cards/simple-setup-illustration.png
+ alt: Simple setup illustration
-```rust
+- id: 2
-use qdrant_client::qdrant::PointStruct;
+ title: Efficient Resource Management
-use serde_json::json;
+ description: Dramatically reduce memory usage with built-in compression options and offload data to disk.
+ image:
+ src: /img/qdrant-cloud-bento-cards/efficient-resource-management.png
-let points = vec![
+ alt: Efficient resource management diagram
- PointStruct::new(
+- id: 3
- 1,
+ title: Zero-downtime Upgrades
- vec![0.05, 0.61, 0.76, 0.74],
+ description: Uninterrupted service during scaling and model updates for continuous operation and deployment flexibility.
- json!(
+ link:
- {""city"": ""Berlin""}
+ text: Cluster Scaling
- )
+ url: /documentation/cloud/cluster-scaling/
- .try_into()
+ image:
- .unwrap(),
+ src: /img/qdrant-cloud-bento-cards/zero-downtime-upgrades.png
- ),
+ alt: Zero downtime upgrades illustration
- PointStruct::new(
+- id: 4
- 2,
+ title: Continuous Backups
- vec![0.19, 0.81, 0.75, 0.11],
+ description: Automated, configurable backups for data safety and easy restoration to previous states.
- json!(
+ link:
- {""city"": ""London""}
+ text: Backups
- )
+ url: /documentation/cloud/backups/
- .try_into()
+ image:
- .unwrap(),
+ src: /img/qdrant-cloud-bento-cards/continuous-backups.png
- ),
+ alt: Continuous backups illustration
- // ..truncated
+sitemapExclude: true
-];
+---
+",qdrant-cloud/qdrant-cloud-bento-cards.md
+"---
-let operation_info = client
+title: ""Qdrant Cloud: Scalable Managed Cloud Services""
- .upsert_points_blocking(""test_collection"".to_string(), None, points, None)
+url: cloud
- .await?;
+description: ""Discover Qdrant Cloud, the cutting-edge managed cloud for scalable, high-performance AI applications. Manage and deploy your vector data with ease today.""
+build:
+ render: always
-dbg!(operation_info);
+cascade:
-```
+- build:
+ list: local
+ publishResources: false
-```java
+ render: never
-import java.util.List;
+---
+",qdrant-cloud/_index.md
+"---
-import java.util.Map;
+logo:
+ title: Our Logo
+ description: ""The Qdrant logo represents a paramount expression of our core brand identity. With consistent placement, sizing, clear space, and color usage, our logo affirms its recognition across all platforms.""
-import static io.qdrant.client.PointIdFactory.id;
+ logoCards:
-import static io.qdrant.client.ValueFactory.value;
+ - id: 0
-import static io.qdrant.client.VectorsFactory.vectors;
+ logo:
+ src: /img/brand-resources-logos/logo.svg
+ alt: Logo Full Color
-import io.qdrant.client.grpc.Points.PointStruct;
+ title: Logo Full Color
-import io.qdrant.client.grpc.Points.UpdateResult;
+ link:
+ url: /img/brand-resources-logos/logo.svg
+ text: Download
-UpdateResult operationInfo =
+ - id: 1
- client
+ logo:
- .upsertAsync(
+ src: /img/brand-resources-logos/logo-black.svg
- ""test_collection"",
+ alt: Logo Black
- List.of(
+ title: Logo Black
- PointStruct.newBuilder()
+ link:
- .setId(id(1))
+ url: /img/brand-resources-logos/logo-black.svg
- .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f))
+ text: Download
- .putAllPayload(Map.of(""city"", value(""Berlin"")))
+ - id: 2
- .build(),
+ logo:
- PointStruct.newBuilder()
+ src: /img/brand-resources-logos/logo-white.svg
- .setId(id(2))
+ alt: Logo White
- .setVectors(vectors(0.19f, 0.81f, 0.75f, 0.11f))
+ title: Logo White
- .putAllPayload(Map.of(""city"", value(""London"")))
+ link:
- .build(),
+ url: /img/brand-resources-logos/logo-white.svg
- PointStruct.newBuilder()
+ text: Download
- .setId(id(3))
+ logomarkTitle: Logomark
- .setVectors(vectors(0.36f, 0.55f, 0.47f, 0.94f))
+ logomarkCards:
- .putAllPayload(Map.of(""city"", value(""Moscow"")))
+ - id: 0
- .build()))
+ logo:
- // Truncated
+ src: /img/brand-resources-logos/logomark.svg
- .get();
+ alt: Logomark Full Color
+ title: Logomark Full Color
+ link:
-System.out.println(operationInfo);
+ url: /img/brand-resources-logos/logomark.svg
-```
+ text: Download
+ - id: 1
+ logo:
-```csharp
+ src: /img/brand-resources-logos/logomark-black.svg
-using Qdrant.Client.Grpc;
+ alt: Logomark Black
+ title: Logomark Black
+ link:
-var operationInfo = await client.UpsertAsync(
+ url: /img/brand-resources-logos/logomark-black.svg
- collectionName: ""test_collection"",
+ text: Download
- points: new List
+ - id: 2
- {
+ logo:
- new()
+ src: /img/brand-resources-logos/logomark-white.svg
- {
+ alt: Logomark White
- Id = 1,
+ title: Logomark White
- Vectors = new float[] { 0.05f, 0.61f, 0.76f, 0.74f },
+ link:
- Payload = { [""city""] = ""Berlin"" }
+ url: /img/brand-resources-logos/logomark-white.svg
- },
+ text: Download
- new()
+colors:
- {
+ title: Colors
- Id = 2,
+ description: Our brand colors play a crucial role in maintaining a cohesive visual identity. The careful balance of these colors ensures a consistent and impactful representation of Qdrant, reinforcing our commitment to excellence and precision in every aspect of our work.
- Vectors = new float[] { 0.19f, 0.81f, 0.75f, 0.11f },
+ cards:
- Payload = { [""city""] = ""London"" }
+ - id: 0
- },
+ name: Amaranth
- new()
+ type: HEX
- {
+ code: ""DC244C""
- Id = 3,
+ - id: 1
- Vectors = new float[] { 0.36f, 0.55f, 0.47f, 0.94f },
+ name: Blue
- Payload = { [""city""] = ""Moscow"" }
+ type: HEX
- },
+ code: ""2F6FF0""
- // Truncated
+ - id: 2
- }
+ name: Violet
-);
+ type: HEX
+ code: ""8547FF""
+ - id: 3
-Console.WriteLine(operationInfo);
+ name: Teal
-```
+ type: HEX
+ code: ""038585""
+ - id: 4
-**Response:**
+ name: Black
+ type: HEX
+ code: ""090E1A""
-```python
+ - id: 5
-operation_id=0 status=
+ name: White
-```
+ type: HEX
+ code: ""FFFFFF""
+typography:
-```typescript
+ title: Typography
-{ operation_id: 0, status: 'completed' }
+  description: Our main typeface is Satoshi, employed for both UI and marketing purposes. Headlines are set in Bold (600), while body text is rendered in Medium (500).
-```
+ example: AaBb
+ specimen: ""ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz 0123456789 !@#$%^&*()""
+ link:
-```rust
+ url: https://api.fontshare.com/v2/fonts/download/satoshi
-PointsOperationResponse {
+ text: Download
- result: Some(UpdateResult {
+trademarks:
- operation_id: 0,
+ title: Trademarks
- status: Completed,
+ description: All features associated with the Qdrant brand are safeguarded by relevant trademark, copyright, and intellectual property regulations. Utilization of the Qdrant trademark must adhere to the specified Qdrant Trademark Standards for Use.
Should you require clarification or seek permission to utilize these resources, feel free to reach out to us at
- }),
+ link:
- time: 0.006347708,
+ url: ""mailto:info@qdrant.com""
-}
+ text: info@qdrant.com.
-```
+sitemapExclude: true
+---
+",brand-resources/brand-resources-content.md
+"---
+title: Qdrant Brand Resources
-```java
+buttons:
-operation_id: 0
+- id: 0
-status: Completed
+ url: ""#logo""
-```
+ text: Logo
+- id: 1
+ url: ""#colors""
-```csharp
+ text: Colors
-{ ""operationId"": ""0"", ""status"": ""Completed"" }
+- id: 2
-```
+ url: ""#typography""
+ text: Typography
+- id: 3
-## Run a query
+ url: ""#trademarks""
+ text: Trademarks
+sitemapExclude: true
-Let's ask a basic question - Which of our stored vectors are most similar to the query vector `[0.2, 0.1, 0.9, 0.7]`?
+---
+",brand-resources/brand-resources-hero.md
+"---
-```python
+title: brand-resources
-search_result = client.search(
+description: brand-resources
- collection_name=""test_collection"", query_vector=[0.2, 0.1, 0.9, 0.7], limit=3
+build:
-)
+ render: always
+cascade:
+- build:
-print(search_result)
+ list: local
-```
+ publishResources: false
+ render: never
+---
+",brand-resources/_index.md
+"---
-```typescript
+title: Cloud Quickstart
-let searchResult = await client.search(""test_collection"", {
+weight: 4
- vector: [0.2, 0.1, 0.9, 0.7],
+aliases:
- limit: 3,
+ - quickstart-cloud
-});
+ - ../cloud-quick-start
+ - cloud-quick-start
+ - cloud-quickstart
-console.debug(searchResult);
+ - cloud/quickstart-cloud/
-```
+---
+# How to Get Started With Qdrant Cloud
-```rust
-use qdrant_client::qdrant::SearchPoints;
+
+
You can try vector search on Qdrant Cloud in three steps.
+ Instructions are below, but the video is faster:
-let search_result = client
- .search_points(&SearchPoints {
- collection_name: ""test_collection"".to_string(),
+## Set up a Qdrant Cloud cluster
- vector: vec![0.2, 0.1, 0.9, 0.7],
- limit: 3,
- with_payload: Some(true.into()),
+1. Register for a [Cloud account](https://cloud.qdrant.io/) with your email, Google, or GitHub credentials.
- ..Default::default()
+2. Go to **Overview** and follow the onboarding instructions under **Create First Cluster**.
- })
- .await?;
+![create a cluster](/docs/gettingstarted/gui-quickstart/create-cluster.png)
-dbg!(search_result);
-```
+3. When you create it, you will receive an API key. You will need to copy and paste it soon.
+4. Your new cluster will be created under **Clusters**. Give it a few moments to provision.
-```java
-import java.util.List;
+## Access the cluster dashboard
-import io.qdrant.client.grpc.Points.ScoredPoint;
+1. Go to your **Clusters**. Under **Actions**, open the **Dashboard**.
-import io.qdrant.client.grpc.Points.SearchPoints;
+2. Paste your new API key here. If you lost it, make another in **Access Management**.
+3. The key will grant you access to your Qdrant instance. Now you can see the cluster Dashboard.
-import static io.qdrant.client.WithPayloadSelectorFactory.enable;
+![access the dashboard](/docs/gettingstarted/gui-quickstart/access-dashboard.png)
-List searchResult =
- client
+## Try the Tutorial sandbox
- .searchAsync(
- SearchPoints.newBuilder()
- .setCollectionName(""test_collection"")
+1. Open the interactive **Tutorial**. Here, you can test basic Qdrant API requests.
- .setLimit(3)
+2. Using the **Quickstart** instructions, create a collection, add vectors and run a search.
- .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
+3. The output on the right will show you some basic semantic search results.
- .setWithPayload(enable(true))
- .build())
- .get();
+![interactive-tutorial](/docs/gettingstarted/gui-quickstart/interactive-tutorial.png)
-
-System.out.println(searchResult);
-```
+## That's vector search!
+You can stay in the sandbox and continue trying out different API calls.
+When ready, use the Console and our complete REST API to try other operations.
-```csharp
-var searchResult = await client.SearchAsync(
- collectionName: ""test_collection"",
+## What's next?
- vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
- limit: 3,
- payloadSelector: true
+Now that you have a Qdrant Cloud cluster up and running, you should [test remote access](/documentation/cloud/authentication/#test-cluster-access) with a Qdrant Client.
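+
+For example, a minimal connectivity check with the Python client might look like the sketch below. The cluster URL and API key are placeholders for your own values from the Cluster Details page.
+
+```python
+from qdrant_client import QdrantClient
+
+# Placeholders: use your own cluster URL and the API key you created earlier.
+client = QdrantClient(
+    url=""https://xyz-example.eu-central.aws.cloud.qdrant.io:6333"",
+    api_key=""<your-api-key>"",
+)
+
+print(client.get_collections())
+```
+
+If the URL and key are correct, this should print the (initially empty) list of collections in your cluster.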
-);
+",documentation/quickstart-cloud.md
+"---
+title: Release Notes
-Console.WriteLine(searchResult);
+weight: 24
-```
+type: external-link
+external_url: https://github.com/qdrant/qdrant/releases
+sitemapExclude: True
-**Response:**
+---
-```python
-ScoredPoint(id=4, version=0, score=1.362, payload={""city"": ""New York""}, vector=None),
+",documentation/release-notes.md
+"---
-ScoredPoint(id=1, version=0, score=1.273, payload={""city"": ""Berlin""}, vector=None),
+title: Benchmarks
-ScoredPoint(id=3, version=0, score=1.208, payload={""city"": ""Moscow""}, vector=None)
+weight: 33
-```
+draft: true
+---
+",documentation/benchmarks.md
+"---
+title: Community links
-```typescript
+weight: 42
-[
+draft: true
- {
+---
- id: 4,
- version: 0,
- score: 1.362,
+# Community Contributions
- payload: null,
- vector: null,
- },
+Though we do not officially maintain this content, we still feel that it is valuable and we thank our dedicated contributors.
- {
- id: 1,
- version: 0,
+| Link | Description | Stack |
- score: 1.273,
+|------|------------------------------|--------|
- payload: null,
+| [Pinecone to Qdrant Migration](https://github.com/NirantK/qdrant_tools) | Complete Python toolset that supports migration between the two products. | Qdrant, Pinecone |
- vector: null,
+| [LlamaIndex Support for Qdrant](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/QdrantIndexDemo.html) | Documentation on common integrations with LlamaIndex. | Qdrant, LlamaIndex |
- },
+| [Geo.Rocks Semantic Search Tutorial](https://geo.rocks/post/qdrant-transformers-js-semantic-search/) | Create a fully working semantic search stack with a built-in search API and a minimal stack. | Qdrant, HuggingFace, SentenceTransformers, transformers.js |
+",documentation/community-links.md
+"---
- {
+title: Local Quickstart
- id: 3,
+weight: 5
- version: 0,
+aliases:
- score: 1.208,
+ - quick_start
- payload: null,
+ - quick-start
- vector: null,
+ - quickstart
- },
+---
-];
+# How to Get Started with Qdrant Locally
-```
+In this short example, you will use the Python client to create a collection, load data into it, and run a basic search query.
-```rust
-SearchResponse {
- result: [
+
- ScoredPoint {
- id: Some(PointId {
- point_id_options: Some(Num(4)),
+## Download and run
- }),
- payload: {},
- score: 1.362,
+First, download the latest Qdrant image from Dockerhub:
- version: 0,
- vectors: None,
- },
+```bash
- ScoredPoint {
+docker pull qdrant/qdrant
- id: Some(PointId {
+```
- point_id_options: Some(Num(1)),
- }),
- payload: {},
+Then, run the service:
- score: 1.273,
- version: 0,
- vectors: None,
+```bash
- },
+docker run -p 6333:6333 -p 6334:6334 \
- ScoredPoint {
+ -v $(pwd)/qdrant_storage:/qdrant/storage:z \
- id: Some(PointId {
+ qdrant/qdrant
- point_id_options: Some(Num(3)),
+```
- }),
- payload: {},
- score: 1.208,
+Under the default configuration, all data will be stored in the `./qdrant_storage` directory. This will also be the only directory that both the container and the host machine can see.
- version: 0,
- vectors: None,
- },
+Qdrant is now accessible:
- ],
- time: 0.003635125,
-}
+- REST API: [localhost:6333](http://localhost:6333)
-```
+- Web UI: [localhost:6333/dashboard](http://localhost:6333/dashboard)
+- GRPC API: [localhost:6334](http://localhost:6334)
-```java
-[id {
+## Initialize the client
- num: 4
-}
-payload {
+```python
+
+from qdrant_client import QdrantClient
- key: ""city""
- value {
- string_value: ""New York""
+client = QdrantClient(url=""http://localhost:6333"")
- }
+```
-}
-score: 1.362
-version: 1
+```typescript
-, id {
+import { QdrantClient } from ""@qdrant/js-client-rest"";
- num: 1
-}
-payload {
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
- key: ""city""
+```
- value {
- string_value: ""Berlin""
- }
+```rust
-}
+use qdrant_client::Qdrant;
-score: 1.273
-version: 1
-, id {
+// The Rust client uses Qdrant's gRPC interface
- num: 3
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-}
+```
-payload {
- key: ""city""
- value {
+```java
- string_value: ""Moscow""
+import io.qdrant.client.QdrantClient;
- }
+import io.qdrant.client.QdrantGrpcClient;
-}
-score: 1.208
-version: 1
+// The Java client uses Qdrant's gRPC interface
-]
+QdrantClient client = new QdrantClient(
+ QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
```
@@ -1124,7385 +1241,7113 @@ version: 1
```csharp
-[
-
- {
+using Qdrant.Client;
- ""id"": {
- ""num"": ""4""
- },
+// The C# client uses Qdrant's gRPC interface
- ""payload"": {
+var client = new QdrantClient(""localhost"", 6334);
- ""city"": {
+```
- ""stringValue"": ""New York""
- }
- },
+```go
- ""score"": 1.362,
+import ""github.com/qdrant/go-client/qdrant""
- ""version"": ""7""
- },
- {
+// The Go client uses Qdrant's gRPC interface
- ""id"": {
+client, err := qdrant.NewClient(&qdrant.Config{
- ""num"": ""1""
+ Host: ""localhost"",
- },
+ Port: 6334,
- ""payload"": {
+})
- ""city"": {
+```
- ""stringValue"": ""Berlin""
- }
- },
+
- ""score"": 1.273,
- ""version"": ""7""
- },
+## Create a collection
- {
- ""id"": {
- ""num"": ""3""
+You will be storing all of your vector data in a Qdrant collection. Let's call it `test_collection`. This collection will be using a dot product distance metric to compare vectors.
- },
- ""payload"": {
- ""city"": {
+```python
- ""stringValue"": ""Moscow""
+from qdrant_client.models import Distance, VectorParams
- }
- },
- ""score"": 1.208,
+client.create_collection(
- ""version"": ""7""
+ collection_name=""test_collection"",
- }
+ vectors_config=VectorParams(size=4, distance=Distance.DOT),
-]
+)
```
-The results are returned in decreasing similarity order. Note that payload and vector data is missing in these results by default.
+```typescript
-See [payload and vector in the result](../concepts/search#payload-and-vector-in-the-result) on how to enable it.
+await client.createCollection(""test_collection"", {
+ vectors: { size: 4, distance: ""Dot"" },
+});
-## Add a filter
+```
-We can narrow down the results further by filtering by payload. Let's find the closest results that include ""London"".
+```rust
+use qdrant_client::qdrant::{CreateCollectionBuilder, VectorParamsBuilder};
-```python
-from qdrant_client.http.models import Filter, FieldCondition, MatchValue
+client
+ .create_collection(
+ CreateCollectionBuilder::new(""test_collection"")
-search_result = client.search(
+ .vectors_config(VectorParamsBuilder::new(4, Distance::Dot)),
- collection_name=""test_collection"",
+ )
- query_vector=[0.2, 0.1, 0.9, 0.7],
+ .await?;
- query_filter=Filter(
+```
- must=[FieldCondition(key=""city"", match=MatchValue(value=""London""))]
- ),
- with_payload=True,
+```java
- limit=3,
+import io.qdrant.client.grpc.Collections.Distance;
-)
+import io.qdrant.client.grpc.Collections.VectorParams;
-print(search_result)
+client.createCollectionAsync(""test_collection"",
-```
+ VectorParams.newBuilder().setDistance(Distance.Dot).setSize(4).build()).get();
+```
-```typescript
-searchResult = await client.search(""test_collection"", {
+```csharp
- vector: [0.2, 0.1, 0.9, 0.7],
+using Qdrant.Client.Grpc;
- filter: {
- must: [{ key: ""city"", match: { value: ""London"" } }],
- },
+await client.CreateCollectionAsync(collectionName: ""test_collection"", vectorsConfig: new VectorParams
- with_payload: true,
+{
- limit: 3,
+ Size = 4, Distance = Distance.Dot
});
+```
-console.debug(searchResult);
-```
+```go
+import (
+ ""context""
-```rust
-use qdrant_client::qdrant::{Condition, Filter, SearchPoints};
+ ""github.com/qdrant/go-client/qdrant""
+)
-let search_result = client
- .search_points(&SearchPoints {
- collection_name: ""test_collection"".to_string(),
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
- vector: vec![0.2, 0.1, 0.9, 0.7],
+	CollectionName: ""test_collection"",
- filter: Some(Filter::all([Condition::matches(
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
- ""city"",
+ Size: 4,
- ""London"".to_string(),
+		Distance: qdrant.Distance_Dot,
- )])),
+ }),
- limit: 2,
+})
- ..Default::default()
+```
- })
- .await?;
+## Add vectors
-dbg!(search_result);
-```
+Let's now add a few vectors with a payload. Payloads are other data you want to associate with the vector:
-```java
+```python
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+from qdrant_client.models import PointStruct
-List searchResult =
+operation_info = client.upsert(
- client
+ collection_name=""test_collection"",
- .searchAsync(
+ wait=True,
- SearchPoints.newBuilder()
+ points=[
- .setCollectionName(""test_collection"")
+ PointStruct(id=1, vector=[0.05, 0.61, 0.76, 0.74], payload={""city"": ""Berlin""}),
- .setLimit(3)
+ PointStruct(id=2, vector=[0.19, 0.81, 0.75, 0.11], payload={""city"": ""London""}),
- .setFilter(Filter.newBuilder().addMust(matchKeyword(""city"", ""London"")))
+ PointStruct(id=3, vector=[0.36, 0.55, 0.47, 0.94], payload={""city"": ""Moscow""}),
- .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
+ PointStruct(id=4, vector=[0.18, 0.01, 0.85, 0.80], payload={""city"": ""New York""}),
- .setWithPayload(enable(true))
+ PointStruct(id=5, vector=[0.24, 0.18, 0.22, 0.44], payload={""city"": ""Beijing""}),
- .build())
+ PointStruct(id=6, vector=[0.35, 0.08, 0.11, 0.44], payload={""city"": ""Mumbai""}),
- .get();
+ ],
+
+)
-System.out.println(searchResult);
+print(operation_info)
```
-```csharp
+```typescript
-using static Qdrant.Client.Grpc.Conditions;
+const operationInfo = await client.upsert(""test_collection"", {
+ wait: true,
+ points: [
+ { id: 1, vector: [0.05, 0.61, 0.76, 0.74], payload: { city: ""Berlin"" } },
-var searchResult = await client.SearchAsync(
+ { id: 2, vector: [0.19, 0.81, 0.75, 0.11], payload: { city: ""London"" } },
- collectionName: ""test_collection"",
+ { id: 3, vector: [0.36, 0.55, 0.47, 0.94], payload: { city: ""Moscow"" } },
- vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
+ { id: 4, vector: [0.18, 0.01, 0.85, 0.80], payload: { city: ""New York"" } },
- filter: MatchKeyword(""city"", ""London""),
+ { id: 5, vector: [0.24, 0.18, 0.22, 0.44], payload: { city: ""Beijing"" } },
- limit: 3,
+ { id: 6, vector: [0.35, 0.08, 0.11, 0.44], payload: { city: ""Mumbai"" } },
- payloadSelector: true
+ ],
-);
+});
-Console.WriteLine(searchResult);
+console.debug(operationInfo);
```
-**Response:**
+```rust
+use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder};
-```python
-ScoredPoint(id=2, version=0, score=0.871, payload={""city"": ""London""}, vector=None)
+let points = vec![
-```
+ PointStruct::new(1, vec![0.05, 0.61, 0.76, 0.74], [(""city"", ""Berlin"".into())]),
+ PointStruct::new(2, vec![0.19, 0.81, 0.75, 0.11], [(""city"", ""London"".into())]),
+ PointStruct::new(3, vec![0.36, 0.55, 0.47, 0.94], [(""city"", ""Moscow"".into())]),
-```typescript
+ // ..truncated
-[
+];
- {
- id: 2,
- version: 0,
+let response = client
- score: 0.871,
+ .upsert_points(UpsertPointsBuilder::new(""test_collection"", points).wait(true))
- payload: { city: ""London"" },
+ .await?;
- vector: null,
- },
-];
+dbg!(response);
```
-```rust
+```java
-SearchResponse {
+import java.util.List;
- result: [
+import java.util.Map;
- ScoredPoint {
- id: Some(
- PointId {
+import static io.qdrant.client.PointIdFactory.id;
- point_id_options: Some(
+import static io.qdrant.client.ValueFactory.value;
- Num(
+import static io.qdrant.client.VectorsFactory.vectors;
- 2,
- ),
- ),
+import io.qdrant.client.grpc.Points.PointStruct;
- },
+import io.qdrant.client.grpc.Points.UpdateResult;
- ),
- payload: {
- ""city"": Value {
+UpdateResult operationInfo =
- kind: Some(
+ client
- StringValue(
+ .upsertAsync(
- ""London"",
+ ""test_collection"",
- ),
+ List.of(
- ),
+ PointStruct.newBuilder()
- },
+ .setId(id(1))
- },
+ .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f))
- score: 0.871,
+ .putAllPayload(Map.of(""city"", value(""Berlin"")))
- version: 0,
+ .build(),
- vectors: None,
+ PointStruct.newBuilder()
- },
+ .setId(id(2))
- ],
+ .setVectors(vectors(0.19f, 0.81f, 0.75f, 0.11f))
- time: 0.004001083,
+ .putAllPayload(Map.of(""city"", value(""London"")))
-}
+ .build(),
-```
+ PointStruct.newBuilder()
+ .setId(id(3))
+ .setVectors(vectors(0.36f, 0.55f, 0.47f, 0.94f))
-```java
+ .putAllPayload(Map.of(""city"", value(""Moscow"")))
-[id {
+ .build()))
- num: 2
+ // Truncated
-}
+ .get();
-payload {
- key: ""city""
- value {
+System.out.println(operationInfo);
- string_value: ""London""
+```
- }
-}
-score: 0.871
+```csharp
-version: 1
+using Qdrant.Client.Grpc;
-]
-```
+var operationInfo = await client.UpsertAsync(collectionName: ""test_collection"", points: new List<PointStruct>
+{
-```csharp
+ new()
-[
+ {
- {
+ Id = 1,
- ""id"": {
+ Vectors = new float[]
- ""num"": ""2""
+ {
- },
+ 0.05f, 0.61f, 0.76f, 0.74f
- ""payload"": {
+ },
- ""city"": {
+ Payload = {
- ""stringValue"": ""London""
+ [""city""] = ""Berlin""
- }
+ }
},
- ""score"": 0.871,
+ new()
- ""version"": ""7""
+ {
- }
+ Id = 2,
-]
+ Vectors = new float[]
-```
+ {
+ 0.19f, 0.81f, 0.75f, 0.11f
+ },
-
+ Payload = {
+ [""city""] = ""London""
+ }
-You have just conducted vector search. You loaded vectors into a database and queried the database with a vector of your own. Qdrant found the closest results and presented you with a similarity score.
+ },
+ new()
+ {
-## Next steps
+ Id = 3,
+ Vectors = new float[]
+ {
-Now you know how Qdrant works. Getting started with [Qdrant Cloud](../cloud/quickstart-cloud/) is just as easy. [Create an account](https://qdrant.to/cloud) and use our SaaS completely free. We will take care of infrastructure maintenance and software updates.
+ 0.36f, 0.55f, 0.47f, 0.94f
+ },
+ Payload = {
-To move onto some more complex examples of vector search, read our [Tutorials](../tutorials/) and create your own app with the help of our [Examples](../examples/).
+ [""city""] = ""Moscow""
+ }
+ },
-**Note:** There is another way of running Qdrant locally. If you are a Python developer, we recommend that you try Local Mode in [Qdrant Client](https://github.com/qdrant/qdrant-client), as it only takes a few moments to get setup.
-",documentation/quick-start.md
-"---
+ // Truncated
-#Delimiter files are used to separate the list of documentation pages into sections.
+});
-title: ""Getting Started""
-type: delimiter
-weight: 8 # Change this weight to change order of sections
+Console.WriteLine(operationInfo);
-sitemapExclude: True
+```
----",documentation/0-dl.md
-"---
-#Delimiter files are used to separate the list of documentation pages into sections.
-title: ""Integrations""
+```go
-type: delimiter
+import (
-weight: 30 # Change this weight to change order of sections
+ ""context""
-sitemapExclude: True
+ ""fmt""
----",documentation/2-dl.md
-"---
-title: Roadmap
-weight: 32
+ ""github.com/qdrant/go-client/qdrant""
-draft: true
+)
----
+operationInfo, err := client.Upsert(context.Background(), &qdrant.UpsertPoints{
-# Qdrant 2023 Roadmap
+ CollectionName: ""test_collection"",
+ Points: []*qdrant.PointStruct{
+ {
-Goals of the release:
+ Id: qdrant.NewIDNum(1),
+ Vectors: qdrant.NewVectors(0.05, 0.61, 0.76, 0.74),
+ Payload: qdrant.NewValueMap(map[string]any{""city"": ""Berlin""}),
-* **Maintain easy upgrades** - we plan to keep backward compatibility for at least one major version back.
+ },
- * That means that you can upgrade Qdrant without any downtime and without any changes in your client code within one major version.
+ {
- * Storage should be compatible between any two consequent versions, so you can upgrade Qdrant with automatic data migration between consecutive versions.
+ Id: qdrant.NewIDNum(2),
-* **Make billion-scale serving cheap** - qdrant already can serve billions of vectors, but we want to make it even more affordable.
+ Vectors: qdrant.NewVectors(0.19, 0.81, 0.75, 0.11),
-* **Easy scaling** - our plan is to make it easy to dynamically scale Qdrant, so you could go from 1 to 1B vectors seamlessly.
+ Payload: qdrant.NewValueMap(map[string]any{""city"": ""London""}),
-* **Various similarity search scenarios** - we want to support more similarity search scenarios, e.g. sparse search, grouping requests, diverse search, etc.
+ },
+ {
+ Id: qdrant.NewIDNum(3),
-## Milestones
+ Vectors: qdrant.NewVectors(0.36, 0.55, 0.47, 0.94),
+ Payload: qdrant.NewValueMap(map[string]any{""city"": ""Moscow""}),
+ },
-* :atom_symbol: Quantization support
+ // Truncated
- * [ ] Scalar quantization f32 -> u8 (4x compression)
+ },
- * [ ] Advanced quantization (8x and 16x compression)
+})
- * [ ] Support for binary vectors
+if err != nil {
+ panic(err)
+}
----
+fmt.Println(operationInfo)
+```
-* :arrow_double_up: Scalability
- * [ ] Automatic replication factor adjustment
+**Response:**
- * [ ] Automatic shard distribution on cluster scaling
- * [ ] Repartitioning support
+```python
+operation_id=0 status=<UpdateStatus.COMPLETED: 'completed'>
----
+```
-* :eyes: Search scenarios
+```typescript
- * [ ] Diversity search - search for vectors that are different from each other
+{ operation_id: 0, status: 'completed' }
- * [ ] Sparse vectors search - search for vectors with a small number of non-zero values
+```
- * [ ] Grouping requests - search within payload-defined groups
- * [ ] Different scenarios for recommendation API
+```rust
+PointsOperationResponse {
----
+ result: Some(
-
+ UpdateResult {
-* Additionally
+ operation_id: Some(
- * [ ] Extend full-text filtering support
+ 0,
- * [ ] Support for phrase queries
+ ),
- * [ ] Support for logical operators
+ status: Completed,
- * [ ] Simplify update of collection parameters
-",documentation/roadmap.md
-"---
+ },
-title: Interfaces
+ ),
-weight: 14
+ time: 0.00094027,
----
+}
+```
-# Interfaces
+```java
+operation_id: 0
-Qdrant supports these ""official"" clients.
+status: Completed
+```
-> **Note:** If you are using a language that is not listed here, you can use the REST API directly or generate a client for your language
-using [OpenAPI](https://github.com/qdrant/qdrant/blob/master/docs/redoc/master/openapi.json)
+```csharp
-or [protobuf](https://github.com/qdrant/qdrant/tree/master/lib/api/src/grpc/proto) definitions.
+{ ""operationId"": ""0"", ""status"": ""Completed"" }
+```
-## Client Libraries
-||Client Repository|Installation|Version|
+```go
-|-|-|-|-|
+operation_id:0 status:Acknowledged
-|[![python](/docs/misc/python.webp)](https://python-client.qdrant.tech/)|**[Python](https://github.com/qdrant/qdrant-client)** + **[(Client Docs)](https://python-client.qdrant.tech/)**|`pip install qdrant-client[fastembed]`|[Latest Release](https://github.com/qdrant/qdrant-client/releases)|
+```
-|![typescript](/docs/misc/ts.webp)|**[JavaScript / Typescript](https://github.com/qdrant/qdrant-js)**|`npm install @qdrant/js-client-rest`|[Latest Release](https://github.com/qdrant/qdrant-js/releases)|
-|![rust](/docs/misc/rust.webp)|**[Rust](https://github.com/qdrant/rust-client)**|`cargo add qdrant-client`|[Latest Release](https://github.com/qdrant/rust-client/releases)|
-|![golang](/docs/misc/go.webp)|**[Go](https://github.com/qdrant/go-client)**|`go get github.com/qdrant/go-client`|[Latest Release](https://github.com/qdrant/go-client)|
+## Run a query
-|![.net](/docs/misc/dotnet.webp)|**[.NET](https://github.com/qdrant/qdrant-dotnet)**|`dotnet add package Qdrant.Client`|[Latest Release](https://github.com/qdrant/qdrant-dotnet/releases)|
-|![java](/docs/misc/java.webp)|**[Java](https://github.com/qdrant/java-client)**|[Available on Maven Central](https://central.sonatype.com/artifact/io.qdrant/client)|[Latest Release](https://github.com/qdrant/java-client/releases)|
+Let's ask a basic question - Which of our stored vectors are most similar to the query vector `[0.2, 0.1, 0.9, 0.7]`?
+```python
-## API Reference
+search_result = client.query_points(
+ collection_name=""test_collection"", query=[0.2, 0.1, 0.9, 0.7], limit=3
+).points
-All interaction with Qdrant takes place via the REST API. We recommend using REST API if you are using Qdrant for the first time or if you are working on a prototype.
+print(search_result)
-|API|Documentation|
+```
-|-|-|
-| REST API |[OpenAPI Specification](https://qdrant.github.io/qdrant/redoc/index.html)|
-| gRPC API| [gRPC Documentation](https://github.com/qdrant/qdrant/blob/master/docs/grpc/docs.md)|
+```typescript
+let searchResult = await client.query(
+ ""test_collection"", {
-### gRPC Interface
+ query: [0.2, 0.1, 0.9, 0.7],
+ limit: 3
+});
-The gRPC methods follow the same principles as REST. For each REST endpoint, there is a corresponding gRPC method.
+console.debug(searchResult.points);
-As per the [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml), the gRPC interface is available on the specified port.
+```
-```yaml
+```rust
-service:
+use qdrant_client::qdrant::QueryPointsBuilder;
- grpc_port: 6334
-```
-
+let search_result = client
-
+ .query(
-Running the service inside of Docker will look like this:
+ QueryPointsBuilder::new(""test_collection"")
+ .query(vec![0.2, 0.1, 0.9, 0.7])
+ )
-```bash
+ .await?;
-docker run -p 6333:6333 -p 6334:6334 \
- -v $(pwd)/qdrant_storage:/qdrant/storage:z \
- qdrant/qdrant
+dbg!(search_result);
```
-**When to use gRPC:** The choice between gRPC and the REST API is a trade-off between convenience and speed. gRPC is a binary protocol and can be more challenging to debug. We recommend using gRPC if you are already familiar with Qdrant and are trying to optimize the performance of your application.
+```java
+import java.util.List;
-## Qdrant Web UI
+import io.qdrant.client.grpc.Points.ScoredPoint;
+import io.qdrant.client.grpc.Points.QueryPoints;
-Qdrant's Web UI is an intuitive and efficient graphic interface for your Qdrant Collections, REST API and data points.
+import static io.qdrant.client.QueryFactory.nearest;
-In the **Console**, you may use the REST API to interact with Qdrant, while in **Collections**, you can manage all the collections and upload Snapshots.
+List<ScoredPoint> searchResult =
-![Qdrant Web UI](/articles_data/qdrant-1.3.x/web-ui.png)
+ client.queryAsync(QueryPoints.newBuilder()
+ .setCollectionName(""test_collection"")
+ .setLimit(3)
-### Accessing the Web UI
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
+ .build()).get();
+
-First, run the Docker container:
+System.out.println(searchResult);
+```
-```bash
-docker run -p 6333:6333 -p 6334:6334 \
+```csharp
- -v $(pwd)/qdrant_storage:/qdrant/storage:z \
+var searchResult = await client.QueryAsync(
- qdrant/qdrant
+ collectionName: ""test_collection"",
-```
+ query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
+  limit: 3
+);
-The GUI is available at `http://localhost:6333/dashboard`
+Console.WriteLine(searchResult);
+```
+```go
+import (
-",documentation/interfaces.md
-"---
+ ""context""
-#Delimiter files are used to separate the list of documentation pages into sections.
+ ""fmt""
-title: ""Support""
-type: delimiter
-weight: 40 # Change this weight to change order of sections
+ ""github.com/qdrant/go-client/qdrant""
-sitemapExclude: True
+)
----",documentation/3-dl.md
-"---
-title: Practice Datasets
-weight: 41
+searchResult, err := client.Query(context.Background(), &qdrant.QueryPoints{
----
+ CollectionName: ""test_collection"",
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
+})
-# Common Datasets in Snapshot Format
+if err != nil {
+ panic(err)
+}
-You may find that creating embeddings from datasets is a very resource-intensive task.
-If you need a practice dataset, feel free to pick one of the ready-made snapshots on this page.
-These snapshots contain pre-computed vectors that you can easily import into your Qdrant instance.
+fmt.Println(searchResult)
+```
-## Available datasets
+**Response:**
-Our snapshots are usually generated from publicly available datasets, which are often used for
-non-commercial or academic purposes. The following datasets are currently available. Please click
+```json
-on a dataset name to see its detailed description.
+[
+ {
+ ""id"": 4,
-| Dataset | Model | Vector size | Documents | Size | Qdrant snapshot | HF Hub |
+ ""version"": 0,
-|--------------------------------------------|-----------------------------------------------------------------------------|-------------|-----------|--------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------|
+ ""score"": 1.362,
-| [Arxiv.org titles](#arxivorg-titles) | [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) | 768 | 2.3M | 7.1 GB | [Download](https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) |
+ ""payload"": null,
-| [Arxiv.org abstracts](#arxivorg-abstracts) | [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) | 768 | 2.3M | 8.4 GB | [Download](https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/arxiv-abstracts-instructorxl-embeddings) |
+ ""vector"": null
-| [Wolt food](#wolt-food) | [clip-ViT-B-32](https://huggingface.co/sentence-transformers/clip-ViT-B-32) | 512 | 1.7M | 7.9 GB | [Download](https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/wolt-food-clip-ViT-B-32-embeddings) |
+ },
+ {
+ ""id"": 1,
-Once you download a snapshot, you need to [restore it](/documentation/concepts/snapshots/#restore-snapshot)
+ ""version"": 0,
-using the Qdrant CLI upon startup or through the API.
+ ""score"": 1.273,
+ ""payload"": null,
+ ""vector"": null
-## Qdrant on Hugging Face
+ },
+ {
+ ""id"": 3,
-
+ }
+]
+```
-[Hugging Face](https://huggingface.co/) provides a platform for sharing and using ML models and
-datasets. [Qdrant](https://huggingface.co/Qdrant) is one of the organizations there! We aim to
-provide you with datasets containing neural embeddings that you can use to practice with Qdrant
+The results are returned in decreasing similarity order. Note that payload and vector data is missing in these results by default.
-and build your applications based on semantic search. **Please let us know if you'd like to see
+See [payload and vector in the result](../concepts/search/#payload-and-vector-in-the-result) on how to enable it.
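+
+As a minimal sketch (reusing the same `client` and `test_collection` from above), enabling payload and vector data in the response only requires the corresponding flags:
+
+```python
+# Illustrative example: return payload and vector data for each hit
+hits = client.query_points(
+    collection_name=""test_collection"",
+    query=[0.2, 0.1, 0.9, 0.7],
+    with_payload=True,   # include the JSON payload of each point
+    with_vectors=True,   # include the stored vector of each point
+    limit=3,
+).points
+
+for hit in hits:
+    print(hit.id, hit.score, hit.payload)
+```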
-a specific dataset!**
+## Add a filter
-If you are not familiar with [Hugging Face datasets](https://huggingface.co/docs/datasets/index),
-or would like to know how to combine it with Qdrant, please refer to the [tutorial](/documentation/tutorials/huggingface-datasets/).
+We can narrow down the results further by filtering by payload. Let's find the closest results that include ""London"".
-## Arxiv.org
+```python
+from qdrant_client.models import Filter, FieldCondition, MatchValue
-[Arxiv.org](https://arxiv.org) is a highly-regarded open-access repository of electronic preprints in multiple
-fields. Operated by Cornell University, arXiv allows researchers to share their findings with
-the scientific community and receive feedback before they undergo peer review for formal
+search_result = client.query_points(
-publication. Its archives host millions of scholarly articles, making it an invaluable resource
+ collection_name=""test_collection"",
-for those looking to explore the cutting edge of scientific research. With a high frequency of
+ query=[0.2, 0.1, 0.9, 0.7],
-daily submissions from scientists around the world, arXiv forms a comprehensive, evolving dataset
+ query_filter=Filter(
-that is ripe for mining, analysis, and the development of future innovations.
+ must=[FieldCondition(key=""city"", match=MatchValue(value=""London""))]
+ ),
+ with_payload=True,
+ limit=3,
+)
-
+print(search_result)
-### Arxiv.org titles
+```
-This dataset contains embeddings generated from the paper titles only. Each vector has a
+```typescript
-payload with the title used to create it, along with the DOI (Digital Object Identifier).
+searchResult = await client.query(""test_collection"", {
+ query: [0.2, 0.1, 0.9, 0.7],
+ filter: {
-```json
+ must: [{ key: ""city"", match: { value: ""London"" } }],
-{
+ },
- ""title"": ""Nash Social Welfare for Indivisible Items under Separable, Piecewise-Linear Concave Utilities"",
+ with_payload: true,
- ""DOI"": ""1612.05191""
+ limit: 3,
-}
+});
+
+console.debug(searchResult);
```
-The embeddings generated with InstructorXL model have been generated using the following
+```rust
-instruction:
+use qdrant_client::qdrant::{Condition, Filter, QueryPointsBuilder};
-> Represent the Research Paper title for retrieval; Input:
+let search_result = client
+ .query(
+ QueryPointsBuilder::new(""test_collection"")
-The following code snippet shows how to generate embeddings using the InstructorXL model:
+ .query(vec![0.2, 0.1, 0.9, 0.7])
+ .filter(Filter::must([Condition::matches(
+ ""city"",
-```python
+ ""London"".to_string(),
-from InstructorEmbedding import INSTRUCTOR
+ )]))
+ .with_payload(true),
+ )
-model = INSTRUCTOR(""hkunlp/instructor-xl"")
+ .await?;
-sentence = ""3D ActionSLAM: wearable person tracking in multi-floor environments""
-instruction = ""Represent the Research Paper title for retrieval; Input:""
-embeddings = model.encode([[instruction, sentence]])
+dbg!(search_result);
```
-The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot).
-
+```java
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+import static io.qdrant.client.WithPayloadSelectorFactory.enable;
-#### Importing the dataset
+List<ScoredPoint> searchResult =
-The easiest way to use the provided dataset is to recover it via the API by passing the
+ client.queryAsync(QueryPoints.newBuilder()
-URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). The following
+ .setCollectionName(""test_collection"")
-code snippet shows how to create a new collection and fill it with the snapshot data:
+ .setLimit(3)
+ .setFilter(Filter.newBuilder().addMust(matchKeyword(""city"", ""London"")))
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
-```http request
+ .setWithPayload(enable(true))
-PUT /collections/{collection_name}/snapshots/recover
+ .build()).get();
-{
- ""location"": ""https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot""
-}
+System.out.println(searchResult);
```
-### Arxiv.org abstracts
+```csharp
+using static Qdrant.Client.Grpc.Conditions;
-This dataset contains embeddings generated from the paper abstracts. Each vector has a
-payload with the abstract used to create it, along with the DOI (Digital Object Identifier).
+var searchResult = await client.QueryAsync(
+ collectionName: ""test_collection"",
+ query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
-```json
+ filter: MatchKeyword(""city"", ""London""),
-{
+ limit: 3,
- ""abstract"": ""Recently Cole and Gkatzelis gave the first constant factor approximation\nalgorithm for the problem of allocating indivisible items to agents, under\nadditive valuations, so as to maximize the Nash Social Welfare. We give\nconstant factor algorithms for a substantial generalization of their problem --\nto the case of separable, piecewise-linear concave utility functions. We give\ntwo such algorithms, the first using market equilibria and the second using the\ntheory of stable polynomials.\n In AGT, there is a paucity of methods for the design of mechanisms for the\nallocation of indivisible goods and the result of Cole and Gkatzelis seemed to\nbe taking a major step towards filling this gap. Our result can be seen as\nanother step in this direction.\n"",
+ payloadSelector: true
- ""DOI"": ""1612.05191""
+);
-}
-```
+Console.WriteLine(searchResult);
+```
-The embeddings generated with InstructorXL model have been generated using the following
-instruction:
+```go
+import (
-> Represent the Research Paper abstract for retrieval; Input:
+ ""context""
+ ""fmt""
-The following code snippet shows how to generate embeddings using the InstructorXL model:
+ ""github.com/qdrant/go-client/qdrant""
+)
-```python
-from InstructorEmbedding import INSTRUCTOR
+searchResult, err := client.Query(context.Background(), &qdrant.QueryPoints{
+ CollectionName: ""test_collection"",
-model = INSTRUCTOR(""hkunlp/instructor-xl"")
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
-sentence = ""The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train.""
+ Filter: &qdrant.Filter{
-instruction = ""Represent the Research Paper abstract for retrieval; Input:""
+ Must: []*qdrant.Condition{
-embeddings = model.encode([[instruction, sentence]])
+ qdrant.NewMatch(""city"", ""London""),
-```
+ },
+ },
+ WithPayload: qdrant.NewWithPayload(true),
-The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot).
+})
+if err != nil {
+ panic(err)
-#### Importing the dataset
+}
-The easiest way to use the provided dataset is to recover it via the API by passing the
+fmt.Println(searchResult)
-URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). The following
+```
-code snippet shows how to create a new collection and fill it with the snapshot data:
+**Response:**
-```http request
-PUT /collections/{collection_name}/snapshots/recover
-{
+```json
- ""location"": ""https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot""
+[
-}
+ {
-```
+ ""id"": 2,
+ ""version"": 0,
+ ""score"": 0.871,
-## Wolt food
+ ""payload"": {
+ ""city"": ""London""
+ },
-Our [Food Discovery demo](https://food-discovery.qdrant.tech/) relies on the dataset of
+ ""vector"": null
-food images from the Wolt app. Each point in the collection represents a dish with a single
+ }
-image. The image is represented as a vector of 512 float numbers. There is also a JSON
-
-payload attached to each point, which looks similar to this:
+]
+```
-```json
-{
+
- ""cafe"": {
- ""address"": ""VGX7+6R2 Vecchia Napoli, Valletta"",
- ""categories"": [""italian"", ""pasta"", ""pizza"", ""burgers"", ""mediterranean""],
+You have just conducted vector search. You loaded vectors into a database and queried the database with a vector of your own. Qdrant found the closest results and presented you with a similarity score.
- ""location"": {""lat"": 35.8980154, ""lon"": 14.5145106},
- ""menu_id"": ""610936a4ee8ea7a56f4a372a"",
- ""name"": ""Vecchia Napoli Is-Suq Tal-Belt"",
+## Next steps
- ""rating"": 9,
- ""slug"": ""vecchia-napoli-skyparks-suq-tal-belt""
- },
+Now you know how Qdrant works. Getting started with [Qdrant Cloud](../cloud/quickstart-cloud/) is just as easy. [Create an account](https://qdrant.to/cloud) and use our SaaS completely free. We will take care of infrastructure maintenance and software updates.
- ""description"": ""Tomato sauce, mozzarella fior di latte, crispy guanciale, Pecorino Romano cheese and a hint of chilli"",
- ""image"": ""https://wolt-menu-images-cdn.wolt.com/menu-images/610936a4ee8ea7a56f4a372a/005dfeb2-e734-11ec-b667-ced7a78a5abd_l_amatriciana_pizza_joel_gueller1.jpeg"",
- ""name"": ""L'Amatriciana""
+To move onto some more complex examples of vector search, read our [Tutorials](../tutorials/) and create your own app with the help of our [Examples](../examples/).
-}
-```
+**Note:** There is another way of running Qdrant locally. If you are a Python developer, we recommend that you try Local Mode in [Qdrant Client](https://github.com/qdrant/qdrant-client), as it only takes a few moments to get set up.
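+
+As a rough sketch, Local Mode keeps everything in-process - either fully in memory or persisted to a local folder (the path below is only an example):
+
+```python
+from qdrant_client import QdrantClient
+
+# In-memory instance, handy for tests and quick experiments
+client = QdrantClient("":memory:"")
+
+# Or persist the data to disk instead (example path)
+# client = QdrantClient(path=""./qdrant_local_db"")
+```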
+",documentation/quickstart.md
+"---
+title: Qdrant Cloud API
-The embeddings generated with clip-ViT-B-32 model have been generated using the following
+weight: 10
-code snippet:
+---
+# Qdrant Cloud API
-```python
-from PIL import Image
+The Qdrant Cloud API lets you manage Cloud accounts and their respective Qdrant clusters. You can use this API to manage your clusters, authentication methods, and cloud configurations.
-from sentence_transformers import SentenceTransformer
+| REST API | Documentation |
-image_path = ""5dbfd216-5cce-11eb-8122-de94874ad1c8_ns_takeaway_seelachs_ei_baguette.jpeg""
+| -------- | ------------------------------------------------------------------------------------ |
+| v.0.1.0 | [OpenAPI Specification](https://cloud.qdrant.io/pa/v1/docs) |
-model = SentenceTransformer(""clip-ViT-B-32"")
-embedding = model.encode(Image.open(image_path))
+**Note:** This is not the Qdrant REST API. For core product APIs & SDKs, see our list of [interfaces](/documentation/interfaces/).
-```
+## Authentication: Connecting to Cloud API
-The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot).
+To interact with the Qdrant Cloud API, you must authenticate using an API key. Each request to the API must include the API key in the **Authorization** header. The API key acts as a bearer token and grants access to your account’s resources.
-#### Importing the dataset
+You can create a Cloud API key in the Cloud Console UI. Go to **Access Management** > **Qdrant Cloud API Keys**.
+![Authentication](/documentation/cloud/authentication.png)
-The easiest way to use the provided dataset is to recover it via the API by passing the
-URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). The following
+**Note:** Ensure that the API key is kept secure and not exposed in public repositories or logs. Once authenticated, the API allows you to manage clusters, collections, and perform other operations available to your account.
-code snippet shows how to create a new collection and fill it with the snapshot data:
+## Sample API Request
-```http request
-PUT /collections/{collection_name}/snapshots/recover
-{
+Here's an example of a basic request to **list all clusters** in your Qdrant Cloud account:
- ""location"": ""https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot""
-}
-```
-",documentation/datasets.md
-"---
+```bash
-#Delimiter files are used to separate the list of documentation pages into sections.
+curl -X 'GET' \
-title: ""User Manual""
+ 'https://cloud.qdrant.io/pa/v1/accounts/<account-id>/clusters' \
-type: delimiter
+ -H 'accept: application/json' \
-weight: 20 # Change this weight to change order of sections
+ -H 'Authorization: <your-api-key>'
-sitemapExclude: True
+```
----",documentation/1-dl.md
-"---
-title: Qdrant Documentation
-weight: 10
+This request will return a list of clusters associated with your account in JSON format.
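+
+The same call can be made from code. Here is a minimal Python sketch using the `requests` library; the account ID and API key placeholders are yours to fill in, and the header format simply mirrors the curl example above:
+
+```python
+import requests
+
+ACCOUNT_ID = ""<account-id>""    # your Qdrant Cloud account ID
+API_KEY = ""<your-api-key>""     # a Cloud API key from Access Management
+
+response = requests.get(
+    f""https://cloud.qdrant.io/pa/v1/accounts/{ACCOUNT_ID}/clusters"",
+    headers={""accept"": ""application/json"", ""Authorization"": API_KEY},
+)
+print(response.json())
+```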
----
-# Documentation
+## Cluster Management
+Use these endpoints to create and manage your Qdrant database clusters. The API supports fine-grained control over cluster resources (CPU, RAM, disk), node configurations, tolerations, and other operational characteristics across all cloud providers (AWS, GCP, Azure) and their respective regions in Qdrant Cloud, as well as Hybrid Cloud.
-**Qdrant (read: quadrant)** is a vector similarity search engine. Use our documentation to develop a production-ready service with a convenient API to store, search, and manage vectors with an additional payload. Qdrant's expanding features allow for all sorts of neural network or semantic-based matching, faceted search, and other applications.
+ - **Get Cluster by ID**: Retrieve detailed information about a specific cluster using the cluster ID and associated account ID.
+ - **Delete Cluster**: Remove a cluster, with optional deletion of backups.
+ - **Update Cluster**: Apply modifications to a cluster's configuration.
-## First-Time Users:
+ - **List Clusters**: Get all clusters associated with a specific account, filtered by region or other criteria.
+ - **Create Cluster**: Add new clusters to the account with configurable parameters such as nodes, cloud provider, and regions.
+ - **Get Booking**: Manage hosting across various cloud providers (AWS, GCP, Azure) and their respective regions.
-There are three ways to use Qdrant:
+## Cluster Authentication Management
-1. [**Run a Docker image**](quick-start/) if you don't have a Python development environment. Setup a local Qdrant server and storage in a few moments.
+Use these endpoints to manage your cluster API keys.
-2. [**Get the Python client**](https://github.com/qdrant/qdrant-client) if you're familiar with Python. Just `pip install qdrant-client`. The client also supports an in-memory database.
+ - **List API Keys**: Retrieve all API keys associated with an account.
-3. [**Spin up a Qdrant Cloud cluster:**](cloud/) the recommended method to run Qdrant in production. Read [Quickstart](cloud/quickstart-cloud/) to setup your first instance.
+ - **Create API Key**: Generate a new API key for programmatic access.
+ - **Delete API Key**: Revoke access by deleting a specific API key.
+ - **Update API Key**: Modify attributes of an existing API key.
-### Recommended Workflow:
-![Local mode workflow](https://raw.githubusercontent.com/qdrant/qdrant-client/master/docs/images/try-develop-deploy.png)
+",documentation/qdrant-cloud-api.md
+"---
+#Delimiter files are used to separate the list of documentation pages into sections.
+title: ""Getting Started""
-First, try Qdrant locally using the [Qdrant Client](https://github.com/qdrant/qdrant-client) and with the help of our [Tutorials](tutorials/) and Guides. Develop a sample app from our [Examples](examples/) list and try it using a [Qdrant Docker](guides/installation/) container. Then, when you are ready for production, deploy to a Free Tier [Qdrant Cloud](cloud/) cluster.
+type: delimiter
+weight: 1 # Change this weight to change order of sections
+sitemapExclude: True
-### Try Qdrant with Practice Data:
+_build:
+ publishResources: false
+ render: never
-You may always use our [Practice Datasets](datasets/) to build with Qdrant. This page will be regularly updated with dataset snapshots you can use to bootstrap complete projects.
+---",documentation/0-dl.md
+"---
+#Delimiter files are used to separate the list of documentation pages into sections.
+title: ""Integrations""
-## Popular Topics:
+type: delimiter
+weight: 14 # Change this weight to change order of sections
+sitemapExclude: True
-| Tutorial | Description | Tutorial| Description |
+_build:
-|----------------------------------------------------|----------------------------------------------|---------|------------------|
+ publishResources: false
-| [Installation](guides/installation/) | Different ways to install Qdrant. | [Collections](concepts/collections/) | Learn about the central concept behind Qdrant. |
+ render: never
-| [Configuration](guides/configuration/) | Update the default configuration. | [Bulk Upload](tutorials/bulk-upload/) | Efficiently upload a large number of vectors. |
+---",documentation/2-dl.md
+"---
-| [Optimization](tutorials/optimize/) | Optimize Qdrant's resource usage. | [Multitenancy](tutorials/multiple-partitions/) | Setup Qdrant for multiple independent users. |
+title: Roadmap
+weight: 32
+draft: true
-## Common Use Cases:
+---
-Qdrant is ideal for deploying applications based on the matching of embeddings produced by neural network encoders. Check out the [Examples](examples/) section to learn more about common use cases. Also, you can visit the [Tutorials](tutorials/) page to learn how to work with Qdrant in different ways.
+# Qdrant 2023 Roadmap
-| Use Case | Description | Stack |
+Goals of the release:
-|-----------------------|----------------------------------------------|--------|
-| [Semantic Search for Beginners](tutorials/search-beginners/) | Build a search engine locally with our most basic instruction set. | Qdrant |
-| [Build a Simple Neural Search](tutorials/neural-search/) | Build and deploy a neural search. [Check out the live demo app.](https://demo.qdrant.tech/#/) | Qdrant, BERT, FastAPI |
+* **Maintain easy upgrades** - we plan to keep backward compatibility for at least one major version back.
-| [Build a Search with Aleph Alpha](tutorials/aleph-alpha-search/) | Build a simple semantic search that combines text and image data. | Qdrant, Aleph Alpha |
+ * That means that you can upgrade Qdrant without any downtime and without any changes in your client code within one major version.
-| [Developing Recommendations Systems](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_getting_started/getting_started.ipynb) | Learn how to get started building semantic search and recommendation systems. | Qdrant |
+ * Storage should be compatible between any two consequent versions, so you can upgrade Qdrant with automatic data migration between consecutive versions.
-| [Search and Recommend Newspaper Articles](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_text_data/qdrant_and_text_data.ipynb) | Work with text data to develop a semantic search and a recommendation engine for news articles. | Qdrant |
+* **Make billion-scale serving cheap** - Qdrant can already serve billions of vectors, but we want to make it even more affordable.
-| [Recommendation System for Songs](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_audio_data/03_qdrant_101_audio.ipynb) | Use Qdrant to develop a music recommendation engine based on audio embeddings. | Qdrant |
+* **Easy scaling** - our plan is to make it easy to dynamically scale Qdrant, so you could go from 1 to 1B vectors seamlessly.
-| [Image Comparison System for Skin Conditions](https://colab.research.google.com/github/qdrant/examples/blob/master/qdrant_101_image_data/04_qdrant_101_cv.ipynb) | Use Qdrant to compare challenging images with labels representing different skin diseases. | Qdrant |
+* **Various similarity search scenarios** - we want to support more similarity search scenarios, e.g. sparse search, grouping requests, diverse search, etc.
-| [Question and Answer System with LlamaIndex](https://githubtocolab.com/qdrant/examples/blob/master/llama_index_recency/Qdrant%20and%20LlamaIndex%20%E2%80%94%20A%20new%20way%20to%20keep%20your%20Q%26A%20systems%20up-to-date.ipynb) | Combine Qdrant and LlamaIndex to create a self-updating Q&A system. | Qdrant, LlamaIndex, Cohere |
-| [Extractive QA System](https://githubtocolab.com/qdrant/examples/blob/master/extractive_qa/extractive-question-answering.ipynb) | Extract answers directly from context to generate highly relevant answers. | Qdrant |
-| [Ecommerce Reverse Image Search](https://githubtocolab.com/qdrant/examples/blob/master/ecommerce_reverse_image_search/ecommerce-reverse-image-search.ipynb) | Accept images as search queries to receive semantically appropriate answers. | Qdrant | ",documentation/_index.md
-"---
+## Milestones
-title: Contribution Guidelines
-weight: 35
-draft: true
+* :atom_symbol: Quantization support
----
+ * [ ] Scalar quantization f32 -> u8 (4x compression)
+ * [ ] Advanced quantization (8x and 16x compression)
+ * [ ] Support for binary vectors
-# How to contribute
+---
-If you are a Qdrant user - Data Scientist, ML Engineer, or MLOps, the best contribution would be the feedback on your experience with Qdrant.
-Let us know whenever you have a problem, face an unexpected behavior, or see a lack of documentation.
-You can do it in any convenient way - create an [issue](https://github.com/qdrant/qdrant/issues), start a [discussion](https://github.com/qdrant/qdrant/discussions), or drop up a [message](https://discord.gg/tdtYvXjC4h).
+* :arrow_double_up: Scalability
-If you use Qdrant or Metric Learning in your projects, we'd love to hear your story! Feel free to share articles and demos in our community.
+ * [ ] Automatic replication factor adjustment
+ * [ ] Automatic shard distribution on cluster scaling
+ * [ ] Repartitioning support
-For those familiar with Rust - check out our [contribution guide](https://github.com/qdrant/qdrant/blob/master/CONTRIBUTING.md).
-If you have problems with code or architecture understanding - reach us at any time.
-Feeling confident and want to contribute more? - Come to [work with us](https://qdrant.join.com/)!",documentation/contribution-guidelines.md
-"---
+---
-title: API Reference
-weight: 20
-type: external-link
+* :eyes: Search scenarios
-external_url: https://qdrant.github.io/qdrant/redoc/index.html
+ * [ ] Diversity search - search for vectors that are different from each other
-sitemapExclude: True
+ * [ ] Sparse vectors search - search for vectors with a small number of non-zero values
----",documentation/api-reference.md
-"---
+ * [ ] Grouping requests - search within payload-defined groups
-title: OpenAI
+ * [ ] Different scenarios for recommendation API
-weight: 800
-aliases: [ ../integrations/openai/ ]
---
+
+* Additionally
-# OpenAI
+ * [ ] Extend full-text filtering support
+ * [ ] Support for phrase queries
+ * [ ] Support for logical operators
-Qdrant can also easily work with [OpenAI embeddings](https://platform.openai.com/docs/guides/embeddings/embeddings).
+ * [ ] Simplify update of collection parameters
+",documentation/roadmap.md
+"---
+#Delimiter files are used to separate the list of documentation pages into sections.
+title: ""Managed Services""
-There is an official OpenAI Python package that simplifies obtaining them, and it might be installed with pip:
+type: delimiter
+weight: 7 # Change this weight to change order of sections
+sitemapExclude: True
-```bash
+_build:
-pip install openai
+ publishResources: false
-```
+ render: never
+---",documentation/4-dl.md
+"---
+#Delimiter files are used to separate the list of documentation pages into sections.
-Once installed, the package exposes the method allowing to retrieve the embedding for given text. OpenAI requires an API key that has to be provided either as an environmental variable `OPENAI_API_KEY` or set in the source code directly, as presented below:
+title: ""Examples""
+type: delimiter
+weight: 17 # Change this weight to change order of sections
-```python
+sitemapExclude: True
-import openai
+_build:
-import qdrant_client
+ publishResources: false
+ render: never
+---",documentation/3-dl.md
+"---
-from qdrant_client.http.models import Batch
+title: Practice Datasets
+weight: 23
+---
-# Choose one of the available models:
-# https://platform.openai.com/docs/models/embeddings
-embedding_model = ""text-embedding-ada-002""
+# Common Datasets in Snapshot Format
-openai_client = openai.Client(
+You may find that creating embeddings from datasets is a very resource-intensive task.
- api_key=""<< your_api_key >>""
+If you need a practice dataset, feel free to pick one of the ready-made snapshots on this page.
-)
+These snapshots contain pre-computed vectors that you can easily import into your Qdrant instance.
-response = openai_client.embeddings.create(
- input=""The best vector database"",
- model=embedding_model,
+## Available datasets
-)
+Our snapshots are usually generated from publicly available datasets, which are often used for
-qdrant_client = qdrant_client.QdrantClient()
+non-commercial or academic purposes. The following datasets are currently available. Please click
-qdrant_client.upsert(
+on a dataset name to see its detailed description.
- collection_name=""MyCollection"",
- points=Batch(
- ids=[1],
+| Dataset | Model | Vector size | Documents | Size | Qdrant snapshot | HF Hub |
- vectors=[response.data[0].embedding],
+|--------------------------------------------|-----------------------------------------------------------------------------|-------------|-----------|--------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------|
- ),
+| [Arxiv.org titles](#arxivorg-titles) | [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) | 768 | 2.3M | 7.1 GB | [Download](https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) |
-)
+| [Arxiv.org abstracts](#arxivorg-abstracts) | [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) | 768 | 2.3M | 8.4 GB | [Download](https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/arxiv-abstracts-instructorxl-embeddings) |
-```
+| [Wolt food](#wolt-food) | [clip-ViT-B-32](https://huggingface.co/sentence-transformers/clip-ViT-B-32) | 512 | 1.7M | 7.9 GB | [Download](https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/wolt-food-clip-ViT-B-32-embeddings) |
-",documentation/embeddings/openai.md
-"---
-title: AWS Bedrock
+Once you download a snapshot, you need to [restore it](/documentation/concepts/snapshots/#restore-snapshot)
-weight: 1000
+using the Qdrant CLI upon startup or through the API.
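+
+As an illustrative sketch (the collection name is a placeholder, the snapshot URL comes from the table below), recovery can also be triggered from the Python client:
+
+```python
+from qdrant_client import QdrantClient
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Create or overwrite a collection from a remote snapshot URL
+client.recover_snapshot(
+    collection_name=""arxiv-titles"",
+    location=""https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot"",
+)
+```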
----
+## Qdrant on Hugging Face
-# Bedrock Embeddings
+
-You can use [AWS Bedrock](https://aws.amazon.com/bedrock/) with Qdrant. AWS Bedrock supports multiple [embedding model providers](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html).
+
+
+
-You'll need the following information from your AWS account:
+
-- Region
+[Hugging Face](https://huggingface.co/) provides a platform for sharing and using ML models and
-- Access key ID
+datasets. [Qdrant](https://huggingface.co/Qdrant) is one of the organizations there! We aim to
-- Secret key
+provide you with datasets containing neural embeddings that you can use to practice with Qdrant
+and build your applications based on semantic search. **Please let us know if you'd like to see
+a specific dataset!**
-To configure your credentials, review the following AWS article: [How do I create an AWS access key](https://repost.aws/knowledge-center/create-access-key).
+If you are not familiar with [Hugging Face datasets](https://huggingface.co/docs/datasets/index),
-With the following code sample, you can generate embeddings using the [Titan Embeddings G1 - Text model](https://docs.aws.amazon.com/bedrock/latest/userguide/titan-embedding-models.html) which produces sentence embeddings of size 1536.
+or would like to know how to combine it with Qdrant, please refer to the [tutorial](/documentation/tutorials/huggingface-datasets/).
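+
+As a quick, illustrative sketch (the dataset name comes from the table above), streaming one of these embedding datasets with the `datasets` library looks roughly like this:
+
+```python
+from datasets import load_dataset
+
+# Stream one of the Qdrant embedding datasets from the Hugging Face Hub
+dataset = load_dataset(
+    ""Qdrant/arxiv-titles-instructorxl-embeddings"",
+    split=""train"",
+    streaming=True,
+)
+
+# Peek at a single record (fields vary per dataset)
+print(next(iter(dataset)))
+```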
-```python
+## Arxiv.org
-# Install the required dependencies
-# pip install boto3 qdrant_client
+[Arxiv.org](https://arxiv.org) is a highly-regarded open-access repository of electronic preprints in multiple
+fields. Operated by Cornell University, arXiv allows researchers to share their findings with
-import json
+the scientific community and receive feedback before they undergo peer review for formal
-import boto3
+publication. Its archives host millions of scholarly articles, making it an invaluable resource
+for those looking to explore the cutting edge of scientific research. With a high frequency of
+daily submissions from scientists around the world, arXiv forms a comprehensive, evolving dataset
-from qdrant_client import QdrantClient, models
+that is ripe for mining, analysis, and the development of future innovations.
-session = boto3.Session()
+
-bedrock_client = session.client(
- ""bedrock-runtime"",
- region_name="""",
+### Arxiv.org titles
- aws_access_key_id="""",
- aws_secret_access_key="""",
-)
+This dataset contains embeddings generated from the paper titles only. Each vector has a
+payload with the title used to create it, along with the DOI (Digital Object Identifier).
-qdrant_client = QdrantClient(location=""http://localhost:6333"")
+```json
+{
-qdrant_client.create_collection(
+ ""title"": ""Nash Social Welfare for Indivisible Items under Separable, Piecewise-Linear Concave Utilities"",
- ""{collection_name}"",
+ ""DOI"": ""1612.05191""
- vectors_config=models.VectorParams(size=1536, distance=models.Distance.COSINE),
+}
-)
+```
-body = json.dumps({""inputText"": ""Some text to generate embeddings for""})
+The embeddings generated with InstructorXL model have been generated using the following
+instruction:
-response = bedrock_client.invoke_model(
- body=body,
+> Represent the Research Paper title for retrieval; Input:
- modelId=""amazon.titan-embed-text-v1"",
- accept=""application/json"",
- contentType=""application/json"",
+The following code snippet shows how to generate embeddings using the InstructorXL model:
-)
+```python
-response_body = json.loads(response.get(""body"").read())
+from InstructorEmbedding import INSTRUCTOR
-qdrant_client.upsert(
+model = INSTRUCTOR(""hkunlp/instructor-xl"")
- ""{collection_name}"",
+sentence = ""3D ActionSLAM: wearable person tracking in multi-floor environments""
- points=[models.PointStruct(id=1, vector=response_body[""embedding""])],
+instruction = ""Represent the Research Paper title for retrieval; Input:""
-)
+embeddings = model.encode([[instruction, sentence]])
```
-```javascript
+The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot).
-// Install the required dependencies
-// npm install @aws-sdk/client-bedrock-runtime @qdrant/js-client-rest
+#### Importing the dataset
-import {
- BedrockRuntimeClient,
+The easiest way to use the provided dataset is to recover it via the API by passing the
- InvokeModelCommand,
+URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). The following
-} from ""@aws-sdk/client-bedrock-runtime"";
+code snippet shows how to create a new collection and fill it with the snapshot data:
-import { QdrantClient } from '@qdrant/js-client-rest';
+```http request
-const main = async () => {
+PUT /collections/{collection_name}/snapshots/recover
- const bedrockClient = new BedrockRuntimeClient({
+{
- region: """",
+ ""location"": ""https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot""
- credentials: {
+}
- accessKeyId: """",,
+```
- secretAccessKey: """",
- },
- });
+### Arxiv.org abstracts
- const qdrantClient = new QdrantClient({ url: 'http://localhost:6333' });
+This dataset contains embeddings generated from the paper abstracts. Each vector has a
+payload with the abstract used to create it, along with the DOI (Digital Object Identifier).
- await qdrantClient.createCollection(""{collection_name}"", {
- vectors: {
+```json
- size: 1536,
+{
- distance: 'Cosine',
+ ""abstract"": ""Recently Cole and Gkatzelis gave the first constant factor approximation\nalgorithm for the problem of allocating indivisible items to agents, under\nadditive valuations, so as to maximize the Nash Social Welfare. We give\nconstant factor algorithms for a substantial generalization of their problem --\nto the case of separable, piecewise-linear concave utility functions. We give\ntwo such algorithms, the first using market equilibria and the second using the\ntheory of stable polynomials.\n In AGT, there is a paucity of methods for the design of mechanisms for the\nallocation of indivisible goods and the result of Cole and Gkatzelis seemed to\nbe taking a major step towards filling this gap. Our result can be seen as\nanother step in this direction.\n"",
- }
+ ""DOI"": ""1612.05191""
- });
+}
+```
- const response = await bedrockClient.send(
- new InvokeModelCommand({
+The embeddings generated with InstructorXL model have been generated using the following
- modelId: ""amazon.titan-embed-text-v1"",
+instruction:
- body: JSON.stringify({
- inputText: ""Some text to generate embeddings for"",
- }),
+> Represent the Research Paper abstract for retrieval; Input:
- contentType: ""application/json"",
- accept: ""application/json"",
- })
+The following code snippet shows how to generate embeddings using the InstructorXL model:
- );
+```python
- const body = new TextDecoder().decode(response.body);
+from InstructorEmbedding import INSTRUCTOR
- await qdrantClient.upsert(""{collection_name}"", {
+model = INSTRUCTOR(""hkunlp/instructor-xl"")
- points: [
+sentence = ""The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train.""
- {
+instruction = ""Represent the Research Paper abstract for retrieval; Input:""
- id: 1,
+embeddings = model.encode([[instruction, sentence]])
- vector: JSON.parse(body).embedding,
+```
- },
- ],
- });
+The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot).
-}
+#### Importing the dataset
-main();
-```
-",documentation/embeddings/bedrock.md
-"---
-title: Aleph Alpha
+The easiest way to use the provided dataset is to recover it via the API by passing the
-weight: 900
+URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). The following
-aliases: [ ../integrations/aleph-alpha/ ]
+code snippet shows how to create a new collection and fill it with the snapshot data:
----
+```http request
-Aleph Alpha is a multimodal and multilingual embeddings' provider. Their API allows creating the embeddings for text and images, both
+PUT /collections/{collection_name}/snapshots/recover
-in the same latent space. They maintain an [official Python client](https://github.com/Aleph-Alpha/aleph-alpha-client) that might be
+{
-installed with pip:
+ ""location"": ""https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot""
+}
+```
-```bash
-pip install aleph-alpha-client
-```
+## Wolt food
-There is both synchronous and asynchronous client available. Obtaining the embeddings for an image and storing it into Qdrant might
+Our [Food Discovery demo](https://food-discovery.qdrant.tech/) relies on the dataset of
-be done in the following way:
+food images from the Wolt app. Each point in the collection represents a dish with a single
+image. The image is represented as a vector of 512 float numbers. There is also a JSON
+payload attached to each point, which looks similar to this:
-```python
-import qdrant_client
+```json
+{
-from aleph_alpha_client import (
+ ""cafe"": {
- Prompt,
+ ""address"": ""VGX7+6R2 Vecchia Napoli, Valletta"",
- AsyncClient,
+ ""categories"": [""italian"", ""pasta"", ""pizza"", ""burgers"", ""mediterranean""],
- SemanticEmbeddingRequest,
+ ""location"": {""lat"": 35.8980154, ""lon"": 14.5145106},
- SemanticRepresentation,
+ ""menu_id"": ""610936a4ee8ea7a56f4a372a"",
- ImagePrompt
+ ""name"": ""Vecchia Napoli Is-Suq Tal-Belt"",
-)
+ ""rating"": 9,
-from qdrant_client.http.models import Batch
+ ""slug"": ""vecchia-napoli-skyparks-suq-tal-belt""
+ },
+ ""description"": ""Tomato sauce, mozzarella fior di latte, crispy guanciale, Pecorino Romano cheese and a hint of chilli"",
-aa_token = ""<< your_token >>""
+ ""image"": ""https://wolt-menu-images-cdn.wolt.com/menu-images/610936a4ee8ea7a56f4a372a/005dfeb2-e734-11ec-b667-ced7a78a5abd_l_amatriciana_pizza_joel_gueller1.jpeg"",
-model = ""luminous-base""
+ ""name"": ""L'Amatriciana""
+}
+```
-qdrant_client = qdrant_client.QdrantClient()
-async with AsyncClient(token=aa_token) as client:
- prompt = ImagePrompt.from_file(""./path/to/the/image.jpg"")
+The embeddings generated with clip-ViT-B-32 model have been generated using the following
- prompt = Prompt.from_image(prompt)
+code snippet:
- query_params = {
+```python
- ""prompt"": prompt,
+from PIL import Image
- ""representation"": SemanticRepresentation.Symmetric,
+from sentence_transformers import SentenceTransformer
- ""compress_to_size"": 128,
- }
- query_request = SemanticEmbeddingRequest(**query_params)
+image_path = ""5dbfd216-5cce-11eb-8122-de94874ad1c8_ns_takeaway_seelachs_ei_baguette.jpeg""
- query_response = await client.semantic_embed(
- request=query_request, model=model
- )
+model = SentenceTransformer(""clip-ViT-B-32"")
-
+embedding = model.encode(Image.open(image_path))
- qdrant_client.upsert(
+```
- collection_name=""MyCollection"",
- points=Batch(
- ids=[1],
+The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot).
- vectors=[query_response.embedding],
- )
- )
+#### Importing the dataset
-```
+The easiest way to use the provided dataset is to recover it via the API by passing the
-If we wanted to create text embeddings with the same model, we wouldn't use `ImagePrompt.from_file`, but simply provide the input
+URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). The following
-text into the `Prompt.from_text` method.
-",documentation/embeddings/aleph-alpha.md
-"---
+code snippet shows how to create a new collection and fill it with the snapshot data:
-title: Cohere
-weight: 700
-aliases: [ ../integrations/cohere/ ]
+```http request
----
+PUT /collections/{collection_name}/snapshots/recover
+{
+ ""location"": ""https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot""
-# Cohere
+}
+```
+",documentation/datasets.md
+"---
+#Delimiter files are used to separate the list of documentation pages into sections.
-Qdrant is compatible with Cohere [co.embed API](https://docs.cohere.ai/reference/embed) and its official Python SDK that
+title: ""User Manual""
-might be installed as any other package:
+type: delimiter
+weight: 10 # Change this weight to change order of sections
+sitemapExclude: True
-```bash
+_build:
-pip install cohere
+ publishResources: false
-```
+ render: never
+---",documentation/1-dl.md
+"---
+#Delimiter files are used to separate the list of documentation pages into sections.
-The embeddings returned by co.embed API might be used directly in the Qdrant client's calls:
+title: ""Support""
+type: delimiter
+weight: 21 # Change this weight to change order of sections
-```python
+sitemapExclude: True
-import cohere
+_build:
-import qdrant_client
+ publishResources: false
+ render: never
+---",documentation/5-dl.md
+"---
-from qdrant_client.http.models import Batch
+title: Home
+weight: 2
+hideTOC: true
-cohere_client = cohere.Client(""<< your_api_key >>"")
+---
-qdrant_client = qdrant_client.QdrantClient()
+# Documentation
-qdrant_client.upsert(
- collection_name=""MyCollection"",
- points=Batch(
+Qdrant is an AI-native vector database and a semantic search engine. You can use it to extract meaningful information from unstructured data. Want to see how it works? [Clone this repo now](https://github.com/qdrant/qdrant_demo/) and build a search engine in five minutes.
- ids=[1],
- vectors=cohere_client.embed(
- model=""large"",
+|||
- texts=[""The best vector database""],
+|-:|:-|
- ).embeddings,
+|[Cloud Quickstart](/documentation/quickstart-cloud/)|[Local Quickstart](/documentation/quick-start/)|
- ),
-)
-```
+## Ready to start developing?
-If you are interested in seeing an end-to-end project created with co.embed API and Qdrant, please check out the
-""[Question Answering as a Service with Cohere and Qdrant](https://qdrant.tech/articles/qa-with-cohere-and-qdrant/)"" article.
+***
Qdrant is open-source and can be self-hosted. However, the quickest way to get started is with our [free tier](https://qdrant.to/cloud) on Qdrant Cloud. It scales easily and provides an UI where you can interact with data.
***
-## Embed v3
+[![Hybrid Cloud](/docs/homepage/cloud-cta.png)](https://qdrant.to/cloud)
-Embed v3 is a new family of Cohere models, released in November 2023. The new models require passing an additional
-parameter to the API call: `input_type`. It determines the type of task you want to use the embeddings for.
+## Qdrant's most popular features:
+||||
+|:-|:-|:-|
-- `input_type=""search_document""` - for documents to store in Qdrant
+|[Filtrable HNSW](/documentation/filtering/) Single-stage payload filtering | [Recommendations & Context Search](/documentation/concepts/explore/#explore-the-data) Exploratory advanced search| [Pure-Vector Hybrid Search](/documentation/hybrid-queries/) Full text and semantic search in one|
-- `input_type=""search_query""` - for search queries to find the most relevant documents
+|[Multitenancy](/documentation/guides/multiple-partitions/) Payload-based partitioning|[Custom Sharding](/documentation/guides/distributed_deployment/#sharding) For data isolation and distribution|[Role Based Access Control](/documentation/guides/security/?q=jwt#granular-access-control-with-jwt) Secure JWT-based access |
-- `input_type=""classification""` - for classification tasks
+|[Quantization](/documentation/guides/quantization/) Compress data for drastic speedups|[Multivector Support](/documentation/concepts/vectors/?q=multivect#multivectors) For ColBERT late interaction |[Built-in IDF](/documentation/concepts/indexing/?q=inverse+docu#idf-modifier) Cutting-edge similarity calculation|",documentation/_index.md
+"---
-- `input_type=""clustering""` - for text clustering
+title: Contribution Guidelines
+weight: 35
+draft: true
-While implementing semantic search applications, such as RAG, you should use `input_type=""search_document""` for the
+---
-indexed documents and `input_type=""search_query""` for the search queries. The following example shows how to index
-documents with the Embed v3 model:
+# How to contribute
-```python
-import cohere
+If you are a Qdrant user - Data Scientist, ML Engineer, or MLOps, the best contribution would be the feedback on your experience with Qdrant.
-import qdrant_client
+Let us know whenever you have a problem, face an unexpected behavior, or see a lack of documentation.
+You can do it in any convenient way - create an [issue](https://github.com/qdrant/qdrant/issues), start a [discussion](https://github.com/qdrant/qdrant/discussions), or drop us a [message](https://discord.gg/tdtYvXjC4h).
+If you use Qdrant or Metric Learning in your projects, we'd love to hear your story! Feel free to share articles and demos in our community.
-from qdrant_client.http.models import Batch
+For those familiar with Rust - check out our [contribution guide](https://github.com/qdrant/qdrant/blob/master/CONTRIBUTING.md).
-cohere_client = cohere.Client(""<< your_api_key >>"")
+If you have problems with code or architecture understanding - reach us at any time.
-qdrant_client = qdrant_client.QdrantClient()
+Feeling confident and want to contribute more? - Come to [work with us](https://qdrant.join.com/)!",documentation/contribution-guidelines.md
+"---
-qdrant_client.upsert(
+title: Bubble
- collection_name=""MyCollection"",
+aliases: [ ../frameworks/bubble/ ]
- points=Batch(
+---
- ids=[1],
- vectors=cohere_client.embed(
- model=""embed-english-v3.0"", # New Embed v3 model
+# Bubble
- input_type=""search_document"", # Input type for documents
- texts=[""Qdrant is the a vector database written in Rust""],
- ).embeddings,
+[Bubble](https://bubble.io/) is a software development platform that enables anyone to build and launch fully functional web applications without writing code.
- ),
-)
-```
+You can use the [Qdrant Bubble plugin](https://bubble.io/plugin/qdrant-1716804374179x344999530386685950) to interface with Qdrant in your workflows.
-Once the documents are indexed, you can search for the most relevant documents using the Embed v3 model:
+## Prerequisites
-```python
+1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).
-qdrant_client.search(
+2. An account at [Bubble.io](https://bubble.io/) and an app set up.
- collection_name=""MyCollection"",
- query=cohere_client.embed(
- model=""embed-english-v3.0"", # New Embed v3 model
+## Setting up the plugin
- input_type=""search_query"", # Input type for search queries
- texts=[""The best vector database""],
- ).embeddings[0],
+Navigate to your app's workflows. Select `""Install more plugins actions""`.
-)
-```
+![Install New Plugin](/documentation/frameworks/bubble/install-bubble-plugin.png)
-
-",documentation/embeddings/cohere.md
-"---
-title: ""Nomic""
+![Qdrant Plugin Search](/documentation/frameworks/bubble/qdrant-plugin-search.png)
-weight: 1100
----
+The Qdrant plugin can now be found in the installed plugins section of your workflow. Enter the API key of your Qdrant instance for authentication.
-# Nomic
+![Qdrant Plugin Home](/documentation/frameworks/bubble/qdrant-plugin-home.png)
-The `nomic-embed-text-v1` model is an open source [8192 context length](https://github.com/nomic-ai/contrastors) text encoder.
-While you can find it on the [Hugging Face Hub](https://huggingface.co/nomic-ai/nomic-embed-text-v1),
+The plugin provides actions for upserting, searching, updating and deleting points from your Qdrant collection with dynamic and static values from your Bubble workflow.
-you may find it easier to obtain them through the [Nomic Text Embeddings](https://docs.nomic.ai/reference/endpoints/nomic-embed-text).
-Once installed, you can configure it with the official Python client or through direct HTTP requests.
+## Further Reading
-
+- [Bubble Academy](https://bubble.io/academy).
+- [Bubble Manual](https://manual.bubble.io/)
+",documentation/platforms/bubble.md
+"---
-You can use Nomic embeddings directly in Qdrant client calls. There is a difference in the way the embeddings
+title: Make.com
-are obtained for documents and queries. The `task_type` parameter defines the embeddings that you get.
+aliases: [ ../frameworks/make/ ]
-For documents, set the `task_type` to `search_document`:
+---
-```python
+# Make.com
-from qdrant_client import QdrantClient, models
-from nomic import embed
+[Make](https://www.make.com/) is a platform for anyone to design, build, and automate anything—from tasks and workflows to apps and systems without code.
-output = embed.text(
- texts=[""Qdrant is the best vector database!""],
+Find the comprehensive list of available Make apps [here](https://www.make.com/en/integrations).
- model=""nomic-embed-text-v1"",
- task_type=""search_document"",
-)
+Qdrant is available as an [app](https://www.make.com/en/integrations/qdrant) within Make to add to your scenarios.
-qdrant_client = QdrantClient()
+![Qdrant Make hero](/documentation/frameworks/make/hero-page.png)
-qdrant_client.upsert(
- collection_name=""my-collection"",
- points=models.Batch(
+## Prerequisites
- ids=[1],
- vectors=output[""embeddings""],
- ),
+Before you start, make sure you have the following:
-)
-```
+1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).
+2. An account at Make.com. You can register yourself [here](https://www.make.com/en/register).
-To query the collection, set the `task_type` to `search_query`:
+## Setting up a connection
-```python
-output = embed.text(
- texts=[""What is the best vector database?""],
+Navigate to your scenario on the Make dashboard and select a Qdrant app module to start a connection.
- model=""nomic-embed-text-v1"",
+![Qdrant Make connection](/documentation/frameworks/make/connection.png)
- task_type=""search_query"",
-)
+You can now establish a connection to Qdrant using your [instance credentials](/documentation/cloud/authentication/).
-qdrant_client.search(
- collection_name=""my-collection"",
+![Qdrant Make form](/documentation/frameworks/make/connection-form.png)
- query=output[""embeddings""][0],
-)
-```
+## Modules
+
+ Modules represent actions that Make performs with an app.
-For more information, see the Nomic documentation on [Text embeddings](https://docs.nomic.ai/reference/endpoints/nomic-embed-text).
-",documentation/embeddings/nomic.md
-"---
-title: Gemini
-weight: 700
+The Qdrant Make app enables you to trigger the following app modules.
----
+![Qdrant Make modules](/documentation/frameworks/make/modules.png)
-# Gemini
+The modules support mapping to connect the data retrieved by one module to another module to perform the desired action. You can read more about the data processing options available for the modules in the [Make reference](https://www.make.com/en/help/modules).
-Qdrant is compatible with Gemini Embedding Model API and its official Python SDK that can be installed as any other package:
+## Next steps
-Gemini is a new family of Google PaLM models, released in December 2023. The new embedding models succeed the previous Gecko Embedding Model.
+- Find a list of Make workflow templates to connect with Qdrant [here](https://www.make.com/en/templates).
-In the latest models, an additional parameter, `task_type`, can be passed to the API call. This parameter serves to designate the intended purpose for the embeddings utilized.
+- Make scenario reference docs can be found [here](https://www.make.com/en/help/scenarios).",documentation/platforms/make.md
+"---
+title: Portable.io
+aliases: [ ../frameworks/portable/ ]
-The Embedding Model API supports various task types, outlined as follows:
+---
-1. `retrieval_query`: Specifies the given text is a query in a search/retrieval setting.
+# Portable
-2. `retrieval_document`: Specifies the given text is a document from the corpus being searched.
-3. `semantic_similarity`: Specifies the given text will be used for Semantic Text Similarity.
-4. `classification`: Specifies that the given text will be classified.
+[Portable](https://portable.io/) is an ELT platform that builds connectors on-demand for data teams. It enables connecting applications to your data warehouse with no code.
-5. `clustering`: Specifies that the embeddings will be used for clustering.
-6. `task_type_unspecified`: Unset value, which will default to one of the other values.
+You can avail the [Qdrant connector](https://portable.io/connectors/qdrant) to build data pipelines from your collections.
+![Qdrant Connector](/documentation/frameworks/portable/home.png)
-If you're building a semantic search application, such as RAG, you should use `task_type=""retrieval_document""` for the indexed documents and `task_type=""retrieval_query""` for the search queries.
+## Prerequisites
-The following example shows how to do this with Qdrant:
+1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).
-## Setup
+2. A [Portable account](https://app.portable.io/).
-```bash
+## Setting up the connector
-pip install google-generativeai
-```
+Navigate to the Portable dashboard. Search for `""Qdrant""` in the sources section.
-Let's see how to use the Embedding Model API to embed a document for retrieval.
+![Install New Source](/documentation/frameworks/portable/install.png)
-The following example shows how to embed a document with the `models/embedding-001` with the `retrieval_document` task type:
+Configure the connector with your Qdrant instance credentials.
-## Embedding a document
+![Configure connector](/documentation/frameworks/portable/configure.png)
-```python
-import pathlib
+You can now build your flows using data from Qdrant by selecting a [destination](https://app.portable.io/destinations) and scheduling it.
-import google.generativeai as genai
-import qdrant_client
+## Further Reading
-GEMINI_API_KEY = ""YOUR GEMINI API KEY"" # add your key here
+- [Portable API Reference](https://developer.portable.io/api-reference/introduction).
+- [Portable Academy](https://portable.io/learn)
+",documentation/platforms/portable.md
+"---
-genai.configure(api_key=GEMINI_API_KEY)
+title: BuildShip
+aliases: [ ../frameworks/buildship/ ]
+---
-result = genai.embed_content(
- model=""models/embedding-001"",
- content=""Qdrant is the best vector search engine to use with Gemini"",
+# BuildShip
- task_type=""retrieval_document"",
- title=""Qdrant x Gemini"",
-)
+[BuildShip](https://buildship.com/) is a low-code visual builder to create APIs, scheduled jobs, and backend workflows with AI assistance.
-```
+You can use the [Qdrant integration](https://buildship.com/integrations/qdrant) to develop workflows with semantic-search capabilities.
-The returned result is a dictionary with a key: `embedding`. The value of this key is a list of floats representing the embedding of the document.
+## Prerequisites
-## Indexing documents with Qdrant
+1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).
-```python
+2. A [BuildShip](https://buildship.app/) project for developing workflows.
-from qdrant_client.http.models import Batch
+## Nodes
-qdrant_client = qdrant_client.QdrantClient()
-qdrant_client.upsert(
- collection_name=""GeminiCollection"",
+Nodes are the fundamental building blocks of BuildShip. Each is responsible for an operation in your workflow.
- points=Batch(
- ids=[1],
- vectors=genai.embed_content(
+The Qdrant integration includes the following nodes, which you can extend if required.
- model=""models/embedding-001"",
- content=""Qdrant is the best vector search engine to use with Gemini"",
- task_type=""retrieval_document"",
+### Add Point
- title=""Qdrant x Gemini"",
- )[""embedding""],
- ),
+![Add Point](/documentation/frameworks/buildship/add.png)
-)
-```
+### Retrieve Points
-## Searching for documents with Qdrant
+![Retrieve Points](/documentation/frameworks/buildship/get.png)
-Once the documents are indexed, you can search for the most relevant documents using the same model with the `retrieval_query` task type:
+### Delete Points
-```python
-qdrant_client.search(
+![Delete Points](/documentation/frameworks/buildship/delete.png)
- collection_name=""GeminiCollection"",
- query=genai.embed_content(
- model=""models/embedding-001"",
+### Search Points
- content=""What is the best vector database to use with Gemini?"",
- task_type=""retrieval_query"",
- )[""embedding""],
+![Search Points](/documentation/frameworks/buildship/search.png)
-)
-```
+## Further Reading
-## Using Gemini Embedding Models with Binary Quantization
+- [BuildShip Docs](https://docs.buildship.com/basics/node).
+- [BuildShip Integrations](https://buildship.com/integrations)
+",documentation/platforms/buildship.md
+"---
-You can use Gemini Embedding Models with [Binary Quantization](/articles/binary-quantization/) - a technique that allows you to reduce the size of the embeddings by 32 times without losing the quality of the search results too much.
+title: Apify
+aliases: [ ../frameworks/apify/ ]
+---
-In this table, you can see the results of the search with the `models/embedding-001` model with Binary Quantization in comparison with the original model:
+# Apify
-At an oversampling of 3 and a limit of 100, we've a 95% recall against the exact nearest neighbors with rescore enabled.
+[Apify](https://apify.com/) is a web scraping and browser automation platform featuring an [app store](https://apify.com/store) with over 1,500 pre-built micro-apps known as Actors. These serverless cloud programs, which are essentially Docker containers under the hood, are designed for various web automation applications, including data collection.
-| oversampling | | 1 | 1 | 2 | 2 | 3 | 3 |
-|--------------|---------|----------|----------|----------|----------|----------|----------|
-| limit | | | | | | | |
+One such Actor, built especially for AI and RAG applications, is [Website Content Crawler](https://apify.com/apify/website-content-crawler).
-| | rescore | False | True | False | True | False | True |
-| 10 | | 0.523333 | 0.831111 | 0.523333 | 0.915556 | 0.523333 | 0.950000 |
-| 20 | | 0.510000 | 0.836667 | 0.510000 | 0.912222 | 0.510000 | 0.937778 |
+It's ideal for this purpose because it has built-in HTML processing and data-cleaning functions. That means you can easily remove fluff, duplicates, and other things on a web page that aren't relevant, and provide only the necessary data to the language model.
-| 50 | | 0.489111 | 0.841556 | 0.489111 | 0.913333 | 0.488444 | 0.947111 |
-| 100 | | 0.485778 | 0.846556 | 0.485556 | 0.929000 | 0.486000 | **0.956333** |
+The resulting Markdown can then be loaded into Qdrant to train AI models or supply them with fresh web content.
-That's it! You can now use Gemini Embedding Models with Qdrant!",documentation/embeddings/gemini.md
-"---
-title: Jina Embeddings
+Qdrant is available as an [official integration](https://apify.com/apify/qdrant-integration) to load Apify datasets into a collection.
-weight: 800
-aliases: [ ../integrations/jina-embeddings/ ]
----
+You can refer to the [Apify documentation](https://docs.apify.com/platform/integrations/qdrant) to set up the integration via the Apify UI.
-# Jina Embeddings
+## Programmatic Usage
-Qdrant can also easily work with [Jina embeddings](https://jina.ai/embeddings/) which allow for model input lengths of up to 8192 tokens.
+Apify also supports programmatic access to integrations via the [Apify Python SDK](https://docs.apify.com/sdk/python/).
-To call their endpoint, all you need is an API key obtainable [here](https://jina.ai/embeddings/). By the way, our friends from **Jina AI** provided us with a code (**QDRANT**) that will grant you a **10% discount** if you plan to use Jina Embeddings in production.
+1. Install the Apify Python SDK by running the following command:
-```python
+ ```sh
-import qdrant_client
+ pip install apify-client
-import requests
+ ```
-from qdrant_client.http.models import Distance, VectorParams
+2. Create a Python script and import all the necessary modules:
-from qdrant_client.http.models import Batch
+ ```python
-# Provide Jina API key and choose one of the available models.
+ from apify_client import ApifyClient
-# You can get a free trial key here: https://jina.ai/embeddings/
-JINA_API_KEY = ""jina_xxxxxxxxxxx""
-MODEL = ""jina-embeddings-v2-base-en"" # or ""jina-embeddings-v2-base-en""
+ APIFY_API_TOKEN = ""YOUR-APIFY-TOKEN""
-EMBEDDING_SIZE = 768 # 512 for small variant
+ OPENAI_API_KEY = ""YOUR-OPENAI-API-KEY""
+ # COHERE_API_KEY = ""YOUR-COHERE-API-KEY""
-# Get embeddings from the API
-url = ""https://api.jina.ai/v1/embeddings""
+ QDRANT_URL = ""YOUR-QDRANT-URL""
+ QDRANT_API_KEY = ""YOUR-QDRANT-API-KEY""
-headers = {
- ""Content-Type"": ""application/json"",
+ client = ApifyClient(APIFY_API_TOKEN)
- ""Authorization"": f""Bearer {JINA_API_KEY}"",
+ ```
-}
+3. Call the [Website Content Crawler](https://apify.com/apify/website-content-crawler) Actor to crawl the Qdrant documentation and extract text content from the web pages:
-data = {
- ""input"": [""Your text string goes here"", ""You can send multiple texts""],
- ""model"": MODEL,
+ ```python
-}
+ actor_call = client.actor(""apify/website-content-crawler"").call(
+ run_input={""startUrls"": [{""url"": ""https://qdrant.tech/documentation/""}]}
+ )
-response = requests.post(url, headers=headers, json=data)
+ ```
-embeddings = [d[""embedding""] for d in response.json()[""data""]]
+4. Call the Qdrant integration and store all data in the Qdrant Vector Database:
-# Index the embeddings into Qdrant
+ ```python
-qdrant_client = qdrant_client.QdrantClient("":memory:"")
+ qdrant_integration_inputs = {
-qdrant_client.create_collection(
+ ""qdrantUrl"": QDRANT_URL,
- collection_name=""MyCollection"",
+ ""qdrantApiKey"": QDRANT_API_KEY,
- vectors_config=VectorParams(size=EMBEDDING_SIZE, distance=Distance.DOT),
+ ""qdrantCollectionName"": ""apify"",
-)
+ ""qdrantAutoCreateCollection"": True,
+ ""datasetId"": actor_call[""defaultDatasetId""],
+ ""datasetFields"": [""text""],
+ ""enableDeltaUpdates"": True,
+ ""deltaUpdatesPrimaryDatasetFields"": [""url""],
-qdrant_client.upsert(
+ ""expiredObjectDeletionPeriodDays"": 30,
- collection_name=""MyCollection"",
+ ""embeddingsProvider"": ""OpenAI"", # ""Cohere""
- points=Batch(
+ ""embeddingsApiKey"": OPENAI_API_KEY,
- ids=list(range(len(embeddings))),
+ ""performChunking"": True,
- vectors=embeddings,
+ ""chunkSize"": 1000,
- ),
+ ""chunkOverlap"": 0,
-)
+ }
+ actor_call = client.actor(""apify/qdrant-integration"").call(run_input=qdrant_integration_inputs)
-```
+ ```
-",documentation/embeddings/jina-embeddings.md
-"---
-title: Embeddings
-weight: 33
+Upon running the script, the data will be scraped, transformed into vector embeddings, and stored in the Qdrant collection.
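+
+To quickly confirm the ingestion, you can inspect the collection with the Qdrant Python client (requires the `qdrant-client` package). The following is a minimal sketch, reusing the `QDRANT_URL` and `QDRANT_API_KEY` values and the `""apify""` collection name from the script above:
+
+```python
+from qdrant_client import QdrantClient
+
+qdrant = QdrantClient(url=QDRANT_URL, api_key=QDRANT_API_KEY)
+
+# Number of chunks stored by the integration
+print(qdrant.count(collection_name=""apify"", exact=True))
+
+# Peek at a few of the stored payloads
+points, _ = qdrant.scroll(collection_name=""apify"", limit=3, with_payload=True)
+for point in points:
+    print(point.payload)
+```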
-# If the index.md file is empty, the link to the section will be hidden from the sidebar
-is_empty: true
----
+## Further Reading
-| Embedding |
+- Apify [Documentation](https://docs.apify.com/)
-|---|
+- Apify [Templates](https://apify.com/templates)
-| [Gemini](./gemini/) |
+- Integration [Source Code](https://github.com/apify/actor-vector-database-integrations)
+",documentation/platforms/apify.md
+"---
-| [Aleph Alpha](./aleph-alpha/) |
+title: PrivateGPT
-| [Cohere](./cohere/) |
+aliases: [ ../integrations/privategpt/, ../frameworks/privategpt/ ]
-| [Jina](./jina-emebddngs/) |
+---
-| [OpenAI](./openai/) |",documentation/embeddings/_index.md
-"---
-title: Database Optimization
-weight: 3
+# PrivateGPT
----
+[PrivateGPT](https://docs.privategpt.dev/) is a production-ready AI project that allows you to inquire about your documents using Large Language Models (LLMs) with offline support.
-## Database Optimization Strategies
+PrivateGPT uses Qdrant as the default vectorstore for ingesting and retrieving documents.
-### How do I reduce memory usage?
+## Configuration
-The primary source of memory usage vector data. There are several ways to address that:
+Qdrant settings can be configured by setting values for the `qdrant` property in the `settings.yaml` file. By default, PrivateGPT tries to connect to a Qdrant instance at http://localhost:3000.
-- Configure [Quantization](../../guides/quantization/) to reduce the memory usage of vectors.
-- Configure on-disk vector storage
+Example:
+```yaml
-The choice of the approach depends on your requirements.
+qdrant:
-Read more about [configuring the optimal](../../tutorials/optimize/) use of Qdrant.
+ url: ""https://xyz-example.eu-central.aws.cloud.qdrant.io:6333""
+ api_key: """"
+```
-### How do you choose machine configuration?
+The available [configuration options](https://docs.privategpt.dev/manual/storage/vector-stores#qdrant-configuration) are:
-There are two main scenarios of Qdrant usage in terms of resource consumption:
+| Field | Description |
+|--------------|-------------|
+| location | If `:memory:` - use in-memory Qdrant instance. If `str` - use it as a `url` parameter.|
-- **Performance-optimized** -- when you need to serve vector search as fast (many) as possible. In this case, you need to have as much vector data in RAM as possible. Use our [calculator](https://cloud.qdrant.io/calculator) to estimate the required RAM.
+| url | Either host or str of `Optional[scheme], host, Optional[port], Optional[prefix]`. Eg. `http://localhost:6333` |
-- **Storage-optimized** -- when you need to store many vectors and minimize costs by compromising some search speed. In this case, pay attention to the disk speed instead. More about it in the article about [Memory Consumption](../../../articles/memory-consumption/).
+| port | Port of the REST API interface. Default: `6333` |
+| grpc_port | Port of the gRPC interface. Default: `6334` |
+| prefer_grpc | If `true` - use gRPC interface whenever possible in custom methods. |
-### I configured on-disk vector storage, but memory usage is still high. Why?
+| https | If `true` - use HTTPS(SSL) protocol.|
+| api_key | API key for authentication in Qdrant Cloud.|
+| prefix | If set, add `prefix` to the REST URL path. Example: `service/v1` will result in `http://localhost:6333/service/v1/{qdrant-endpoint}` for REST API.|
-Firstly, memory usage metrics as reported by `top` or `htop` may be misleading. They are not showing the minimal amount of memory required to run the service.
+| timeout | Timeout for REST and gRPC API requests. Default: 5.0 seconds for REST and unlimited for gRPC |
-If the RSS memory usage is 10 GB, it doesn't mean that it won't work on a machine with 8 GB of RAM.
+| host | Host name of Qdrant service. If url and host are not set, defaults to 'localhost'.|
+| path | Persistence path for QdrantLocal. Eg. `local_data/private_gpt/qdrant`|
+| force_disable_check_same_thread | Force disable check_same_thread for QdrantLocal sqlite connection.|
-Qdrant uses many techniques to reduce search latency, including caching disk data in RAM and preloading data from disk to RAM.
-As a result, the Qdrant process might use more memory than the minimum required to run the service.
+## Next steps
-> Unused RAM is wasted RAM
+Find the PrivateGPT docs [here](https://docs.privategpt.dev/).
+",documentation/platforms/privategpt.md
+"---
+title: Pipedream
-If you want to limit the memory usage of the service, we recommend using [limits in Docker](https://docs.docker.com/config/containers/resource_constraints/#memory) or Kubernetes.
+aliases: [ ../frameworks/pipedream/ ]
+---
+# Pipedream
-### My requests are very slow or time out. What should I do?
+[Pipedream](https://pipedream.com/) is a development platform that allows developers to connect many different applications, data sources, and APIs in order to build automated cross-platform workflows. It also offers code-level control with Node.js, Python, Go, or Bash if required.
-There are several possible reasons for that:
+You can use the [Qdrant app](https://pipedream.com/apps/qdrant) in Pipedream to add vector search capabilities to your workflows.
-- **Using filters without payload index** -- If you're performing a search with a filter but you don't have a payload index, Qdrant will have to load whole payload data from disk to check the filtering condition. Ensure you have adequately configured [payload indexes](../../concepts/indexing/#payload-index).
-- **Usage of on-disk vector storage with slow disks** -- If you're using on-disk vector storage, ensure you have fast enough disks. We recommend using local SSDs with at least 50k IOPS. Read more about the influence of the disk speed on the search latency in the article about [Memory Consumption](../../../articles/memory-consumption/).
-- **Large limit or non-optimal query parameters** -- A large limit or offset might lead to significant performance degradation. Please pay close attention to the query/collection parameters that significantly diverge from the defaults. They might be the reason for the performance issues.
-",documentation/faq/database-optimization.md
-"---
+## Prerequisites
-title: Fundamentals
-weight: 1
----
+1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).
+2. A [Pipedream project](https://pipedream.com/) to develop your workflows.
-## Qdrant Fundamentals
+## Setting Up
-### How many collections can I create?
+Search for the Qdrant app in your workflow apps.
-As much as you want, but be aware that each collection requires additional resources.
-It is _highly_ recommended not to create many small collections, as it will lead to significant resource consumption overhead.
+![Qdrant Pipedream App](/documentation/frameworks/pipedream/qdrant-app.png)
-We consider creating a collection for each user/dialog/document as an antipattern.
+The Qdrant app offers an extensible API interface and pre-built actions.
-Please read more about collections, isolation, and multiple users in our [Multitenancy](../../tutorials/multiple-partitions/) tutorial.
+![Qdrant App Features](/documentation/frameworks/pipedream/app-features.png)
-### My search results contain vectors with null values. Why?
+Select any of the actions of the app to set up a connection.
-By default, Qdrant tries to minimize network traffic and doesn't return vectors in search results.
+![Qdrant Connect Account](/documentation/frameworks/pipedream/app-upsert-action.png)
-But you can force Qdrant to do so by setting the `with_vector` parameter of the Search/Scroll to `true`.
+Configure connection with the credentials of your Qdrant instance.
-If you're still seeing `""vector"": null` in your results, it might be that the vector you're passing is not in the correct format, or there's an issue with how you're calling the upsert method.
+![Qdrant Connection Credentials](/documentation/frameworks/pipedream/app-connection.png)
-### How can I search without a vector?
+You can verify your credentials using the ""Test Connection"" button.
-You are likely looking for the [scroll](../../concepts/points/#scroll-points) method. It allows you to retrieve the records based on filters or even iterate over all the records in the collection.
+Once a connection is set up, you can use the app to build workflows with the [2000+ apps supported by Pipedream](https://pipedream.com/apps/).
-### Does Qdrant support a full-text search or a hybrid search?
+## Further Reading
-Qdrant is a vector search engine in the first place, and we only implement full-text support as long as it doesn't compromise the vector search use case.
-That includes both the interface and the performance.
+- [Pipedream Documentation](https://pipedream.com/docs).
+- [Qdrant Cloud Authentication](https://qdrant.tech/documentation/cloud/authentication/).
-What Qdrant can do:
+- [Source Code](https://github.com/PipedreamHQ/pipedream/tree/master/components/qdrant)
+",documentation/platforms/pipedream.md
+"---
+title: Ironclad Rivet
+aliases: [ ../frameworks/rivet/ ]
-- Search with full-text filters
+---
-- Apply full-text filters to the vector search (i.e., perform vector search among the records with specific words or phrases)
-- Do prefix search and semantic [search-as-you-type](../../../articles/search-as-you-type/)
+# Ironclad Rivet
-What Qdrant plans to introduce in the future:
+[Rivet](https://rivet.ironcladapp.com/) is an Integrated Development Environment (IDE) and library designed for creating AI agents using a visual, graph-based interface.
-- Support for sparse vectors, as used in [SPLADE](https://github.com/naver/splade) or similar models
+Qdrant is available as a [plugin](https://github.com/qdrant/rivet-plugin-qdrant) for building vector-search powered workflows in Rivet.
-What Qdrant doesn't plan to support:
+## Installation
-- BM25 or other non-vector-based retrieval or ranking functions
-- Built-in ontologies or knowledge graphs
+- Open the plugins overlay at the top of the screen.
-- Query analyzers and other NLP tools
+- Search for the official Qdrant plugin.
+- Click the ""Add"" button to install it in your current project.
-Of course, you can always combine Qdrant with any specialized tool you need, including full-text search engines.
-Read more about [our approach](../../../articles/hybrid-search/) to hybrid search.
+![Rivet plugin installation](/documentation/frameworks/rivet/installation.png)
-### How do I upload a large number of vectors into a Qdrant collection?
+## Setting up the connection
-Read about our recommendations in the [bulk upload](../../tutorials/bulk-upload/) tutorial.
+You can configure your Qdrant instance credentials in the Rivet settings after installing the plugin.
-### Can I only store quantized vectors and discard full precision vectors?
+![Rivet plugin connection](/documentation/frameworks/rivet/connection.png)
-No, Qdrant requires full precision vectors for operations like reindexing, rescoring, etc.
+Once you've configured your credentials, you can right-click on your workspace to add nodes from the plugin and get building!
-## Qdrant Cloud
+![Rivet plugin nodes](/documentation/frameworks/rivet/node.png)
-### Is it possible to scale down a Qdrant Cloud cluster?
+## Further Reading
-In general, no. There's no way to scale down the underlying disk storage.
+- Rivet [Tutorial](https://rivet.ironcladapp.com/docs/tutorial).
-But in some cases, we might be able to help you with that through manual intervention, but it's not guaranteed.
+- Rivet [Documentation](https://rivet.ironcladapp.com/docs).
+- Plugin [Source Code](https://github.com/qdrant/rivet-plugin-qdrant)
+",documentation/platforms/rivet.md
+"---
+title: DocsGPT
-## Versioning
+aliases: [ ../frameworks/docsgpt/ ]
+---
-### How do I avoid issues when updating to the latest version?
+# DocsGPT
-We only guarantee compatibility if you update between consequent versions. You would need to upgrade versions one at a time: `1.1 -> 1.2`, then `1.2 -> 1.3`, then `1.3 -> 1.4`.
+[DocsGPT](https://docsgpt.arc53.com/) is an open-source documentation assistant that enables you to build conversational user experiences on top of your data.
-### Do you guarantee compatibility across versions?
+Qdrant is supported as a vectorstore in DocsGPT to ingest and semantically retrieve documents.
-In case your version is older, we guarantee only compatibility between two consecutive minor versions.
-While we will assist with break/fix troubleshooting of issues and errors specific to our products, Qdrant is not accountable for reviewing, writing (or rewriting), or debugging custom code.
+## Configuration
-",documentation/faq/qdrant-fundamentals.md
-"---
-title: FAQ
+Learn how to set up DocsGPT in their [Quickstart guide](https://docs.docsgpt.co.uk/Deploying/Quickstart).
-weight: 41
-is_empty: true
----",documentation/faq/_index.md
-"---
+You can configure DocsGPT with environment variables in a `.env` file.
-title: Multitenancy
-weight: 12
-aliases:
+To configure DocsGPT to use Qdrant as the vector store, set `VECTOR_STORE` to `""qdrant""`.
- - ../tutorials/multiple-partitions
----
-# Configure Multitenancy
+```bash
+echo ""VECTOR_STORE=qdrant"" >> .env
+```
-**How many collections should you create?** In most cases, you should only use a single collection with payload-based partitioning. This approach is called multitenancy. It is efficient for most of users, but it requires additional configuration. This document will show you how to set it up.
+DocsGPT includes a list of the Qdrant configuration options that you can set as environment variables [here](https://github.com/arc53/DocsGPT/blob/00dfb07b15602319bddb95089e3dab05fac56240/application/core/settings.py#L46-L59).
-**When should you create multiple collections?** When you have a limited number of users and you need isolation. This approach is flexible, but it may be more costly, since creating numerous collections may result in resource overhead. Also, you need to ensure that they do not affect each other in any way, including performance-wise.
+## Further reading
-## Partition by payload
+- [DocsGPT Reference](https://github.com/arc53/DocsGPT)
+",documentation/platforms/docsgpt.md
+"---
-When an instance is shared between multiple users, you may need to partition vectors by user. This is done so that each user can only access their own vectors and can't see the vectors of other users.
+title: Platforms
+weight: 15
+---
-1. Add a `group_id` field to each vector in the collection.
+## Platform Integrations
-```http
-PUT /collections/{collection_name}/points
-{
+| Platform | Description |
- ""points"": [
+| ------------------------------------- | ---------------------------------------------------------------------------------------------------- |
- {
+| [Apify](./apify/) | Platform to build web scrapers and automate web browser tasks. |
- ""id"": 1,
+| [Bubble](./bubble/) | No-code development platform for building applications |
- ""payload"": {""group_id"": ""user_1""},
+| [BuildShip](./buildship) | Low-code visual builder to create APIs, scheduled jobs, and backend workflows. |
- ""vector"": [0.9, 0.1, 0.1]
+| [DocsGPT](./docsgpt/) | Tool for ingesting documentation sources and enabling conversations and queries. |
- },
+| [Make](./make/) | Cloud platform to build low-code workflows by integrating various software applications. |
- {
+| [N8N](./n8n/) | Platform for node-based, low-code workflow automation. |
- ""id"": 2,
+| [Pipedream](./pipedream/) | Platform for connecting apps and developing event-driven automation. |
- ""payload"": {""group_id"": ""user_1""},
+| [Portable.io](./portable/) | Cloud platform for developing and deploying ELT transformations. |
- ""vector"": [0.1, 0.9, 0.1]
+| [PrivateGPT](./privategpt/) | Tool to ask questions about your documents using local LLMs emphasising privacy. |
- },
+| [Rivet](./rivet/) | A visual programming environment for building AI agents with LLMs. |
+",documentation/platforms/_index.md
+"---
- {
+title: N8N
- ""id"": 3,
+aliases: [ ../frameworks/n8n/ ]
- ""payload"": {""group_id"": ""user_2""},
+---
- ""vector"": [0.1, 0.1, 0.9]
- },
- ]
+# N8N
-}
-```
+[N8N](https://n8n.io/) is an automation platform that allows you to build flexible workflows focused on deep data integration.
-```python
-client.upsert(
+Qdrant is available as a vectorstore node in N8N for building AI-powered functionality within your workflows.
- collection_name=""{collection_name}"",
- points=[
- models.PointStruct(
+## Prerequisites
- id=1,
- payload={""group_id"": ""user_1""},
- vector=[0.9, 0.1, 0.1],
+1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).
- ),
+2. A running N8N instance. You can learn more about using the N8N cloud or self-hosting [here](https://docs.n8n.io/choose-n8n/).
- models.PointStruct(
- id=2,
- payload={""group_id"": ""user_1""},
+## Setting up the vectorstore
- vector=[0.1, 0.9, 0.1],
- ),
- models.PointStruct(
+Select the Qdrant vectorstore from the list of nodes in your workflow editor.
- id=3,
- payload={""group_id"": ""user_2""},
- vector=[0.1, 0.1, 0.9],
+![Qdrant n8n node](/documentation/frameworks/n8n/node.png)
- ),
- ],
-)
+You can now configure the vectorstore node according to your workflow requirements. The configuration options reference can be found [here](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/#node-parameters).
-```
+![Qdrant Config](/documentation/frameworks/n8n/config.png)
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+Create a connection to Qdrant using your [instance credentials](/documentation/cloud/authentication/).
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+![Qdrant Credentials](/documentation/frameworks/n8n/credentials.png)
-client.upsert(""{collection_name}"", {
- points: [
+The vectorstore supports the following operations:
- {
- id: 1,
- payload: { group_id: ""user_1"" },
+- Get Many - Get the top-ranked documents for a query.
- vector: [0.9, 0.1, 0.1],
+- Insert documents - Add documents to the vectorstore.
- },
+- Retrieve documents - Retrieve documents for use with AI nodes.
- {
- id: 2,
- payload: { group_id: ""user_1"" },
+## Further Reading
- vector: [0.1, 0.9, 0.1],
- },
- {
+- N8N vectorstore [reference](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/).
- id: 3,
+- N8N AI-based workflows [reference](https://n8n.io/integrations/basic-llm-chain/).
- payload: { group_id: ""user_2"" },
+- [Source Code](https://github.com/n8n-io/n8n/tree/master/packages/@n8n/nodes-langchain/nodes/vector_store/VectorStoreQdrant)",documentation/platforms/n8n.md
+"---
- vector: [0.1, 0.1, 0.9],
+title: Semantic Querying with Airflow and Astronomer
- },
+weight: 36
- ],
+aliases:
-});
+ - /documentation/examples/qdrant-airflow-astronomer/
-```
+---
-```rust
+# Semantic Querying with Airflow and Astronomer
-use qdrant_client::{client::QdrantClient, qdrant::PointStruct};
-use serde_json::json;
+| Time: 45 min | Level: Intermediate | | |
+| ------------ | ------------------- | --- | --- |
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+In this tutorial, you will use Qdrant as a [provider](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/index.html) in [Apache Airflow](https://airflow.apache.org/), an open-source tool that lets you set up data-engineering workflows.
-client
- .upsert_points_blocking(
- ""{collection_name}"".to_string(),
+You will write the pipeline as a DAG (Directed Acyclic Graph) in Python. With this, you can leverage the powerful suite of Python's capabilities and libraries to achieve almost anything your data pipeline needs.
- None,
- vec![
- PointStruct::new(
+[Astronomer](https://www.astronomer.io/) is a managed platform that simplifies the process of developing and deploying Airflow projects via its easy-to-use CLI and extensive automation capabilities.
- 1,
- vec![0.9, 0.1, 0.1],
- json!(
+Airflow is useful when running operations in Qdrant based on data events or building parallel tasks for generating vector embeddings. By using Airflow, you can set up monitoring and alerts for your pipelines for full observability.
- {""group_id"": ""user_1""}
- )
- .try_into()
+## Prerequisites
- .unwrap(),
- ),
- PointStruct::new(
+Please make sure you have the following ready:
- 2,
- vec![0.1, 0.9, 0.1],
- json!(
+- A running Qdrant instance. We'll be using a free instance from [Qdrant Cloud](https://cloud.qdrant.io/).
- {""group_id"": ""user_1""}
+- The Astronomer CLI. Find the installation instructions [here](https://docs.astronomer.io/astro/cli/install-cli).
- )
+- A [HuggingFace token](https://huggingface.co/docs/hub/en/security-tokens) to generate embeddings.
- .try_into()
- .unwrap(),
- ),
+## Implementation
- PointStruct::new(
- 3,
- vec![0.1, 0.1, 0.9],
+We'll be building a DAG that generates embeddings in parallel for our data corpus and performs semantic retrieval based on user input.
- json!(
- {""group_id"": ""user_2""}
- )
+### Set up the project
- .try_into()
- .unwrap(),
- ),
+The Astronomer CLI makes it very straightforward to set up the Airflow project:
- ],
- None,
- )
+```console
- .await?;
+mkdir qdrant-airflow-tutorial && cd qdrant-airflow-tutorial
+
+astro dev init
```
-```java
+This command generates all of the project files you need to run Airflow locally. You can find a directory called `dags`, which is where we can place our Python DAG files.
-import java.util.List;
-import java.util.Map;
+To use Qdrant within Airflow, install the Qdrant Airflow provider by adding the following to the `requirements.txt` file
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
+```text
-import io.qdrant.client.grpc.Points.PointStruct;
+apache-airflow-providers-qdrant
+```
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+### Configure credentials
-client
+We can set up provider connections using the Airflow UI, environment variables or the `airflow_settings.yml` file.
- .upsertAsync(
- ""{collection_name}"",
- List.of(
+Add the following to the `.env` file in the project. Replace the values as per your credentials.
- PointStruct.newBuilder()
- .setId(id(1))
- .setVectors(vectors(0.9f, 0.1f, 0.1f))
+```env
- .putAllPayload(Map.of(""group_id"", value(""user_1"")))
+HUGGINGFACE_TOKEN=""""
- .build(),
+AIRFLOW_CONN_QDRANT_DEFAULT='{
- PointStruct.newBuilder()
+ ""conn_type"": ""qdrant"",
- .setId(id(2))
+ ""host"": ""xyz-example.eu-central.aws.cloud.qdrant.io:6333"",
- .setVectors(vectors(0.1f, 0.9f, 0.1f))
+ ""password"": """"
- .putAllPayload(Map.of(""group_id"", value(""user_1"")))
+}'
- .build(),
+```
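+
+To sanity-check the connection before writing any tasks, you can list your collections through the provider's hook from a Python shell inside the Airflow environment. This is a minimal sketch, assuming the `qdrant_default` connection ID configured above:
+
+```python
+from airflow.providers.qdrant.hooks.qdrant import QdrantHook
+
+# `hook.conn` exposes a ready-to-use QdrantClient instance.
+hook = QdrantHook(conn_id=""qdrant_default"")
+print(hook.conn.get_collections())
+```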
- PointStruct.newBuilder()
- .setId(id(3))
- .setVectors(vectors(0.1f, 0.1f, 0.9f))
+### Add the data corpus
- .putAllPayload(Map.of(""group_id"", value(""user_2"")))
- .build()))
- .get();
+Let's add some sample data to work with. Paste the following content into a file called `books.txt` within the `include` directory.
-```
+```text
-```csharp
+1 | To Kill a Mockingbird (1960) | fiction | Harper Lee's Pulitzer Prize-winning novel explores racial injustice and moral growth through the eyes of young Scout Finch in the Deep South.
-using Qdrant.Client;
+2 | Harry Potter and the Sorcerer's Stone (1997) | fantasy | J.K. Rowling's magical tale follows Harry Potter as he discovers his wizarding heritage and attends Hogwarts School of Witchcraft and Wizardry.
-using Qdrant.Client.Grpc;
+3 | The Great Gatsby (1925) | fiction | F. Scott Fitzgerald's classic novel delves into the glitz, glamour, and moral decay of the Jazz Age through the eyes of narrator Nick Carraway and his enigmatic neighbour, Jay Gatsby.
+4 | 1984 (1949) | dystopian | George Orwell's dystopian masterpiece paints a chilling picture of a totalitarian society where individuality is suppressed and the truth is manipulated by a powerful regime.
+5 | The Catcher in the Rye (1951) | fiction | J.D. Salinger's iconic novel follows disillusioned teenager Holden Caulfield as he navigates the complexities of adulthood and society's expectations in post-World War II America.
-var client = new QdrantClient(""localhost"", 6334);
+6 | Pride and Prejudice (1813) | romance | Jane Austen's beloved novel revolves around the lively and independent Elizabeth Bennet as she navigates love, class, and societal expectations in Regency-era England.
+7 | The Hobbit (1937) | fantasy | J.R.R. Tolkien's adventure follows Bilbo Baggins, a hobbit who embarks on a quest with a group of dwarves to reclaim their homeland from the dragon Smaug.
+8 | The Lord of the Rings (1954-1955) | fantasy | J.R.R. Tolkien's epic fantasy trilogy follows the journey of Frodo Baggins to destroy the One Ring and defeat the Dark Lord Sauron in the land of Middle-earth.
-await client.UpsertAsync(
+9 | The Alchemist (1988) | fiction | Paulo Coelho's philosophical novel follows Santiago, an Andalusian shepherd boy, on a journey of self-discovery and spiritual awakening as he searches for a hidden treasure.
- collectionName: ""{collection_name}"",
+10 | The Da Vinci Code (2003) | mystery/thriller | Dan Brown's gripping thriller follows symbologist Robert Langdon as he unravels clues hidden in art and history while trying to solve a murder mystery with far-reaching implications.
- points: new List
+```
- {
- new()
- {
+Now, the hacking part - writing our Airflow DAG!
- Id = 1,
- Vectors = new[] { 0.9f, 0.1f, 0.1f },
- Payload = { [""group_id""] = ""user_1"" }
+### Write the DAG
- },
- new()
- {
+We'll add the following content to a `books_recommend.py` file within the `dags` directory. Let's go over what it does for each task.
- Id = 2,
- Vectors = new[] { 0.1f, 0.9f, 0.1f },
- Payload = { [""group_id""] = ""user_1"" }
+```python
- },
+import os
- new()
+import requests
- {
- Id = 3,
- Vectors = new[] { 0.1f, 0.1f, 0.9f },
+from airflow.decorators import dag, task
- Payload = { [""group_id""] = ""user_2"" }
+from airflow.models.baseoperator import chain
- }
+from airflow.models.param import Param
- }
+from airflow.providers.qdrant.hooks.qdrant import QdrantHook
-);
+from airflow.providers.qdrant.operators.qdrant import QdrantIngestOperator
-```
+from pendulum import datetime
+from qdrant_client import models
-2. Use a filter along with `group_id` to filter vectors for each user.
-```http
+QDRANT_CONNECTION_ID = ""qdrant_default""
-POST /collections/{collection_name}/points/search
+DATA_FILE_PATH = ""include/books.txt""
-{
+COLLECTION_NAME = ""airflow_tutorial_collection""
- ""filter"": {
- ""must"": [
- {
+EMBEDDING_MODEL_ID = ""sentence-transformers/all-MiniLM-L6-v2""
- ""key"": ""group_id"",
+EMBEDDING_DIMENSION = 384
- ""match"": {
+SIMILARITY_METRIC = models.Distance.COSINE
- ""value"": ""user_1""
- }
- }
- ]
- },
+def embed(text: str) -> list:
- ""vector"": [0.1, 0.1, 0.9],
+ HUGGINGFACE_URL = f""https://api-inference.huggingface.co/pipeline/feature-extraction/{EMBEDDING_MODEL_ID}""
- ""limit"": 10
+ response = requests.post(
-}
+ HUGGINGFACE_URL,
-```
+ headers={""Authorization"": f""Bearer {os.getenv('HUGGINGFACE_TOKEN')}""},
+ json={""inputs"": [text], ""options"": {""wait_for_model"": True}},
+ )
-```python
+ return response.json()[0]
-from qdrant_client import QdrantClient, models
-client = QdrantClient(""localhost"", port=6333)
+@dag(
+ dag_id=""books_recommend"",
-client.search(
-
- collection_name=""{collection_name}"",
+ start_date=datetime(2023, 10, 18),
- query_filter=models.Filter(
+ schedule=None,
- must=[
+ catchup=False,
- models.FieldCondition(
+ params={""preference"": Param(""Something suspenseful and thrilling."", type=""string"")},
- key=""group_id"",
+)
- match=models.MatchValue(
+def recommend_book():
- value=""user_1"",
+ @task
- ),
+ def import_books(text_file_path: str) -> list:
- )
+ data = []
- ]
+ with open(text_file_path, ""r"") as f:
- ),
+ for line in f:
- query_vector=[0.1, 0.1, 0.9],
+ _, title, genre, description = line.split(""|"")
- limit=10,
+ data.append(
-)
+ {
-```
+ ""title"": title.strip(),
+ ""genre"": genre.strip(),
+ ""description"": description.strip(),
-```typescript
+ }
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+ )
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ return data
-client.search(""{collection_name}"", {
+ @task
- filter: {
+ def init_collection():
- must: [{ key: ""group_id"", match: { value: ""user_1"" } }],
+ hook = QdrantHook(conn_id=QDRANT_CONNECTION_ID)
- },
+ if not hook.conn.collection_exists(COLLECTION_NAME):
- vector: [0.1, 0.1, 0.9],
+ hook.conn.create_collection(
- limit: 10,
+ COLLECTION_NAME,
-});
+ vectors_config=models.VectorParams(
-```
+ size=EMBEDDING_DIMENSION, distance=SIMILARITY_METRIC
+ ),
+ )
-```rust
-use qdrant_client::{
- client::QdrantClient,
+ @task
- qdrant::{Condition, Filter, SearchPoints},
+ def embed_description(data: dict) -> list:
-};
+ return embed(data[""description""])
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ books = import_books(text_file_path=DATA_FILE_PATH)
+ embeddings = embed_description.expand(data=books)
-client
- .search_points(&SearchPoints {
+ qdrant_vector_ingest = QdrantIngestOperator(
- collection_name: ""{collection_name}"".to_string(),
+ conn_id=QDRANT_CONNECTION_ID,
- filter: Some(Filter::must([Condition::matches(
+ task_id=""qdrant_vector_ingest"",
- ""group_id"",
+ collection_name=COLLECTION_NAME,
- ""user_1"".to_string(),
+ payload=books,
- )])),
+ vectors=embeddings,
- vector: vec![0.1, 0.1, 0.9],
+ )
- limit: 10,
- ..Default::default()
- })
+ @task
- .await?;
+ def embed_preference(**context) -> list:
-```
+ user_mood = context[""params""][""preference""]
+ response = embed(text=user_mood)
-```java
-import java.util.List;
+ return response
-import io.qdrant.client.QdrantClient;
+ @task
-import io.qdrant.client.QdrantGrpcClient;
+ def search_qdrant(
-import io.qdrant.client.grpc.Points.Filter;
+ preference_embedding: list,
-import io.qdrant.client.grpc.Points.SearchPoints;
+ ) -> None:
+ hook = QdrantHook(conn_id=QDRANT_CONNECTION_ID)
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ result = hook.conn.query_points(
+ collection_name=COLLECTION_NAME,
+ query=preference_embedding,
-client
+ limit=1,
- .searchAsync(
+ with_payload=True,
- SearchPoints.newBuilder()
+ ).points
- .setCollectionName(""{collection_name}"")
- .setFilter(
- Filter.newBuilder().addMust(matchKeyword(""group_id"", ""user_1"")).build())
+ print(""Book recommendation: "" + result[0].payload[""title""])
- .addAllVector(List.of(0.1f, 0.1f, 0.9f))
+ print(""Description: "" + result[0].payload[""description""])
- .setLimit(10)
- .build())
- .get();
+ chain(
-```
+ init_collection(),
+ qdrant_vector_ingest,
+ search_qdrant(embed_preference()),
-```csharp
+ )
-using Qdrant.Client;
-using Qdrant.Client.Grpc;
-using static Qdrant.Client.Grpc.Conditions;
+recommend_book()
-var client = new QdrantClient(""localhost"", 6334);
+```
-await client.SearchAsync(
+`import_books`: This task reads a text file containing information about the books (like title, genre, and description), and then returns the data as a list of dictionaries.
- collectionName: ""{collection_name}"",
- vector: new float[] { 0.1f, 0.1f, 0.9f },
- filter: MatchKeyword(""group_id"", ""user_1""),
+`init_collection`: This task initializes a collection in the Qdrant database, where we will store the vector representations of the book descriptions.
- limit: 10
-);
-```
+`embed_description`: This is a dynamic task that creates one mapped task instance for each book in the list. The task uses the `embed` function to generate vector embeddings for each description. To use a different embedding model, you can adjust the `EMBEDDING_MODEL_ID` and `EMBEDDING_DIMENSION` values.
-## Calibrate performance
+`embed_preference`: Here, we take a user's input and convert it into a vector using the same pre-trained model used for the book descriptions.
-The speed of indexation may become a bottleneck in this case, as each user's vector will be indexed into the same collection. To avoid this bottleneck, consider _bypassing the construction of a global vector index_ for the entire collection and building it only for individual groups instead.
+`qdrant_vector_ingest`: This task ingests the book data into the Qdrant collection using the [QdrantIngestOperator](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/1.0.0/), associating each book description with its corresponding vector embeddings.
-By adopting this strategy, Qdrant will index vectors for each user independently, significantly accelerating the process.
+`search_qdrant`: Finally, this task performs a search in the Qdrant database using the vectorized user preference. It finds the most relevant book in the collection based on vector similarity.
-To implement this approach, you should:
+### Run the DAG
-1. Set `payload_m` in the HNSW configuration to a non-zero value, such as 16.
+Head over to your terminal and run
-2. Set `m` in hnsw config to 0. This will disable building global index for the whole collection.
+```console
+astro dev start
+```
-```http
+A local Airflow container should spawn. You can now access the Airflow UI at `http://localhost:8080`. Visit our DAG by clicking on `books_recommend`.
-PUT /collections/{collection_name}
-{
- ""vectors"": {
+![DAG](/documentation/examples/airflow/demo-dag.png)
- ""size"": 768,
- ""distance"": ""Cosine""
- },
+Hit the PLAY button on the right to run the DAG. You'll be asked for input about your preference, with the default value already filled in.
- ""hnsw_config"": {
- ""payload_m"": 16,
- ""m"": 0
+![Preference](/documentation/examples/airflow/preference-input.png)
- }
-}
-```
+After your DAG run completes, you should be able to see the output of your search in the logs of the `search_qdrant` task.
-```python
+![Output](/documentation/examples/airflow/output.png)
-from qdrant_client import QdrantClient, models
+There you have it, an Airflow pipeline that interfaces with Qdrant! Feel free to fiddle around and explore Airflow. There are references below that might come in handy.
-client = QdrantClient(""localhost"", port=6333)
+## Further reading
-client.create_collection(
- collection_name=""{collection_name}"",
- vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+- [Introduction to Airflow](https://docs.astronomer.io/learn/intro-to-airflow)
- hnsw_config=models.HnswConfigDiff(
+- [Airflow Concepts](https://docs.astronomer.io/learn/category/airflow-concepts)
- payload_m=16,
+- [Airflow Reference](https://airflow.apache.org/docs/)
- m=0,
+- [Astronomer Documentation](https://docs.astronomer.io/)
+",documentation/send-data/qdrant-airflow-astronomer.md
+"---
- ),
+title: Qdrant on Databricks
-)
+weight: 36
-```
+aliases:
+ - /documentation/examples/databricks/
+---
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+# Qdrant on Databricks
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+| Time: 30 min | Level: Intermediate | [Complete Notebook](https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/4750876096379825/93425612168199/6949977306828869/latest.html) |
+| ------------ | ------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-client.createCollection(""{collection_name}"", {
- vectors: {
- size: 768,
+[Databricks](https://www.databricks.com/) is a unified analytics platform for working with big data and AI. It's built around Apache Spark, a powerful open-source distributed computing system well-suited for processing large-scale datasets and performing complex analytics tasks.
- distance: ""Cosine"",
- },
- hnsw_config: {
+Apache Spark is designed to scale horizontally, meaning it can handle expensive operations like generating vector embeddings by distributing computation across a cluster of machines. This scalability is crucial when dealing with large datasets.
- payload_m: 16,
- m: 0,
- },
+In this example, we will demonstrate how to vectorize a dataset with dense and sparse embeddings using Qdrant's [FastEmbed](https://qdrant.github.io/fastembed/) library. We will then load this vectorized data into a Qdrant cluster using the [Qdrant Spark connector](/documentation/frameworks/spark/) on Databricks.
-});
-```
+### Setting up a Databricks project
-```rust
-use qdrant_client::{
+- Set up a **[Databricks cluster](https://docs.databricks.com/en/compute/configure.html)** following the official documentation guidelines.
- client::QdrantClient,
- qdrant::{
- vectors_config::Config, CreateCollection, Distance, HnswConfigDiff, VectorParams,
+- Install the **[Qdrant Spark connector](/documentation/frameworks/spark/)** as a library:
- VectorsConfig,
+ - Navigate to the `Libraries` section in your cluster dashboard.
- },
+ - Click on `Install New` at the top-right to open the library installation modal.
-};
+ - Search for `io.qdrant:spark:VERSION` in the Maven packages and click on `Install`.
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ ![Install the library](/documentation/examples/databricks/library-install.png)
-client
+- Create a new **[Databricks notebook](https://docs.databricks.com/en/notebooks/index.html)** on your cluster to begin working with your data and libraries.
- .create_collection(&CreateCollection {
- collection_name: ""{collection_name}"".to_string(),
- vectors_config: Some(VectorsConfig {
+### Download a dataset
- config: Some(Config::Params(VectorParams {
- size: 768,
- distance: Distance::Cosine.into(),
+- **Install the required dependencies:**
- ..Default::default()
- })),
- }),
+```python
- hnsw_config: Some(HnswConfigDiff {
+%pip install fastembed datasets
- payload_m: Some(16),
+```
- m: Some(0),
- ..Default::default()
- }),
+- **Download the dataset:**
- ..Default::default()
- })
- .await?;
+```python
-```
+from datasets import load_dataset
-```java
+dataset_name = ""tasksource/med""
-import io.qdrant.client.QdrantClient;
+dataset = load_dataset(dataset_name, split=""train"")
-import io.qdrant.client.QdrantGrpcClient;
+# We'll use the first 100 entries from this dataset and exclude some unused columns.
-import io.qdrant.client.grpc.Collections.CreateCollection;
+dataset = dataset.select(range(100)).remove_columns([""gold_label"", ""genre""])
-import io.qdrant.client.grpc.Collections.Distance;
+```
-import io.qdrant.client.grpc.Collections.HnswConfigDiff;
-import io.qdrant.client.grpc.Collections.VectorParams;
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+- **Convert the dataset into a Spark dataframe:**
-QdrantClient client =
+```python
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+dataset.to_parquet(""/dbfs/pq.pq"")
+dataset_df = spark.read.parquet(""file:/dbfs/pq.pq"")
+```
-client
- .createCollectionAsync(
- CreateCollection.newBuilder()
+### Vectorizing the data
- .setCollectionName(""{collection_name}"")
- .setVectorsConfig(
- VectorsConfig.newBuilder()
+In this section, we'll be generating both dense and sparse vectors for our rows using [FastEmbed](https://qdrant.github.io/fastembed/). We'll create a user-defined function (UDF) to handle this step.
- .setParams(
- VectorParams.newBuilder()
- .setSize(768)
+#### Creating the vectorization function
- .setDistance(Distance.Cosine)
- .build())
- .build())
+```python
- .setHnswConfig(HnswConfigDiff.newBuilder().setPayloadM(16).setM(0).build())
+from fastembed import TextEmbedding, SparseTextEmbedding
- .build())
- .get();
-```
+def vectorize(partition_data):
+ # Initialize dense and sparse models
+ dense_model = TextEmbedding(model_name=""BAAI/bge-small-en-v1.5"")
-```csharp
+ sparse_model = SparseTextEmbedding(model_name=""Qdrant/bm25"")
-using Qdrant.Client;
-using Qdrant.Client.Grpc;
+ for row in partition_data:
+ # Generate dense and sparse vectors
-var client = new QdrantClient(""localhost"", 6334);
+ dense_vector = next(dense_model.embed(row.sentence1))
+ sparse_vector = next(sparse_model.embed(row.sentence2))
-await client.CreateCollectionAsync(
- collectionName: ""{collection_name}"",
+ yield [
- vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
+ row.sentence1, # 1st column: original text
- hnswConfig: new HnswConfigDiff { PayloadM = 16, M = 0 }
+ row.sentence2, # 2nd column: original text
-);
+ dense_vector.tolist(), # 3rd column: dense vector
-```
+ sparse_vector.indices.tolist(), # 4th column: sparse vector indices
+ sparse_vector.values.tolist(), # 5th column: sparse vector values
+ ]
-3. Create keyword payload index for `group_id` field.
+```
-```http
+We're using the [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) model for dense embeddings and [BM25](https://huggingface.co/Qdrant/bm25) for sparse embeddings.
-PUT /collections/{collection_name}/index
-{
- ""field_name"": ""group_id"",
+#### Applying the UDF on our dataframe
- ""field_schema"": ""keyword""
-}
-```
+Next, let's apply our `vectorize` UDF on our Spark dataframe to generate embeddings.
```python
-client.create_payload_index(
-
- collection_name=""{collection_name}"",
+embeddings = dataset_df.rdd.mapPartitions(vectorize)
- field_name=""group_id"",
+```
- field_schema=models.PayloadSchemaType.KEYWORD,
-)
-```
+The `mapPartitions()` method returns a [Resilient Distributed Dataset (RDD)](https://www.databricks.com/glossary/what-is-rdd) which should then be converted back to a Spark dataframe.
-```typescript
+#### Building the new Spark dataframe with the vectorized data
-client.createPayloadIndex(""{collection_name}"", {
- field_name: ""group_id"",
- field_schema: ""keyword"",
+We'll now create a new Spark dataframe (`embeddings_df`) with the vectorized data using the specified schema.
-});
-```
+```python
+from pyspark.sql.types import StructType, StructField, StringType, ArrayType, FloatType, IntegerType
-```rust
-use qdrant_client::{client::QdrantClient, qdrant::FieldType};
+# Define the schema for the new dataframe
+schema = StructType([
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ StructField(""sentence1"", StringType()),
+ StructField(""sentence2"", StringType()),
+ StructField(""dense_vector"", ArrayType(FloatType())),
-client
+ StructField(""sparse_vector_indices"", ArrayType(IntegerType())),
- .create_field_index(
+ StructField(""sparse_vector_values"", ArrayType(FloatType()))
- ""{collection_name}"",
+])
- ""group_id"",
- FieldType::Keyword,
- None,
+# Create the new dataframe with the vectorized data
- None,
+embeddings_df = spark.createDataFrame(data=embeddings, schema=schema)
- )
+```
- .await?;
-```
+### Uploading the data to Qdrant
-```java
-import io.qdrant.client.QdrantClient;
+- **Create a Qdrant collection:**
-import io.qdrant.client.QdrantGrpcClient;
+ - [Follow the documentation](/documentation/concepts/collections/#create-a-collection) to create a collection with the appropriate configurations. Here's an example request to support both dense and sparse vectors:
-import io.qdrant.client.grpc.Collections.PayloadSchemaType;
+ ```json
-QdrantClient client =
+ PUT /collections/{collection_name}
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ {
+ ""vectors"": {
+ ""dense"": {
-client
+ ""size"": 384,
- .createPayloadIndexAsync(
+ ""distance"": ""Cosine""
- ""{collection_name}"", ""group_id"", PayloadSchsemaType.Keyword, null, null, null, null)
+ }
- .get();
+ },
-```
+ ""sparse_vectors"": {
+ ""sparse"": {}
+ }
-```csharp
+ }
-using Qdrant.Client;
+ ```
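+
+    Alternatively, here is a rough Python-client equivalent of the same request. The URL, API key, and collection name below are placeholders; replace them with your own values:
+
+    ```python
+    from qdrant_client import QdrantClient, models
+
+    client = QdrantClient(url=""<QDRANT_URL>"", api_key=""<QDRANT_API_KEY>"")
+
+    # One named dense vector (""dense"", 384-dim, cosine) and one named sparse vector (""sparse""),
+    # matching the vector names used in the Spark connector options below.
+    client.create_collection(
+        collection_name=""<COLLECTION_NAME>"",
+        vectors_config={""dense"": models.VectorParams(size=384, distance=models.Distance.COSINE)},
+        sparse_vectors_config={""sparse"": models.SparseVectorParams()},
+    )
+    ```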
-var client = new QdrantClient(""localhost"", 6334);
+- **Upload the dataframe to Qdrant:**
-await client.CreatePayloadIndexAsync(collectionName: ""{collection_name}"", fieldName: ""group_id"");
+```python
-```
+options = {
+ ""qdrant_url"": """",
+ ""api_key"": """",
-## Limitations
+ ""collection_name"": """",
+ ""vector_fields"": ""dense_vector"",
+ ""vector_names"": ""dense"",
-One downside to this approach is that global requests (without the `group_id` filter) will be slower since they will necessitate scanning all groups to identify the nearest neighbors.
-",documentation/guides/multiple-partitions.md
-"---
+ ""sparse_vector_value_fields"": ""sparse_vector_values"",
-title: Administration
+ ""sparse_vector_index_fields"": ""sparse_vector_indices"",
-weight: 10
+ ""sparse_vector_names"": ""sparse"",
-aliases:
+ ""schema"": embeddings_df.schema.json(),
- - ../administration
+}
----
+embeddings_df.write.format(""io.qdrant.spark.Qdrant"").options(**options).mode(
-# Administration
+ ""append""
+).save()
+```
-Qdrant exposes administration tools which enable to modify at runtime the behavior of a qdrant instance without changing its configuration manually.
+
-A locking API enables users to restrict the possible operations on a qdrant process.
-It is important to mention that:
+Make sure to replace the placeholder values with your actual Qdrant URL, API key, and collection name. If the `id_field` option is not specified, the Qdrant Spark connector generates random UUIDs for each point.
-- The configuration is not persistent therefore it is necessary to lock again following a restart.
-- Locking applies to a single node only. It is necessary to call lock on all the desired nodes in a distributed deployment setup.
+The command output you should see is similar to:
-Lock request sample:
+```console
+Command took 40.37 seconds -- by xxxxx90@xxxxxx.com at 4/17/2024, 12:13:28 PM on fastembed
-```http
+```
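+
+To verify the upload, you can run a quick count with the Python client. This is a small sketch with placeholder credentials; it should report roughly one point per row of `embeddings_df`:
+
+```python
+from qdrant_client import QdrantClient
+
+client = QdrantClient(url=""<QDRANT_URL>"", api_key=""<QDRANT_API_KEY>"")
+
+# Expect about 100 points, one per row of the vectorized dataframe.
+print(client.count(collection_name=""<COLLECTION_NAME>"", exact=True))
+```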
-POST /locks
-{
- ""error_message"": ""write is forbidden"",
+### Conclusion
- ""write"": true
-}
-```
+That wraps up our tutorial! Feel free to explore more functionalities and experiments with different models, parameters, and features available in Databricks, Spark, and Qdrant.
-Write flags enables/disables write lock.
+Happy data engineering!
+",documentation/send-data/databricks.md
+"---
-If the write lock is set to true, qdrant doesn't allow creating new collections or adding new data to the existing storage.
+title: How to Set Up Seamless Data Streaming with Kafka and Qdrant
-However, deletion operations or updates are not forbidden under the write lock.
+weight: 49
-This feature enables administrators to prevent a qdrant process from using more disk space while permitting users to search and delete unnecessary data.
+aliases:
+ - /examples/data-streaming-kafka-qdrant/
+---
-You can optionally provide the error message that should be used for error responses to users.
+# Set Up Data Streaming with Kafka via Confluent
-## Recovery mode
+**Author:** [M K Pavan Kumar](https://www.linkedin.com/in/kameshwara-pavan-kumar-mantha-91678b21/), research scholar at [IIITDM, Kurnool](https://iiitk.ac.in). Specialist in hallucination mitigation techniques and RAG methodologies.
-*Available as of v1.2.0*
+• [GitHub](https://github.com/pavanjava) • [Medium](https://medium.com/@manthapavankumar11)
-Recovery mode can help in situations where Qdrant fails to start repeatedly.
+## Introduction
-When starting in recovery mode, Qdrant only loads collection metadata to prevent
-going out of memory. This allows you to resolve out of memory situations, for
-example, by deleting a collection. After resolving Qdrant can be restarted
+This guide will walk you through the detailed steps of installing and setting up the [Qdrant Sink Connector](https://github.com/qdrant/qdrant-kafka), building the necessary infrastructure, and creating a practical playground application. By the end of this article, you will have a deep understanding of how to leverage this powerful integration to streamline your data workflows, ultimately enhancing the performance and capabilities of your data-driven real-time semantic search and RAG applications.
-normally to continue operation.
+In this example, original data will be sourced from Azure Blob Storage and MongoDB.
-In recovery mode, collection operations are limited to
-[deleting](../../concepts/collections/#delete-collection) a
-collection. That is because only collection metadata is loaded during recovery.
+![1.webp](/documentation/examples/data-streaming-kafka-qdrant/1.webp)
-To enable recovery mode with the Qdrant Docker image you must set the
+Figure 1: [Real time Change Data Capture (CDC)](https://www.confluent.io/learn/change-data-capture/) with Kafka and Qdrant.
-environment variable `QDRANT_ALLOW_RECOVERY_MODE=true`. The container will try
-to start normally first, and restarts in recovery mode if initialisation fails
-due to an out of memory error. This behavior is disabled by default.
+## The Architecture
-If using a Qdrant binary, recovery mode can be enabled by setting a recovery
+## Source Systems
-message in an environment variable, such as
-`QDRANT__STORAGE__RECOVERY_MODE=""My recovery message""`.
-",documentation/guides/administration.md
-"---
-title: Troubleshooting
+The architecture begins with the **source systems**, represented by MongoDB and Azure Blob Storage. These systems are vital for storing and managing raw data. MongoDB, a popular NoSQL database, is known for its flexibility in handling various data formats and its capability to scale horizontally. It is widely used for applications that require high performance and scalability. Azure Blob Storage, on the other hand, is Microsoft’s object storage solution for the cloud. It is designed for storing massive amounts of unstructured data, such as text or binary data. The data from these sources is extracted using **source connectors**, which are responsible for capturing changes in real-time and streaming them into Kafka.
-weight: 170
-aliases:
- - ../tutorials/common-errors
+## Kafka
----
+At the heart of this architecture lies **Kafka**, a distributed event streaming platform capable of handling trillions of events a day. Kafka acts as a central hub where data from various sources can be ingested, processed, and distributed to various downstream systems. Its fault-tolerant and scalable design ensures that data can be reliably transmitted and processed in real-time. Kafka’s capability to handle high-throughput, low-latency data streams makes it an ideal choice for real-time data processing and analytics. The use of **Confluent** enhances Kafka’s functionalities, providing additional tools and services for managing Kafka clusters and stream processing.
-# Solving common errors
+## Qdrant
-## Too many files open (OS error 24)
+The processed data is then routed to **Qdrant**, a highly scalable vector search engine designed for similarity searches. Qdrant excels at managing and searching through high-dimensional vector data, which is essential for applications involving machine learning and AI, such as recommendation systems, image recognition, and natural language processing. The **Qdrant Sink Connector** for Kafka plays a pivotal role here, enabling seamless integration between Kafka and Qdrant. This connector allows for the real-time ingestion of vector data into Qdrant, ensuring that the data is always up-to-date and ready for high-performance similarity searches.
-Each collection segment needs some files to be open. At some point you may encounter the following errors in your server log:
+## Integration and Pipeline Importance
-```text
-Error: Too many files open (OS error 24)
-```
+The integration of these components forms a powerful and efficient data streaming pipeline. The **Qdrant Sink Connector** ensures that the data flowing through Kafka is continuously ingested into Qdrant without any manual intervention. This real-time integration is crucial for applications that rely on the most current data for decision-making and analysis. By combining the strengths of MongoDB and Azure Blob Storage for data storage, Kafka for data streaming, and Qdrant for vector search, this pipeline provides a robust solution for managing and processing large volumes of data in real-time. The architecture’s scalability, fault-tolerance, and real-time processing capabilities are key to its effectiveness, making it a versatile solution for modern data-driven applications.
-In such a case you may need to increase the limit of the open files. It might be done, for example, while you launch the Docker container:
+## Installation of Confluent Kafka Platform
-```bash
+To install the Confluent Kafka Platform (self-managed locally), follow these 3 simple steps:
-docker run --ulimit nofile=10000:10000 qdrant/qdrant:latest
-```
+**Download and Extract the Distribution Files:**
-The command above will set both soft and hard limits to `10000`.
+- Visit [Confluent Installation Page](https://www.confluent.io/installation/).
+- Download the distribution files (tar, zip, etc.).
-If you are not using Docker, the following command will change the limit for the current user session:
+- Extract the downloaded file using:
```bash
-ulimit -n 10000
+tar -xvf confluent-.tar.gz
```
+or
+```bash
-Please note, the command should be executed before you run Qdrant server.
-",documentation/guides/common-errors.md
-"---
+unzip confluent-.zip
-title: Configuration
+```
-weight: 160
-aliases:
- - ../configuration
+**Configure Environment Variables:**
----
+```bash
-# Configuration
+# Set CONFLUENT_HOME to the installation directory:
+export CONFLUENT_HOME=/path/to/confluent-
-To change or correct Qdrant's behavior, default collection settings, and network interface parameters, you can use configuration files.
+# Add Confluent binaries to your PATH
+export PATH=$CONFLUENT_HOME/bin:$PATH
-The default configuration file is located at [config/config.yaml](https://github.com/qdrant/qdrant/blob/master/config/config.yaml).
+```
-To change the default configuration, add a new configuration file and specify
+**Run Confluent Platform Locally:**
-the path with `--config-path path/to/custom_config.yaml`. If running in
-production mode, you could also choose to overwrite `config/production.yaml`.
-See [ordering](#order-and-priority) for details on how configurations are
+```bash
-loaded.
+# Start the Confluent Platform services:
+confluent local start
+# Stop the Confluent Platform services:
-The [Installation](../installation) guide contains examples of how to set up Qdrant with a custom configuration for the different deployment methods.
+confluent local stop
+```
-## Order and priority
+## Installation of Qdrant
-*Effective as of v1.2.1*
+To install and run Qdrant (self-managed locally), you can use Docker, which simplifies the process. First, ensure you have Docker installed on your system. Then, you can pull the Qdrant image from Docker Hub and run it with the following commands:
-Multiple configurations may be loaded on startup. All of them are merged into a
-single effective configuration that is used by Qdrant.
+```bash
+docker pull qdrant/qdrant
+docker run -p 6334:6334 -p 6333:6333 qdrant/qdrant
-Configurations are loaded in the following order, if present:
+```
-1. Embedded base configuration ([source](https://github.com/qdrant/qdrant/blob/master/config/config.yaml))
+This will download the Qdrant image and start a Qdrant instance accessible at `http://localhost:6333`. For more detailed instructions and alternative installation methods, refer to the [Qdrant installation documentation](https://qdrant.tech/documentation/quick-start/).
-2. File `config/config.yaml`
-3. File `config/{RUN_MODE}.yaml` (such as `config/production.yaml`)
-4. File `config/local.yaml`
+## Installation of Qdrant-Kafka Sink Connector
-5. Config provided with `--config-path PATH` (if set)
-6. [Environment variables](#environment-variables)
+To install the Qdrant Kafka connector using [Confluent Hub](https://www.confluent.io/hub/), you can utilize the straightforward `confluent-hub install` command. This command simplifies the process by eliminating the need for manual configuration file manipulations. To install the Qdrant Kafka connector version 1.1.0, execute the following command in your terminal:
-This list is from least to most significant. Properties in later configurations
-will overwrite those loaded before it. For example, a property set with
+```bash
-`--config-path` will overwrite those in other files.
+ confluent-hub install qdrant/qdrant-kafka:1.1.0
+```
-Most of these files are included by default in the Docker container. But it is
-likely that they are absent on your local machine if you run the `qdrant` binary
+This command downloads and installs the specified connector directly from Confluent Hub into your Confluent Platform or Kafka Connect environment. The installation process ensures that all necessary dependencies are handled automatically, allowing for a seamless integration of the Qdrant Kafka connector with your existing setup. Once installed, the connector can be configured and managed using the Confluent Control Center or the Kafka Connect REST API, enabling efficient data streaming between Kafka and Qdrant without the need for intricate manual setup.
-manually.
+![2.webp](/documentation/examples/data-streaming-kafka-qdrant/2.webp)
-If file 2 or 3 are not found, a warning is shown on startup.
-If file 5 is provided but not found, an error is shown on startup.
+*Figure 2: Local Confluent platform showing the Source and Sink connectors after installation.*
-Other supported configuration file formats and extensions include: `.toml`, `.json`, `.ini`.
+Once the connector is installed, configure it as shown below. Keep in mind that your `key.converter` and `value.converter` settings are very important for Kafka to safely deliver the messages from the topic to Qdrant.
-## Environment variables
+```json
+{
-It is possible to set configuration properties using environment variables.
+ ""name"": ""QdrantSinkConnectorConnector_0"",
-Environment variables are always the most significant and cannot be overwritten
+ ""config"": {
-(see [ordering](#order-and-priority)).
+ ""value.converter.schemas.enable"": ""false"",
+ ""name"": ""QdrantSinkConnectorConnector_0"",
+ ""connector.class"": ""io.qdrant.kafka.QdrantSinkConnector"",
-All environment variables are prefixed with `QDRANT__` and are separated with
+ ""key.converter"": ""org.apache.kafka.connect.storage.StringConverter"",
-`__`.
+ ""value.converter"": ""org.apache.kafka.connect.json.JsonConverter"",
+ ""topics"": ""topic_62,qdrant_kafka.docs"",
+ ""errors.deadletterqueue.topic.name"": ""dead_queue"",
-These variables:
+ ""errors.deadletterqueue.topic.replication.factor"": ""1"",
+ ""qdrant.grpc.url"": ""http://localhost:6334"",
+ ""qdrant.api.key"": ""************""
-```bash
+ }
-QDRANT__LOG_LEVEL=INFO
+}
-QDRANT__SERVICE__HTTP_PORT=6333
+```
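+If you prefer the Kafka Connect REST API over the Control Center, the same configuration can be registered programmatically. The snippet below is a minimal sketch, assuming Connect exposes its REST API on the default port `8083` and that the `requests` package is installed; adjust the values to match your environment.
+
+```python
+import json
+
+import requests
+
+# Connector definition, mirroring the configuration shown above.
+connector = {
+    ""name"": ""QdrantSinkConnectorConnector_0"",
+    ""config"": {
+        ""connector.class"": ""io.qdrant.kafka.QdrantSinkConnector"",
+        ""key.converter"": ""org.apache.kafka.connect.storage.StringConverter"",
+        ""value.converter"": ""org.apache.kafka.connect.json.JsonConverter"",
+        ""value.converter.schemas.enable"": ""false"",
+        ""topics"": ""topic_62,qdrant_kafka.docs"",
+        ""errors.deadletterqueue.topic.name"": ""dead_queue"",
+        ""errors.deadletterqueue.topic.replication.factor"": ""1"",
+        ""qdrant.grpc.url"": ""http://localhost:6334"",
+        ""qdrant.api.key"": """",
+    },
+}
+
+# Register the connector with the Kafka Connect REST API.
+response = requests.post(
+    ""http://localhost:8083/connectors"",
+    headers={""Content-Type"": ""application/json""},
+    data=json.dumps(connector),
+)
+print(response.status_code, response.json())
+```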
-QDRANT__SERVICE__ENABLE_TLS=1
-QDRANT__TLS__CERT=./tls/cert.pem
-QDRANT__TLS__CERT_TTL=3600
+## Installation of MongoDB
-```
+For Kafka to connect to MongoDB as a source, your MongoDB instance should be running in `replicaSet` mode. Below is the `docker compose` file that will spin up a single-node `replicaSet` instance of MongoDB.
-result in this configuration:
+```yaml
-```yaml
+version: ""3.8""
-log_level: INFO
-service:
- http_port: 6333
+services:
- enable_tls: true
+ mongo1:
-tls:
+ image: mongo:7.0
- cert: ./tls/cert.pem
+ command: [""--replSet"", ""rs0"", ""--bind_ip_all"", ""--port"", ""27017""]
- cert_ttl: 3600
+ ports:
-```
+ - 27017:27017
+ healthcheck:
+ test: echo ""try { rs.status() } catch (err) { rs.initiate({_id:'rs0',members:[{_id:0,host:'host.docker.internal:27017'}]}) }"" | mongosh --port 27017 --quiet
-To run Qdrant locally with a different HTTP port you could use:
+ interval: 5s
+ timeout: 30s
+ start_period: 0s
-```bash
+ start_interval: 1s
-QDRANT__SERVICE__HTTP_PORT=1234 ./qdrant
+ retries: 30
-```
+ volumes:
+ - ""mongo1_data:/data/db""
+ - ""mongo1_config:/data/configdb""
-## Configuration file example
+volumes:
-```yaml
+ mongo1_data:
-log_level: INFO
+ mongo1_config:
+```
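+Once the container is up (for example with `docker compose up -d`), you can optionally verify that the single-node replica set is initialized before wiring up the connector. This is a quick sanity check using `pymongo`, assuming MongoDB is reachable on `127.0.0.1:27017`.
+
+```python
+from pymongo import MongoClient
+
+# Confirm the replica set ""rs0"" is initialized and has a PRIMARY member.
+client = MongoClient(""mongodb://127.0.0.1:27017/?replicaSet=rs0&directConnection=true"")
+status = client.admin.command(""replSetGetStatus"")
+print(status[""set""], [member[""stateStr""] for member in status[""members""]])
+# Expected output: rs0 ['PRIMARY']
+```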
-storage:
- # Where to store all the data
+Similarly, install and configure the MongoDB source connector as shown below.
- storage_path: ./storage
+```bash
- # Where to store snapshots
+confluent-hub install mongodb/kafka-connect-mongodb:latest
- snapshots_path: ./snapshots
+```
- # Where to store temporary files
+After installing the `MongoDB` connector, the connector configuration should look like this:
- # If null, temporary snapshot are stored in: storage/snapshots_temp/
- temp_path: null
+```json
+{
- # If true - point's payload will not be stored in memory.
+ ""name"": ""MongoSourceConnectorConnector_0"",
- # It will be read from the disk every time it is requested.
+ ""config"": {
- # This setting saves RAM by (slightly) increasing the response time.
+ ""connector.class"": ""com.mongodb.kafka.connect.MongoSourceConnector"",
- # Note: those payload values that are involved in filtering and are indexed - remain in RAM.
+ ""key.converter"": ""org.apache.kafka.connect.storage.StringConverter"",
- on_disk_payload: true
+ ""value.converter"": ""org.apache.kafka.connect.storage.StringConverter"",
+ ""connection.uri"": ""mongodb://127.0.0.1:27017/?replicaSet=rs0&directConnection=true"",
+ ""database"": ""qdrant_kafka"",
- # Maximum number of concurrent updates to shard replicas
+ ""collection"": ""docs"",
- # If `null` - maximum concurrency is used.
+ ""publish.full.document.only"": ""true"",
- update_concurrency: null
+ ""topic.namespace.map"": ""{\""*\"":\""qdrant_kafka.docs\""}"",
+ ""copy.existing"": ""true""
+ }
- # Write-ahead-log related configuration
+}
- wal:
+```
- # Size of a single WAL segment
- wal_capacity_mb: 32
+## Playground Application
- # Number of WAL segments to create ahead of actual data requirement
- wal_segments_ahead: 0
+Now that the infrastructure setup is complete, it's time to create a simple application and check our setup. The objective of the application is to insert data into MongoDB, which will eventually be ingested into Qdrant via [Change Data Capture (CDC)](https://www.confluent.io/learn/change-data-capture/).
- # Normal node - receives all updates and answers all queries
+`requirements.txt`
- node_type: ""Normal""
+```text
- # Listener node - receives all updates, but does not answer search/read queries
+fastembed==0.3.1
- # Useful for setting up a dedicated backup node
+pymongo==4.8.0
- # node_type: ""Listener""
+qdrant_client==1.10.1
+```
- performance:
- # Number of parallel threads used for search operations. If 0 - auto selection.
+`project_root_folder/main.py`
- max_search_threads: 0
- # Max total number of threads, which can be used for running optimization processes across all collections.
- # Note: Each optimization thread will also use `max_indexing_threads` for index building.
+This is just sample code. Nevertheless, it can be extended to millions of operations based on your use case.
- # So total number of threads used for optimization will be `max_optimization_threads * max_indexing_threads`
- max_optimization_threads: 1
+```python
+from pymongo import MongoClient
- # Prevent DDoS of too many concurrent updates in distributed mode.
+from utils.app_utils import create_qdrant_collection
- # One external update usually triggers multiple internal updates, which breaks internal
+from fastembed import TextEmbedding
- # timings. For example, the health check timing and consensus timing.
- # If null - auto selection.
- update_rate_limit: null
+collection_name: str = 'test'
+embed_model_name: str = 'snowflake/snowflake-arctic-embed-s'
+```
- optimizers:
+```python
- # The minimal fraction of deleted vectors in a segment, required to perform segment optimization
+# Step 0: create qdrant_collection
- deleted_threshold: 0.2
+create_qdrant_collection(collection_name=collection_name, embed_model=embed_model_name)
- # The minimal number of vectors in a segment, required to perform segment optimization
+# Step 1: Connect to MongoDB
- vacuum_min_vector_number: 1000
+client = MongoClient('mongodb://127.0.0.1:27017/?replicaSet=rs0&directConnection=true')
- # Target amount of segments optimizer will try to keep.
+# Step 2: Select Database
- # Real amount of segments may vary depending on multiple parameters:
+db = client['qdrant_kafka']
- # - Amount of stored points
- # - Current write RPS
- #
+# Step 3: Select Collection
- # It is recommended to select default number of segments as a factor of the number of search threads,
+collection = db['docs']
- # so that each segment would be handled evenly by one of the threads.
- # If `default_segment_number = 0`, will be automatically selected by the number of available CPUs
- default_segment_number: 0
+# Step 4: Create a Document to Insert
- # Do not create segments larger this size (in KiloBytes).
+description = ""qdrant is a high available vector search engine""
- # Large segments might require disproportionately long indexation times,
+embedding_model = TextEmbedding(model_name=embed_model_name)
- # therefore it makes sense to limit the size of segments.
+vector = next(embedding_model.embed(documents=description)).tolist()
- #
+document = {
- # If indexation speed have more priority for your - make this parameter lower.
+ ""collection_name"": collection_name,
- # If search speed is more important - make this parameter higher.
+ ""id"": 1,
- # Note: 1Kb = 1 vector of size 256
+ ""vector"": vector,
- # If not set, will be automatically selected considering the number of available CPUs.
+ ""payload"": {
- max_segment_size_kb: null
+ ""name"": ""qdrant"",
+ ""description"": description,
+ ""url"": ""https://qdrant.tech/documentation""
- # Maximum size (in KiloBytes) of vectors to store in-memory per segment.
+ }
- # Segments larger than this threshold will be stored as read-only memmaped file.
+}
- # To enable memmap storage, lower the threshold
- # Note: 1Kb = 1 vector of size 256
- # To explicitly disable mmap optimization, set to `0`.
+# Step 5: Insert the Document into the Collection
- # If not set, will be disabled by default.
+result = collection.insert_one(document)
- memmap_threshold_kb: null
+# Step 6: Print the Inserted Document's ID
- # Maximum size (in KiloBytes) of vectors allowed for plain index.
+print(""Inserted document ID:"", result.inserted_id)
- # Default value based on https://github.com/google-research/google-research/blob/master/scann/docs/algorithms.md
+```
- # Note: 1Kb = 1 vector of size 256
- # To explicitly disable vector indexing, set to `0`.
- # If not set, the default value will be used.
+`project_root_folder/utils/app_utils.py`
- indexing_threshold_kb: 20000
+```python
- # Interval between forced flushes.
+from qdrant_client import QdrantClient, models
- flush_interval_sec: 5
+client = QdrantClient(url=""http://localhost:6333"", api_key="""")
- # Max number of threads, which can be used for optimization per collection.
+dimension_dict = {""snowflake/snowflake-arctic-embed-s"": 384}
- # Note: Each optimization thread will also use `max_indexing_threads` for index building.
- # So total number of threads used for optimization will be `max_optimization_threads * max_indexing_threads`
- # If `max_optimization_threads = 0`, optimization will be disabled.
+def create_qdrant_collection(collection_name: str, embed_model: str):
- max_optimization_threads: 1
+ if not client.collection_exists(collection_name=collection_name):
- # Default parameters of HNSW Index. Could be overridden for each collection or named vector individually
+ client.create_collection(
- hnsw_index:
+ collection_name=collection_name,
- # Number of edges per node in the index graph. Larger the value - more accurate the search, more space required.
+ vectors_config=models.VectorParams(size=dimension_dict.get(embed_model), distance=models.Distance.COSINE)
- m: 16
+ )
- # Number of neighbours to consider during the index building. Larger the value - more accurate the search, more time required to build index.
+```
- ef_construct: 100
- # Minimal size (in KiloBytes) of vectors for additional payload-based indexing.
- # If payload chunk is smaller than `full_scan_threshold_kb` additional indexing won't be used -
+Before we run the application, below is the state of MongoDB and Qdrant databases.
- # in this case full-scan search should be preferred by query planner and additional indexing is not required.
- # Note: 1Kb = 1 vector of size 256
- full_scan_threshold_kb: 10000
+![3.webp](/documentation/examples/data-streaming-kafka-qdrant/3.webp)
- # Number of parallel threads used for background index building. If 0 - auto selection.
- max_indexing_threads: 0
- # Store HNSW index on disk. If set to false, index will be stored in RAM. Default: false
+Figure 3: Initial state: no collection named `test` in Qdrant and no data in the `docs` collection of MongoDB.
- on_disk: false
- # Custom M param for hnsw graph built for payload index. If not set, default M will be used.
- payload_m: null
+Once you run the code, the data goes into MongoDB, the CDC pipeline is triggered, and eventually Qdrant receives this data.
+![4.webp](/documentation/examples/data-streaming-kafka-qdrant/4.webp)
-service:
+Figure 4: The test Qdrant collection is created automatically.
- # Maximum size of POST data in a single request in megabytes
- max_request_size_mb: 32
+![5.webp](/documentation/examples/data-streaming-kafka-qdrant/5.webp)
- # Number of parallel workers used for serving the api. If 0 - equal to the number of available cores.
+Figure 5: Data is inserted into both MongoDB and Qdrant.
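+To double-check the result from the Qdrant side, you can also query the collection directly. Below is a minimal sketch, assuming Qdrant is reachable on `localhost:6333` and the collection is named `test` as above.
+
+```python
+from qdrant_client import QdrantClient
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# After the CDC pipeline has run, the `test` collection should contain the inserted point.
+print(client.count(collection_name=""test""))
+records, _ = client.scroll(collection_name=""test"", limit=1, with_payload=True)
+print(records)
+```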
- # If missing - Same as storage.max_search_threads
- max_workers: 0
+## Conclusion
- # Host to bind the service on
- host: 0.0.0.0
+In conclusion, the integration of **Kafka** with **Qdrant** using the **Qdrant Sink Connector** provides a seamless and efficient solution for real-time data streaming and processing. This setup not only enhances the capabilities of your data pipeline but also ensures that high-dimensional vector data is continuously indexed and readily available for similarity searches. By following the installation and setup guide, you can easily establish a robust data flow from your **source systems** like **MongoDB** and **Azure Blob Storage**, through **Kafka**, and into **Qdrant**. This architecture empowers modern applications to leverage real-time data insights and advanced search capabilities, paving the way for innovative data-driven solutions.",documentation/send-data/data-streaming-kafka-qdrant.md
+"---
+title: Send Data to Qdrant
+weight: 18
- # HTTP(S) port to bind the service on
+---
- http_port: 6333
+## How to Send Your Data to a Qdrant Cluster
- # gRPC port to bind the service on.
- # If `null` - gRPC is disabled. Default: null
- # Comment to disable gRPC:
+| Example | Description | Stack |
- grpc_port: 6334
+|---------------------------------------------------------------------------------|-------------------------------------------------------------------|---------------------------------------------|
+| [Pinecone to Qdrant Data Transfer](https://githubtocolab.com/qdrant/examples/blob/master/data-migration/from-pinecone-to-qdrant.ipynb) | Migrate your vector data from Pinecone to Qdrant. | Qdrant, Vector-io |
+| [Stream Data to Qdrant with Kafka](../send-data/data-streaming-kafka-qdrant/) | Use Confluent to Stream Data to Qdrant via Managed Kafka. | Qdrant, Kafka |
- # Enable CORS headers in REST API.
+| [Qdrant on Databricks](../send-data/databricks/) | Learn how to use Qdrant on Databricks using the Spark connector | Qdrant, Databricks, Apache Spark |
- # If enabled, browsers would be allowed to query REST endpoints regardless of query origin.
+| [Qdrant with Airflow and Astronomer](../send-data/qdrant-airflow-astronomer/) | Build a semantic querying system using Airflow and Astronomer | Qdrant, Airflow, Astronomer |",documentation/send-data/_index.md
+"---
- # More info: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
+title: Snowflake Models
- # Default: true
+weight: 2900
- enable_cors: true
+---
- # Enable HTTPS for the REST and gRPC API
+# Snowflake
- enable_tls: false
+Qdrant supports working with [Snowflake](https://www.snowflake.com/blog/introducing-snowflake-arctic-embed-snowflakes-state-of-the-art-text-embedding-family-of-models/) text embedding models. You can find all the available models on [HuggingFace](https://huggingface.co/Snowflake).
- # Check user HTTPS client certificate against CA file specified in tls config
- verify_https_client_certificate: false
+### Setting up the Qdrant and Snowflake models
- # Set an api-key.
- # If set, all requests must include a header with the api-key.
+```python
- # example header: `api-key: `
+from qdrant_client import QdrantClient
- #
+from fastembed import TextEmbedding
- # If you enable this you should also enable TLS.
- # (Either above or via an external service like nginx.)
- # Sending an api-key over an unencrypted channel is insecure.
+qclient = QdrantClient("":memory:"")
- #
+embedding_model = TextEmbedding(""snowflake/snowflake-arctic-embed-s"")
- # Uncomment to enable.
- # api_key: your_secret_api_key_here
-
+texts = [
- # Set an api-key for read-only operations.
+ ""Qdrant is the best vector search engine!"",
- # If set, all requests must include a header with the api-key.
+ ""Loved by Enterprises and everyone building for low latency, high performance, and scale."",
- # example header: `api-key: `
+]
- #
+```
- # If you enable this you should also enable TLS.
- # (Either above or via an external service like nginx.)
- # Sending an api-key over an unencrypted channel is insecure.
+```typescript
- #
+import {QdrantClient} from '@qdrant/js-client-rest';
- # Uncomment to enable.
+import { pipeline } from '@xenova/transformers';
- # read_only_api_key: your_secret_read_only_api_key_here
+const client = new QdrantClient({ url: 'http://localhost:6333' });
-cluster:
- # Use `enabled: true` to run Qdrant in distributed deployment mode
- enabled: false
+const extractor = await pipeline('feature-extraction', 'Snowflake/snowflake-arctic-embed-s');
- # Configuration of the inter-cluster communication
+const texts = [
- p2p:
+ ""Qdrant is the best vector search engine!"",
- # Port for internal communication between peers
+ ""Loved by Enterprises and everyone building for low latency, high performance, and scale."",
- port: 6335
+]
+```
- # Use TLS for communication between peers
- enable_tls: false
+The following example shows how to embed documents with the [`snowflake-arctic-embed-s`](https://huggingface.co/Snowflake/snowflake-arctic-embed-s) model that generates sentence embeddings of size 384.
- # Configuration related to distributed consensus algorithm
+### Embedding documents
- consensus:
- # How frequently peers should ping each other.
- # Setting this parameter to lower value will allow consensus
+```python
- # to detect disconnected nodes earlier, but too frequent
+embeddings = embedding_model.embed(texts)
- # tick period may create significant network and CPU overhead.
+```
- # We encourage you NOT to change this parameter unless you know what you are doing.
- tick_period_ms: 100
+```typescript
+const embeddings = await extractor(texts, { normalize: true, pooling: 'cls' });
+```
-# Set to true to prevent service from sending usage statistics to the developers.
-# Read more: https://qdrant.tech/documentation/guides/telemetry
+### Converting the model outputs to Qdrant points
-telemetry_disabled: false
+```python
+from qdrant_client.models import PointStruct
-# TLS configuration.
-# Required if either service.enable_tls or cluster.p2p.enable_tls is true.
+points = [
-tls:
+ PointStruct(
- # Server certificate chain file
+ id=idx,
- cert: ./tls/cert.pem
+ vector=embedding,
+ payload={""text"": text},
+ )
- # Server private key file
+ for idx, (embedding, text) in enumerate(zip(embeddings, texts))
- key: ./tls/key.pem
+]
+```
- # Certificate authority certificate file.
- # This certificate will be used to validate the certificates
+```typescript
- # presented by other nodes during inter-cluster communication.
+let points = embeddings.tolist().map((embedding, i) => {
- #
+ return {
- # If verify_https_client_certificate is true, it will verify
+ id: i,
- # HTTPS client certificate
+ vector: embedding,
- #
+ payload: {
- # Required if cluster.p2p.enable_tls is true.
+ text: texts[i]
- ca_cert: ./tls/cacert.pem
+ }
+ }
+});
- # TTL in seconds to reload certificate from disk, useful for certificate rotations.
+```
- # Only works for HTTPS endpoints. Does not support gRPC (and intra-cluster communication).
- # If `null` - TTL is disabled.
- cert_ttl: 3600
+### Creating a collection to insert the documents
-```
+```python
-## Validation
+from qdrant_client.models import VectorParams, Distance
-*Available since v1.1.1*
+COLLECTION_NAME = ""example_collection""
-The configuration is validated on startup. If a configuration is loaded but
+qclient.create_collection(
-validation fails, a warning is logged. E.g.:
+ COLLECTION_NAME,
+ vectors_config=VectorParams(
+ size=384,
-```text
+ distance=Distance.COSINE,
-WARN Settings configuration file has validation errors:
+ ),
-WARN - storage.optimizers.memmap_threshold: value 123 invalid, must be 1000 or larger
+)
-WARN - storage.hnsw_index.m: value 1 invalid, must be from 4 to 10000
+qclient.upsert(COLLECTION_NAME, points)
```
-The server will continue to operate. Any validation errors should be fixed as
+```typescript
-soon as possible though to prevent problematic behavior.",documentation/guides/configuration.md
-"---
+const COLLECTION_NAME = ""example_collection""
-title: Optimize Resources
-weight: 11
-aliases:
+await client.createCollection(COLLECTION_NAME, {
- - ../tutorials/optimize
+ vectors: {
----
+ size: 384,
+ distance: 'Cosine',
+ }
-# Optimize Qdrant
+});
-Different use cases have different requirements for balancing between memory, speed, and precision.
+await client.upsert(COLLECTION_NAME, {
-Qdrant is designed to be flexible and customizable so you can tune it to your needs.
+ wait: true,
+ points
+});
-![Trafeoff](/docs/tradeoff.png)
+```
-Let's look deeper into each of those possible optimization scenarios.
+### Searching for documents with Qdrant
-## Prefer low memory footprint with high speed search
+Once the documents are added, you can search for the most relevant documents.
-The main way to achieve high speed search with low memory footprint is to keep vectors on disk while at the same time minimizing the number of disk reads.
+```python
+query_embedding = next(embedding_model.query_embed(""What is the best to use for vector search scaling?""))
-Vector quantization is one way to achieve this. Quantization converts vectors into a more compact representation, which can be stored in memory and used for search. With smaller vectors you can cache more in RAM and reduce the number of disk reads.
+qclient.search(
+ collection_name=COLLECTION_NAME,
-To configure in-memory quantization, with on-disk original vectors, you need to create a collection with the following configuration:
+ query_vector=query_embedding,
+)
+```
-```http
-PUT /collections/{collection_name}
-{
+```typescript
- ""vectors"": {
+const query_embedding = await extractor(""What is the best to use for vector search scaling?"", {
- ""size"": 768,
+ normalize: true,
- ""distance"": ""Cosine""
+ pooling: 'cls'
- },
+});
- ""optimizers_config"": {
- ""memmap_threshold"": 20000
- },
+await client.search(COLLECTION_NAME, {
- ""quantization_config"": {
+ vector: query_embedding.tolist()[0],
- ""scalar"": {
+});
- ""type"": ""int8"",
+```
+",documentation/embeddings/snowflake.md
+"
- ""always_ram"": true
+---
- }
+title: Watsonx
- }
+weight: 3000
-}
+aliases:
-```
+ - /documentation/examples/watsonx-search/
+ - /documentation/tutorials/watsonx-search/
+ - /documentation/integrations/watsonx/
-```python
+---
-from qdrant_client import QdrantClient
-from qdrant_client.http import models
+# Using Watsonx with Qdrant
-client = QdrantClient(""localhost"", port=6333)
+Watsonx is IBM's platform for AI embeddings, focusing on enterprise-level text and data analytics. These embeddings are suitable for high-precision vector searches in Qdrant.
-client.create_collection(
- collection_name=""{collection_name}"",
+## Installation
- vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
- optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
- quantization_config=models.ScalarQuantization(
+You can install the required package using the following pip command:
- scalar=models.ScalarQuantizationConfig(
- type=models.ScalarType.INT8,
- always_ram=True,
+```bash
- ),
+pip install watsonx
- ),
+```
-)
-```
+## Code Example
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+```python
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+import qdrant_client
+from qdrant_client.models import Batch
+from watsonx import Watsonx
-client.createCollection(""{collection_name}"", {
- vectors: {
- size: 768,
+# Initialize Watsonx AI model
- distance: ""Cosine"",
+model = Watsonx(""watsonx-model"")
- },
- optimizers_config: {
- memmap_threshold: 20000,
+# Generate embeddings for enterprise data
- },
+text = ""Watsonx provides enterprise-level NLP solutions.""
- quantization_config: {
+embeddings = model.embed(text)
- scalar: {
- type: ""int8"",
- always_ram: true,
+# Initialize Qdrant client
- },
+qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333)
- },
-});
-```
+# Upsert the embedding into Qdrant
+qdrant_client.upsert(
+ collection_name=""EnterpriseData"",
-```rust
+ points=Batch(
-use qdrant_client::{
+ ids=[1],
- client::QdrantClient,
+ vectors=[embeddings],
- qdrant::{
+ )
- quantization_config::Quantization, vectors_config::Config, CreateCollection, Distance,
+)
- OptimizersConfigDiff, QuantizationConfig, QuantizationType, ScalarQuantization,
- VectorParams, VectorsConfig,
- },
+```
+",documentation/embeddings/watsonx.md
+"---
-};
+title: Instruct
+weight: 1800
+---
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+# Using Instruct with Qdrant
-client
- .create_collection(&CreateCollection {
- collection_name: ""{collection_name}"".to_string(),
+Instruct is a specialized provider offering detailed embeddings for instructional content, which can be effectively used with Qdrant. With Instruct, every text input is embedded together with instructions explaining the use case (e.g., task and domain descriptions). Unlike the more specialized encoders from prior work, INSTRUCTOR is a single embedder that can generate text embeddings tailored to different downstream tasks and domains, without any further training.
- vectors_config: Some(VectorsConfig {
- config: Some(Config::Params(VectorParams {
- size: 768,
+## Installation
- distance: Distance::Cosine.into(),
- ..Default::default()
- })),
+```bash
- }),
+pip install instruct
- optimizers_config: Some(OptimizersConfigDiff {
+```
- memmap_threshold: Some(20000),
- ..Default::default()
- }),
+Below is an example of how to obtain embeddings using Instruct's API and store them in a Qdrant collection:
- quantization_config: Some(QuantizationConfig {
- quantization: Some(Quantization::Scalar(ScalarQuantization {
- r#type: QuantizationType::Int8.into(),
+```python
- always_ram: Some(true),
+import qdrant_client
- ..Default::default()
+from qdrant_client.models import Batch
- })),
+from instruct import Instruct
- }),
- ..Default::default()
- })
+# Initialize Instruct model
- .await?;
+model = Instruct(""instruct-base"")
-```
+# Generate embeddings for instructional content
-```java
+text = ""Instruct provides detailed embeddings for learning content.""
-import io.qdrant.client.QdrantClient;
+embeddings = model.embed(text)
-import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Collections.CreateCollection;
-import io.qdrant.client.grpc.Collections.Distance;
+# Initialize Qdrant client
-import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
+qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333)
-import io.qdrant.client.grpc.Collections.QuantizationConfig;
-import io.qdrant.client.grpc.Collections.QuantizationType;
-import io.qdrant.client.grpc.Collections.ScalarQuantization;
+# Upsert the embedding into Qdrant
-import io.qdrant.client.grpc.Collections.VectorParams;
+qdrant_client.upsert(
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+ collection_name=""LearningContent"",
+ points=Batch(
+ ids=[1],
-QdrantClient client =
+ vectors=[embeddings],
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ )
+)
-client
- .createCollectionAsync(
+```
+",documentation/embeddings/instruct.md
+"---
- CreateCollection.newBuilder()
+title: GPT4All
- .setCollectionName(""{collection_name}"")
+weight: 1700
- .setVectorsConfig(
+---
- VectorsConfig.newBuilder()
- .setParams(
- VectorParams.newBuilder()
+# Using GPT4All with Qdrant
- .setSize(768)
- .setDistance(Distance.Cosine)
- .build())
+GPT4All offers a range of large language models that can be fine-tuned for various applications. It runs LLMs privately on everyday desktops and laptops.
- .build())
- .setOptimizersConfig(
- OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build())
+No API calls or GPUs required - you can just download the application and get started. Use GPT4All in Python to program with LLMs implemented with the llama.cpp backend and Nomic's C backend.
- .setQuantizationConfig(
- QuantizationConfig.newBuilder()
- .setScalar(
+## Installation
- ScalarQuantization.newBuilder()
- .setType(QuantizationType.Int8)
- .setAlwaysRam(true)
+You can install the required package using the following pip command:
- .build())
- .build())
- .build())
+```bash
- .get();
+pip install gpt4all
```
-```csharp
+Here is how you might connect GPT4All with Qdrant:
-using Qdrant.Client;
-using Qdrant.Client.Grpc;
+```python
+import qdrant_client
-var client = new QdrantClient(""localhost"", 6334);
+from qdrant_client.models import Batch
+from gpt4all import GPT4All
-await client.CreateCollectionAsync(
- collectionName: ""{collection_name}"",
+# Initialize GPT4All model
- vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
+model = GPT4All(""gpt4all-lora-quantized"")
- optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 },
- quantizationConfig: new QuantizationConfig
- {
+# Generate embeddings for a text
- Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true }
+text = ""GPT4All enables open-source AI applications.""
- }
+embeddings = model.embed(text)
-);
-```
+# Initialize Qdrant client
+qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333)
-`mmmap_threshold` will ensure that vectors will be stored on disk, while `always_ram` will ensure that quantized vectors will be stored in RAM.
+# Upsert the embedding into Qdrant
-Optionally, you can disable rescoring with search `params`, which will reduce the number of disk reads even further, but potentially slightly decrease the precision.
+qdrant_client.upsert(
+ collection_name=""OpenSourceAI"",
+ points=Batch(
-```http
+ ids=[1],
-POST /collections/{collection_name}/points/search
+ vectors=[embeddings],
-{
+ )
- ""params"": {
+)
- ""quantization"": {
- ""rescore"": false
- }
+```
- },
- ""vector"": [0.2, 0.1, 0.9, 0.7],
+",documentation/embeddings/gpt4all.md
+"---
- ""limit"": 10
+title: Voyage AI
-}
+weight: 3200
-```
+---
-```python
+# Voyage AI
-from qdrant_client import QdrantClient
-from qdrant_client.http import models
+Qdrant supports working with [Voyage AI](https://voyageai.com/) embeddings. The supported models' list can be found [here](https://docs.voyageai.com/docs/embeddings).
-client = QdrantClient(""localhost"", port=6333)
+You can generate an API key from the [Voyage AI dashboard]() to authenticate the requests.
-client.search(
- collection_name=""{collection_name}"",
+### Setting up the Qdrant and Voyage clients
- query_vector=[0.2, 0.1, 0.9, 0.7],
- search_params=models.SearchParams(
- quantization=models.QuantizationSearchParams(rescore=False)
+```python
- ),
+from qdrant_client import QdrantClient
-)
+import voyageai
-```
+VOYAGE_API_KEY = """"
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+qclient = QdrantClient("":memory:"")
+vclient = voyageai.Client(api_key=VOYAGE_API_KEY)
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+texts = [
-client.search(""{collection_name}"", {
+ ""Qdrant is the best vector search engine!"",
- vector: [0.2, 0.1, 0.9, 0.7],
+ ""Loved by Enterprises and everyone building for low latency, high performance, and scale."",
- params: {
+]
- quantization: {
+```
- rescore: false,
- },
- },
+```typescript
-});
+import {QdrantClient} from '@qdrant/js-client-rest';
-```
+const VOYAGEAI_BASE_URL = ""https://api.voyageai.com/v1/embeddings""
-```rust
+const VOYAGEAI_API_KEY = """"
-use qdrant_client::{
- client::QdrantClient,
- qdrant::{QuantizationSearchParams, SearchParams, SearchPoints},
+const client = new QdrantClient({ url: 'http://localhost:6333' });
-};
+const headers = {
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ ""Authorization"": ""Bearer "" + VOYAGEAI_API_KEY,
+ ""Content-Type"": ""application/json""
+}
-client
- .search_points(&SearchPoints {
- collection_name: ""{collection_name}"".to_string(),
+const texts = [
- vector: vec![0.2, 0.1, 0.9, 0.7],
+ ""Qdrant is the best vector search engine!"",
- params: Some(SearchParams {
+ ""Loved by Enterprises and everyone building for low latency, high performance, and scale."",
- quantization: Some(QuantizationSearchParams {
+]
- rescore: Some(false),
+```
- ..Default::default()
- }),
- ..Default::default()
+The following example shows how to embed documents with the [`voyage-large-2`](https://docs.voyageai.com/docs/embeddings#model-choices) model that generates sentence embeddings of size 1536.
- }),
- limit: 3,
- ..Default::default()
+### Embedding documents
- })
- .await?;
+
+```python
+
+response = vclient.embed(texts, model=""voyage-large-2"", input_type=""document"")
```
-```java
+```typescript
-import java.util.List;
+let body = {
+ ""input"": texts,
+ ""model"": ""voyage-large-2"",
-import io.qdrant.client.QdrantClient;
+ ""input_type"": ""document"",
-import io.qdrant.client.QdrantGrpcClient;
+}
-import io.qdrant.client.grpc.Points.QuantizationSearchParams;
-import io.qdrant.client.grpc.Points.SearchParams;
-import io.qdrant.client.grpc.Points.SearchPoints;
+let response = await fetch(VOYAGEAI_BASE_URL, {
-QdrantClient client =
+ method: ""POST"",
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ body: JSON.stringify(body),
+ headers
+});
-client
- .searchAsync(
- SearchPoints.newBuilder()
+let response_body = await response.json();
- .setCollectionName(""{collection_name}"")
+```
- .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
- .setParams(
- SearchParams.newBuilder()
+### Converting the model outputs to Qdrant points
- .setQuantization(
- QuantizationSearchParams.newBuilder().setRescore(false).build())
- .build())
+```python
- .setLimit(3)
+from qdrant_client.models import PointStruct
- .build())
- .get();
-```
+points = [
+ PointStruct(
+ id=idx,
-```csharp
+ vector=embedding,
-using Qdrant.Client;
+ payload={""text"": text},
-using Qdrant.Client.Grpc;
+ )
+ for idx, (embedding, text) in enumerate(zip(response.embeddings, texts))
+]
-var client = new QdrantClient(""localhost"", 6334);
+```
-await client.SearchAsync(
+```typescript
- collectionName: ""{collection_name}"",
+let points = response_body.data.map((data, i) => {
- vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
+ return {
- searchParams: new SearchParams
+ id: i,
- {
+ vector: data.embedding,
- Quantization = new QuantizationSearchParams { Rescore = false }
+ payload: {
- },
+ text: texts[i]
- limit: 3
+ }
-);
+ }
+
+});
```
-## Prefer high precision with low memory footprint
+### Creating a collection to insert the documents
-In case you need high precision, but don't have enough RAM to store vectors in memory, you can enable on-disk vectors and HNSW index.
+```python
+from qdrant_client.models import VectorParams, Distance
-```http
-PUT /collections/{collection_name}
+COLLECTION_NAME = ""example_collection""
-{
- ""vectors"": {
- ""size"": 768,
+qclient.create_collection(
- ""distance"": ""Cosine""
+ COLLECTION_NAME,
- },
+ vectors_config=VectorParams(
- ""optimizers_config"": {
+ size=1536,
- ""memmap_threshold"": 20000
+ distance=Distance.COSINE,
- },
+ ),
- ""hnsw_config"": {
+)
- ""on_disk"": true
+qclient.upsert(COLLECTION_NAME, points)
- }
+```
-}
-```
+```typescript
+const COLLECTION_NAME = ""example_collection""
-```python
-from qdrant_client import QdrantClient, models
+await client.createCollection(COLLECTION_NAME, {
+ vectors: {
-client = QdrantClient(""localhost"", port=6333)
+ size: 1536,
+ distance: 'Cosine',
+ }
-client.create_collection(
+});
- collection_name=""{collection_name}"",
- vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
- optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
+await client.upsert(COLLECTION_NAME, {
- hnsw_config=models.HnswConfigDiff(on_disk=True),
+ wait: true,
-)
+ points
-```
+});
+```
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+### Searching for documents with Qdrant
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+Once the documents are added, you can search for the most relevant documents.
-client.createCollection(""{collection_name}"", {
+```python
- vectors: {
+response = vclient.embed(
- size: 768,
+ [""What is the best to use for vector search scaling?""],
- distance: ""Cosine"",
+ model=""voyage-large-2"",
- },
+ input_type=""query"",
- optimizers_config: {
+)
- memmap_threshold: 20000,
- },
- hnsw_config: {
+qclient.search(
- on_disk: true,
+ collection_name=COLLECTION_NAME,
- },
+ query_vector=response.embeddings[0],
-});
+)
```
-```rust
+```typescript
-use qdrant_client::{
+body = {
- client::QdrantClient,
+ ""input"": [""What is the best to use for vector search scaling?""],
- qdrant::{
+ ""model"": ""voyage-large-2"",
- vectors_config::Config, CreateCollection, Distance, HnswConfigDiff, OptimizersConfigDiff,
+ ""input_type"": ""query"",
- VectorParams, VectorsConfig,
+};
- },
-};
+response = await fetch(VOYAGEAI_BASE_URL, {
+ method: ""POST"",
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ body: JSON.stringify(body),
+ headers
+});
-client
- .create_collection(&CreateCollection {
- collection_name: ""{collection_name}"".to_string(),
+response_body = await response.json();
- vectors_config: Some(VectorsConfig {
- config: Some(Config::Params(VectorParams {
- size: 768,
+await client.search(COLLECTION_NAME, {
- distance: Distance::Cosine.into(),
+ vector: response_body.data[0].embedding,
- ..Default::default()
+});
- })),
+```
+",documentation/embeddings/voyage.md
+"---
- }),
+title: Together AI
- optimizers_config: Some(OptimizersConfigDiff {
+weight: 3000
- memmap_threshold: Some(20000),
+---
- ..Default::default()
- }),
- hnsw_config: Some(HnswConfigDiff {
+# Using Together AI with Qdrant
- on_disk: Some(true),
- ..Default::default()
- }),
+Together AI focuses on collaborative AI embeddings that enhance multi-user search scenarios when integrated with Qdrant.
- ..Default::default()
- })
- .await?;
+## Installation
-```
+You can install the required package using the following pip command:
-```java
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
+```bash
-import io.qdrant.client.grpc.Collections.CreateCollection;
+pip install togetherai
-import io.qdrant.client.grpc.Collections.Distance;
+```
-import io.qdrant.client.grpc.Collections.HnswConfigDiff;
+## Integration Example
-import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
-import io.qdrant.client.grpc.Collections.VectorParams;
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+```python
+import qdrant_client
+from qdrant_client.models import Batch
-QdrantClient client =
+from togetherai import TogetherAI
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+# Initialize Together AI model
-client
+model = TogetherAI(""togetherai-collab"")
- .createCollectionAsync(
- CreateCollection.newBuilder()
- .setCollectionName(""{collection_name}"")
+# Generate embeddings for collaborative content
- .setVectorsConfig(
+text = ""Together AI enhances collaborative content search.""
- VectorsConfig.newBuilder()
+embeddings = model.embed(text)
- .setParams(
- VectorParams.newBuilder()
- .setSize(768)
+# Initialize Qdrant client
- .setDistance(Distance.Cosine)
+qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333)
- .build())
- .build())
- .setOptimizersConfig(
+# Upsert the embedding into Qdrant
- OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build())
+qdrant_client.upsert(
- .setHnswConfig(HnswConfigDiff.newBuilder().setOnDisk(true).build())
+ collection_name=""CollaborativeContent"",
- .build())
+ points=Batch(
- .get();
+ ids=[1],
-```
+ vectors=[embeddings],
+ )
+)
-```csharp
-using Qdrant.Client;
-using Qdrant.Client.Grpc;
+```
+",documentation/embeddings/togetherai.md
+"---
+title: OpenAI
+weight: 2700
-var client = new QdrantClient(""localhost"", 6334);
+aliases: [ ../integrations/openai/ ]
+---
-await client.CreateCollectionAsync(
- collectionName: ""{collection_name}"",
+# OpenAI
- vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
- optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 },
- hnswConfig: new HnswConfigDiff { OnDisk = true }
+Qdrant supports working with [OpenAI embeddings](https://platform.openai.com/docs/guides/embeddings/embeddings).
-);
-```
+There is an official OpenAI Python package that simplifies obtaining them, and it can be installed with pip:
-In this scenario you can increase the precision of the search by increasing the `ef` and `m` parameters of the HNSW index, even with limited RAM.
+```bash
+pip install openai
-```json
+```
-...
-""hnsw_config"": {
- ""m"": 64,
+### Setting up the OpenAI and Qdrant clients
- ""ef_construct"": 512,
- ""on_disk"": true
-}
+```python
-...
+import openai
-```
+import qdrant_client
-The disk IOPS is a critical factor in this scenario, it will determine how fast you can perform search.
+openai_client = openai.Client(
-You can use [fio](https://gist.github.com/superboum/aaa45d305700a7873a8ebbab1abddf2b) to measure disk IOPS.
+ api_key=""""
+)
-## Prefer high precision with high speed search
+client = qdrant_client.QdrantClient("":memory:"")
-For high speed and high precision search it is critical to keep as much data in RAM as possible.
-By default, Qdrant follows this approach, but you can tune it to your needs.
+texts = [
+ ""Qdrant is the best vector search engine!"",
+ ""Loved by Enterprises and everyone building for low latency, high performance, and scale."",
-It is possible to achieve high search speed and tunable accuracy by applying quantization with re-scoring.
+]
+```
-```http
-PUT /collections/{collection_name}
+The following example shows how to embed a document with the `text-embedding-3-small` model that generates sentence embeddings of size 1536. You can find the list of all supported models [here](https://platform.openai.com/docs/models/embeddings).
-{
- ""vectors"": {
- ""size"": 768,
+### Embedding a document
- ""distance"": ""Cosine""
- },
- ""optimizers_config"": {
+```python
- ""memmap_threshold"": 20000
+embedding_model = ""text-embedding-3-small""
- },
- ""quantization_config"": {
- ""scalar"": {
+result = openai_client.embeddings.create(input=texts, model=embedding_model)
- ""type"": ""int8"",
+```
- ""always_ram"": true
- }
- }
+### Converting the model outputs to Qdrant points
-}
-```
+```python
+from qdrant_client.models import PointStruct
-```python
-from qdrant_client import QdrantClient
-from qdrant_client.http import models
+points = [
+ PointStruct(
+ id=idx,
-client = QdrantClient(""localhost"", port=6333)
+ vector=data.embedding,
+ payload={""text"": text},
+ )
-client.create_collection(
+ for idx, (data, text) in enumerate(zip(result.data, texts))
- collection_name=""{collection_name}"",
+]
- vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+```
- optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
- quantization_config=models.ScalarQuantization(
- scalar=models.ScalarQuantizationConfig(
+### Creating a collection to insert the documents
- type=models.ScalarType.INT8,
- always_ram=True,
- ),
+```python
- ),
+from qdrant_client.models import VectorParams, Distance
-)
-```
+collection_name = ""example_collection""
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+client.create_collection(
+ collection_name,
+ vectors_config=VectorParams(
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ size=1536,
+ distance=Distance.COSINE,
+ ),
-client.createCollection(""{collection_name}"", {
+)
- vectors: {
+client.upsert(collection_name, points)
- size: 768,
+```
- distance: ""Cosine"",
- },
- optimizers_config: {
+## Searching for documents with Qdrant
- memmap_threshold: 20000,
- },
- quantization_config: {
+Once the documents are indexed, you can search for the most relevant documents using the same model.
- scalar: {
- type: ""int8"",
- always_ram: true,
+```python
- },
+client.search(
- },
+ collection_name=collection_name,
-});
+ query_vector=openai_client.embeddings.create(
-```
+ input=[""What is the best to use for vector search scaling?""],
+ model=embedding_model,
+ )
-```rust
+ .data[0]
-use qdrant_client::{
+ .embedding,
- client::QdrantClient,
+)
- qdrant::{
+```
- quantization_config::Quantization, vectors_config::Config, CreateCollection, Distance,
- OptimizersConfigDiff, QuantizationConfig, QuantizationType, ScalarQuantization,
- VectorParams, VectorsConfig,
+## Using OpenAI Embedding Models with Qdrant's Binary Quantization
- },
-};
+You can use OpenAI embedding Models with [Binary Quantization](/articles/binary-quantization/) - a technique that allows you to reduce the size of the embeddings by 32 times without losing the quality of the search results too much.
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
-client
+|Method|Dimensionality|Test Dataset|Recall|Oversampling|
- .create_collection(&CreateCollection {
+|-|-|-|-|-|
- collection_name: ""{collection_name}"".to_string(),
+|OpenAI text-embedding-3-large|3072|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-3072-1M) | 0.9966|3x|
- vectors_config: Some(VectorsConfig {
+|OpenAI text-embedding-3-small|1536|[DBpedia 100K](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-small-1536-100K)| 0.9847|3x|
- config: Some(Config::Params(VectorParams {
+|OpenAI text-embedding-3-large|1536|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M)| 0.9826|3x|
- size: 768,
+|OpenAI text-embedding-ada-002|1536|[DBpedia 1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M)|0.98|4x|
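+For reference, below is a minimal sketch of enabling Binary Quantization when creating a collection for these embeddings, with oversampling and rescoring applied at query time. The collection name, oversampling factor, and query vector are illustrative placeholders.
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(""localhost"", port=6333)
+
+# Keep the original vectors on disk and the binary-quantized copies in RAM.
+client.create_collection(
+    collection_name=""dbpedia_bq"",  # illustrative name
+    vectors_config=models.VectorParams(
+        size=1536,
+        distance=models.Distance.COSINE,
+        on_disk=True,
+    ),
+    quantization_config=models.BinaryQuantization(
+        binary=models.BinaryQuantizationConfig(always_ram=True),
+    ),
+)
+
+# At query time, oversample quantized candidates and rescore them with the original vectors.
+client.search(
+    collection_name=""dbpedia_bq"",
+    query_vector=[0.0] * 1536,  # replace with a real query embedding
+    limit=10,
+    search_params=models.SearchParams(
+        quantization=models.QuantizationSearchParams(
+            rescore=True,
+            oversampling=3.0,
+        )
+    ),
+)
+```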
+",documentation/embeddings/openai.md
+"---
- distance: Distance::Cosine.into(),
+title: AWS Bedrock
- ..Default::default()
+weight: 1000
- })),
+---
- }),
- optimizers_config: Some(OptimizersConfigDiff {
- memmap_threshold: Some(20000),
+# Bedrock Embeddings
- ..Default::default()
- }),
- quantization_config: Some(QuantizationConfig {
+You can use [AWS Bedrock](https://aws.amazon.com/bedrock/) with Qdrant. AWS Bedrock supports multiple [embedding model providers](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html).
- quantization: Some(Quantization::Scalar(ScalarQuantization {
- r#type: QuantizationType::Int8.into(),
- always_ram: Some(true),
+You'll need the following information from your AWS account:
- ..Default::default()
- })),
- }),
+- Region
- ..Default::default()
+- Access key ID
- })
+- Secret key
- .await?;
-```
+To configure your credentials, review the following AWS article: [How do I create an AWS access key](https://repost.aws/knowledge-center/create-access-key).
-```java
-import io.qdrant.client.QdrantClient;
+With the following code sample, you can generate embeddings using the [Titan Embeddings G1 - Text model](https://docs.aws.amazon.com/bedrock/latest/userguide/titan-embedding-models.html) which produces sentence embeddings of size 1536.
-import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Collections.CreateCollection;
-import io.qdrant.client.grpc.Collections.Distance;
+```python
-import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
+# Install the required dependencies
-import io.qdrant.client.grpc.Collections.QuantizationConfig;
+# pip install boto3 qdrant_client
-import io.qdrant.client.grpc.Collections.QuantizationType;
-import io.qdrant.client.grpc.Collections.ScalarQuantization;
-import io.qdrant.client.grpc.Collections.VectorParams;
+import json
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+import boto3
-QdrantClient client =
+from qdrant_client import QdrantClient, models
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+session = boto3.Session()
-client
- .createCollectionAsync(
- CreateCollection.newBuilder()
+bedrock_client = session.client(
- .setCollectionName(""{collection_name}"")
+ ""bedrock-runtime"",
- .setVectorsConfig(
+ region_name="""",
- VectorsConfig.newBuilder()
+ aws_access_key_id="""",
- .setParams(
+ aws_secret_access_key="""",
- VectorParams.newBuilder()
+)
- .setSize(768)
- .setDistance(Distance.Cosine)
- .build())
+qdrant_client = QdrantClient(url=""http://localhost:6333"")
- .build())
- .setOptimizersConfig(
- OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build())
+qdrant_client.create_collection(
- .setQuantizationConfig(
+ ""{collection_name}"",
- QuantizationConfig.newBuilder()
+ vectors_config=models.VectorParams(size=1536, distance=models.Distance.COSINE),
- .setScalar(
+)
- ScalarQuantization.newBuilder()
- .setType(QuantizationType.Int8)
- .setAlwaysRam(true)
+body = json.dumps({""inputText"": ""Some text to generate embeddings for""})
- .build())
- .build())
- .build())
+response = bedrock_client.invoke_model(
- .get();
+ body=body,
-```
+ modelId=""amazon.titan-embed-text-v1"",
+ accept=""application/json"",
+ contentType=""application/json"",
-```csharp
+)
-using Qdrant.Client;
-using Qdrant.Client.Grpc;
+response_body = json.loads(response.get(""body"").read())
-var client = new QdrantClient(""localhost"", 6334);
+qdrant_client.upsert(
+ ""{collection_name}"",
-await client.CreateCollectionAsync(
+ points=[models.PointStruct(id=1, vector=response_body[""embedding""])],
- collectionName: ""{collection_name}"",
+)
- vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
+```
- optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 },
- quantizationConfig: new QuantizationConfig
- {
+```javascript
- Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true }
+// Install the required dependencies
- }
+// npm install @aws-sdk/client-bedrock-runtime @qdrant/js-client-rest
-);
-```
+import {
+ BedrockRuntimeClient,
-There are also some search-time parameters you can use to tune the search accuracy and speed:
+ InvokeModelCommand,
+} from ""@aws-sdk/client-bedrock-runtime"";
+import { QdrantClient } from '@qdrant/js-client-rest';
-```http
-POST /collections/{collection_name}/points/search
-{
+const main = async () => {
- ""params"": {
+ const bedrockClient = new BedrockRuntimeClient({
- ""hnsw_ef"": 128,
+ region: """",
- ""exact"": false
+ credentials: {
- },
+ accessKeyId: """",
- ""vector"": [0.2, 0.1, 0.9, 0.7],
+ secretAccessKey: """",
- ""limit"": 3
+ },
-}
+ });
-```
+ const qdrantClient = new QdrantClient({ url: 'http://localhost:6333' });
-```python
-from qdrant_client import QdrantClient, models
+ await qdrantClient.createCollection(""{collection_name}"", {
+ vectors: {
-client = QdrantClient(""localhost"", port=6333)
+ size: 1536,
+ distance: 'Cosine',
+ }
-client.search(
+ });
- collection_name=""{collection_name}"",
- search_params=models.SearchParams(hnsw_ef=128, exact=False),
- query_vector=[0.2, 0.1, 0.9, 0.7],
+ const response = await bedrockClient.send(
- limit=3,
+ new InvokeModelCommand({
-)
+ modelId: ""amazon.titan-embed-text-v1"",
-```
+ body: JSON.stringify({
+ inputText: ""Some text to generate embeddings for"",
+ }),
-```typescript
+ contentType: ""application/json"",
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+ accept: ""application/json"",
+ })
+ );
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ const body = new TextDecoder().decode(response.body);
-client.search(""{collection_name}"", {
- vector: [0.2, 0.1, 0.9, 0.7],
- params: {
+ await qdrantClient.upsert(""{collection_name}"", {
- hnsw_ef: 128,
+ points: [
- exact: false,
+ {
- },
+ id: 1,
- limit: 3,
+ vector: JSON.parse(body).embedding,
-});
+ },
-```
+ ],
+ });
+}
-```rust
-use qdrant_client::{
- client::QdrantClient,
+main();
- qdrant::{SearchParams, SearchPoints},
+```
+",documentation/embeddings/bedrock.md
+"---
-};
+title: Aleph Alpha
+weight: 900
+aliases:
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ - /documentation/examples/aleph-alpha-search/
+ - /documentation/tutorials/aleph-alpha-search/
+ - /documentation/integrations/aleph-alpha/
-client
+---
- .search_points(&SearchPoints {
- collection_name: ""{collection_name}"".to_string(),
- vector: vec![0.2, 0.1, 0.9, 0.7],
+# Using Aleph Alpha Embeddings with Qdrant
- params: Some(SearchParams {
- hnsw_ef: Some(128),
- exact: Some(false),
+Aleph Alpha is a multimodal and multilingual embeddings provider. Their API allows creating embeddings for text and images, both
- ..Default::default()
+in the same latent space. They maintain an [official Python client](https://github.com/Aleph-Alpha/aleph-alpha-client) that can be
- }),
+installed with pip:
- limit: 3,
- ..Default::default()
- })
+```bash
- .await?;
+pip install aleph-alpha-client
```
-```java
+Both synchronous and asynchronous clients are available. Obtaining the embeddings for an image and storing them in Qdrant can
-import java.util.List;
+be done in the following way:
-import io.qdrant.client.QdrantClient;
+```python
-import io.qdrant.client.QdrantGrpcClient;
+import qdrant_client
-import io.qdrant.client.grpc.Points.SearchParams;
+from qdrant_client.models import Batch
-import io.qdrant.client.grpc.Points.SearchPoints;
+from aleph_alpha_client import (
-QdrantClient client =
+ Prompt,
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ AsyncClient,
+ SemanticEmbeddingRequest,
+ SemanticRepresentation,
-client
+ ImagePrompt
- .searchAsync(
+)
- SearchPoints.newBuilder()
- .setCollectionName(""{collection_name}"")
- .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
+aa_token = ""<< your_token >>""
- .setParams(SearchParams.newBuilder().setHnswEf(128).setExact(false).build())
+model = ""luminous-base""
- .setLimit(3)
- .build())
- .get();
+qdrant_client = qdrant_client.QdrantClient()
-```
+async with AsyncClient(token=aa_token) as client:
+ prompt = ImagePrompt.from_file(""./path/to/the/image.jpg"")
+ prompt = Prompt.from_image(prompt)
-```csharp
-using Qdrant.Client;
-using Qdrant.Client.Grpc;
+ query_params = {
+ ""prompt"": prompt,
+ ""representation"": SemanticRepresentation.Symmetric,
-var client = new QdrantClient(""localhost"", 6334);
+ ""compress_to_size"": 128,
+ }
+ query_request = SemanticEmbeddingRequest(**query_params)
-await client.SearchAsync(
+ query_response = await client.semantic_embed(
- collectionName: ""{collection_name}"",
+ request=query_request, model=model
- vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
+ )
- searchParams: new SearchParams { HnswEf = 128, Exact = false },
+
- limit: 3
+ qdrant_client.upsert(
-);
+ collection_name=""MyCollection"",
-```
+ points=Batch(
+ ids=[1],
+ vectors=[query_response.embedding],
-- `hnsw_ef` - controls the number of neighbors to visit during search. The higher the value, the more accurate and slower the search will be. Recommended range is 32-512.
+ )
-- `exact` - if set to `true`, will perform exact search, which will be slower, but more accurate. You can use it to compare results of the search with different `hnsw_ef` values versus the ground truth.
+ )
+```
-## Latency vs Throughput
+If we wanted to create text embeddings with the same model, we wouldn't use `ImagePrompt.from_file`, but simply provide the input
+text to the `Prompt.from_text` method, as shown in the sketch below.
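+
+As a minimal sketch, reusing the token and model from the example above, embedding a text query could look like this:
+
+```python
+from aleph_alpha_client import (
+    AsyncClient,
+    Prompt,
+    SemanticEmbeddingRequest,
+    SemanticRepresentation,
+)
+
+aa_token = ""<< your_token >>""
+model = ""luminous-base""
+
+async with AsyncClient(token=aa_token) as client:
+    query_request = SemanticEmbeddingRequest(
+        prompt=Prompt.from_text(""What is the best vector database?""),
+        representation=SemanticRepresentation.Symmetric,
+        compress_to_size=128,
+    )
+    query_response = await client.semantic_embed(
+        request=query_request, model=model
+    )
+    # query_response.embedding is a list of 128 floats, ready to use as a query vector
+```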
+",documentation/embeddings/aleph-alpha.md
+"---
-- There are two main approaches to measure the speed of search:
+title: Ollama
- - latency of the request - the time from the moment request is submitted to the moment a response is received
+weight: 2600
- - throughput - the number of requests per second the system can handle
+---
-Those approaches are not mutually exclusive, but in some cases it might be preferable to optimize for one or another.
+# Using Ollama with Qdrant
-To prefer minimizing latency, you can set up Qdrant to use as many cores as possible for a single request\.
+Ollama lets you run embedding models locally. It supports a variety of embedding models, making it possible to build retrieval augmented generation (RAG) applications that combine text prompts with existing documents or other data in specialized areas.
-You can do this by setting the number of segments in the collection to be equal to the number of cores in the system. In this case, each segment will be processed in parallel, and the final result will be obtained faster.
-```http
-PUT /collections/{collection_name}
-{
- ""vectors"": {
+## Installation
- ""size"": 768,
- ""distance"": ""Cosine""
- },
+You can install the required package using the following pip command:
- ""optimizers_config"": {
- ""default_segment_number"": 16
- }
+```bash
-}
+pip install ollama
```
+## Integration Example
-```python
-from qdrant_client import QdrantClient, models
+```python
-client = QdrantClient(""localhost"", port=6333)
+import qdrant_client
+from qdrant_client.models import Batch
+import ollama
-client.create_collection(
- collection_name=""{collection_name}"",
- vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+# Choose an embedding model available in your local Ollama instance (example name)
- optimizers_config=models.OptimizersConfigDiff(default_segment_number=16),
+model_name = ""nomic-embed-text""  # pull it first, e.g. `ollama pull nomic-embed-text`
-)
-```
+# Generate embeddings for niche applications
+text = ""Ollama excels in niche applications with specific embeddings.""
-```typescript
+embeddings = ollama.embeddings(model=model_name, prompt=text)[""embedding""]
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+# Initialize Qdrant client
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333)
-client.createCollection(""{collection_name}"", {
+# Upsert the embedding into Qdrant
- vectors: {
+qdrant_client.upsert(
- size: 768,
+ collection_name=""NicheApplications"",
- distance: ""Cosine"",
+ points=Batch(
- },
+ ids=[1],
- optimizers_config: {
+ vectors=[embeddings],
- default_segment_number: 16,
+ )
+
+)
- },
-});
```
+",documentation/embeddings/ollama.md
+"---
-```rust
+title: OpenCLIP
-use qdrant_client::{
+weight: 2750
- client::QdrantClient,
+---
- qdrant::{
- vectors_config::Config, CreateCollection, Distance, OptimizersConfigDiff, VectorParams,
- VectorsConfig,
+# Using OpenCLIP with Qdrant
- },
-};
+OpenCLIP is an open-source implementation of the CLIP model that lets you generate multimodal embeddings linking text and images.
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+```python
+import qdrant_client
-client
+from qdrant_client.models import Batch
- .create_collection(&CreateCollection {
+import open_clip
+import torch
- collection_name: ""{collection_name}"".to_string(),
- vectors_config: Some(VectorsConfig {
- config: Some(Config::Params(VectorParams {
+# Load the OpenCLIP model and tokenizer
- size: 768,
+model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-32', pretrained='openai')
- distance: Distance::Cosine.into(),
+tokenizer = open_clip.get_tokenizer('ViT-B-32')
- ..Default::default()
- })),
- }),
+# Generate embeddings for a text
- optimizers_config: Some(OptimizersConfigDiff {
+text = ""A photo of a cat""
- default_segment_number: Some(16),
+text_inputs = tokenizer([text])
- ..Default::default()
- }),
- ..Default::default()
+with torch.no_grad():
- })
+ text_features = model.encode_text(text_inputs)
- .await?;
-```
+# Convert tensor to a list
+embeddings = text_features[0].cpu().numpy().tolist()
-```java
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
+# Initialize Qdrant client
-import io.qdrant.client.grpc.Collections.CreateCollection;
+qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333)
-import io.qdrant.client.grpc.Collections.Distance;
-import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
-import io.qdrant.client.grpc.Collections.VectorParams;
+# Upsert the embedding into Qdrant
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+qdrant_client.upsert(
+ collection_name=""OpenCLIPEmbeddings"",
+ points=Batch(
-QdrantClient client =
+ ids=[1],
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ vectors=[embeddings],
+ )
+)
-client
+```
- .createCollectionAsync(
- CreateCollection.newBuilder()
+",documentation/embeddings/openclip.md
+"---
- .setCollectionName(""{collection_name}"")
+title: Databricks Embeddings
- .setVectorsConfig(
+weight: 1500
- VectorsConfig.newBuilder()
+---
- .setParams(
- VectorParams.newBuilder()
- .setSize(768)
+# Using Databricks Embeddings with Qdrant
- .setDistance(Distance.Cosine)
- .build())
- .build())
+Databricks offers an advanced platform for generating embeddings, especially within large-scale data environments. You can use the following Python code to integrate Databricks-generated embeddings with Qdrant.
- .setOptimizersConfig(
- OptimizersConfigDiff.newBuilder().setDefaultSegmentNumber(16).build())
- .build())
+```python
- .get();
+import qdrant_client
-```
+from qdrant_client.models import Batch
+from databricks import sql
-```csharp
-using Qdrant.Client;
+# Connect to Databricks SQL endpoint
-using Qdrant.Client.Grpc;
+connection = sql.connect(server_hostname='your_hostname',
+ http_path='your_http_path',
+ access_token='your_access_token')
-var client = new QdrantClient(""localhost"", 6334);
+# Execute a query to get embeddings
-await client.CreateCollectionAsync(
+query = ""SELECT embedding FROM your_table WHERE id = 1""
- collectionName: ""{collection_name}"",
+cursor = connection.cursor()
- vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
+cursor.execute(query)
- optimizersConfig: new OptimizersConfigDiff { DefaultSegmentNumber = 16 }
+embedding = cursor.fetchone()[0]
-);
-```
+# Initialize Qdrant client
+qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333)
-To prefer throughput, you can set up Qdrant to use as many cores as possible for processing multiple requests in parallel.
-To do that, you can configure qdrant to use minimal number of segments, which is usually 2.
-Large segments benefit from the size of the index and overall smaller number of vector comparisons required to find the nearest neighbors. But at the same time require more time to build index.
+# Upsert the embedding into Qdrant
+qdrant_client.upsert(
+ collection_name=""DatabricksEmbeddings"",
-```http
+ points=Batch(
-PUT /collections/{collection_name}
+ ids=[1], # Unique ID for the data point
-{
+ vectors=[embedding], # Embedding fetched from Databricks
- ""vectors"": {
+ )
- ""size"": 768,
+)
- ""distance"": ""Cosine""
+```
+",documentation/embeddings/databricks.md
+"---
- },
+title: Cohere
- ""optimizers_config"": {
+weight: 1400
- ""default_segment_number"": 2
+aliases: [ ../integrations/cohere/ ]
- }
+---
-}
-```
+# Cohere
-```python
-from qdrant_client import QdrantClient, models
+Qdrant is compatible with Cohere [co.embed API](https://docs.cohere.ai/reference/embed) and its official Python SDK that
+can be installed like any other package:
-client = QdrantClient(""localhost"", port=6333)
+```bash
+pip install cohere
-client.create_collection(
+```
- collection_name=""{collection_name}"",
- vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
- optimizers_config=models.OptimizersConfigDiff(default_segment_number=2),
+The embeddings returned by the co.embed API can be used directly in Qdrant client calls:
-)
-```
+```python
+import cohere
-```typescript
+import qdrant_client
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+from qdrant_client.models import Batch
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+cohere_client = cohere.Client(""<< your_api_key >>"")
+qdrant_client = qdrant_client.QdrantClient()
+qdrant_client.upsert(
-client.createCollection(""{collection_name}"", {
+ collection_name=""MyCollection"",
- vectors: {
+ points=Batch(
- size: 768,
+ ids=[1],
- distance: ""Cosine"",
+ vectors=cohere_client.embed(
- },
+ model=""large"",
- optimizers_config: {
+ texts=[""The best vector database""],
- default_segment_number: 2,
+ ).embeddings,
- },
+ ),
-});
+)
```
-```rust
-
-use qdrant_client::{
+If you are interested in seeing an end-to-end project created with co.embed API and Qdrant, please check out the
- client::QdrantClient,
+""[Question Answering as a Service with Cohere and Qdrant](/articles/qa-with-cohere-and-qdrant/)"" article.
- qdrant::{
- vectors_config::Config, CreateCollection, Distance, OptimizersConfigDiff, VectorParams,
- VectorsConfig,
+## Embed v3
- },
-};
+Embed v3 is a new family of Cohere models, released in November 2023. The new models require passing an additional
+parameter to the API call: `input_type`. It determines the type of task you want to use the embeddings for.
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+- `input_type=""search_document""` - for documents to store in Qdrant
-client
+- `input_type=""search_query""` - for search queries to find the most relevant documents
- .create_collection(&CreateCollection {
+- `input_type=""classification""` - for classification tasks
- collection_name: ""{collection_name}"".to_string(),
+- `input_type=""clustering""` - for text clustering
- vectors_config: Some(VectorsConfig {
- config: Some(Config::Params(VectorParams {
- size: 768,
+While implementing semantic search applications, such as RAG, you should use `input_type=""search_document""` for the
- distance: Distance::Cosine.into(),
+indexed documents and `input_type=""search_query""` for the search queries. The following example shows how to index
- ..Default::default()
+documents with the Embed v3 model:
- })),
- }),
- optimizers_config: Some(OptimizersConfigDiff {
+```python
- default_segment_number: Some(2),
+import cohere
- ..Default::default()
+import qdrant_client
- }),
+from qdrant_client.models import Batch
- ..Default::default()
- })
- .await?;
+cohere_client = cohere.Client(""<< your_api_key >>"")
-```
+client = qdrant_client.QdrantClient()
+client.upsert(
+ collection_name=""MyCollection"",
-```java
+ points=Batch(
-import io.qdrant.client.QdrantClient;
+ ids=[1],
-import io.qdrant.client.QdrantGrpcClient;
+ vectors=cohere_client.embed(
-import io.qdrant.client.grpc.Collections.CreateCollection;
+ model=""embed-english-v3.0"", # New Embed v3 model
-import io.qdrant.client.grpc.Collections.Distance;
+ input_type=""search_document"", # Input type for documents
-import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
+ texts=[""Qdrant is the a vector database written in Rust""],
-import io.qdrant.client.grpc.Collections.VectorParams;
+ ).embeddings,
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+ ),
+)
+```
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+Once the documents are indexed, you can search for the most relevant documents using the Embed v3 model:
-client
- .createCollectionAsync(
+```python
- CreateCollection.newBuilder()
+client.search(
- .setCollectionName(""{collection_name}"")
+ collection_name=""MyCollection"",
- .setVectorsConfig(
+ query_vector=cohere_client.embed(
- VectorsConfig.newBuilder()
+ model=""embed-english-v3.0"", # New Embed v3 model
- .setParams(
+ input_type=""search_query"", # Input type for search queries
- VectorParams.newBuilder()
+ texts=[""The best vector database""],
- .setSize(768)
+ ).embeddings[0],
- .setDistance(Distance.Cosine)
+)
- .build())
+```
- .build())
- .setOptimizersConfig(
- OptimizersConfigDiff.newBuilder().setDefaultSegmentNumber(2).build())
+
+",documentation/embeddings/cohere.md
+"---
+title: Clip
+weight: 1300
-```csharp
+---
-using Qdrant.Client;
-using Qdrant.Client.Grpc;
+# Using Clip with Qdrant
-var client = new QdrantClient(""localhost"", 6334);
+CLIP (Contrastive Language-Image Pre-Training) provides advanced AI capabilities including natural language processing and computer vision. CLIP is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3.
-await client.CreateCollectionAsync(
- collectionName: ""{collection_name}"",
+## Installation
- vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
- optimizersConfig: new OptimizersConfigDiff { DefaultSegmentNumber = 2 }
-);
+You can install the required package using the following pip command:
-```",documentation/guides/optimize.md
-"---
-title: Telemetry
-weight: 150
+```bash
-aliases:
+pip install transformers torch pillow qdrant-client
- - ../telemetry
+```
----
+## Integration Example
-# Telemetry
+```python
+import qdrant_client
+from qdrant_client.models import Batch
-Qdrant collects anonymized usage statistics from users in order to improve the engine.
+import torch
+from transformers import CLIPProcessor, CLIPModel
-You can [deactivate](#deactivate-telemetry) at any time, and any data that has already been collected can be [deleted on request](#request-information-deletion).
+from PIL import Image
-## Why do we collect telemetry?
+# Load the CLIP model and processor
+model = CLIPModel.from_pretrained(""openai/clip-vit-base-patch32"")
+processor = CLIPProcessor.from_pretrained(""openai/clip-vit-base-patch32"")
-We want to make Qdrant fast and reliable. To do this, we need to understand how it performs in real-world scenarios.
-We do a lot of benchmarking internally, but it is impossible to cover all possible use cases, hardware, and configurations.
+# Load and process the image
+image = Image.open(""path/to/image.jpg"")
-In order to identify bottlenecks and improve Qdrant, we need to collect information about how it is used.
+inputs = processor(images=image, return_tensors=""pt"")
-Additionally, Qdrant uses a bunch of internal heuristics to optimize the performance.
+# Generate embeddings
-To better set up parameters for these heuristics, we need to collect timings and counters of various pieces of code.
+with torch.no_grad():
-With this information, we can make Qdrant faster for everyone.
+ embeddings = model.get_image_features(**inputs).numpy().tolist()
+# Initialize Qdrant client
+qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333)
-## What information is collected?
+# Upsert the embedding into Qdrant
-There are 3 types of information that we collect:
+qdrant_client.upsert(
+ collection_name=""ImageEmbeddings"",
+ points=Batch(
-* System information - general information about the system, such as CPU, RAM, and disk type. As well as the configuration of the Qdrant instance.
+ ids=[1],
-* Performance - information about timings and counters of various pieces of code.
+ vectors=embeddings,
-* Critical error reports - information about critical errors, such as backtraces, that occurred in Qdrant. This information would allow to identify problems nobody yet reported to us.
+ )
+)
-### We **never** collect the following information:
+```
-- User's IP address
+",documentation/embeddings/clip.md
+"---
-- Any data that can be used to identify the user or the user's organization
+title: Clarifai
-- Any data, stored in the collections
+weight: 1200
-- Any names of the collections
+---
-- Any URLs
+# Using Clarifai Embeddings with Qdrant
-## How do we anonymize data?
+Clarifai is a leading provider of visual embeddings, which are particularly strong in image and video analysis. Clarifai offers an API that allows you to create embeddings for various media types, which can be integrated into Qdrant for efficient vector search and retrieval.
-We understand that some users may be concerned about the privacy of their data.
-That is why we make an extra effort to ensure your privacy.
+You can install the Clarifai Python client with pip:
-There are several different techniques that we use to anonymize the data:
+```bash
+pip install clarifai-client
-- We use a random UUID to identify instances. This UUID is generated on each startup and is not stored anywhere. There are no other ways to distinguish between different instances.
+```
-- We round all big numbers, so that the last digits are always 0. For example, if the number is 123456789, we will store 123456000.
-- We replace all names with irreversibly hashed values. So no collection or field names will leak into the telemetry.
-- All urls are hashed as well.
+## Integration Example
-You can see exact version of anomymized collected data by accessing the [telemetry API](https://qdrant.github.io/qdrant/redoc/index.html#tag/service/operation/telemetry) with `anonymize=true` parameter.
+```python
+import qdrant_client
+from qdrant_client.models import Batch
-For example,
+from clarifai.rest import ClarifaiApp
+# Initialize Clarifai client
+clarifai_app = ClarifaiApp(api_key=""<< your_api_key >>"")
-## Deactivate telemetry
+# Choose the model for embeddings
-You can deactivate telemetry by:
+model = clarifai_app.public_models.general_embedding_model
-- setting the `QDRANT__TELEMETRY_DISABLED` environment variable to `true`
+# Upload and get embeddings for an image
-- setting the config option `telemetry_disabled` to `true` in the `config/production.yaml` or `config/config.yaml` files
+image_path = ""./path/to/the/image.jpg""
-- using cli option `--disable-telemetry`
+response = model.predict_by_filename(image_path)
-Any of these options will prevent Qdrant from sending any telemetry data.
+# Extract the embedding from the response
+embedding = response['outputs'][0]['data']['embeddings'][0]['vector']
-If you decide to deactivate telemetry, we kindly ask you to share your feedback with us in the [Discord community](https://qdrant.to/discord) or GitHub [discussions](https://github.com/qdrant/qdrant/discussions)
+# Initialize Qdrant client
+qdrant_client = qdrant_client.QdrantClient()
-## Request information deletion
+# Upsert the embedding into Qdrant
-We provide an email address so that users can request the complete removal of their data from all of our tools.
+qdrant_client.upsert(
+ collection_name=""MyCollection"",
+ points=Batch(
-To do so, send an email to privacy@qdrant.com containing the unique identifier generated for your Qdrant installation.
+ ids=[1],
-You can find this identifier in the telemetry API response (`""id""` field), or in the logs of your Qdrant instance.
+ vectors=[embedding],
+ )
+)
-Any questions regarding the management of the data we collect can also be sent to this email address.
-",documentation/guides/telemetry.md
+```
+",documentation/embeddings/clarifai.md
"---
-title: Distributed Deployment
-
-weight: 100
-
-aliases:
+title: Mistral
- - ../distributed_deployment
+weight: 2100
---
-# Distributed deployment
+| Time: 10 min | Level: Beginner | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/qdrant/examples/blob/mistral-getting-started/mistral-embed-getting-started/mistral_qdrant_getting_started.ipynb) |
+| --- | ----------- | ----------- |
-Since version v0.8.0 Qdrant supports a distributed deployment mode.
-In this mode, multiple Qdrant services communicate with each other to distribute the data across the peers to extend the storage capabilities and increase stability.
+# Mistral
+Qdrant is compatible with the newly released Mistral Embed model and its official Python SDK, which can be installed like any other package.
-To enable distributed deployment - enable the cluster mode in the [configuration](../configuration) or using the ENV variable: `QDRANT__CLUSTER__ENABLED=true`.
+## Setup
-```yaml
-cluster:
+### Install the client
- # Use `enabled: true` to run Qdrant in distributed deployment mode
- enabled: true
- # Configuration of the inter-cluster communication
+```bash
- p2p:
+pip install mistralai
- # Port for internal communication between peers
+```
- port: 6335
+Then set up the clients and the sample data:
- # Configuration related to distributed consensus algorithm
- consensus:
- # How frequently peers should ping each other.
+```python
- # Setting this parameter to lower value will allow consensus
+from mistralai.client import MistralClient
- # to detect disconnected node earlier, but too frequent
+from qdrant_client import QdrantClient
- # tick period may create significant network and CPU overhead.
+from qdrant_client.models import PointStruct, VectorParams, Distance
- # We encourage you NOT to change this parameter unless you know what you are doing.
- tick_period_ms: 100
-```
+collection_name = ""example_collection""
-By default, Qdrant will use port `6335` for its internal communication.
+MISTRAL_API_KEY = ""your_mistral_api_key""
-All peers should be accessible on this port from within the cluster, but make sure to isolate this port from outside access, as it might be used to perform write operations.
+client = QdrantClient("":memory:"")
+mistral_client = MistralClient(api_key=MISTRAL_API_KEY)
+texts = [
-Additionally, you must provide the `--uri` flag to the first peer so it can tell other nodes how it should be reached:
+ ""Qdrant is the best vector search engine!"",
+ ""Loved by Enterprises and everyone building for low latency, high performance, and scale."",
+]
-```bash
+```
-./qdrant --uri 'http://qdrant_node_1:6335'
-```
+Let's see how to use the Mistral Embeddings API to embed a document for retrieval.
-Subsequent peers in a cluster must know at least one node of the existing cluster to synchronize through it with the rest of the cluster.
+The following example shows how to embed documents with the `mistral-embed` model:
-To do this, they need to be provided with a bootstrap URL:
+## Embedding a document
-```bash
-./qdrant --bootstrap 'http://qdrant_node_1:6335'
+```python
-```
+result = mistral_client.embeddings(
+ model=""mistral-embed"",
+ input=texts,
-The URL of the new peers themselves will be calculated automatically from the IP address of their request.
+)
-But it is also possible to provide them individually using the `--uri` argument.
+```
-```text
+The returned result has a `data` field; each entry in it contains an `embedding` key whose value is a list of floats representing the embedding of the document.
-USAGE:
- qdrant [OPTIONS]
+### Converting this into Qdrant Points
-OPTIONS:
- --bootstrap
+```python
- Uri of the peer to bootstrap from in case of multi-peer deployment. If not specified -
+points = [
- this peer will be considered as a first in a new deployment
+ PointStruct(
+ id=idx,
+ vector=response.embedding,
- --uri
+ payload={""text"": text},
- Uri of this peer. Other peers should be able to reach it by this uri.
+ )
+ for idx, (response, text) in enumerate(zip(result.data, texts))
+]
- This value has to be supplied if this is the first peer in a new deployment.
+```
- In case this is not the first peer and it bootstraps the value is optional. If not
+## Create a collection and Insert the documents
- supplied then qdrant will take internal grpc port from config and derive the IP address
- of this peer on bootstrap peer (receiving side)
+```python
+client.create_collection(collection_name, vectors_config=VectorParams(
-```
+ size=1024,
+ distance=Distance.COSINE,
+ )
-After a successful synchronization you can observe the state of the cluster through the [REST API](https://qdrant.github.io/qdrant/redoc/index.html?v=master#tag/cluster):
+)
+client.upsert(collection_name, points)
+```
-```http
-GET /cluster
-```
+## Searching for documents with Qdrant
-Example result:
+Once the documents are indexed, you can search for the most relevant documents using the same model:
-```json
+```python
-{
+client.search(
- ""result"": {
+ collection_name=collection_name,
- ""status"": ""enabled"",
+ query_vector=mistral_client.embeddings(
- ""peer_id"": 11532566549086892000,
+ model=""mistral-embed"", input=[""What is the best to use for vector search scaling?""]
- ""peers"": {
+ ).data[0].embedding,
- ""9834046559507417430"": {
+)
- ""uri"": ""http://172.18.0.3:6335/""
+```
- },
- ""11532566549086892528"": {
- ""uri"": ""http://qdrant_node_1:6335/""
+## Using Mistral Embedding Models with Binary Quantization
- }
- },
- ""raft_info"": {
+You can use Mistral embedding models with [Binary Quantization](/articles/binary-quantization/) - a technique that reduces the memory footprint of the embeddings by a factor of 32 with only a minimal loss in search quality.
- ""term"": 1,
- ""commit"": 4,
- ""pending_operations"": 1,
+At an oversampling of 3 and a limit of 100, we achieve about 95% recall against the exact nearest neighbors, with rescore enabled (see the sketch after the table below).
- ""leader"": 11532566549086892000,
- ""role"": ""Leader""
- }
+| Oversampling | | 1 | 1 | 2 | 2 | 3 | 3 |
- },
+|--------------|---------|----------|----------|----------|----------|----------|--------------|
- ""status"": ""ok"",
+| | **Rescore** | False | True | False | True | False | True |
- ""time"": 5.731e-06
+| **Limit** | | | | | | | |
-}
+| 10 | | 0.53444 | 0.857778 | 0.534444 | 0.918889 | 0.533333 | 0.941111 |
-```
+| 20 | | 0.508333 | 0.837778 | 0.508333 | 0.903889 | 0.508333 | 0.927778 |
+| 50 | | 0.492222 | 0.834444 | 0.492222 | 0.903556 | 0.492889 | 0.940889 |
+| 100 | | 0.499111 | 0.845444 | 0.498556 | 0.918333 | 0.497667 | **0.944556** |
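+
+As a sketch, assuming the collection above was created with Binary Quantization enabled and reusing the clients defined earlier, a query with an oversampling of 3 and rescoring enabled might look like this:
+
+```python
+from qdrant_client import models
+
+client.search(
+    collection_name=collection_name,
+    query_vector=mistral_client.embeddings(
+        model=""mistral-embed"",
+        input=[""What is the best to use for vector search scaling?""],
+    ).data[0].embedding,
+    search_params=models.SearchParams(
+        quantization=models.QuantizationSearchParams(
+            rescore=True,
+            oversampling=3.0,
+        )
+    ),
+    limit=100,
+)
+```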
-## Raft
+That's it! You can now use Mistral Embedding Models with Qdrant!
+",documentation/embeddings/mistral.md
+"---
-Qdrant uses the [Raft](https://raft.github.io/) consensus protocol to maintain consistency regarding the cluster topology and the collections structure.
+title: ""Nomic""
+weight: 2300
+---
-Operations on points, on the other hand, do not go through the consensus infrastructure.
-Qdrant is not intended to have strong transaction guarantees, which allows it to perform point operations with low overhead.
-In practice, it means that Qdrant does not guarantee atomic distributed updates but allows you to wait until the [operation is complete](../../concepts/points/#awaiting-result) to see the results of your writes.
+# Nomic
-Operations on collections, on the contrary, are part of the consensus which guarantees that all operations are durable and eventually executed by all nodes.
+The `nomic-embed-text-v1` model is an open source [8192 context length](https://github.com/nomic-ai/contrastors) text encoder.
-In practice it means that a majority of nodes agree on what operations should be applied before the service will perform them.
+While you can find it on the [Hugging Face Hub](https://huggingface.co/nomic-ai/nomic-embed-text-v1),
+you may find it easier to obtain the embeddings through the [Nomic Text Embeddings API](https://docs.nomic.ai/reference/endpoints/nomic-embed-text).
+You can generate the embeddings with the official Nomic Python client, with FastEmbed, or through direct HTTP requests.
-Practically, it means that if the cluster is in a transition state - either electing a new leader after a failure or starting up, the collection update operations will be denied.
+
-You may use the cluster [REST API](https://qdrant.github.io/qdrant/redoc/index.html?v=master#tag/cluster) to check the state of the consensus.
+You can use Nomic embeddings directly in Qdrant client calls. There is a difference in the way the embeddings
-## Sharding
+are obtained for documents and queries.
-A Collection in Qdrant is made of one or more shards.
+#### Upsert using [Nomic SDK](https://github.com/nomic-ai/nomic)
-A shard is an independent store of points which is able to perform all operations provided by collections.
-There are two methods of distributing points across shards:
+The `task_type` parameter defines the embeddings that you get.
+For documents, set the `task_type` to `search_document`:
-- **Automatic sharding**: Points are distributed among shards by using a [consistent hashing](https://en.wikipedia.org/wiki/Consistent_hashing) algorithm, so that shards are managing non-intersecting subsets of points. This is the default behavior.
+```python
-- **User-defined sharding**: _Available as of v1.7.0_ - Each point is uploaded to a specific shard, so that operations can hit only the shard or shards they need. Even with this distribution, shards still ensure having non-intersecting subsets of points. [See more...](#user-defined-sharding)
+from qdrant_client import QdrantClient, models
+from nomic import embed
-Each node knows where all parts of the collection are stored through the [consensus protocol](./#raft), so when you send a search request to one Qdrant node, it automatically queries all other nodes to obtain the full search result.
+output = embed.text(
+ texts=[""Qdrant is the best vector database!""],
-When you create a collection, Qdrant splits the collection into `shard_number` shards. If left unset, `shard_number` is set to the number of nodes in your cluster:
+ model=""nomic-embed-text-v1"",
+ task_type=""search_document"",
+)
-```http
-PUT /collections/{collection_name}
-{
+client = QdrantClient()
- ""vectors"": {
+client.upsert(
- ""size"": 300,
+ collection_name=""my-collection"",
- ""distance"": ""Cosine""
+ points=models.Batch(
- },
+ ids=[1],
- ""shard_number"": 6
+ vectors=output[""embeddings""],
-}
+ ),
+
+)
```
-```python
+#### Upsert using [FastEmbed](https://github.com/qdrant/fastembed)
-from qdrant_client import QdrantClient
-from qdrant_client.http import models
+```python
+from fastembed import TextEmbedding
-client = QdrantClient(""localhost"", port=6333)
+from qdrant_client import QdrantClient, models
-client.create_collection(
+model = TextEmbedding(""nomic-ai/nomic-embed-text-v1"")
- collection_name=""{collection_name}"",
- vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE),
- shard_number=6,
+output = model.embed([""Qdrant is the best vector database!""])
-)
-```
+client = QdrantClient()
+client.upsert(
-```typescript
+ collection_name=""my-collection"",
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+ points=models.Batch(
+ ids=[1],
+ vectors=[embeddings.tolist() for embeddings in output],
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ ),
+)
+```
-client.createCollection(""{collection_name}"", {
- vectors: {
- size: 300,
+#### Search using [Nomic SDK](https://github.com/nomic-ai/nomic)
- distance: ""Cosine"",
- },
- shard_number: 6,
+To query the collection, set the `task_type` to `search_query`:
-});
-```
+```python
+
+output = embed.text(
+ texts=[""What is the best vector database?""],
-```rust
+ model=""nomic-embed-text-v1"",
-use qdrant_client::{
+ task_type=""search_query"",
- client::QdrantClient,
+)
- qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig},
-};
+client.search(
+ collection_name=""my-collection"",
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ query_vector=output[""embeddings""][0],
+)
+```
-client
- .create_collection(&CreateCollection {
- collection_name: ""{collection_name}"".into(),
+#### Search using [FastEmbed](https://github.com/qdrant/fastembed)
- vectors_config: Some(VectorsConfig {
- config: Some(Config::Params(VectorParams {
- size: 300,
+```python
- distance: Distance::Cosine.into(),
+output = next(model.embed(""What is the best vector database?""))
- ..Default::default()
- })),
- }),
+client.search(
- shard_number: Some(6),
+ collection_name=""my-collection"",
- })
+ query_vector=output.tolist(),
- .await?;
+)
```
-```java
+For more information, see the Nomic documentation on [Text embeddings](https://docs.nomic.ai/reference/endpoints/nomic-embed-text).
+",documentation/embeddings/nomic.md
+"---
-import io.qdrant.client.QdrantClient;
+title: Nvidia
-import io.qdrant.client.QdrantGrpcClient;
+weight: 2400
-import io.qdrant.client.grpc.Collections.CreateCollection;
+---
-import io.qdrant.client.grpc.Collections.Distance;
-import io.qdrant.client.grpc.Collections.VectorParams;
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+# Nvidia
-QdrantClient client =
+Qdrant supports working with [Nvidia embeddings](https://build.nvidia.com/explore/retrieval).
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+You can generate an API key to authenticate the requests from the [Nvidia Playground]().
-client
- .createCollectionAsync(
- CreateCollection.newBuilder()
+### Setting up the Qdrant client and Nvidia session
- .setCollectionName(""{collection_name}"")
- .setVectorsConfig(
- VectorsConfig.newBuilder()
+```python
- .setParams(
+import requests
- VectorParams.newBuilder()
+from qdrant_client import QdrantClient
- .setSize(300)
- .setDistance(Distance.Cosine)
- .build())
+NVIDIA_BASE_URL = ""https://ai.api.nvidia.com/v1/retrieval/nvidia/embeddings""
- .build())
- .setShardNumber(6)
- .build())
+NVIDIA_API_KEY = """"
- .get();
-```
+nvidia_session = requests.Session()
-```csharp
-using Qdrant.Client;
+client = QdrantClient("":memory:"")
-using Qdrant.Client.Grpc;
+headers = {
-var client = new QdrantClient(""localhost"", 6334);
+ ""Authorization"": f""Bearer {NVIDIA_API_KEY}"",
+ ""Accept"": ""application/json"",
+}
-await client.CreateCollectionAsync(
- collectionName: ""{collection_name}"",
- vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine },
+texts = [
- shardNumber: 6
+ ""Qdrant is the best vector search engine!"",
-);
+ ""Loved by Enterprises and everyone building for low latency, high performance, and scale."",
+
+]
```
-We recommend setting the number of shards to be a multiple of the number of nodes you are currently running in your cluster.
+```typescript
+import { QdrantClient } from '@qdrant/js-client-rest';
-For example, if you have 3 nodes, 6 shards could be a good option.
+const NVIDIA_BASE_URL = ""https://ai.api.nvidia.com/v1/retrieval/nvidia/embeddings""
+const NVIDIA_API_KEY = """"
-Shards are evenly distributed across all existing nodes when a collection is first created, but Qdrant does not automatically rebalance shards if your cluster size or replication factor changes (since this is an expensive operation on large clusters). See the next section for how to move shards after scaling operations.
+const client = new QdrantClient({ url: 'http://localhost:6333' });
-### Moving shards
+const headers = {
-*Available as of v0.9.0*
+ ""Authorization"": ""Bearer "" + NVIDIA_API_KEY,
+ ""Accept"": ""application/json"",
+ ""Content-Type"": ""application/json""
-Qdrant allows moving shards between nodes in the cluster and removing nodes from the cluster. This functionality unlocks the ability to dynamically scale the cluster size without downtime. It also allows you to upgrade or migrate nodes without downtime.
+}
-Qdrant provides the information regarding the current shard distribution in the cluster with the [Collection Cluster info API](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/collection_cluster_info).
+const texts = [
+ ""Qdrant is the best vector search engine!"",
+ ""Loved by Enterprises and everyone building for low latency, high performance, and scale."",
-Use the [Update collection cluster setup API](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/update_collection_cluster) to initiate the shard transfer:
+]
+```
-```http
-POST /collections/{collection_name}/cluster
+The following example shows how to embed documents with the `embed-qa-4` model that generates sentence embeddings of size 1024.
-{
- ""move_shard"": {
- ""shard_id"": 0,
+### Embedding documents
- ""from_peer_id"": 381894127,
-
- ""to_peer_id"": 467122995
-
- }
-
-}
-```
+```python
+payload = {
-
+ ""input"": texts,
+ ""input_type"": ""passage"",
+ ""model"": ""NV-Embed-QA"",
-After the transfer is initiated, the service will process it based on the used
+}
-[transfer method](#shard-transfer-method) keeping both shards in sync. Once the
-transfer is completed, the old shard is deleted from the source node.
+response_body = nvidia_session.post(
+ NVIDIA_BASE_URL, headers=headers, json=payload
-In case you want to downscale the cluster, you can move all shards away from a peer and then remove the peer using the [remove peer API](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/remove_peer).
+).json()
+```
-```http
-DELETE /cluster/peer/{peer_id}
+```typescript
-```
+let body = {
+ ""input"": texts,
+ ""input_type"": ""passage"",
-After that, Qdrant will exclude the node from the consensus, and the instance will be ready for shutdown.
+ ""model"": ""NV-Embed-QA""
+}
-### User-defined sharding
+let response = await fetch(NVIDIA_BASE_URL, {
+ method: ""POST"",
-*Available as of v1.7.0*
+ body: JSON.stringify(body),
+ headers
+});
-Qdrant allows you to specify the shard for each point individually. This feature is useful if you want to control the shard placement of your data, so that operations can hit only the subset of shards they actually need. In big clusters, this can significantly improve the performance of operations that do not require the whole collection to be scanned.
+let response_body = await response.json()
-A clear use-case for this feature is managing a multi-tenant collection, where each tenant (let it be a user or organization) is assumed to be segregated, so they can have their data stored in separate shards.
+```
-To enable user-defined sharding, set `sharding_method` to `custom` during collection creation:
+### Converting the model outputs to Qdrant points
-```http
+```python
-PUT /collections/{collection_name}
+from qdrant_client.models import PointStruct
-{
- ""shard_number"": 1,
- ""sharding_method"": ""custom""
+points = [
- // ... other collection parameters
+ PointStruct(
-}
+ id=idx,
-```
+ vector=data[""embedding""],
+ payload={""text"": text},
+ )
-```python
+ for idx, (data, text) in enumerate(zip(response_body[""data""], texts))
-from qdrant_client import QdrantClient
+]
-from qdrant_client.http import models
+```
-client = QdrantClient(""localhost"", port=6333)
+```typescript
+let points = response_body.data.map((data, i) => {
+ return {
-client.create_collection(
+ id: i,
- collection_name=""{collection_name}"",
+ vector: data.embedding,
- shard_number=1,
+ payload: {
- sharding_method=models.ShardingMethod.CUSTOM,
+ text: texts[i]
- # ... other collection parameters
+ }
-)
+ }
-client.create_shard_key(""{collection_name}"", ""user_1"")
+})
```
-```typescript
-
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+### Creating a collection to insert the documents
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+```python
+from qdrant_client.models import VectorParams, Distance
-client.createCollection(""{collection_name}"", {
- shard_number: 1,
+collection_name = ""example_collection""
- sharding_method: ""custom"",
- // ... other collection parameters
-});
+client.create_collection(
-```
+ collection_name,
+ vectors_config=VectorParams(
+ size=1024,
-```rust
+ distance=Distance.COSINE,
+ ),
+)
-use qdrant_client::{
+client.upsert(collection_name, points)
- client::QdrantClient,
+```
- qdrant::{CreateCollection, ShardingMethod},
-};
+```typescript
+const COLLECTION_NAME = ""example_collection""
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+await client.createCollection(COLLECTION_NAME, {
-client
+ vectors: {
- .create_collection(&CreateCollection {
+ size: 1024,
- collection_name: ""{collection_name}"".into(),
+ distance: 'Cosine',
- shard_number: Some(1),
+ }
- sharding_method: Some(ShardingMethod::Custom),
+});
- // ... other collection parameters
- ..Default::default()
- })
+await client.upsert(COLLECTION_NAME, {
- .await?;
+ wait: true,
-```
+ points
+})
+```
-```java
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
+## Searching for documents with Qdrant
-import io.qdrant.client.grpc.Collections.CreateCollection;
-import io.qdrant.client.grpc.Collections.ShardingMethod;
+Once the documents are added, you can search for the most relevant documents.
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+```python
+payload = {
+ ""input"": ""What is the best to use for vector search scaling?"",
-client
+ ""input_type"": ""query"",
- .createCollectionAsync(
+ ""model"": ""NV-Embed-QA"",
- CreateCollection.newBuilder()
+}
- .setCollectionName(""{collection_name}"")
- // ... other collection parameters
- .setShardNumber(1)
+response_body = nvidia_session.post(
- .setShardingMethod(ShardingMethod.Custom)
+ NVIDIA_BASE_URL, headers=headers, json=payload
- .build())
+).json()
- .get();
-```
+client.search(
+ collection_name=collection_name,
-```csharp
+ query_vector=response_body[""data""][0][""embedding""],
-using Qdrant.Client;
+)
-using Qdrant.Client.Grpc;
+```
-var client = new QdrantClient(""localhost"", 6334);
+```typescript
+body = {
+ ""input"": ""What is the best to use for vector search scaling?"",
-await client.CreateCollectionAsync(
+ ""input_type"": ""query"",
- collectionName: ""{collection_name}"",
+ ""model"": ""NV-Embed-QA"",
- // ... other collection parameters
+}
- shardNumber: 1,
- shardingMethod: ShardingMethod.Custom
-);
+response = await fetch(NVIDIA_BASE_URL, {
-```
+ method: ""POST"",
+ body: JSON.stringify(body),
+ headers
-In this mode, the `shard_number` means the number of shards per shard key, where points will be distributed evenly. For example, if you have 10 shard keys and a collection config with these settings:
+});
-```json
+response_body = await response.json()
-{
- ""shard_number"": 1,
- ""sharding_method"": ""custom"",
+await client.search(COLLECTION_NAME, {
- ""replication_factor"": 2
+ vector: response_body.data[0].embedding,
-}
+});
```
+",documentation/embeddings/nvidia.md
+"---
+title: Prem AI
+weight: 2800
-Then you will have `1 * 10 * 2 = 20` total physical shards in the collection.
+---
-To specify the shard for each point, you need to provide the `shard_key` field in the upsert request:
+# Prem AI
-```http
+[PremAI](https://premai.io/) is a unified generative AI development platform for fine-tuning, deploying, and monitoring AI models.
-PUT /collections/{collection_name}/points
-{
- ""points"": [
+Qdrant is compatible with PremAI APIs.
- {
- ""id"": 1111,
- ""vector"": [0.1, 0.2, 0.3]
+### Installing the SDKs
- },
- ]
- ""shard_key"": ""user_1""
+```bash
-}
+pip install premai qdrant-client
```
-```python
-
-from qdrant_client import QdrantClient
+To install the npm package:
-from qdrant_client.http import models
+```bash
-client = QdrantClient(""localhost"", port=6333)
+npm install @premai/prem-sdk @qdrant/js-client-rest
+```
-client.upsert(
- collection_name=""{collection_name}"",
+### Import all required packages
- points=[
- models.PointStruct(
- id=1111,
+```python
- vector=[0.1, 0.2, 0.3],
+from premai import Prem
- ),
- ],
- shard_key_selector=""user_1"",
+from qdrant_client import QdrantClient
-)
+from qdrant_client.models import Distance, VectorParams
```
@@ -8510,1189 +8355,1184 @@ client.upsert(
```typescript
+import Prem from '@premai/prem-sdk';
+import { QdrantClient } from '@qdrant/js-client-rest';
-client.upsertPoints(""{collection_name}"", {
+```
- points: [
- {
- id: 1111,
+### Define all the constants
- vector: [0.1, 0.2, 0.3],
- },
- ],
+We need to define the project ID and the embedding model to use. You can learn more about obtaining these in the PremAI [docs](https://docs.premai.io/quick-start).
- shard_key: ""user_1"",
-});
-```
+```python
-```rust
+PROJECT_ID = 123
+EMBEDDING_MODEL = ""text-embedding-3-large""
+COLLECTION_NAME = ""prem-collection-py""
-use qdrant_client::qdrant::{PointStruct, WriteOrdering, WriteOrderingType};
+QDRANT_SERVER_URL = ""http://localhost:6333""
+DOCUMENTS = [
+ ""This is a sample python document"",
-client
+ ""We will be using qdrant and premai python sdk""
- .upsert_points_blocking(
+]
- ""{collection_name}"",
+```
- Some(vec![shard_key::Key::String(""user_1"".into())]),
- vec![
- PointStruct::new(
+```typescript
- 1111,
+const PROJECT_ID = 123;
- vec![0.1, 0.2, 0.3],
+const EMBEDDING_MODEL = ""text-embedding-3-large"";
- Default::default(),
+const COLLECTION_NAME = ""prem-collection-js"";
- ),
+const SERVER_URL = ""http://localhost:6333""
- ],
+const DOCUMENTS = [
- None,
+ ""This is a sample javascript document"",
- )
+ ""We will be using qdrant and premai javascript sdk""
- .await?;
+];
```
-```java
+### Set up PremAI and Qdrant clients
-import java.util.List;
-import static io.qdrant.client.PointIdFactory.id;
-import static io.qdrant.client.ShardKeySelectorFactory.shardKeySelector;
+```python
-import static io.qdrant.client.VectorsFactory.vectors;
+prem_client = Prem(api_key=""xxxx-xxx-xxx"")
+qdrant_client = QdrantClient(url=QDRANT_SERVER_URL)
+```
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Points.PointStruct;
+```typescript
-import io.qdrant.client.grpc.Points.UpsertPoints;
+const premaiClient = new Prem({
+ apiKey: ""xxxx-xxx-xxx""
+})
-QdrantClient client =
+const qdrantClient = new QdrantClient({ url: SERVER_URL });
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+```
-client
+### Generating Embeddings
- .upsertAsync(
- UpsertPoints.newBuilder()
- .setCollectionName(""{collection_name}"")
+```python
- .addAllPoints(
+from typing import Union, List
- List.of(
- PointStruct.newBuilder()
- .setId(id(111))
+def get_embeddings(
- .setVectors(vectors(0.1f, 0.2f, 0.3f))
+ project_id: int,
- .build()))
+ embedding_model: str,
- .setShardKeySelector(shardKeySelector(""user_1""))
+ documents: Union[str, List[str]]
- .build())
+) -> List[List[float]]:
- .get();
+ """"""
-```
+ Helper function to get the embeddings from premai sdk
+ Args
+ project_id (int): The project id from prem saas platform.
-```csharp
+ embedding_model (str): The embedding model alias to choose
-using Qdrant.Client;
+ documents (Union[str, List[str]]): Single texts or list of texts to embed
-using Qdrant.Client.Grpc;
+ Returns:
+ List[List[float]]: A list of embeddings, each represented as a list of floats
-var client = new QdrantClient(""localhost"", 6334);
+ """"""
+ embeddings = []
+ documents = [documents] if isinstance(documents, str) else documents
-await client.UpsertAsync(
+ for embedding in prem_client.embeddings.create(
- collectionName: ""{collection_name}"",
+ project_id=project_id,
- points: new List
+ model=embedding_model,
- {
+ input=documents
- new() { Id = 111, Vectors = new[] { 0.1f, 0.2f, 0.3f } }
+ ).data:
- },
+ embeddings.append(embedding.embedding)
- shardKeySelector: new ShardKeySelector { ShardKeys = { new List { ""user_id"" } } }
+
-);
+ return embeddings
```
-
+```typescript
-
+async function getEmbeddings(projectID, embeddingModel, documents) {
-* When using custom sharding, IDs are only enforced to be unique within a shard key. This means that you can have multiple points with the same ID, if they have different shard keys.
+ const response = await premaiClient.embeddings.create({
-This is a limitation of the current implementation, and is an anti-pattern that should be avoided because it can create scenarios of points with the same ID to have different contents. In the future, we plan to add a global ID uniqueness check.
+ project_id: projectID,
-
+ model: embeddingModel,
+ input: documents
+ });
-Now you can target the operations to specific shard(s) by specifying the `shard_key` on any operation you do. Operations that do not specify the shard key will be executed on __all__ shards.
+ return response;
+}
+```
-Another use-case would be to have shards that track the data chronologically, so that you can do more complex itineraries like uploading live data in one shard and archiving it once a certain age has passed.
+### Converting Embeddings to Qdrant Points
-
-### Shard transfer method
+```python
+from qdrant_client.models import PointStruct
-*Available as of v1.7.0*
+embeddings = get_embeddings(
-There are different methods for transferring, such as moving or replicating, a
+ project_id=PROJECT_ID,
-shard to another node. Depending on what performance and guarantees you'd like
+ embedding_model=EMBEDDING_MODEL,
-to have and how you'd like to manage your cluster, you likely want to choose a
+ documents=DOCUMENTS
-specific method. Each method has its own pros and cons. Which is fastest depends
+)
-on the size and state of a shard.
+points = [
-Available shard transfer methods are:
+ PointStruct(
+ id=idx,
+ vector=embedding,
-- `stream_records`: _(default)_ transfer shard by streaming just its records to the target node in batches.
+ payload={""text"": text},
-- `snapshot`: transfer shard including its index and quantized data by utilizing a [snapshot](../../concepts/snapshots) automatically.
+ ) for idx, (embedding, text) in enumerate(zip(embeddings, DOCUMENTS))
+]
+```
-Each has pros, cons and specific requirements, which are:
+```typescript
-| Method: | Stream records | Snapshot |
+function convertToQdrantPoints(embeddings, texts) {
-|:---|:---|:---|
+ return embeddings.data.map((data, i) => {
-| **Connection** |
Requires internal gRPC API (port 6335)
|
Requires internal gRPC API (port 6335)
Requires REST API (port 6333)
|
+ return {
-| **HNSW index** |
Doesn't transfer index
Will reindex on target node
|
Index is transferred with a snapshot
Immediately ready on target node
|
+ id: i,
-| **Quantization** |
Doesn't transfer quantized data
Will re-quantize on target node
|
Quantized data is transferred with a snapshot
Immediately ready on target node
|
+ vector: data.embedding,
-| **Consistency** |
Weak data consistency
Unordered updates on target node[^unordered]
|
Strong data consistency
Ordered updates on target node[^ordered]
|
+ payload: {
-| **Disk space** |
No extra disk space required
|
Extra disk space required for snapshot on both nodes
|
+ text: texts[i]
+ }
+ };
-[^unordered]: Weak data consistency and unordered updates: All records are streamed to the target node in order.
+ });
- New updates are received on the target node in parallel, while the transfer
+}
- of records is still happening. We therefore have `weak` ordering, regardless
- of what [ordering](#write-ordering) is used for updates.
-[^ordered]: Strong data consistency and ordered updates: A snapshot of the shard
+const embeddings = await getEmbeddings(PROJECT_ID, EMBEDDING_MODEL, DOCUMENTS);
- is created, it is transferred and recovered on the target node. That ensures
+const points = convertToQdrantPoints(embeddings, DOCUMENTS);
- the state of the shard is kept consistent. New updates are queued on the
+```
- source node, and transferred in order to the target node. Updates therefore
- have the same [ordering](#write-ordering) as the user selects, making
- `strong` ordering possible.
+### Set up a Qdrant Collection
-To select a shard transfer method, specify the `method` like:
+```python
+qdrant_client.create_collection(
+ collection_name=COLLECTION_NAME,
-```http
+ vectors_config=VectorParams(size=3072, distance=Distance.DOT)
-POST /collections/{collection_name}/cluster
+)
-{
+```
- ""move_shard"": {
+```typescript
- ""shard_id"": 0,
+await qdrantClient.createCollection(COLLECTION_NAME, {
- ""from_peer_id"": 381894127,
+ vectors: {
- ""to_peer_id"": 467122995,
+ size: 3072,
- ""method"": ""snapshot""
+ distance: 'Cosine'
}
-}
+})
```
-The `stream_records` transfer method is the simplest available. It simply
+### Insert Documents into the Collection
-transfers all shard records in batches to the target node until it has
-transferred all of them, keeping both shards in sync. It will also make sure the
-transferred shard indexing process is keeping up before performing a final
+```python
-switch. The method has two common disadvantages: 1. It does not transfer index
+doc_ids = list(range(len(embeddings)))
-or quantization data, meaning that the shard has to be optimized again on the
-new node, which can be very expensive. 2. The consistency and ordering
-guarantees are `weak`[^unordered], which is not suitable for some applications.
+qdrant_client.upsert(
-Because it is so simple, it's also very robust, making it a reliable choice if
+ collection_name=COLLECTION_NAME,
-the above cons are acceptable in your use case. If your cluster is unstable and
+ points=points
-out of resources, it's probably best to use the `stream_records` transfer
+)
-method, because it is unlikely to fail.
+```
-The `snapshot` transfer method utilizes [snapshots](../../concepts/snapshots) to
+```typescript
-transfer a shard. A snapshot is created automatically. It is then transferred
+await qdrantClient.upsert(COLLECTION_NAME, {
-and restored on the target node. After this is done, the snapshot is removed
+ wait: true,
-from both nodes. While the snapshot/transfer/restore operation is happening, the
+ points
-source node queues up all new operations. All queued updates are then sent in
+ });
-order to the target shard to bring it into the same state as the source. There
+```
-are two important benefits: 1. It transfers index and quantization data, so that
-the shard does not have to be optimized again on the target node, making them
-immediately available. This way, Qdrant ensures that there will be no
+### Perform a Search
-degradation in performance at the end of the transfer. Especially on large
-shards, this can give a huge performance improvement. 2. The consistency and
-ordering guarantees can be `strong`[^ordered], required for some applications.
+```python
+query = ""what is the extension of python document""
-The `stream_records` method is currently used as default. This may change in the
-future.
+query_embedding = get_embeddings(
+ project_id=PROJECT_ID,
+ embedding_model=EMBEDDING_MODEL,
-## Replication
+ documents=query
+)
-*Available as of v0.11.0*
+qdrant_client.search(collection_name=COLLECTION_NAME, query_vector=query_embedding[0])
+```
-Qdrant allows you to replicate shards between nodes in the cluster.
+```typescript
+const query = ""what is the extension of javascript document""
+const query_embedding_response = await getEmbeddings(PROJECT_ID, EMBEDDING_MODEL, query)
-Shard replication increases the reliability of the cluster by keeping several copies of a shard spread across the cluster.
-This ensures the availability of the data in case of node failures, except if all replicas are lost.
+await qdrantClient.search(COLLECTION_NAME, {
+ vector: query_embedding_response.data[0].embedding
-### Replication factor
+});
+```
+",documentation/embeddings/premai.md
+"---
+title: GradientAI
-When you create a collection, you can control how many shard replicas you'd like to store by changing the `replication_factor`. By default, `replication_factor` is set to ""1"", meaning no additional copy is maintained automatically. You can change that by setting the `replication_factor` when you create a collection.
+weight: 1750
+---
-Currently, the replication factor of a collection can only be configured at creation time.
+# Using GradientAI with Qdrant
-```http
-PUT /collections/{collection_name}
+GradientAI provides state-of-the-art models for generating embeddings, which are highly effective for vector search tasks in Qdrant.
-{
- ""vectors"": {
- ""size"": 300,
+## Installation
- ""distance"": ""Cosine""
- },
- ""shard_number"": 6,
+You can install the required packages using the following pip command:
- ""replication_factor"": 2,
-}
+
+```bash
+
+pip install gradientai python-dotenv qdrant-client
```
+## Code Example
+
+
+
```python
-from qdrant_client import QdrantClient
+from dotenv import load_dotenv
-from qdrant_client.http import models
+import qdrant_client
+from qdrant_client.models import Batch
+from gradientai import Gradient
-client = QdrantClient(""localhost"", port=6333)
+load_dotenv()
-client.create_collection(
- collection_name=""{collection_name}"",
- vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE),
+def main() -> None:
- shard_number=6,
+ # Initialize GradientAI client
- replication_factor=2,
+ gradient = Gradient()
-)
-```
+ # Retrieve the embeddings model
+ embeddings_model = gradient.get_embeddings_model(slug=""bge-large"")
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+ # Generate embeddings for your data
+ generate_embeddings_response = embeddings_model.generate_embeddings(
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ inputs=[
+ ""Multimodal brain MRI is the preferred method to evaluate for acute ischemic infarct and ideally should be obtained within 24 hours of symptom onset, and in most centers will follow a NCCT"",
+ ""CTA has a higher sensitivity and positive predictive value than magnetic resonance angiography (MRA) for detection of intracranial stenosis and occlusion and is recommended over time-of-flight (without contrast) MRA"",
-client.createCollection(""{collection_name}"", {
+ ""Echocardiographic strain imaging has the advantage of detecting early cardiac involvement, even before thickened walls or symptoms are apparent"",
- vectors: {
+ ],
- size: 300,
+ )
- distance: ""Cosine"",
- },
- shard_number: 6,
+ # Initialize Qdrant client
- replication_factor: 2,
+ client = qdrant_client.QdrantClient(url=""http://localhost:6333"")
-});
-```
+ # Upsert the embeddings into Qdrant
+ for i, embedding in enumerate(generate_embeddings_response.embeddings):
-```rust
+ client.upsert(
-use qdrant_client::{
+ collection_name=""MedicalRecords"",
- client::QdrantClient,
+ points=Batch(
- qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig},
+ ids=[i + 1], # Unique ID for each embedding
-};
+ vectors=[embedding.embedding],
+ )
+ )
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ print(""Embeddings successfully upserted into Qdrant."")
-client
+ gradient.close()
- .create_collection(&CreateCollection {
- collection_name: ""{collection_name}"".into(),
- vectors_config: Some(VectorsConfig {
+if __name__ == ""__main__"":
- config: Some(Config::Params(VectorParams {
+ main()
- size: 300,
+```",documentation/embeddings/gradientai.md
+"---
- distance: Distance::Cosine.into(),
+title: Gemini
- ..Default::default()
+weight: 1600
- })),
+---
- }),
- shard_number: Some(6),
- replication_factor: Some(2),
+| Time: 10 min | Level: Beginner | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/qdrant/examples/blob/gemini-getting-started/gemini-getting-started/gemini-getting-started.ipynb) |
- ..Default::default()
+| --- | ----------- | ----------- |
- })
- .await?;
-```
+# Gemini
-```java
+Qdrant is compatible with the Gemini Embedding Model API and its official Python SDK, which can be installed like any other package.
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Collections.CreateCollection;
+Gemini is a new family of Google PaLM models, released in December 2023. The new embedding models succeed the previous Gecko Embedding Model.
-import io.qdrant.client.grpc.Collections.Distance;
-import io.qdrant.client.grpc.Collections.VectorParams;
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+In the latest models, an additional parameter, `task_type`, can be passed to the API call. This parameter serves to designate the intended purpose for the embeddings utilized.
-QdrantClient client =
+The Embedding Model API supports various task types, outlined as follows:
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+1. `retrieval_query`: query in a search/retrieval setting
-client
+2. `retrieval_document`: document from the corpus being searched
- .createCollectionAsync(
+3. `semantic_similarity`: semantic text similarity
- CreateCollection.newBuilder()
+4. `classification`: embeddings to be used for text classification
- .setCollectionName(""{collection_name}"")
+5. `clustering`: the generated embeddings will be used for clustering
- .setVectorsConfig(
+6. `task_type_unspecified`: Unset value, which will default to one of the other values.
- VectorsConfig.newBuilder()
- .setParams(
- VectorParams.newBuilder()
- .setSize(300)
- .setDistance(Distance.Cosine)
+If you're building a semantic search application, such as RAG, you should use `task_type=""retrieval_document""` for the indexed documents and `task_type=""retrieval_query""` for the search queries.
- .build())
- .build())
- .setShardNumber(6)
+The following example shows how to do this with Qdrant:
- .setReplicationFactor(2)
- .build())
- .get();
+## Setup
-```
+```bash
-```csharp
+pip install google-generativeai
-using Qdrant.Client;
+```
-using Qdrant.Client.Grpc;
+Let's see how to use the Embedding Model API to embed a document for retrieval.
-var client = new QdrantClient(""localhost"", 6334);
+The following example shows how to embed a document with the `models/embedding-001` with the `retrieval_document` task type:
-await client.CreateCollectionAsync(
- collectionName: ""{collection_name}"",
- vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine },
+## Embedding a document
- shardNumber: 6,
- replicationFactor: 2
-);
+```python
-```
+import google.generativeai as gemini_client
+from qdrant_client import QdrantClient
+from qdrant_client.models import Distance, PointStruct, VectorParams
-This code sample creates a collection with a total of 6 logical shards backed by a total of 12 physical shards.
+collection_name = ""example_collection""
-Since a replication factor of ""2"" would require twice as much storage space, it is advised to make sure the hardware can host the additional shard replicas beforehand.
+GEMINI_API_KEY = ""YOUR GEMINI API KEY"" # add your key here
-### Creating new shard replicas
+client = QdrantClient(url=""http://localhost:6333"")
-It is possible to create or delete replicas manually on an existing collection using the [Update collection cluster setup API](https://qdrant.github.io/qdrant/redoc/index.html?v=v0.11.0#tag/cluster/operation/update_collection_cluster).
+gemini_client.configure(api_key=GEMINI_API_KEY)
+texts = [
+ ""Qdrant is a vector database that is compatible with Gemini."",
-A replica can be added on a specific peer by specifying the peer from which to replicate.
+ ""Gemini is a new family of Google PaLM models, released in December 2023."",
+]
-```http
-POST /collections/{collection_name}/cluster
+results = [
-{
+ gemini_client.embed_content(
- ""replicate_shard"": {
+ model=""models/embedding-001"",
- ""shard_id"": 0,
+ content=sentence,
- ""from_peer_id"": 381894127,
+ task_type=""retrieval_document"",
- ""to_peer_id"": 467122995
+ title=""Qdrant x Gemini"",
- }
+ )
-}
+ for sentence in texts
+
+]
```
-
+## Creating Qdrant Points and Indexing documents with Qdrant
-And a replica can be removed on a specific peer.
+### Creating Qdrant Points
-```http
+```python
-POST /collections/{collection_name}/cluster
+points = [
-{
+ PointStruct(
- ""drop_replica"": {
+ id=idx,
- ""shard_id"": 0,
+ vector=response['embedding'],
- ""peer_id"": 381894127
+ payload={""text"": text},
- }
+ )
-}
+ for idx, (response, text) in enumerate(zip(results, texts))
+
+]
```
-Keep in mind that a collection must contain at least one active replica of a shard.
+### Create Collection
-### Error handling
+```python
+client.create_collection(
+    collection_name,
+    vectors_config=VectorParams(
-Replicas can be in different states:
+ size=768,
+ distance=Distance.COSINE,
+ )
-- Active: healthy and ready to serve traffic
+)
-- Dead: unhealthy and not ready to serve traffic
+```
-- Partial: currently under resynchronization before activation
+### Add these into the collection
-A replica is marked as dead if it does not respond to internal healthchecks or if it fails to serve traffic.
+```python
-A dead replica will not receive traffic from other peers and might require a manual intervention if it does not recover automatically.
+client.upsert(collection_name, points)
+```
-This mechanism ensures data consistency and availability if a subset of the replicas fail during an update operation.
+## Searching for documents with Qdrant
-### Node Failure Recovery
+Once the documents are indexed, you can search for the most relevant documents using the same model with the `retrieval_query` task type:
-Sometimes hardware malfunctions might render some nodes of the Qdrant cluster unrecoverable.
-No system is immune to this.
+```python
+client.search(
+ collection_name=collection_name,
-But several recovery scenarios allow qdrant to stay available for requests and even avoid performance degradation.
+ query_vector=gemini_client.embed_content(
-Let's walk through them from best to worst.
+ model=""models/embedding-001"",
+ content=""Is Qdrant compatible with Gemini?"",
+ task_type=""retrieval_query"",
-**Recover with replicated collection**
+ )[""embedding""],
+)
+```
-If the number of failed nodes is less than the replication factor of the collection, then no data is lost.
-Your cluster should still be able to perform read, search and update queries.
+## Using Gemini Embedding Models with Binary Quantization
-Now, if the failed node restarts, consensus will trigger the replication process to update the recovering node with the newest updates it has missed.
+You can use Gemini Embedding Models with [Binary Quantization](/articles/binary-quantization/) - a technique that allows you to reduce the size of the embeddings by 32 times without losing the quality of the search results too much.
-**Recreate node with replicated collections**
+In this table, you can see the results of the search with the `models/embedding-001` model with Binary Quantization in comparison with the original model:
-If a node fails and it is impossible to recover it, you should exclude the dead node from the consensus and create an empty node.
+At an oversampling of 3 and a limit of 100, we have a 95% recall against the exact nearest neighbors with rescore enabled.
-To exclude failed nodes from the consensus, use [remove peer](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/remove_peer) API.
-Apply the `force` flag if necessary.
+| Oversampling | | 1 | 1 | 2 | 2 | 3 | 3 |
+|--------------|---------|----------|----------|----------|----------|----------|----------|
+| | **Rescore** | False | True | False | True | False | True |
-When you create a new node, make sure to attach it to the existing cluster by specifying `--bootstrap` CLI parameter with the URL of any of the running cluster nodes.
+| **Limit** | | | | | | | |
+| 10 | | 0.523333 | 0.831111 | 0.523333 | 0.915556 | 0.523333 | 0.950000 |
+| 20 | | 0.510000 | 0.836667 | 0.510000 | 0.912222 | 0.510000 | 0.937778 |
-Once the new node is ready and synchronized with the cluster, you might want to ensure that the collection shards are replicated enough. Remember that Qdrant will not automatically balance shards since this is an expensive operation.
+| 50 | | 0.489111 | 0.841556 | 0.489111 | 0.913333 | 0.488444 | 0.947111 |
-Use the [Replicate Shard Operation](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/update_collection_cluster) to create another copy of the shard on the newly connected node.
+| 100 | | 0.485778 | 0.846556 | 0.485556 | 0.929000 | 0.486000 | **0.956333** |
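+
+As a reference, here is a minimal sketch of such a setup with the Python client, reusing the `gemini_client` configured earlier on this page; the collection name is illustrative, and the search uses the oversampling and rescore settings from the best-performing cell above:
+
+```python
+import google.generativeai as gemini_client  # configured with your API key earlier on this page
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Create a collection for the 768-dimensional Gemini embeddings with binary quantization enabled
+client.create_collection(
+    collection_name=""gemini_bq_collection"",
+    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+    quantization_config=models.BinaryQuantization(
+        binary=models.BinaryQuantizationConfig(always_ram=True),
+    ),
+)
+
+# Query with oversampling=3 and rescoring, as in the last column of the table
+query_embedding = gemini_client.embed_content(
+    model=""models/embedding-001"",
+    content=""Is Qdrant compatible with Gemini?"",
+    task_type=""retrieval_query"",
+)[""embedding""]
+
+client.search(
+    collection_name=""gemini_bq_collection"",
+    query_vector=query_embedding,
+    limit=100,
+    search_params=models.SearchParams(
+        quantization=models.QuantizationSearchParams(
+            rescore=True,
+            oversampling=3.0,
+        )
+    ),
+)
+```
+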
-It's worth mentioning that Qdrant only provides the necessary building blocks to create an automated failure recovery.
+That's it! You can now use Gemini Embedding Models with Qdrant!
+",documentation/embeddings/gemini.md
+"---
-Building a completely automatic process of collection scaling would require control over the cluster machines themself.
+title: OCI (Oracle Cloud Infrastructure)
-Check out our [cloud solution](https://qdrant.to/cloud), where we made exactly that.
+weight: 2500
+---
+# Using OCI (Oracle Cloud Infrastructure) with Qdrant
-**Recover from snapshot**
+OCI provides robust cloud-based embeddings for various media types. The Generative AI Embedding Models convert textual input - ranging from phrases and sentences to entire paragraphs - into a structured format known as embeddings. Each piece of text input is transformed into a numerical array consisting of 1024 distinct numbers.
-If there are no copies of data in the cluster, it is still possible to recover from a snapshot.
+## Installation
-Follow the same steps to detach failed node and create a new one in the cluster:
+You can install the required package using the following pip command:
-* To exclude failed nodes from the consensus, use [remove peer](https://qdrant.github.io/qdrant/redoc/index.html#tag/cluster/operation/remove_peer) API. Apply the `force` flag if necessary.
-* Create a new node, making sure to attach it to the existing cluster by specifying the `--bootstrap` CLI parameter with the URL of any of the running cluster nodes.
+```bash
+pip install oci
-Snapshot recovery, used in single-node deployment, is different from cluster one.
+```
-Consensus manages all metadata about all collections and does not require snapshots to recover it.
-But you can use snapshots to recover missing shards of the collections.
+## Code Example
-Use the [Collection Snapshot Recovery API](../../concepts/snapshots/#recover-in-cluster-deployment) to do it.
-The service will download the specified snapshot of the collection and recover shards with data from it.
+Below is an example of how to obtain embeddings using OCI (Oracle Cloud Infrastructure)'s API and store them in a Qdrant collection:
-Once all shards of the collection are recovered, the collection will become operational again.
+```python
+import qdrant_client
+from qdrant_client.models import Batch
-## Consistency guarantees
+import oci
-By default, Qdrant focuses on availability and maximum throughput of search operations.
+# Initialize OCI client
-For the majority of use cases, this is a preferable trade-off.
+config = oci.config.from_file()
+ai_client = oci.ai_language.AIServiceLanguageClient(config)
-During the normal state of operation, it is possible to search and modify data from any peers in the cluster.
+# Generate embeddings using OCI's AI service
+text = ""OCI provides cloud-based AI services.""
-Before responding to the client, the peer handling the request dispatches all operations according to the current topology in order to keep the data synchronized across the cluster.
+response = ai_client.batch_detect_language_entities(text)
+embeddings = response.data[0].entities[0].embedding
-- reads are using a partial fan-out strategy to optimize latency and availability
-- writes are executed in parallel on all active sharded replicas
+# Initialize Qdrant client
+qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333)
-![Embeddings](/docs/concurrent-operations-replicas.png)
+# Upsert the embedding into Qdrant
+qdrant_client.upsert(
-However, in some cases, it is necessary to ensure additional guarantees during possible hardware instabilities, mass concurrent updates of same documents, etc.
+ collection_name=""CloudAI"",
+ points=Batch(
+ ids=[1],
-Qdrant provides a few options to control consistency guarantees:
+ vectors=[embeddings],
+ )
+)
-- `write_consistency_factor` - defines the number of replicas that must acknowledge a write operation before responding to the client. Increasing this value will make write operations tolerant to network partitions in the cluster, but will require a higher number of replicas to be active to perform write operations.
-- Read `consistency` param, can be used with search and retrieve operations to ensure that the results obtained from all replicas are the same. If this option is used, Qdrant will perform the read operation on multiple replicas and resolve the result according to the selected strategy. This option is useful to avoid data inconsistency in case of concurrent updates of the same documents. This options is preferred if the update operations are frequent and the number of replicas is low.
-- Write `ordering` param, can be used with update and delete operations to ensure that the operations are executed in the same order on all replicas. If this option is used, Qdrant will route the operation to the leader replica of the shard and wait for the response before responding to the client. This option is useful to avoid data inconsistency in case of concurrent updates of the same documents. This options is preferred if read operations are more frequent than update and if search performance is critical.
+```
+",documentation/embeddings/oci.md
+"---
+title: Jina Embeddings
+weight: 1900
-### Write consistency factor
+aliases:
+ - /documentation/embeddings/jina-emebddngs/
+ - ../integrations/jina-embeddings/
-The `write_consistency_factor` represents the number of replicas that must acknowledge a write operation before responding to the client. It is set to one by default.
+---
-It can be configured at the collection's creation time.
+# Jina Embeddings
-```http
-PUT /collections/{collection_name}
-{
+Qdrant can also easily work with [Jina embeddings](https://jina.ai/embeddings/) which allow for model input lengths of up to 8192 tokens.
- ""vectors"": {
- ""size"": 300,
- ""distance"": ""Cosine""
+To call their endpoint, all you need is an API key obtainable [here](https://jina.ai/embeddings/). By the way, our friends from **Jina AI** provided us with a code (**QDRANT**) that will grant you a **10% discount** if you plan to use Jina Embeddings in production.
- },
- ""shard_number"": 6,
- ""replication_factor"": 2,
+```python
- ""write_consistency_factor"": 2,
+import qdrant_client
-}
+import requests
-```
+from qdrant_client.models import Distance, VectorParams, Batch
-```python
-from qdrant_client import QdrantClient
-from qdrant_client.http import models
+# Provide Jina API key and choose one of the available models.
+# You can get a free trial key here: https://jina.ai/embeddings/
+JINA_API_KEY = ""jina_xxxxxxxxxxx""
-client = QdrantClient(""localhost"", port=6333)
+MODEL = ""jina-embeddings-v2-base-en"" # or ""jina-embeddings-v2-base-en""
+EMBEDDING_SIZE = 768 # 512 for small variant
-client.create_collection(
- collection_name=""{collection_name}"",
+# Get embeddings from the API
- vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE),
+url = ""https://api.jina.ai/v1/embeddings""
- shard_number=6,
- replication_factor=2,
- write_consistency_factor=2,
+headers = {
-)
+ ""Content-Type"": ""application/json"",
-```
+ ""Authorization"": f""Bearer {JINA_API_KEY}"",
+}
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+data = {
+ ""input"": [""Your text string goes here"", ""You can send multiple texts""],
+ ""model"": MODEL,
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+}
-client.createCollection(""{collection_name}"", {
+response = requests.post(url, headers=headers, json=data)
- vectors: {
+embeddings = [d[""embedding""] for d in response.json()[""data""]]
- size: 300,
- distance: ""Cosine"",
- },
- shard_number: 6,
- replication_factor: 2,
+# Index the embeddings into Qdrant
- write_consistency_factor: 2,
+client = qdrant_client.QdrantClient("":memory:"")
-});
+client.create_collection(
-```
+ collection_name=""MyCollection"",
+
+ vectors_config=VectorParams(size=EMBEDDING_SIZE, distance=Distance.DOT),
+
+)
-```rust
-use qdrant_client::{
- client::QdrantClient,
+client.upsert(
- qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig},
+ collection_name=""MyCollection"",
-};
+ points=Batch(
+ ids=list(range(len(embeddings))),
+ vectors=embeddings,
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ ),
+)
-client
- .create_collection(&CreateCollection {
+```
- collection_name: ""{collection_name}"".into(),
- vectors_config: Some(VectorsConfig {
+",documentation/embeddings/jina-embeddings.md
+"---
- config: Some(Config::Params(VectorParams {
+title: Upstage
- size: 300,
+weight: 3100
- distance: Distance::Cosine.into(),
+---
- ..Default::default()
- })),
- }),
+# Upstage
- shard_number: Some(6),
- replication_factor: Some(2),
- write_consistency_factor: Some(2),
+Qdrant supports working with the Solar Embeddings API from [Upstage](https://upstage.ai/).
- ..Default::default()
- })
- .await?;
+[Solar Embeddings](https://developers.upstage.ai/docs/apis/embeddings) API features dual models for user queries and document embedding, within a unified vector space, designed for performant text processing.
-```
+You can generate an API key to authenticate the requests from the [Upstage Console]().
-```java
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
+### Setting up the Qdrant client and Upstage session
-import io.qdrant.client.grpc.Collections.CreateCollection;
-import io.qdrant.client.grpc.Collections.Distance;
-import io.qdrant.client.grpc.Collections.VectorParams;
+```python
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+import requests
+from qdrant_client import QdrantClient
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+UPSTAGE_BASE_URL = ""https://api.upstage.ai/v1/solar/embeddings""
-client
+UPSTAGE_API_KEY = """"
- .createCollectionAsync(
- CreateCollection.newBuilder()
- .setCollectionName(""{collection_name}"")
+upstage_session = requests.Session()
- .setVectorsConfig(
- VectorsConfig.newBuilder()
- .setParams(
+client = QdrantClient(url=""http://localhost:6333"")
- VectorParams.newBuilder()
- .setSize(300)
- .setDistance(Distance.Cosine)
+headers = {
- .build())
+ ""Authorization"": f""Bearer {UPSTAGE_API_KEY}"",
- .build())
+ ""Accept"": ""application/json"",
- .setShardNumber(6)
+}
- .setReplicationFactor(2)
- .setWriteConsistencyFactor(2)
- .build())
+texts = [
- .get();
+ ""Qdrant is the best vector search engine!"",
+
+ ""Loved by Enterprises and everyone building for low latency, high performance, and scale."",
+
+]
```
-```csharp
+```typescript
-using Qdrant.Client;
+import { QdrantClient } from '@qdrant/js-client-rest';
-using Qdrant.Client.Grpc;
+const UPSTAGE_BASE_URL = ""https://api.upstage.ai/v1/solar/embeddings""
-var client = new QdrantClient(""localhost"", 6334);
+const UPSTAGE_API_KEY = """"
-await client.CreateCollectionAsync(
+const client = new QdrantClient({ url: 'http://localhost:6333' });
- collectionName: ""{collection_name}"",
- vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine },
- shardNumber: 6,
+const headers = {
- replicationFactor: 2,
+ ""Authorization"": ""Bearer "" + UPSTAGE_API_KEY,
- writeConsistencyFactor: 2
+ ""Accept"": ""application/json"",
-);
+ ""Content-Type"": ""application/json""
-```
+}
-Write operations will fail if the number of active replicas is less than the `write_consistency_factor`.
+const texts = [
+ ""Qdrant is the best vector search engine!"",
+ ""Loved by Enterprises and everyone building for low latency, high performance, and scale."",
-### Read consistency
+]
+```
-Read `consistency` can be specified for most read requests and will ensure that the returned result
-is consistent across cluster nodes.
+The following example shows how to embed documents with the recommended `solar-embedding-1-large-passage` and `solar-embedding-1-large-query` models, which generate sentence embeddings of size 4096.
-- `all` will query all nodes and return points, which present on all of them
+### Embedding documents
-- `majority` will query all nodes and return points, which present on the majority of them
-- `quorum` will query randomly selected majority of nodes and return points, which present on all of them
-- `1`/`2`/`3`/etc - will query specified number of randomly selected nodes and return points which present on all of them
+```python
-- default `consistency` is `1`
+body = {
+ ""input"": texts,
+ ""model"": ""solar-embedding-1-large-passage"",
-```http
+}
-POST /collections/{collection_name}/points/search?consistency=majority
-{
- ""filter"": {
+response_body = upstage_session.post(
- ""must"": [
+ UPSTAGE_BASE_URL, headers=headers, json=body
- {
+).json()
- ""key"": ""city"",
+```
- ""match"": {
- ""value"": ""London""
- }
+```typescript
- }
+let body = {
- ]
+ ""input"": texts,
- },
+ ""model"": ""solar-embedding-1-large-passage"",
- ""params"": {
+}
- ""hnsw_ef"": 128,
- ""exact"": false
- },
+let response = await fetch(UPSTAGE_BASE_URL, {
- ""vector"": [0.2, 0.1, 0.9, 0.7],
+ method: ""POST"",
- ""limit"": 3
+ body: JSON.stringify(body),
-}
+ headers
-```
+});
-```python
+let response_body = await response.json()
-client.search(
+```
- collection_name=""{collection_name}"",
- query_filter=models.Filter(
- must=[
+### Converting the model outputs to Qdrant points
- models.FieldCondition(
- key=""city"",
- match=models.MatchValue(
+```python
- value=""London"",
+from qdrant_client.models import PointStruct
- ),
- )
- ]
+points = [
- ),
+ PointStruct(
- search_params=models.SearchParams(hnsw_ef=128, exact=False),
+ id=idx,
- query_vector=[0.2, 0.1, 0.9, 0.7],
+ vector=data[""embedding""],
- limit=3,
+ payload={""text"": text},
- consistency=""majority"",
+ )
-)
+ for idx, (data, text) in enumerate(zip(response_body[""data""], texts))
+
+]
```
@@ -9700,3421 +9540,3284 @@ client.search(
```typescript
-client.search(""{collection_name}"", {
+let points = response_body.data.map((data, i) => {
- filter: {
+ return {
- must: [{ key: ""city"", match: { value: ""London"" } }],
+ id: i,
- },
+ vector: data.embedding,
+
+ payload: {
- params: {
+ text: texts[i]
- hnsw_ef: 128,
+ }
- exact: false,
+ }
- },
+})
- vector: [0.2, 0.1, 0.9, 0.7],
+```
- limit: 3,
- consistency: ""majority"",
-});
+### Creating a collection to insert the documents
-```
+```python
-```rust
+from qdrant_client.models import VectorParams, Distance
-use qdrant_client::{
- client::QdrantClient,
- qdrant::{
+collection_name = ""example_collection""
- read_consistency::Value, Condition, Filter, ReadConsistency, ReadConsistencyType,
- SearchParams, SearchPoints,
- },
+client.create_collection(
-};
+ collection_name,
+ vectors_config=VectorParams(
+ size=4096,
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ distance=Distance.COSINE,
+ ),
+)
-client
+client.upsert(collection_name, points)
- .search_points(&SearchPoints {
+```
- collection_name: ""{collection_name}"".into(),
- filter: Some(Filter::must([Condition::matches(
- ""city"",
+```typescript
- ""London"".into(),
+const COLLECTION_NAME = ""example_collection""
- )])),
- params: Some(SearchParams {
- hnsw_ef: Some(128),
+await client.createCollection(COLLECTION_NAME, {
- exact: Some(false),
+ vectors: {
- ..Default::default()
+ size: 4096,
- }),
+ distance: 'Cosine',
- vector: vec![0.2, 0.1, 0.9, 0.7],
+ }
- limit: 3,
+});
- read_consistency: Some(ReadConsistency {
- value: Some(Value::Type(ReadConsistencyType::Majority.into())),
- }),
+await client.upsert(COLLECTION_NAME, {
- ..Default::default()
+ wait: true,
- })
+ points
- .await?;
+})
```
-```java
+## Searching for documents with Qdrant
-import java.util.List;
+Once all the documents are added, you can search for the most relevant documents.
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+```python
-import io.qdrant.client.QdrantClient;
+body = {
-import io.qdrant.client.QdrantGrpcClient;
+ ""input"": ""What is the best to use for vector search scaling?"",
-import io.qdrant.client.grpc.Points.Filter;
+ ""model"": ""solar-embedding-1-large-query"",
-import io.qdrant.client.grpc.Points.ReadConsistency;
+}
-import io.qdrant.client.grpc.Points.ReadConsistencyType;
-import io.qdrant.client.grpc.Points.SearchParams;
-import io.qdrant.client.grpc.Points.SearchPoints;
+response_body = upstage_session.post(
+ UPSTAGE_BASE_URL, headers=headers, json=body
+).json()
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+client.search(
+ collection_name=collection_name,
-client
+ query_vector=response_body[""data""][0][""embedding""],
- .searchAsync(
+)
- SearchPoints.newBuilder()
+```
- .setCollectionName(""{collection_name}"")
- .setFilter(Filter.newBuilder().addMust(matchKeyword(""city"", ""London"")).build())
- .setParams(SearchParams.newBuilder().setHnswEf(128).setExact(true).build())
+```typescript
- .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
+body = {
- .setLimit(3)
+ ""input"": ""What is the best to use for vector search scaling?"",
- .setReadConsistency(
+ ""model"": ""solar-embedding-1-large-query"",
- ReadConsistency.newBuilder().setType(ReadConsistencyType.Majority).build())
+}
- .build())
- .get();
-```
+response = await fetch(UPSTAGE_BASE_URL, {
+ method: ""POST"",
+ body: JSON.stringify(body),
-```csharp
+ headers
-using Qdrant.Client;
+});
-using Qdrant.Client.Grpc;
-using static Qdrant.Client.Grpc.Conditions;
+response_body = await response.json()
-var client = new QdrantClient(""localhost"", 6334);
+await client.search(COLLECTION_NAME, {
+ vector: response_body.data[0].embedding,
-await client.SearchAsync(
+});
- collectionName: ""{collection_name}"",
+```
+",documentation/embeddings/upstage.md
+"---
- vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
+title: John Snow Labs
- filter: MatchKeyword(""city"", ""London""),
+weight: 2000
- searchParams: new SearchParams { HnswEf = 128, Exact = true },
+---
- limit: 3,
- readConsistency: new ReadConsistency { Type = ReadConsistencyType.Majority }
-);
+# Using John Snow Labs with Qdrant
-```
+John Snow Labs offers a variety of models, particularly in the healthcare domain. They have pre-trained models that can generate embeddings for medical text data.
-### Write ordering
+## Installation
-Write `ordering` can be specified for any write request to serialize it through a single ""leader"" node,
-which ensures that all write operations (issued with the same `ordering`) are performed and observed
-sequentially.
+You can install the required package using the following pip command:
-- `weak` _(default)_ ordering does not provide any additional guarantees, so write operations can be freely reordered.
+```bash
-- `medium` ordering serializes all write operations through a dynamically elected leader, which might cause minor inconsistencies in case of leader change.
+pip install johnsnowlabs
-- `strong` ordering serializes all write operations through the permanent leader, which provides strong consistency, but write operations may be unavailable if the leader is down.
+```
-```http
-PUT /collections/{collection_name}/points?ordering=strong
-{
+Here is an example of how you might obtain embeddings using John Snow Labs's API and store them in a Qdrant collection:
- ""batch"": {
- ""ids"": [1, 2, 3],
- ""payloads"": [
+```python
- {""color"": ""red""},
+import qdrant_client
- {""color"": ""green""},
+from qdrant_client.models import Batch
- {""color"": ""blue""}
+from johnsnowlabs import nlp
- ],
- ""vectors"": [
- [0.9, 0.1, 0.1],
+# Load the pre-trained model, for example, a named entity recognition (NER) model
- [0.1, 0.9, 0.1],
+model = nlp.load_model(""ner_jsl"")
- [0.1, 0.1, 0.9]
- ]
- }
+# Sample text to generate embeddings
-}
+text = ""John Snow Labs provides state-of-the-art healthcare NLP solutions.""
-```
+# Generate embeddings for the text
-```python
+document = nlp.DocumentAssembler().setInput(text)
-client.upsert(
+embeddings = model.transform(document).collectEmbeddings()
- collection_name=""{collection_name}"",
- points=models.Batch(
- ids=[1, 2, 3],
+# Initialize Qdrant client
- payloads=[
+qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333)
- {""color"": ""red""},
- {""color"": ""green""},
- {""color"": ""blue""},
+# Upsert the embeddings into Qdrant
- ],
+qdrant_client.upsert(
- vectors=[
+ collection_name=""HealthcareNLP"",
- [0.9, 0.1, 0.1],
+ points=Batch(
- [0.1, 0.9, 0.1],
+ ids=[1], # This would be your unique ID for the data point
- [0.1, 0.1, 0.9],
+ vectors=[embeddings],
- ],
+ )
- ),
+)
- ordering=""strong"",
-)
```
+",documentation/embeddings/johnsnow.md
+"
-```typescript
+---
-client.upsert(""{collection_name}"", {
+title: Embeddings
- batch: {
+weight: 15
- ids: [1, 2, 3],
+---
- payloads: [{ color: ""red"" }, { color: ""green"" }, { color: ""blue"" }],
+# Supported Embedding Providers & Models
- vectors: [
- [0.9, 0.1, 0.1],
- [0.1, 0.9, 0.1],
+Qdrant supports all available text and multimodal dense vector embedding models as well as vector embedding services without any limitations.
- [0.1, 0.1, 0.9],
- ],
- },
+## Some of the Embeddings you can use with Qdrant:
- ordering: ""strong"",
-});
-```
+SentenceTransformers, BERT, SBERT, Clip, OpenClip, Open AI, Vertex AI, Azure AI, AWS Bedrock, Jina AI, Upstage AI, Mistral AI, Cohere AI, Voyage AI, Aleph Alpha, Baidu Qianfan, BGE, Instruct, Watsonx Embeddings, Snowflake Embeddings, NVIDIA NeMo, Nomic, OCI Embeddings, Ollama Embeddings, MixedBread, Together AI, Clarifai, Databricks Embeddings, GPT4All Embeddings, John Snow Labs Embeddings.
-```rust
+Additionally, [any open-source embeddings from HuggingFace](https://huggingface.co/spaces/mteb/leaderboard) can be used with Qdrant.
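+
+For illustration, here is a minimal sketch of indexing embeddings from a HuggingFace `sentence-transformers` model with the Python client; the model choice, collection name, and sample texts are ours:
+
+```python
+from qdrant_client import QdrantClient, models
+from sentence_transformers import SentenceTransformer
+
+# Any sentence-transformers model from HuggingFace works; all-MiniLM-L6-v2 outputs 384-dimensional vectors
+model = SentenceTransformer(""sentence-transformers/all-MiniLM-L6-v2"")
+texts = [
+    ""Qdrant is a vector database."",
+    ""Open-source HuggingFace models produce the embeddings."",
+]
+vectors = model.encode(texts).tolist()
+
+client = QdrantClient(url=""http://localhost:6333"")
+client.create_collection(
+    collection_name=""hf_embeddings"",
+    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
+)
+client.upsert(
+    collection_name=""hf_embeddings"",
+    points=[
+        models.PointStruct(id=idx, vector=vector, payload={""text"": text})
+        for idx, (vector, text) in enumerate(zip(vectors, texts))
+    ],
+)
+```
+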
-use qdrant_client::qdrant::{PointStruct, WriteOrdering, WriteOrderingType};
-use serde_json::json;
+## Code samples:
-client
- .upsert_points_blocking(
+| Embeddings Providers | Description |
- ""{collection_name}"",
+| ----------------------------- | ----------- |
- None,
+| [Aleph Alpha](./aleph-alpha/) | Multilingual embeddings focused on European languages. |
- vec![
+| [Azure](./azure/) | Microsoft's embedding model selection. |
- PointStruct::new(
+| [Bedrock](./bedrock/) | AWS managed service for foundation models and embeddings. |
- 1,
+| [Clarifai](./clarifai/) | Embeddings for image and video recognition. |
- vec![0.9, 0.1, 0.1],
+| [Clip](./clip/) | Aligns images and text, created by OpenAI. |
- json!({
+| [Cohere](./cohere/) | Language model embeddings for NLP tasks. |
- ""color"": ""red""
+| [Databricks](./databricks/) | Scalable embeddings integrated with Apache Spark. |
- })
+| [Gemini](./gemini/) | Google’s multimodal embeddings for text and vision. |
- .try_into()
+| [GPT4All](./gpt4all/) | Open-source, local embeddings for privacy-focused use. |
- .unwrap(),
+| [GradientAI](./gradient/) | AI Models for custom enterprise tasks.|
- ),
+| [Instruct](./instruct/) | Embeddings tuned for following instructions. |
- PointStruct::new(
+| [Jina AI](./jina-embeddings/) | Customizable embeddings for neural search. |
- 2,
+| [John Snow Labs](./johnsnow/) | Medical and clinical embeddings. |
- vec![0.1, 0.9, 0.1],
+| [Mistral](./mistral/) | Open-source, efficient language model embeddings. |
- json!({
+| [MixedBread](./mixedbread/) | Lightweight embeddings for constrained environments. |
- ""color"": ""green""
+| [Nomic](./nomic/) | Embeddings for data visualization. |
- })
+| [Nvidia](./nvidia/) | GPU-optimized embeddings from Nvidia. |
- .try_into()
+| [OCI](./oci/) | Oracle Cloud’s AI service with embeddings. |
- .unwrap(),
+| [Ollama](./ollama/) | Embeddings for conversational AI. |
- ),
+| [OpenAI](./openai/) | Industry-leading embeddings for NLP. |
- PointStruct::new(
+| [OpenCLIP](./openclip/) | Open-source implementation of CLIP for image and text. |
- 3,
+| [Prem AI](./premai/) | Precise language embeddings. |
- vec![0.1, 0.1, 0.9],
+| [Snowflake](./snowflake/) | Scalable embeddings for big data. |
- json!({
+| [Together AI](./togetherai/) | Community-driven, open-source embeddings. |
- ""color"": ""blue""
+| [Upstage](./upstage/) | Embeddings for speech and language tasks. |
- })
+| [Voyage AI](./voyage/) | High-quality text embeddings for search and retrieval. |
- .try_into()
+| [Watsonx](./watsonx/) | IBM's enterprise-grade embeddings. |
+",documentation/embeddings/_index.md
+"---
- .unwrap(),
+title: MixedBread
- ),
+weight: 2200
- ],
+---
- Some(WriteOrdering {
- r#type: WriteOrderingType::Strong.into(),
- }),
+# Using MixedBread with Qdrant
- )
- .await?;
-```
+MixedBread is a unique provider offering embeddings across multiple domains. Their models are versatile for various search tasks when integrated with Qdrant. MixedBread is creating state-of-the-art models and tools that make search smarter, faster, and more relevant. Whether you're building a next-gen search engine or RAG (Retrieval Augmented Generation) systems, or whether you're enhancing your existing search solution, they've got the ingredients to make it happen.
-```java
+## Installation
-import java.util.List;
-import java.util.Map;
+You can install the required package using the following pip command:
-import static io.qdrant.client.PointIdFactory.id;
-import static io.qdrant.client.ValueFactory.value;
+```bash
-import static io.qdrant.client.VectorsFactory.vectors;
+pip install mixedbread
+```
-import io.qdrant.client.grpc.Points.PointStruct;
-import io.qdrant.client.grpc.Points.UpsertPoints;
+## Integration Example
-import io.qdrant.client.grpc.Points.WriteOrdering;
-import io.qdrant.client.grpc.Points.WriteOrderingType;
+Below is an example of how to obtain embeddings using MixedBread's API and store them in a Qdrant collection:
-client
- .upsertAsync(
+```python
- UpsertPoints.newBuilder()
+import qdrant_client
- .setCollectionName(""{collection_name}"")
+from qdrant_client.models import Batch
- .addAllPoints(
+from mixedbread import MixedBreadModel
- List.of(
- PointStruct.newBuilder()
- .setId(id(1))
+# Initialize MixedBread model
- .setVectors(vectors(0.9f, 0.1f, 0.1f))
+model = MixedBreadModel(""mixedbread-variant"")
- .putAllPayload(Map.of(""color"", value(""red"")))
- .build(),
- PointStruct.newBuilder()
+# Generate embeddings
- .setId(id(2))
+text = ""MixedBread provides versatile embeddings for various domains.""
- .setVectors(vectors(0.1f, 0.9f, 0.1f))
+embeddings = model.embed(text)
- .putAllPayload(Map.of(""color"", value(""green"")))
- .build(),
- PointStruct.newBuilder()
+# Initialize Qdrant client
- .setId(id(3))
+qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333)
- .setVectors(vectors(0.1f, 0.1f, 0.94f))
- .putAllPayload(Map.of(""color"", value(""blue"")))
- .build()))
+# Upsert the embedding into Qdrant
- .setOrdering(WriteOrdering.newBuilder().setType(WriteOrderingType.Strong).build())
+qdrant_client.upsert(
- .build())
+ collection_name=""VersatileEmbeddings"",
- .get();
+ points=Batch(
-```
+ ids=[1],
+ vectors=[embeddings],
+ )
-```csharp
+)
-using Qdrant.Client;
-using Qdrant.Client.Grpc;
+```
+",documentation/embeddings/mixedbread.md
+"---
+title: Azure OpenAI
-var client = new QdrantClient(""localhost"", 6334);
+weight: 950
+---
-await client.UpsertAsync(
- collectionName: ""{collection_name}"",
+# Using Azure OpenAI with Qdrant
- points: new List
- {
- new()
+Azure OpenAI is Microsoft's platform for AI embeddings, focusing on powerful text and data analytics. These embeddings are suitable for high-precision vector searches in Qdrant.
- {
- Id = 1,
- Vectors = new[] { 0.9f, 0.1f, 0.1f },
+## Installation
- Payload = { [""city""] = ""red"" }
- },
- new()
+You can install the required packages using the following pip command:
- {
- Id = 2,
- Vectors = new[] { 0.1f, 0.9f, 0.1f },
+```bash
- Payload = { [""city""] = ""green"" }
+pip install openai azure-identity python-dotenv qdrant-client
- },
+```
- new()
- {
- Id = 3,
+## Code Example
- Vectors = new[] { 0.1f, 0.1f, 0.9f },
- Payload = { [""city""] = ""blue"" }
- }
+```python
- },
+import os
- ordering: WriteOrderingType.Strong
+import openai
-);
+import dotenv
-```
+import qdrant_client
+from qdrant_client.models import Batch
+from azure.identity import DefaultAzureCredential, get_bearer_token_provider
-## Listener mode
+dotenv.load_dotenv()
-
+# Set to True if using Azure Active Directory for authentication
-In some cases it might be useful to have a Qdrant node that only accumulates data and does not participate in search operations.
+use_azure_active_directory = False
-There are several scenarios where this can be useful:
+# Qdrant client setup
-- Listener option can be used to store data in a separate node, which can be used for backup purposes or to store data for a long time.
+qdrant_client = qdrant_client.QdrantClient(url=""http://localhost:6333"")
-- Listener node can be used to syncronize data into another region, while still performing search operations in the local region.
+# Azure OpenAI Authentication
+if not use_azure_active_directory:
+ endpoint = os.environ[""AZURE_OPENAI_ENDPOINT""]
-To enable listener mode, set `node_type` to `Listener` in the config file:
+ api_key = os.environ[""AZURE_OPENAI_API_KEY""]
+ client = openai.AzureOpenAI(
+ azure_endpoint=endpoint,
-```yaml
+ api_key=api_key,
-storage:
+ api_version=""2023-09-01-preview""
- node_type: ""Listener""
+ )
-```
+else:
+ endpoint = os.environ[""AZURE_OPENAI_ENDPOINT""]
+ client = openai.AzureOpenAI(
-Listener node will not participate in search operations, but will still accept write operations and will store the data in the local storage.
+ azure_endpoint=endpoint,
+ azure_ad_token_provider=get_bearer_token_provider(DefaultAzureCredential(), ""https://cognitiveservices.azure.com/.default""),
+ api_version=""2023-09-01-preview""
-All shards, stored on the listener node, will be converted to the `Listener` state.
-
-
+ )
-Additionally, all write requests sent to the listener node will be processed with `wait=false` option, which means that the write oprations will be considered successful once they are written to WAL.
-This mechanism should allow to minimize upsert latency in case of parallel snapshotting.
+# Deployment name of the model in Azure OpenAI Studio
+deployment = ""your-deployment-name"" # Replace with your deployment name
-## Consensus Checkpointing
+# Generate embeddings using the Azure OpenAI client
-Consensus checkpointing is a technique used in Raft to improve performance and simplify log management by periodically creating a consistent snapshot of the system state.
+text_input = ""The food was delicious and the waiter...""
-This snapshot represents a point in time where all nodes in the cluster have reached agreement on the state, and it can be used to truncate the log, reducing the amount of data that needs to be stored and transferred between nodes.
+embeddings_response = client.embeddings.create(
+ model=deployment,
+ input=text_input
-For example, if you attach a new node to the cluster, it should replay all the log entries to catch up with the current state.
+)
-In long-running clusters, this can take a long time, and the log can grow very large.
+# Extract the embedding vector from the response
-To prevent this, one can use a special checkpointing mechanism, that will truncate the log and create a snapshot of the current state.
+embedding_vector = embeddings_response.data[0].embedding
-To use this feature, simply call the `/cluster/recover` API on required node:
+# Insert the embedding into Qdrant
+qdrant_client.upsert(
+ collection_name=""MyCollection"",
-```http
+ points=Batch(
-POST /cluster/recover
+ ids=[1], # This ID can be dynamically assigned or managed
-```
+ vectors=[embedding_vector],
+ )
+)
-This API can be triggered on any non-leader node, it will send a request to the current consensus leader to create a snapshot. The leader will in turn send the snapshot back to the requesting node for application.
+print(""Embedding successfully upserted into Qdrant."")
-In some cases, this API can be used to recover from an inconsistent cluster state by forcing a snapshot creation.
-",documentation/guides/distributed_deployment.md
+```",documentation/embeddings/azure.md
"---
-title: Installation
+title: Database Optimization
-weight: 10
+weight: 2
-aliases:
+---
- - ../install
- - ../installation
----
+# Frequently Asked Questions: Database Optimization
-## Installation requirements
+### How do I reduce memory usage?
-The following sections describe the requirements for deploying Qdrant.
+The primary source of memory usage is vector data. There are several ways to address that:
-### CPU and memory
+- Configure [Quantization](../../guides/quantization/) to reduce the memory usage of vectors.
+- Configure on-disk vector storage.
-The CPU and RAM that you need depends on:
+The choice of the approach depends on your requirements.
+Read more about [configuring the optimal](../../tutorials/optimize/) use of Qdrant.
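+
+As a rough sketch, both options can be combined when creating a collection with the Python client; the collection name and vector size below are illustrative:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+client.create_collection(
+    collection_name=""memory_optimized"",
+    vectors_config=models.VectorParams(
+        size=768,
+        distance=models.Distance.COSINE,
+        on_disk=True,  # keep the original vectors on disk instead of RAM
+    ),
+    quantization_config=models.ScalarQuantization(
+        scalar=models.ScalarQuantizationConfig(
+            type=models.ScalarType.INT8,
+            always_ram=True,  # keep only the compact quantized vectors in RAM
+        )
+    ),
+)
+```
+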
-- Number of vectors
-- Vector dimensions
-- [Payloads](/documentation/concepts/payload/) and their indexes
+### How do you choose the machine configuration?
-- Storage
-- Replication
-- How you configure quantization
+There are two main scenarios of Qdrant usage in terms of resource consumption:
-Our [Cloud Pricing Calculator](https://cloud.qdrant.io/calculator) can help you estimate required resources without payload or index data.
+- **Performance-optimized** -- when you need to serve vector search as fast as possible and handle as many requests as possible. In this case, you need to have as much vector data in RAM as possible. Use our [calculator](https://cloud.qdrant.io/calculator) to estimate the required RAM.
+- **Storage-optimized** -- when you need to store many vectors and minimize costs by compromising some search speed. In this case, pay attention to the disk speed instead. More about it in the article about [Memory Consumption](../../../articles/memory-consumption/).
-### Storage
+### I configured on-disk vector storage, but memory usage is still high. Why?
-For persistent storage, Qdrant requires block-level access to storage devices with a [POSIX-compatible file system](https://www.quobyte.com/storage-explained/posix-filesystem/). Network systems such as [iSCSI](https://en.wikipedia.org/wiki/ISCSI) that provide block-level access are also acceptable.
-Qdrant won't work with [Network file systems](https://en.wikipedia.org/wiki/File_system#Network_file_systems) such as NFS, or [Object storage](https://en.wikipedia.org/wiki/Object_storage) systems such as S3.
+Firstly, memory usage metrics as reported by `top` or `htop` may be misleading. They are not showing the minimal amount of memory required to run the service.
+If the RSS memory usage is 10 GB, it doesn't mean that it won't work on a machine with 8 GB of RAM.
-If you offload vectors to a local disk, we recommend you use a solid-state (SSD or NVMe) drive.
+Qdrant uses many techniques to reduce search latency, including caching disk data in RAM and preloading data from disk to RAM.
+As a result, the Qdrant process might use more memory than the minimum required to run the service.
-### Networking
+> Unused RAM is wasted RAM
-Each Qdrant instance requires three open ports:
+If you want to limit the memory usage of the service, we recommend using [limits in Docker](https://docs.docker.com/config/containers/resource_constraints/#memory) or Kubernetes.
-* `6333` - For the HTTP API, for the [Monitoring](/documentation/guides/monitoring/) health and metrics endpoints
-* `6334` - For the [gRPC](/documentation/interfaces/#grpc-interface) API
-* `6335` - For [Distributed deployment](/documentation/guides/distributed_deployment/)
+### My requests are very slow or time out. What should I do?
-All Qdrant instances in a cluster must be able to:
+There are several possible reasons for that:
-- Communicate with each other over these ports
+- **Using filters without payload index** -- If you're performing a search with a filter but you don't have a payload index, Qdrant will have to load the whole payload data from disk to check the filtering condition. Ensure you have adequately configured [payload indexes](../../concepts/indexing/#payload-index), as shown in the sketch after this list.
-- Allow incoming connections to ports `6333` and `6334` from clients that use Qdrant.
+- **Usage of on-disk vector storage with slow disks** -- If you're using on-disk vector storage, ensure you have fast enough disks. We recommend using local SSDs with at least 50k IOPS. Read more about the influence of the disk speed on the search latency in the article about [Memory Consumption](../../../articles/memory-consumption/).
+- **Large limit or non-optimal query parameters** -- A large limit or offset might lead to significant performance degradation. Please pay close attention to the query/collection parameters that significantly diverge from the defaults. They might be the reason for the performance issues.
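+
+Illustrating the first point, here is a minimal sketch of creating a keyword payload index with the Python client; the collection and field names are illustrative:
+
+```python
+from qdrant_client import QdrantClient
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Index the payload field used in filters so Qdrant doesn't scan raw payload data on disk
+client.create_payload_index(
+    collection_name=""my_collection"",
+    field_name=""city"",
+    field_schema=""keyword"",
+)
+```
+",documentation/faq/database-optimization.md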
+"---
+title: Qdrant Fundamentals
-## Installation options
+weight: 1
+---
-Qdrant can be installed in different ways depending on your needs:
+# Frequently Asked Questions: General Topics
+||||||
-For production, you can use our Qdrant Cloud to run Qdrant either fully managed in our infrastructure or with Hybrid SaaS in yours.
+|-|-|-|-|-|
+|[Vectors](/documentation/faq/qdrant-fundamentals/#vectors)|[Search](/documentation/faq/qdrant-fundamentals/#search)|[Collections](/documentation/faq/qdrant-fundamentals/#collections)|[Compatibility](/documentation/faq/qdrant-fundamentals/#compatibility)|[Cloud](/documentation/faq/qdrant-fundamentals/#cloud)|
-For testing or development setups, you can run the Qdrant container or as a binary executable.
+## Vectors
-If you want to run Qdrant in your own infrastructure, without any cloud connection, we recommend to install Qdrant in a Kubernetes cluster with our Helm chart, or to use our Qdrant Enterprise Operator
+### What is the maximum vector dimension supported by Qdrant?
-## Production
+Qdrant supports up to 65,535 dimensions by default, but this can be configured to support higher dimensions.
-For production, we recommend that you configure Qdrant in the cloud, with Kubernetes, or with a Qdrant Enterprise Operator.
+### What is the maximum size of vector metadata that can be stored?
-### Qdrant Cloud
+There is no inherent limitation on metadata size, but it should be [optimized for performance and resource usage](/documentation/guides/optimize/). Users can set upper limits in the configuration.
-You can set up production with the [Qdrant Cloud](https://qdrant.to/cloud), which provides fully managed Qdrant databases.
-It provides horizontal and vertical scaling, one click installation and upgrades, monitoring, logging, as well as backup and disaster recovery. For more information, see the [Qdrant Cloud documentation](/documentation/cloud).
+### Can the same similarity search query yield different results on different machines?
-### Kubernetes
+Yes, due to differences in hardware configurations and parallel processing, results may vary slightly.
-You can use a ready-made [Helm Chart](https://helm.sh/docs/) to run Qdrant in your Kubernetes cluster:
+### What to do with documents with small chunks using a fixed chunk strategy?
-```bash
+For documents with small chunks, consider merging chunks or using variable chunk sizes to optimize vector representation and search performance.
-helm repo add qdrant https://qdrant.to/helm
-helm install qdrant qdrant/qdrant
-```
+### How do I choose the right vector embeddings for my use case?
-For more information, see the [qdrant-helm](https://github.com/qdrant/qdrant-helm/tree/main/charts/qdrant) README.
+This depends on the nature of your data and the specific application. Consider factors like dimensionality, domain-specific models, and the performance characteristics of different embeddings.
-### Qdrant Kubernetes Operator
+### How does Qdrant handle different vector embeddings from various providers in the same collection?
-We provide a Qdrant Enterprise Operator for Kubernetes installations. For more information, [use this form](https://qdrant.to/contact-us) to contact us.
+Qdrant natively [supports multiple vectors per data point](/documentation/concepts/vectors/#multivectors), allowing different embeddings from various providers to coexist within the same collection.
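+
+For illustration, a minimal sketch of a collection with two named vector spaces; the names `model_a`/`model_b` and the dimensions are assumptions, not a prescription:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(""localhost"", port=6333)
+
+# One collection, two independently configured named vectors
+client.create_collection(
+    collection_name=""{collection_name}"",
+    vectors_config={
+        ""model_a"": models.VectorParams(size=1536, distance=models.Distance.COSINE),
+        ""model_b"": models.VectorParams(size=768, distance=models.Distance.COSINE),
+    },
+)
+```
+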
-### Docker and Docker Compose
+### Can I migrate my embeddings from another vector store to Qdrant?
-Usually, we recommend to run Qdrant in Kubernetes, or use the Qdrant Cloud for production setups. This makes setting up highly available and scalable Qdrant clusters with backups and disaster recovery a lot easier.
+Yes, Qdrant supports migration of embeddings from other vector stores, facilitating easy transitions and adoption of Qdrant’s features.
-However, you can also use Docker and Docker Compose to run Qdrant in production, by following the setup instructions in the [Docker](#docker) and [Docker Compose](#docker-compose) Development sections.
+## Search
-In addition, you have to make sure:
+### How does Qdrant handle real-time data updates and search?
-* To use a performant [persistent storage](#storage) for your data
-* To configure the [security settings](/documentation/guides/security/) for your deployment
-* To set up and configure Qdrant on multiple nodes for a highly available [distributed deployment](/documentation/guides/distributed_deployment/)
+Qdrant applies updates to vector data in real time: newly inserted and updated vectors become searchable immediately, and deleted vectors are excluded right away. While the index is rebuilt in the background, the system uses full-scan search on the unindexed segments.
-* To set up a load balancer for your Qdrant cluster
-* To create a [backup and disaster recovery strategy](/documentation/concepts/snapshots/) for your data
-* To integrate Qdrant with your [monitoring](/documentation/guides/monitoring/) and logging solutions
+### My search results contain vectors with null values. Why?
-## Development
+By default, Qdrant tries to minimize network traffic and doesn't return vectors in search results.
+But you can force Qdrant to do so by setting the `with_vector` parameter of the Search/Scroll to `true`.
-For development and testing, we recommend that you set up Qdrant in Docker. We also have different client libraries.
+If you're still seeing `""vector"": null` in your results, it might be that the vector you're passing is not in the correct format, or there's an issue with how you're calling the upsert method.
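+
+As a minimal sketch with the Python client (where the parameter is spelled `with_vectors`, while the REST field is `with_vector`):
+
+```python
+from qdrant_client import QdrantClient
+
+client = QdrantClient(""localhost"", port=6333)
+
+# Ask Qdrant to include the stored vectors in the response
+hits = client.search(
+    collection_name=""{collection_name}"",
+    query_vector=[0.2, 0.1, 0.9, 0.7],
+    with_vectors=True,
+    limit=10,
+)
+```
+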
-### Docker
+### How can I search without a vector?
-The easiest way to start using Qdrant for testing or development is to run the Qdrant container image.
-The latest versions are always available on [DockerHub](https://hub.docker.com/r/qdrant/qdrant/tags?page=1&ordering=last_updated).
+You are likely looking for the [scroll](../../concepts/points/#scroll-points) method. It allows you to retrieve the records based on filters or even iterate over all the records in the collection.
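+
+For example, a minimal sketch with the Python client, assuming a payload field named `color`:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(""localhost"", port=6333)
+
+# Retrieve points by filter only -- no vector similarity involved
+points, next_page_offset = client.scroll(
+    collection_name=""{collection_name}"",
+    scroll_filter=models.Filter(
+        must=[models.FieldCondition(key=""color"", match=models.MatchValue(value=""red""))]
+    ),
+    limit=10,
+)
+```
+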
-Make sure that [Docker](https://docs.docker.com/engine/install/), [Podman](https://podman.io/docs/installation) or the container runtime of your choice is installed and running. The following instructions use Docker.
+### Does Qdrant support a full-text search or a hybrid search?
-Pull the image:
+Qdrant is first and foremost a vector search engine, and we only implement full-text support as long as it doesn't compromise the vector search use case.
+That includes both the interface and the performance.
-```bash
-docker pull qdrant/qdrant
+What Qdrant can do:
-```
+- Search with full-text filters
-In the following command, revise `$(pwd)/path/to/data` for your Docker configuration. Then use the updated command to run the container:
+- Apply full-text filters to the vector search (i.e., perform vector search among the records with specific words or phrases)
+- Do prefix search and semantic [search-as-you-type](../../../articles/search-as-you-type/)
+- Sparse vectors, as used in [SPLADE](https://github.com/naver/splade) or similar models
-```bash
+- [Multi-vectors](../../concepts/vectors/#multivectors), for example ColBERT and other late-interaction models
-docker run -p 6333:6333 \
+- Combination of the [multiple searches](../../concepts/hybrid-queries/)
- -v $(pwd)/path/to/data:/qdrant/storage \
- qdrant/qdrant
-```
+What Qdrant doesn't plan to support:
-With this command, you start a Qdrant instance with the default configuration.
+- Non-vector-based retrieval or ranking functions
-It stores all data in the `./path/to/data` directory.
+- Built-in ontologies or knowledge graphs
+- Query analyzers and other NLP tools
-By default, Qdrant uses port 6333, so at [localhost:6333](http://localhost:6333) you should see the welcome message.
+Of course, you can always combine Qdrant with any specialized tool you need, including full-text search engines.
+Read more about [our approach](../../../articles/hybrid-search/) to hybrid search.
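+
+As an illustration of combining full-text filters with vector search, here is a minimal sketch; it assumes a text payload field named `description` with a full-text index:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(""localhost"", port=6333)
+
+# Vector search among records whose description field contains the given phrase
+hits = client.search(
+    collection_name=""{collection_name}"",
+    query_vector=[0.2, 0.1, 0.9, 0.7],
+    query_filter=models.Filter(
+        must=[
+            models.FieldCondition(
+                key=""description"",
+                match=models.MatchText(text=""distributed streaming""),
+            )
+        ]
+    ),
+    limit=10,
+)
+```
+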
-To change the Qdrant configuration, you can overwrite the production configuration:
+## Collections
-```bash
-docker run -p 6333:6333 \
- -v $(pwd)/path/to/data:/qdrant/storage \
+### How many collections can I create?
- -v $(pwd)/path/to/custom_config.yaml:/qdrant/config/production.yaml \
- qdrant/qdrant
-```
+As many as you want, but be aware that each collection requires additional resources.
+It is _highly_ recommended not to create many small collections, as it will lead to significant resource consumption overhead.
-Alternatively, you can use your own `custom_config.yaml` configuration file:
+We consider creating a collection for each user/dialog/document as an antipattern.
-```bash
-docker run -p 6333:6333 \
+Please read more about collections, isolation, and multiple users in our [Multitenancy](../../tutorials/multiple-partitions/) tutorial.
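+
+As a minimal sketch of the payload-based partitioning approach described there (the `group_id` field and its values are assumptions for illustration):
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(""localhost"", port=6333)
+
+# Keep all tenants in a single collection, tagging each point with its tenant id
+client.upsert(
+    collection_name=""{collection_name}"",
+    points=[
+        models.PointStruct(
+            id=1, vector=[0.2, 0.1, 0.9, 0.7], payload={""group_id"": ""user_1""}
+        ),
+    ],
+)
+
+# Restrict every query to a single tenant with a filter
+client.search(
+    collection_name=""{collection_name}"",
+    query_vector=[0.2, 0.1, 0.9, 0.7],
+    query_filter=models.Filter(
+        must=[models.FieldCondition(key=""group_id"", match=models.MatchValue(value=""user_1""))]
+    ),
+    limit=10,
+)
+```
+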
- -v $(pwd)/path/to/data:/qdrant/storage \
- -v $(pwd)/path/to/custom_config.yaml:/qdrant/config/custom_config.yaml \
- qdrant/qdrant \
+### How do I upload a large number of vectors into a Qdrant collection?
- ./qdrant --config-path config/custom_config.yaml
-```
+Read about our recommendations in the [bulk upload](../../tutorials/bulk-upload/) tutorial.
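+
+In short, the Python client can batch and parallelize the upload for you; a minimal sketch, where the batch size and parallelism are only illustrative:
+
+```python
+import numpy as np
+from qdrant_client import QdrantClient
+
+client = QdrantClient(""localhost"", port=6333)
+
+# The client splits the data into batches and uploads them in parallel processes
+vectors = np.random.rand(100_000, 768)
+client.upload_collection(
+    collection_name=""{collection_name}"",
+    vectors=vectors,
+    batch_size=256,
+    parallel=4,
+)
+```
+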
-For more information, see the [Configuration](/documentation/guides/configuration/) documentation.
+### Can I only store quantized vectors and discard full precision vectors?
-### Docker Compose
+No, Qdrant requires full precision vectors for operations like reindexing, rescoring, etc.
-You can also use [Docker Compose](https://docs.docker.com/compose/) to run Qdrant.
+## Compatibility
-Here is an example customized compose file for a single node Qdrant cluster:
+### Is Qdrant compatible with CPUs or GPUs for vector computation?
-```yaml
-services:
+Qdrant primarily relies on CPU acceleration for scalability and efficiency, with no current support for GPU acceleration.
- qdrant:
- image: qdrant/qdrant:latest
- restart: always
+### Do you guarantee compatibility across versions?
- container_name: qdrant
- ports:
- - 6333:6333
+We only guarantee compatibility between two consecutive minor versions. This also applies to client versions: ensure your client version is never more than one minor version away from your cluster version.
- - 6334:6334
+While we will assist with break/fix troubleshooting of issues and errors specific to our products, Qdrant is not accountable for reviewing, writing (or rewriting), or debugging custom code.
- expose:
- - 6333
- - 6334
+### Do you support downgrades?
- - 6335
- configs:
- - source: qdrant_config
+We do not support downgrading a cluster on any of our products. If you deploy a newer version of Qdrant, your
- target: /qdrant/config/production.yaml
+data is automatically migrated to the newer storage format. This migration is not reversible.
- volumes:
- - ./qdrant_data:/qdrant_data
+### How do I avoid issues when updating to the latest version?
-configs:
- qdrant_config:
+We only guarantee compatibility if you update between consecutive versions. You would need to upgrade versions one at a time: `1.1 -> 1.2`, then `1.2 -> 1.3`, then `1.3 -> 1.4`.
- content: |
- log_level: INFO
-```
+## Cloud
-
+### Is it possible to scale down a Qdrant Cloud cluster?
-### From source
+It is possible to vertically scale down a Qdrant Cloud cluster, as long as the disk size is not reduced. Horizontal downscaling is currently not possible, but it is on our roadmap.
+However, in some cases we might be able to help you with that manually. Please open a support ticket so that we can assist.
+",documentation/faq/qdrant-fundamentals.md
+"---
+title: FAQ
-Qdrant is written in Rust and can be compiled into a binary executable.
+weight: 22
-This installation method can be helpful if you want to compile Qdrant for a specific processor architecture or if you do not want to use Docker.
+is_empty: true
+---",documentation/faq/_index.md
+"---
+title: Airbyte
-Before compiling, make sure that the necessary libraries and the [rust toolchain](https://www.rust-lang.org/tools/install) are installed.
+aliases: [ ../integrations/airbyte/, ../frameworks/airbyte/ ]
-The current list of required libraries can be found in the [Dockerfile](https://github.com/qdrant/qdrant/blob/master/Dockerfile).
+---
-Build Qdrant with Cargo:
+# Airbyte
-```bash
+[Airbyte](https://airbyte.com/) is an open-source data integration platform that helps you replicate your data
-cargo build --release --bin qdrant
+between different systems. It has a [growing list of connectors](https://docs.airbyte.io/integrations) that can
-```
+be used to ingest data from multiple sources. Building data pipelines is also crucial for managing the data in
+Qdrant, and Airbyte is a great tool for this purpose.
-After a successful build, you can find the binary in the following subdirectory `./target/release/qdrant`.
+Airbyte may take care of the data ingestion from a selected source, while Qdrant will help you to build a search
+engine on top of it. There are three supported modes of how the data can be ingested into Qdrant:
-## Client libraries
+* **Full Refresh Sync**
-In addition to the service, Qdrant provides a variety of client libraries for different programming languages. For a full list, see our [Client libraries](../../interfaces/#client-libraries) documentation.
-",documentation/guides/installation.md
-"---
+* **Incremental - Append Sync**
-title: Quantization
+* **Incremental - Append + Deduped**
-weight: 120
-aliases:
- - ../quantization
+You can read more about these modes in the [Airbyte documentation](https://docs.airbyte.io/integrations/destinations/qdrant).
----
+## Prerequisites
-# Quantization
+Before you start, make sure you have the following:
-Quantization is an optional feature in Qdrant that enables efficient storage and search of high-dimensional vectors.
-By transforming original vectors into a new representations, quantization compresses data while preserving close to original relative distances between vectors.
-Different quantization methods have different mechanics and tradeoffs. We will cover them in this section.
+1. An Airbyte instance, either [Open Source](https://airbyte.com/solutions/airbyte-open-source),
+ [Self-Managed](https://airbyte.com/solutions/airbyte-enterprise), or [Cloud](https://airbyte.com/solutions/airbyte-cloud).
+2. A running instance of Qdrant. It has to be accessible by URL from the machine where Airbyte is running.
-Quantization is primarily used to reduce the memory footprint and accelerate the search process in high-dimensional vector spaces.
+ You can follow the [installation guide](/documentation/guides/installation/) to set up Qdrant.
-In the context of the Qdrant, quantization allows you to optimize the search engine for specific use cases, striking a balance between accuracy, storage efficiency, and search speed.
+## Setting up Qdrant as a destination
-There are tradeoffs associated with quantization.
-On the one hand, quantization allows for significant reductions in storage requirements and faster search times.
-This can be particularly beneficial in large-scale applications where minimizing the use of resources is a top priority.
+Once you have a running instance of Airbyte, you can set up Qdrant as a destination directly in the UI.
-On the other hand, quantization introduces an approximation error, which can lead to a slight decrease in search quality.
+Airbyte's Qdrant destination is connected with a single collection in Qdrant.
-The level of this tradeoff depends on the quantization method and its parameters, as well as the characteristics of the data.
+![Airbyte Qdrant destination](/documentation/frameworks/airbyte/qdrant-destination.png)
-## Scalar Quantization
+### Text processing
-*Available as of v1.1.0*
+Airbyte has built-in mechanisms to transform your texts into embeddings. You can choose how to chunk your
+fields into pieces before calculating the embeddings, as well as which fields should be used to create the
+point payload.
-Scalar quantization, in the context of vector search engines, is a compression technique that compresses vectors by reducing the number of bits used to represent each vector component.
+![Processing settings](/documentation/frameworks/airbyte/processing.png)
-For instance, Qdrant uses 32-bit floating numbers to represent the original vector components. Scalar quantization allows you to reduce the number of bits used to 8.
+### Embeddings
-In other words, Qdrant performs `float32 -> uint8` conversion for each vector component.
-Effectively, this means that the amount of memory required to store a vector is reduced by a factor of 4.
+You can choose the model that will be used to calculate the embeddings. Currently, Airbyte supports multiple
+models, including OpenAI and Cohere.
-In addition to reducing the memory footprint, scalar quantization also speeds up the search process.
-Qdrant uses a special SIMD CPU instruction to perform fast vector comparison.
-This instruction works with 8-bit integers, so the conversion to `uint8` allows Qdrant to perform the comparison faster.
+![Embeddings settings](/documentation/frameworks/airbyte/embedding.png)
-The main drawback of scalar quantization is the loss of accuracy. The `float32 -> uint8` conversion introduces an error that can lead to a slight decrease in search quality.
+Using some precomputed embeddings from your data source is also possible. In this case, you can pass the field
-However, this error is usually negligible, and tends to be less significant for high-dimensional vectors.
+name containing the embeddings and their dimensionality.
-In our experiments, we found that the error introduced by scalar quantization is usually less than 1%.
+![Precomputed embeddings settings](/documentation/frameworks/airbyte/precomputed-embedding.png)
-However, this value depends on the data and the quantization parameters.
-Please refer to the [Quantization Tips](#quantization-tips) section for more information on how to optimize the quantization parameters for your use case.
+### Qdrant connection details
+Finally, we can configure the target Qdrant instance and collection. If you use the built-in authentication
-## Binary Quantization
+mechanism, here is where you can pass the token.
-*Available as of v1.5.0*
+![Qdrant connection details](/documentation/frameworks/airbyte/qdrant-config.png)
-Binary quantization is an extreme case of scalar quantization.
+Once you confirm creating the destination, Airbyte will test whether the specified Qdrant cluster is accessible and
-This feature lets you represent each vector component as a single bit, effectively reducing the memory footprint by a **factor of 32**.
+can be used as a destination.
-This is the fastest quantization method, since it lets you perform a vector comparison with a few CPU instructions.
+## Setting up connection
-Binary quantization can achieve up to a **40x** speedup compared to the original vectors.
+Airbyte combines sources and destinations into a single entity called a connection. Once you have a destination
+and a source configured, you can create a connection between them. Any source supported by Airbyte will work;
+the process is straightforward but depends on the source you choose.
-However, binary quantization is only efficient for high-dimensional vectors and require a centered distribution of vector components.
+![Airbyte connection](/documentation/frameworks/airbyte/connection.png)
-At the moment, binary quantization shows good accuracy results with the following models:
+## Further Reading
-- OpenAI `text-embedding-ada-002` - 1536d tested with [dbpedia dataset](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) achieving 0.98 recall@100 with 4x oversampling
-- Cohere AI `embed-english-v2.0` - 4096d tested on [wikipedia embeddings](https://huggingface.co/datasets/nreimers/wikipedia-22-12-large/tree/main) - 0.98 recall@50 with 2x oversampling
+* [Airbyte documentation](https://docs.airbyte.com/understanding-airbyte/connections/).
+* [Source Code](https://github.com/airbytehq/airbyte/tree/master/airbyte-integrations/connectors/destination-qdrant)
+",documentation/data-management/airbyte.md
+"---
-Models with a lower dimensionality or a different distribution of vector components may require additional experiments to find the optimal quantization parameters.
+title: Apache Spark
+aliases: [ ../integrations/spark/, ../frameworks/spark/ ]
+---
-We recommend using binary quantization only with rescoring enabled, as it can significantly improve the search quality
-with just a minor performance impact.
-Additionally, oversampling can be used to tune the tradeoff between search speed and search quality in the query time.
+# Apache Spark
-### Binary Quantization as Hamming Distance
+[Spark](https://spark.apache.org/) is a distributed computing framework designed for big data processing and analytics. The [Qdrant-Spark connector](https://github.com/qdrant/qdrant-spark) enables Qdrant to be a storage destination in Spark.
-The additional benefit of this method is that you can efficiently emulate Hamming distance with dot product.
+## Installation
-Specifically, if original vectors contain `{-1, 1}` as possible values, then the dot product of two vectors is equal to the Hamming distance by simply replacing `-1` with `0` and `1` with `1`.
+You can set up the Qdrant-Spark Connector in a few different ways, depending on your preferences and requirements.
+### GitHub Releases
-
+The simplest way to get started is by downloading pre-packaged JAR file releases from the [GitHub releases page](https://github.com/qdrant/qdrant-spark/releases). These JAR files come with all the necessary dependencies.
-
- Sample truth table
+### Building from Source
-| Vector 1 | Vector 2 | Dot product |
+If you prefer to build the JAR from source, you'll need [JDK 8](https://www.azul.com/downloads/#zulu) and [Maven](https://maven.apache.org/) installed on your system. Once you have the prerequisites in place, navigate to the project's root directory and run the following command:
-|----------|----------|-------------|
-| 1 | 1 | 1 |
-| 1 | -1 | -1 |
+```bash
-| -1 | 1 | -1 |
+mvn package
-| -1 | -1 | 1 |
+```
-| Vector 1 | Vector 2 | Hamming distance |
+This command will compile the source code and generate a fat JAR, which will be stored in the `target` directory by default.
-|----------|----------|------------------|
-| 1 | 1 | 0 |
-| 1 | 0 | 1 |
+### Maven Central
-| 0 | 1 | 1 |
-| 0 | 0 | 0 |
+For use with Java and Scala projects, the package can be found [here](https://central.sonatype.com/artifact/io.qdrant/spark).
-
+## Usage
-As you can see, both functions are equal up to a constant factor, which makes similarity search equivalent.
-Binary quantization makes it efficient to compare vectors using this representation.
+Below, we'll walk through the steps of creating a Spark session with Qdrant support and loading data into Qdrant.
+### Creating a single-node Spark session with Qdrant Support
-## Product Quantization
+To begin, import the necessary libraries and create a Spark session with Qdrant support:
-*Available as of v1.2.0*
+```python
+from pyspark.sql import SparkSession
-Product quantization is a method of compressing vectors to minimize their memory usage by dividing them into
-chunks and quantizing each segment individually.
-Each chunk is approximated by a centroid index that represents the original vector component.
+spark = (
+    SparkSession.builder.config(
-The positions of the centroids are determined through the utilization of a clustering algorithm such as k-means.
+ ""spark.jars"",
-For now, Qdrant uses only 256 centroids, so each centroid index can be represented by a single byte.
+ ""spark-VERSION.jar"", # Specify the downloaded JAR file
+ )
+ .master(""local[*]"")
-Product quantization can compress by a more prominent factor than a scalar one.
+ .appName(""qdrant"")
-But there are some tradeoffs. Product quantization distance calculations are not SIMD-friendly, so it is slower than scalar quantization.
+    .getOrCreate()
+)
-Also, product quantization has a loss of accuracy, so it is recommended to use it only for high-dimensional vectors.
+```
-Please refer to the [Quantization Tips](#quantization-tips) section for more information on how to optimize the quantization parameters for your use case.
+```scala
+import org.apache.spark.sql.SparkSession
-## How to choose the right quantization method
+val spark = SparkSession.builder
+ .config(""spark.jars"", ""spark-VERSION.jar"") // Specify the downloaded JAR file
-Here is a brief table of the pros and cons of each quantization method:
+ .master(""local[*]"")
+ .appName(""qdrant"")
+ .getOrCreate()
-| Quantization method | Accuracy | Speed | Compression |
+```
-|---------------------|----------|--------------|-------------|
-| Scalar | 0.99 | up to x2 | 4 |
-| Product | 0.7 | 0.5 | up to 64 |
+```java
-| Binary | 0.95* | up to x40 | 32 |
+import org.apache.spark.sql.SparkSession;
-`*` - for compatible models
+public class QdrantSparkJavaExample {
+ public static void main(String[] args) {
+ SparkSession spark = SparkSession.builder()
-* **Binary Quantization** is the fastest method and the most memory-efficient, but it requires a centered distribution of vector components. It is recommended to use with tested models only.
+ .config(""spark.jars"", ""spark-VERSION.jar"") // Specify the downloaded JAR file
-* **Scalar Quantization** is the most universal method, as it provides a good balance between accuracy, speed, and compression. It is recommended as default quantization if binary quantization is not applicable.
+ .master(""local[*]"")
-* **Product Quantization** may provide a better compression ratio, but it has a significant loss of accuracy and is slower than scalar quantization. It is recommended if the memory footprint is the top priority and the search speed is not critical.
+ .appName(""qdrant"")
+ .getOrCreate();
+ }
-## Setting up Quantization in Qdrant
+}
+```
-You can configure quantization for a collection by specifying the quantization parameters in the `quantization_config` section of the collection configuration.
+### Loading data into Qdrant
-Quantization will be automatically applied to all vectors during the indexation process.
-Quantized vectors are stored alongside the original vectors in the collection, so you will still have access to the original vectors if you need them.
+
-*Available as of v1.1.1*
+The connector supports ingesting multiple named/unnamed, dense/sparse vectors.
-The `quantization_config` can also be set on a per vector basis by specifying it in a named vector.
-### Setting up Scalar Quantization
+
+**Unnamed/Default vector**
-To enable scalar quantization, you need to specify the quantization parameters in the `quantization_config` section of the collection configuration.
+```python
+<dataframe>
-```http
+ .write
-PUT /collections/{collection_name}
+ .format(""io.qdrant.spark.Qdrant"")
-{
+ .option(""qdrant_url"", )
- ""vectors"": {
+ .option(""collection_name"", )
- ""size"": 768,
+ .option(""embedding_field"", ) # Expected to be a field of type ArrayType(FloatType)
- ""distance"": ""Cosine""
+ .option(""schema"", .schema.json())
- },
+ .mode(""append"")
- ""quantization_config"": {
+ .save()
- ""scalar"": {
+```
- ""type"": ""int8"",
- ""quantile"": 0.99,
- ""always_ram"": true
+
- }
- }
-}
+
-```
+**Named vector**
```python
-from qdrant_client import QdrantClient
+<dataframe>
-from qdrant_client.http import models
+ .write
+ .format(""io.qdrant.spark.Qdrant"")
+ .option(""qdrant_url"", )
-client = QdrantClient(""localhost"", port=6333)
+ .option(""collection_name"", )
+ .option(""embedding_field"", ) # Expected to be a field of type ArrayType(FloatType)
+ .option(""vector_name"", )
-client.create_collection(
+ .option(""schema"", .schema.json())
- collection_name=""{collection_name}"",
+ .mode(""append"")
- vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+ .save()
- quantization_config=models.ScalarQuantization(
+```
- scalar=models.ScalarQuantizationConfig(
- type=models.ScalarType.INT8,
- quantile=0.99,
+> #### NOTE
- always_ram=True,
+>
- ),
+> The `embedding_field` and `vector_name` options are maintained for backward compatibility. It is recommended to use `vector_fields` and `vector_names` for named vectors as shown below.
- ),
-)
-```
+
-```typescript
+
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+**Multiple named vectors**
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+```python
+<dataframe>
+ .write
-client.createCollection(""{collection_name}"", {
+ .format(""io.qdrant.spark.Qdrant"")
- vectors: {
+ .option(""qdrant_url"", """")
- size: 768,
+ .option(""collection_name"", """")
- distance: ""Cosine"",
+ .option(""vector_fields"", "","")
- },
+ .option(""vector_names"", "","")
- quantization_config: {
+ .option(""schema"", .schema.json())
- scalar: {
+ .mode(""append"")
- type: ""int8"",
+ .save()
- quantile: 0.99,
+```
- always_ram: true,
- },
- },
+
-});
-```
+
+**Sparse vectors**
-```rust
-use qdrant_client::{
- client::QdrantClient,
+```python
- qdrant::{
+<dataframe>
- quantization_config::Quantization, vectors_config::Config, CreateCollection, Distance,
+ .write
- QuantizationConfig, QuantizationType, ScalarQuantization, VectorParams, VectorsConfig,
+ .format(""io.qdrant.spark.Qdrant"")
- },
+ .option(""qdrant_url"", """")
-};
+ .option(""collection_name"", """")
+ .option(""sparse_vector_value_fields"", """")
+ .option(""sparse_vector_index_fields"", """")
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ .option(""sparse_vector_names"", """")
+ .option(""schema"", .schema.json())
+ .mode(""append"")
-client
+ .save()
- .create_collection(&CreateCollection {
+```
- collection_name: ""{collection_name}"".to_string(),
- vectors_config: Some(VectorsConfig {
- config: Some(Config::Params(VectorParams {
+
- size: 768,
- distance: Distance::Cosine.into(),
- ..Default::default()
+
- })),
+**Multiple sparse vectors**
- }),
- quantization_config: Some(QuantizationConfig {
- quantization: Some(Quantization::Scalar(ScalarQuantization {
+```python
- r#type: QuantizationType::Int8.into(),
+<dataframe>
- quantile: Some(0.99),
+ .write
- always_ram: Some(true),
+ .format(""io.qdrant.spark.Qdrant"")
- })),
+ .option(""qdrant_url"", """")
- }),
+ .option(""collection_name"", """")
- ..Default::default()
+ .option(""sparse_vector_value_fields"", "","")
- })
+ .option(""sparse_vector_index_fields"", "","")
- .await?;
+ .option(""sparse_vector_names"", "","")
+ .option(""schema"", <dataframe>.schema.json())
+ .mode(""append"")
+ .save()
```
-```java
+
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Collections.CreateCollection;
+
-import io.qdrant.client.grpc.Collections.Distance;
+**Combination of named dense and sparse vectors**
-import io.qdrant.client.grpc.Collections.QuantizationConfig;
-import io.qdrant.client.grpc.Collections.QuantizationType;
-import io.qdrant.client.grpc.Collections.ScalarQuantization;
+```python
-import io.qdrant.client.grpc.Collections.VectorParams;
+<dataframe>
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+ .write
+ .format(""io.qdrant.spark.Qdrant"")
+ .option(""qdrant_url"", """")
-QdrantClient client =
+ .option(""collection_name"", """")
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ .option(""vector_fields"", "","")
+ .option(""vector_names"", "","")
+ .option(""sparse_vector_value_fields"", "","")
-client
+ .option(""sparse_vector_index_fields"", "","")
- .createCollectionAsync(
+ .option(""sparse_vector_names"", "","")
- CreateCollection.newBuilder()
+ .option(""schema"", .schema.json())
- .setCollectionName(""{collection_name}"")
+ .mode(""append"")
- .setVectorsConfig(
+ .save()
- VectorsConfig.newBuilder()
+```
- .setParams(
- VectorParams.newBuilder()
- .setSize(768)
+
- .setDistance(Distance.Cosine)
- .build())
- .build())
+
- .setQuantizationConfig(
+**No vectors - Entire dataframe is stored as payload**
- QuantizationConfig.newBuilder()
- .setScalar(
- ScalarQuantization.newBuilder()
+```python
- .setType(QuantizationType.Int8)
+<dataframe>
- .setQuantile(0.99f)
+ .write
- .setAlwaysRam(true)
+ .format(""io.qdrant.spark.Qdrant"")
- .build())
+ .option(""qdrant_url"", """")
- .build())
+ .option(""collection_name"", """")
- .build())
+ .option(""schema"", .schema.json())
- .get();
+ .mode(""append"")
+
+ .save()
```
-```csharp
+
-using Qdrant.Client;
-using Qdrant.Client.Grpc;
+## Databricks
-var client = new QdrantClient(""localhost"", 6334);
+
- collectionName: ""{collection_name}"",
- vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
- quantizationConfig: new QuantizationConfig
+You can use the `qdrant-spark` connector as a library in [Databricks](https://www.databricks.com/).
- {
- Scalar = new ScalarQuantization
- {
+- Go to the `Libraries` section in your Databricks cluster dashboard.
- Type = QuantizationType.Int8,
+- Select `Install New` to open the library installation modal.
- Quantile = 0.99f,
+- Search for `io.qdrant:spark:VERSION` in the Maven packages and click `Install`.
- AlwaysRam = true
- }
- }
+![Databricks](/documentation/frameworks/spark/databricks.png)
-);
-```
+## Datatype Support
-There are 3 parameters that you can specify in the `quantization_config` section:
+Qdrant supports all the Spark data types, and the appropriate data types are mapped based on the provided schema.
-`type` - the type of the quantized vector components. Currently, Qdrant supports only `int8`.
+## Configuration Options
-`quantile` - the quantile of the quantized vector components.
-The quantile is used to calculate the quantization bounds.
+| Option | Description | Column DataType | Required |
-For instance, if you specify `0.99` as the quantile, 1% of extreme values will be excluded from the quantization bounds.
+| :--------------------------- | :------------------------------------------------------------------ | :---------------------------- | :------- |
+| `qdrant_url` | GRPC URL of the Qdrant instance. Eg: http://localhost:6334 | - | ✅ |
+| `collection_name` | Name of the collection to write data into | - | ✅ |
-Using quantiles lower than `1.0` might be useful if there are outliers in your vector components.
+| `schema` | JSON string of the dataframe schema | - | ✅ |
-This parameter only affects the resulting precision and not the memory footprint.
+| `embedding_field` | Name of the column holding the embeddings | `ArrayType(FloatType)` | ❌ |
-It might be worth tuning this parameter if you experience a significant decrease in search quality.
+| `id_field` | Name of the column holding the point IDs. Default: Random UUID | `StringType` or `IntegerType` | ❌ |
+| `batch_size` | Max size of the upload batch. Default: 64 | - | ❌ |
+| `retries` | Number of upload retries. Default: 3 | - | ❌ |
-`always_ram` - whether to keep quantized vectors always cached in RAM or not. By default, quantized vectors are loaded in the same way as the original vectors.
+| `api_key` | Qdrant API key for authentication | - | ❌ |
-However, in some setups you might want to keep quantized vectors in RAM to speed up the search process.
+| `vector_name` | Name of the vector in the collection. | - | ❌ |
+| `vector_fields` | Comma-separated names of columns holding the vectors. | `ArrayType(FloatType)` | ❌ |
+| `vector_names` | Comma-separated names of vectors in the collection. | - | ❌ |
-In this case, you can set `always_ram` to `true` to store quantized vectors in RAM.
+| `sparse_vector_index_fields` | Comma-separated names of columns holding the sparse vector indices. | `ArrayType(IntegerType)` | ❌ |
+| `sparse_vector_value_fields` | Comma-separated names of columns holding the sparse vector values. | `ArrayType(FloatType)` | ❌ |
+| `sparse_vector_names` | Comma-separated names of the sparse vectors in the collection. | - | ❌ |
-### Setting up Binary Quantization
+| `shard_key_selector` | Comma-separated names of custom shard keys to use during upsert. | - | ❌ |
-To enable binary quantization, you need to specify the quantization parameters in the `quantization_config` section of the collection configuration.
+For more information, be sure to check out the [Qdrant-Spark GitHub repository](https://github.com/qdrant/qdrant-spark). The Apache Spark guide is available [here](https://spark.apache.org/docs/latest/quick-start.html). Happy data processing!
+",documentation/data-management/spark.md
+"---
+title: Confluent Kafka
+aliases: [ ../frameworks/confluent/ ]
-```http
+---
-PUT /collections/{collection_name}
-{
- ""vectors"": {
+![Confluent Logo](/documentation/frameworks/confluent/confluent-logo.png)
- ""size"": 1536,
- ""distance"": ""Cosine""
- },
+Built by the original creators of Apache Kafka®, [Confluent Cloud](https://www.confluent.io/confluent-cloud/?utm_campaign=tm.pmm_cd.cwc_partner_Qdrant_generic&utm_source=Qdrant&utm_medium=partnerref) is a cloud-native and complete data streaming platform available on AWS, Azure, and Google Cloud. The platform includes a fully managed, elastically scaling Kafka engine, 120+ connectors, serverless Apache Flink®, enterprise-grade security controls, and a robust governance suite.
- ""quantization_config"": {
- ""binary"": {
- ""always_ram"": true
+With our [Qdrant-Kafka Sink Connector](https://github.com/qdrant/qdrant-kafka), Qdrant is part of the [Connect with Confluent](https://www.confluent.io/partners/connect/) technology partner program. It brings fully managed data streams directly to organizations from Confluent Cloud, making it easier for organizations to stream any data to Qdrant with a fully managed Apache Kafka service.
- }
- }
-}
+## Usage
-```
+### Pre-requisites
-```python
-from qdrant_client import QdrantClient
-from qdrant_client.http import models
+- A Confluent Cloud account. You can begin with a [free trial](https://www.confluent.io/confluent-cloud/tryfree/?utm_campaign=tm.pmm_cd.cwc_partner_qdrant_tryfree&utm_source=qdrant&utm_medium=partnerref) with credits for the first 30 days.
+- Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).
-client = QdrantClient(""localhost"", port=6333)
+### Installation
-client.create_collection(
- collection_name=""{collection_name}"",
+1) Download the latest connector zip file from [Confluent Hub](https://www.confluent.io/hub/qdrant/qdrant-kafka).
- vectors_config=models.VectorParams(size=1536, distance=models.Distance.COSINE),
- quantization_config=models.BinaryQuantization(
- binary=models.BinaryQuantizationConfig(
+2) Configure an environment and cluster on Confluent and create a topic to produce messages for.
- always_ram=True,
- ),
- ),
+3) Navigate to the `Connectors` section of the Confluent cluster and click `Add Plugin`. Upload the zip file with the following info.
-)
-```
+![Qdrant Connector Install](/documentation/frameworks/confluent/install.png)
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+4) Once installed, navigate to the connector and set the following configuration values.
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+![Qdrant Connector Config](/documentation/frameworks/confluent/config.png)
-client.createCollection(""{collection_name}"", {
+Replace the placeholder values with your credentials.
- vectors: {
- size: 1536,
- distance: ""Cosine"",
+5) Add the Qdrant instance host to the allowed networking endpoints.
- },
- quantization_config: {
- binary: {
+![Qdrant Connector Endpoint](/documentation/frameworks/confluent/endpoint.png)
- always_ram: true,
- },
- },
+6) Start the connector.
-});
-```
+## Producing Messages
-```rust
-use qdrant_client::{
+You can now produce messages for the configured topic, and they'll be written into the configured Qdrant instance.
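+
+As a minimal sketch of producing such a message with the `confluent_kafka` Python client (the connection settings, topic name, and angle-bracket values are placeholders you must replace; the message follows one of the formats described below):
+
+```python
+import json
+from confluent_kafka import Producer
+
+# Assumed connection settings for your Confluent Cloud cluster
+producer = Producer({
+    ""bootstrap.servers"": ""<bootstrap-server>"",
+    ""security.protocol"": ""SASL_SSL"",
+    ""sasl.mechanisms"": ""PLAIN"",
+    ""sasl.username"": ""<api-key>"",
+    ""sasl.password"": ""<api-secret>"",
+})
+
+message = {
+    ""collection_name"": ""{collection_name}"",
+    ""id"": 1,
+    ""vector"": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8],
+    ""payload"": {""name"": ""kafka""},
+}
+
+# The connector picks the message up from the topic and upserts it into Qdrant
+producer.produce(""<topic-name>"", value=json.dumps(message))
+producer.flush()
+```
+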
- client::QdrantClient,
- qdrant::{
- quantization_config::Quantization, vectors_config::Config, BinaryQuantization,
+![Qdrant Connector Message](/documentation/frameworks/confluent/message.png)
- CreateCollection, Distance, QuantizationConfig, VectorParams, VectorsConfig,
- },
-};
+## Message Formats
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+The connector supports messages in the following formats.
-client
- .create_collection(&CreateCollection {
- collection_name: ""{collection_name}"".to_string(),
- vectors_config: Some(VectorsConfig {
+
- config: Some(Config::Params(VectorParams {
+**Unnamed/Default vector**
- size: 1536,
- distance: Distance::Cosine.into(),
- ..Default::default()
+Reference: [Creating a collection with a default vector](https://qdrant.tech/documentation/concepts/collections/#create-a-collection).
- })),
- }),
- quantization_config: Some(QuantizationConfig {
+```json
- quantization: Some(Quantization::Binary(BinaryQuantization {
+{
- always_ram: Some(true),
+ ""collection_name"": ""{collection_name}"",
- })),
+ ""id"": 1,
- }),
+ ""vector"": [
- ..Default::default()
+ 0.1,
- })
+ 0.2,
- .await?;
+ 0.3,
-```
+ 0.4,
+ 0.5,
+ 0.6,
-```java
+ 0.7,
-import io.qdrant.client.QdrantClient;
+ 0.8
-import io.qdrant.client.QdrantGrpcClient;
+ ],
-import io.qdrant.client.grpc.Collections.BinaryQuantization;
+ ""payload"": {
-import io.qdrant.client.grpc.Collections.CreateCollection;
+ ""name"": ""kafka"",
-import io.qdrant.client.grpc.Collections.Distance;
+ ""description"": ""Kafka is a distributed streaming platform"",
-import io.qdrant.client.grpc.Collections.QuantizationConfig;
+ ""url"": ""https://kafka.apache.org/""
-import io.qdrant.client.grpc.Collections.VectorParams;
+ }
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+}
+```
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
-client
+
- .createCollectionAsync(
+**Named multiple vectors**
- CreateCollection.newBuilder()
- .setCollectionName(""{collection_name}"")
- .setVectorsConfig(
+Reference: [Creating a collection with multiple vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors).
- VectorsConfig.newBuilder()
- .setParams(
- VectorParams.newBuilder()
+```json
- .setSize(1536)
+{
- .setDistance(Distance.Cosine)
+ ""collection_name"": ""{collection_name}"",
- .build())
+ ""id"": 1,
- .build())
+ ""vector"": {
- .setQuantizationConfig(
+ ""some-dense"": [
- QuantizationConfig.newBuilder()
+ 0.1,
- .setBinary(BinaryQuantization.newBuilder().setAlwaysRam(true).build())
+ 0.2,
- .build())
+ 0.3,
- .build())
+ 0.4,
- .get();
+ 0.5,
-```
+ 0.6,
+ 0.7,
+ 0.8
-```csharp
+ ],
-using Qdrant.Client;
+ ""some-other-dense"": [
-using Qdrant.Client.Grpc;
+ 0.1,
+ 0.2,
+ 0.3,
-var client = new QdrantClient(""localhost"", 6334);
+ 0.4,
+ 0.5,
+ 0.6,
-await client.CreateCollectionAsync(
+ 0.7,
- collectionName: ""{collection_name}"",
+ 0.8
- vectorsConfig: new VectorParams { Size = 1536, Distance = Distance.Cosine },
+ ]
- quantizationConfig: new QuantizationConfig
+ },
- {
+ ""payload"": {
- Binary = new BinaryQuantization { AlwaysRam = true }
+ ""name"": ""kafka"",
- }
+ ""description"": ""Kafka is a distributed streaming platform"",
-);
+ ""url"": ""https://kafka.apache.org/""
+
+ }
+
+}
```
-`always_ram` - whether to keep quantized vectors always cached in RAM or not. By default, quantized vectors are loaded in the same way as the original vectors.
+
-However, in some setups you might want to keep quantized vectors in RAM to speed up the search process.
+
-In this case, you can set `always_ram` to `true` to store quantized vectors in RAM.
+**Sparse vectors**
-### Setting up Product Quantization
+Reference: [Creating a collection with sparse vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-sparse-vectors).
-To enable product quantization, you need to specify the quantization parameters in the `quantization_config` section of the collection configuration.
+```json
+{
+ ""collection_name"": ""{collection_name}"",
-```http
+ ""id"": 1,
-PUT /collections/{collection_name}
+ ""vector"": {
-{
+ ""some-sparse"": {
- ""vectors"": {
+ ""indices"": [
- ""size"": 768,
+ 0,
- ""distance"": ""Cosine""
+ 1,
- },
+ 2,
- ""quantization_config"": {
+ 3,
- ""product"": {
+ 4,
- ""compression"": ""x16"",
+ 5,
- ""always_ram"": true
+ 6,
- }
+ 7,
- }
+ 8,
-}
+ 9
-```
+ ],
+ ""values"": [
+ 0.1,
-```python
+ 0.2,
-from qdrant_client import QdrantClient
+ 0.3,
-from qdrant_client.http import models
+ 0.4,
+ 0.5,
+ 0.6,
-client = QdrantClient(""localhost"", port=6333)
+ 0.7,
+ 0.8,
+ 0.9,
-client.create_collection(
+ 1.0
- collection_name=""{collection_name}"",
+ ]
- vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+ }
- quantization_config=models.ProductQuantization(
+ },
- product=models.ProductQuantizationConfig(
+ ""payload"": {
- compression=models.CompressionRatio.X16,
+ ""name"": ""kafka"",
- always_ram=True,
+ ""description"": ""Kafka is a distributed streaming platform"",
- ),
+ ""url"": ""https://kafka.apache.org/""
- ),
+ }
-)
+}
```
-```typescript
-
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+**Multi-vectors**
-client.createCollection(""{collection_name}"", {
- vectors: {
+Reference:
- size: 768,
- distance: ""Cosine"",
- },
+- [Multi-vectors](https://qdrant.tech/documentation/concepts/vectors/#multivectors)
- quantization_config: {
- product: {
- compression: ""x16"",
+```json
- always_ram: true,
+{
- },
+ ""collection_name"": ""{collection_name}"",
- },
+ ""id"": 1,
-});
+ ""vector"": {
-```
+ ""some-multi"": [
+ [
+ 0.1,
-```rust
+ 0.2,
-use qdrant_client::{
+ 0.3,
- client::QdrantClient,
+ 0.4,
- qdrant::{
+ 0.5,
- quantization_config::Quantization, vectors_config::Config, CompressionRatio,
+ 0.6,
- CreateCollection, Distance, ProductQuantization, QuantizationConfig, VectorParams,
+ 0.7,
- VectorsConfig,
+ 0.8,
- },
+ 0.9,
-};
+ 1.0
+ ],
+ [
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ 1.0,
+ 0.9,
+ 0.8,
-client
+ 0.5,
- .create_collection(&CreateCollection {
+ 0.4,
- collection_name: ""{collection_name}"".to_string(),
+ 0.8,
- vectors_config: Some(VectorsConfig {
+ 0.6,
- config: Some(Config::Params(VectorParams {
+ 0.4,
- size: 768,
+ 0.2,
- distance: Distance::Cosine.into(),
+ 0.1
- ..Default::default()
+ ]
- })),
+ ]
- }),
+ },
- quantization_config: Some(QuantizationConfig {
+ ""payload"": {
- quantization: Some(Quantization::Product(ProductQuantization {
+ ""name"": ""kafka"",
- compression: CompressionRatio::X16.into(),
+ ""description"": ""Kafka is a distributed streaming platform"",
- always_ram: Some(true),
+ ""url"": ""https://kafka.apache.org/""
- })),
+ }
- }),
+}
- ..Default::default()
+```
- })
- .await?;
-```
+
-```java
+
-import io.qdrant.client.QdrantClient;
+**Combination of named dense and sparse vectors**
-import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Collections.CompressionRatio;
-import io.qdrant.client.grpc.Collections.CreateCollection;
+Reference:
-import io.qdrant.client.grpc.Collections.Distance;
-import io.qdrant.client.grpc.Collections.ProductQuantization;
-import io.qdrant.client.grpc.Collections.QuantizationConfig;
+- [Creating a collection with multiple vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors).
-import io.qdrant.client.grpc.Collections.VectorParams;
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+- [Creating a collection with sparse vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-sparse-vectors).
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+```json
+{
+ ""collection_name"": ""{collection_name}"",
-client
+ ""id"": ""a10435b5-2a58-427a-a3a0-a5d845b147b7"",
- .createCollectionAsync(
+ ""vector"": {
- CreateCollection.newBuilder()
+ ""some-other-dense"": [
- .setCollectionName(""{collection_name}"")
+ 0.1,
- .setVectorsConfig(
+ 0.2,
- VectorsConfig.newBuilder()
+ 0.3,
- .setParams(
+ 0.4,
- VectorParams.newBuilder()
+ 0.5,
- .setSize(768)
+ 0.6,
- .setDistance(Distance.Cosine)
+ 0.7,
- .build())
+ 0.8
- .build())
+ ],
- .setQuantizationConfig(
+ ""some-sparse"": {
- QuantizationConfig.newBuilder()
+ ""indices"": [
- .setProduct(
+ 0,
- ProductQuantization.newBuilder()
+ 1,
- .setCompression(CompressionRatio.x16)
+ 2,
- .setAlwaysRam(true)
+ 3,
- .build())
+ 4,
- .build())
+ 5,
- .build())
+ 6,
- .get();
+ 7,
-```
+ 8,
+ 9
+ ],
-```csharp
+ ""values"": [
-using Qdrant.Client;
+ 0.1,
-using Qdrant.Client.Grpc;
+ 0.2,
+ 0.3,
+ 0.4,
-var client = new QdrantClient(""localhost"", 6334);
+ 0.5,
+ 0.6,
+ 0.7,
-await client.CreateCollectionAsync(
+ 0.8,
- collectionName: ""{collection_name}"",
+ 0.9,
- vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
+ 1.0
- quantizationConfig: new QuantizationConfig
+ ]
- {
+ }
- Product = new ProductQuantization { Compression = CompressionRatio.X16, AlwaysRam = true }
+ },
- }
+ ""payload"": {
-);
+ ""name"": ""kafka"",
-```
+ ""description"": ""Kafka is a distributed streaming platform"",
+ ""url"": ""https://kafka.apache.org/""
+ }
-There are two parameters that you can specify in the `quantization_config` section:
+}
+```
-`compression` - compression ratio.
-Compression ratio represents the size of the quantized vector in bytes divided by the size of the original vector in bytes.
+
-In this case, the quantized vector will be 16 times smaller than the original vector.
+## Further Reading
-`always_ram` - whether to keep quantized vectors always cached in RAM or not. By default, quantized vectors are loaded in the same way as the original vectors.
-However, in some setups you might want to keep quantized vectors in RAM to speed up the search process. Then set `always_ram` to `true`.
+- [Kafka Connect Docs](https://docs.confluent.io/platform/current/connect/index.html)
+- [Confluent Connectors Docs](https://docs.confluent.io/cloud/current/connectors/bring-your-connector/custom-connector-qs.html)
+",documentation/data-management/confluent.md
+"---
-### Searching with Quantization
+title: Redpanda Connect
+---
-Once you have configured quantization for a collection, you don't need to do anything extra to search with quantization.
-Qdrant will automatically use quantized vectors if they are available.
+![Redpanda Cover](/documentation/data-management/redpanda/redpanda-cover.png)
-However, there are a few options that you can use to control the search process:
+[Redpanda Connect](https://www.redpanda.com/connect) is a declarative, data-agnostic streaming service designed for efficient, stateless processing steps. It offers transaction-based resiliency with back pressure, ensuring at-least-once delivery when connecting at-least-once sources to sinks, without needing to persist messages during transit.
-```http
+Connect pipelines are configured using a YAML file, which organizes components hierarchically. Each section represents a different component type, such as inputs, processors and outputs, and these can have nested child components and [dynamic values](https://docs.redpanda.com/redpanda-connect/configuration/interpolation/).
-POST /collections/{collection_name}/points/search
-{
- ""params"": {
+The [Qdrant Output](https://docs.redpanda.com/redpanda-connect/components/outputs/qdrant/) component enables streaming vector data into Qdrant collections in your Redpanda pipelines.
- ""quantization"": {
- ""ignore"": false,
- ""rescore"": true,
+## Example
- ""oversampling"": 2.0
- }
- },
+Once the inputs and processors are set, an example configuration of the output would look like this:
- ""vector"": [0.2, 0.1, 0.9, 0.7],
- ""limit"": 10
-}
+```yaml
-```
+input:
+ # https://docs.redpanda.com/redpanda-connect/components/inputs/about/
-```python
-from qdrant_client import QdrantClient
+pipeline:
-from qdrant_client.http import models
+ processors:
+ # https://docs.redpanda.com/redpanda-connect/components/processors/about/
-client = QdrantClient(""localhost"", port=6333)
+output:
+ label: ""qdrant-output""
-client.search(
+ qdrant:
- collection_name=""{collection_name}"",
+ max_in_flight: 64
- query_vector=[0.2, 0.1, 0.9, 0.7],
+ batching:
- search_params=models.SearchParams(
+ count: 8
- quantization=models.QuantizationSearchParams(
+ grpc_host: xyz-example.eu-central.aws.cloud.qdrant.io:6334
- ignore=False,
+ api_token: """"
- rescore=True,
+ tls:
- oversampling=2.0,
+ enabled: true
- )
+ # skip_cert_verify: false
- ),
+ # enable_renegotiation: false
-)
+ # root_cas: """"
-```
+ # root_cas_file: """"
+ # client_certs: []
+ collection_name: """"
-```typescript
+ id: root = uuid_v4()
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+ vector_mapping: 'root = {""some_dense"": this.vector, ""some_sparse"": {""indices"": [23,325,532],""values"": [0.352,0.532,0.532]}}'
+ payload_mapping: 'root = {""field"": this.value, ""field_2"": 987}'
+```
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+## Further Reading
-client.search(""{collection_name}"", {
- vector: [0.2, 0.1, 0.9, 0.7],
- params: {
+- [Getting started with Connect](https://docs.redpanda.com/redpanda-connect/guides/getting_started/)
- quantization: {
+- [Qdrant Output Reference](https://docs.redpanda.com/redpanda-connect/components/outputs/qdrant/)
+",documentation/data-management/redpanda.md
+"---
- ignore: false,
+title: DLT
- rescore: true,
+aliases: [ ../integrations/dlt/, ../frameworks/dlt/ ]
- oversampling: 2.0,
+---
- },
- },
- limit: 10,
+# DLT (Data Load Tool)
-});
-```
+[DLT](https://dlthub.com/) is an open-source library that you can add to your Python scripts to load data from various and often messy data sources into well-structured, live datasets.
-```rust
-use qdrant_client::{
+With the DLT-Qdrant integration, you can now select Qdrant as a DLT destination to load data into.
- client::QdrantClient,
- qdrant::{QuantizationSearchParams, SearchParams, SearchPoints},
-};
+**DLT Enables**
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+- Automated maintenance - with schema inference, alerts and short declarative code, maintenance becomes simple.
+- Run it where Python runs - on Airflow, serverless functions, notebooks. Scales on micro and large infrastructure alike.
+- User-friendly, declarative interface that removes knowledge obstacles for beginners while empowering senior professionals.
-client
- .search_points(&SearchPoints {
- collection_name: ""{collection_name}"".to_string(),
+## Usage
- vector: vec![0.2, 0.1, 0.9, 0.7],
- params: Some(SearchParams {
- quantization: Some(QuantizationSearchParams {
+To get started, install `dlt` with the `qdrant` extra.
- ignore: Some(false),
- rescore: Some(true),
- oversampling: Some(2.0),
+```bash
- ..Default::default()
+pip install ""dlt[qdrant]""
- }),
+```
- ..Default::default()
- }),
- limit: 10,
+Configure the destination in the DLT secrets file. The file is located at `~/.dlt/secrets.toml` by default. Add the following section to the secrets file.
- ..Default::default()
- })
- .await?;
+```toml
-```
+[destination.qdrant.credentials]
+location = ""https://your-qdrant-url""
+api_key = ""your-qdrant-api-key""
-```java
+```
-import java.util.List;
+If not set, `location` defaults to `http://localhost:6333` and `api_key` remains undefined, which matches a local Qdrant instance.
-import io.qdrant.client.QdrantClient;
+Find more information about DLT configurations [here](https://dlthub.com/docs/general-usage/credentials).
-import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Points.QuantizationSearchParams;
-import io.qdrant.client.grpc.Points.SearchParams;
+Define the source of the data.
-import io.qdrant.client.grpc.Points.SearchPoints;
+```python
-QdrantClient client =
+import dlt
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+from dlt.destinations.qdrant import qdrant_adapter
-client
+movies = [
- .searchAsync(
+ {
- SearchPoints.newBuilder()
+ ""title"": ""Blade Runner"",
- .setCollectionName(""{collection_name}"")
+ ""year"": 1982,
- .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
+ ""description"": ""The film is about a dystopian vision of the future that combines noir elements with sci-fi imagery.""
- .setParams(
+ },
- SearchParams.newBuilder()
+ {
- .setQuantization(
+ ""title"": ""Ghost in the Shell"",
- QuantizationSearchParams.newBuilder()
+ ""year"": 1995,
- .setIgnore(false)
+ ""description"": ""The film is about a cyborg policewoman and her partner who set out to find the main culprit behind brain hacking, the Puppet Master.""
- .setRescore(true)
+ },
- .setOversampling(2.0)
+ {
- .build())
+ ""title"": ""The Matrix"",
- .build())
+ ""year"": 1999,
- .setLimit(10)
+ ""description"": ""The movie is set in the 22nd century and tells the story of a computer hacker who joins an underground group fighting the powerful computers that rule the earth.""
- .build())
+ }
- .get();
+]
```
-```csharp
+
-var client = new QdrantClient(""localhost"", 6334);
+Define the pipeline.
-await client.SearchAsync(
+```python
- collectionName: ""{collection_name}"",
+pipeline = dlt.pipeline(
- vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
+ pipeline_name=""movies"",
- searchParams: new SearchParams
+ destination=""qdrant"",
- {
+ dataset_name=""movies_dataset"",
- Quantization = new QuantizationSearchParams
+)
- {
+```
- Ignore = false,
- Rescore = true,
- Oversampling = 2.0
+Run the pipeline.
- }
- },
- limit: 10
+```python
-);
+info = pipeline.run(
-```
+ qdrant_adapter(
+ movies,
+ embed=[""title"", ""description""]
-`ignore` - Toggle whether to ignore quantized vectors during the search process. By default, Qdrant will use quantized vectors if they are available.
+ )
+)
+```
-`rescore` - Having the original vectors available, Qdrant can re-evaluate top-k search results using the original vectors.
-This can improve the search quality, but may slightly decrease the search speed, compared to the search without rescore.
-It is recommended to disable rescore only if the original vectors are stored on a slow storage (e.g. HDD or network storage).
+The data is now loaded into Qdrant.
-By default, rescore is enabled.
+To use vector search after the data has been loaded, you must specify which fields Qdrant needs to generate embeddings for. You do that by wrapping the data (or [DLT resource](https://dlthub.com/docs/general-usage/resource)) with the `qdrant_adapter` function.
-**Available as of v1.3.0**
+## Write disposition
-`oversampling` - Defines how many extra vectors should be pre-selected using quantized index, and then re-scored using original vectors.
-For example, if oversampling is 2.4 and limit is 100, then 240 vectors will be pre-selected using quantized index, and then top-100 will be returned after re-scoring.
-Oversampling is useful if you want to tune the tradeoff between search speed and search quality in the query time.
+A DLT [write disposition](https://dlthub.com/docs/dlt-ecosystem/destinations/qdrant/#write-disposition) defines how the data should be written to the destination. All write dispositions are supported by the Qdrant destination.
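+For example, here is a minimal sketch (reusing the `movies` data and `pipeline` defined above) of re-running the load with the `replace` disposition, which overwrites previously loaded data:
+
+```python
+info = pipeline.run(
+    qdrant_adapter(movies, embed=[""title"", ""description""]),
+    write_disposition=""replace"",  # dlt also supports ""append"" (default) and ""merge""
+)
+```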
-## Quantization tips
+## DLT Sync
-#### Accuracy tuning
+The Qdrant destination supports syncing the [`DLT` state](https://dlthub.com/docs/general-usage/state#syncing-state-with-destination).
-In this section, we will discuss how to tune the search precision.
+## Next steps
-The fastest way to understand the impact of quantization on the search quality is to compare the search results with and without quantization.
+- The comprehensive Qdrant DLT destination documentation can be found [here](https://dlthub.com/docs/dlt-ecosystem/destinations/qdrant/).
-In order to disable quantization, you can set `ignore` to `true` in the search request:
+- [Source Code](https://github.com/dlt-hub/dlt/tree/devel/dlt/destinations/impl/qdrant)
+",documentation/data-management/dlt.md
+"---
+title: Apache Airflow
+aliases: [ ../frameworks/airflow/ ]
-```http
+---
-POST /collections/{collection_name}/points/search
-{
- ""params"": {
+# Apache Airflow
- ""quantization"": {
- ""ignore"": true
- }
+[Apache Airflow](https://airflow.apache.org/) is an open-source platform for authoring, scheduling and monitoring data and computing workflows. Airflow uses Python to create workflows that can be easily scheduled and monitored.
- },
- ""vector"": [0.2, 0.1, 0.9, 0.7],
- ""limit"": 10
+Qdrant is available as a [provider](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/index.html) in Airflow to interface with the database.
-}
-```
+## Prerequisites
-```python
-from qdrant_client import QdrantClient, models
+Before configuring Airflow, you need:
-client = QdrantClient(""localhost"", port=6333)
+1. A Qdrant instance to connect to. You can set one up in our [installation guide](/documentation/guides/installation/).
-client.search(
+2. A running Airflow instance. You can use their [Quick Start Guide](https://airflow.apache.org/docs/apache-airflow/stable/start.html).
- collection_name=""{collection_name}"",
- query_vector=[0.2, 0.1, 0.9, 0.7],
- search_params=models.SearchParams(
+## Installation
- quantization=models.QuantizationSearchParams(
- ignore=True,
- )
+You can install the Qdrant provider by running `pip install apache-airflow-providers-qdrant` in your Airflow shell.
- ),
-)
-```
+**NOTE**: You'll have to restart your Airflow session for the provider to be available.
-```typescript
+## Setting up a connection
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+Open the `Admin -> Connections` section of the Airflow UI. Click the `Create` link to create a new [Qdrant connection](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/connections.html).
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+![Qdrant connection](/documentation/frameworks/airflow/connection.png)
-client.search(""{collection_name}"", {
- vector: [0.2, 0.1, 0.9, 0.7],
- params: {
+You can also set up a connection using [environment variables](https://airflow.apache.org/docs/apache-airflow/stable/howto/connection.html#environment-variables-connections) or an [external secret backend](https://airflow.apache.org/docs/apache-airflow/stable/security/secrets/secrets-backend/index.html).
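+For instance, here is a minimal sketch of defining the same connection through an environment variable; the connection id `qdrant_connection` and the JSON fields below are assumptions, and Airflow reads connections from variables named `AIRFLOW_CONN_<CONN_ID>`:
+
+```python
+import os
+
+# Hypothetical JSON-serialized connection; set this before the hook is first used
+os.environ[""AIRFLOW_CONN_QDRANT_CONNECTION""] = (
+    '{""conn_type"": ""qdrant"", ""host"": ""localhost"", ""port"": 6333}'
+)
+```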
- quantization: {
- ignore: true,
- },
+## Qdrant hook
- },
-});
-```
+An Airflow hook is an abstraction of a specific API that allows Airflow to interact with an external system.
-```rust
+```python
-use qdrant_client::{
+from airflow.providers.qdrant.hooks.qdrant import QdrantHook
- client::QdrantClient,
- qdrant::{QuantizationSearchParams, SearchParams, SearchPoints},
-};
+hook = QdrantHook(conn_id=""qdrant_connection"")
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+hook.verify_connection()
+```
-client
- .search_points(&SearchPoints {
+A [`qdrant_client#QdrantClient`](https://pypi.org/project/qdrant-client/) instance is available via the `conn` property of the `QdrantHook` instance for use within your Airflow workflows.
- collection_name: ""{collection_name}"".to_string(),
- vector: vec![0.2, 0.1, 0.9, 0.7],
- params: Some(SearchParams {
+```python
- quantization: Some(QuantizationSearchParams {
+from qdrant_client import models
- ignore: Some(true),
- ..Default::default()
- }),
+hook.conn.count("""")
- ..Default::default()
- }),
- limit: 3,
+hook.conn.upsert(
- ..Default::default()
+ """",
- })
+ points=[
- .await?;
+ models.PointStruct(id=32, vector=[0.32, 0.12, 0.123], payload={""color"": ""red""})
-```
+ ],
+)
-```java
-import java.util.List;
+```
-import io.qdrant.client.QdrantClient;
+## Qdrant Ingest Operator
-import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Points.QuantizationSearchParams;
-import io.qdrant.client.grpc.Points.SearchParams;
+The Qdrant provider also provides a convenience operator for uploading data to a Qdrant collection that internally uses the Qdrant hook.
-import io.qdrant.client.grpc.Points.SearchPoints;
+```python
-QdrantClient client =
+from airflow.providers.qdrant.operators.qdrant import QdrantIngestOperator
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+vectors = [
-client
+ [0.11, 0.22, 0.33, 0.44],
- .searchAsync(
+ [0.55, 0.66, 0.77, 0.88],
- SearchPoints.newBuilder()
+ [0.88, 0.11, 0.12, 0.13],
- .setCollectionName(""{collection_name}"")
+]
- .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
+ids = [32, 21, ""b626f6a9-b14d-4af9-b7c3-43d8deb719a6""]
- .setParams(
+payload = [{""meta"": ""data""}, {""meta"": ""data_2""}, {""meta"": ""data_3"", ""extra"": ""data""}]
- SearchParams.newBuilder()
- .setQuantization(
- QuantizationSearchParams.newBuilder().setIgnore(true).build())
+QdrantIngestOperator(
- .build())
+ conn_id=""qdrant_connection"",
- .setLimit(10)
+ task_id=""qdrant_ingest"",
- .build())
+ collection_name="""",
- .get();
+ vectors=vectors,
+
+ ids=ids,
+
+ payload=payload,
+
+)
```
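+To run this as part of a workflow, here is a minimal sketch of wrapping the operator in a DAG; the DAG id, schedule, and sample data are assumptions for illustration:
+
+```python
+from datetime import datetime
+
+from airflow import DAG
+from airflow.providers.qdrant.operators.qdrant import QdrantIngestOperator
+
+with DAG(
+    dag_id=""qdrant_ingest_dag"",  # hypothetical DAG id
+    start_date=datetime(2024, 1, 1),
+    schedule=None,  # trigger manually
+    catchup=False,
+):
+    QdrantIngestOperator(
+        conn_id=""qdrant_connection"",
+        task_id=""qdrant_ingest"",
+        collection_name="""",
+        vectors=[[0.11, 0.22, 0.33, 0.44]],
+        ids=[32],
+        payload=[{""meta"": ""data""}],
+    )
+```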
-```csharp
+## Reference
-using Qdrant.Client;
+- 📦 [Provider package PyPI](https://pypi.org/project/apache-airflow-providers-qdrant/)
-using Qdrant.Client.Grpc;
+- 📚 [Provider docs](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/index.html)
+- 📄 [Source Code](https://github.com/apache/airflow/tree/main/airflow/providers/qdrant)
+",documentation/data-management/airflow.md
+"---
+title: MindsDB
-var client = new QdrantClient(""localhost"", 6334);
+aliases: [ ../integrations/mindsdb/, ../frameworks/mindsdb/ ]
+---
-await client.SearchAsync(
- collectionName: ""{collection_name}"",
+# MindsDB
- vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
- searchParams: new SearchParams
- {
+[MindsDB](https://mindsdb.com) is an AI automation platform for building AI/ML powered features and applications. It works by connecting any source of data with any AI/ML model or framework and automating how real-time data flows between them.
- Quantization = new QuantizationSearchParams { Ignore = true }
- },
- limit: 10
+With the MindsDB-Qdrant integration, you can now select Qdrant as a database to load into and retrieve from with semantic search and filtering.
-);
-```
+
+**MindsDB allows you to easily**:
-- **Adjust the quantile parameter**: The quantile parameter in scalar quantization determines the quantization bounds.
+- Connect to any store of data or end-user application.
-By setting it to a value lower than 1.0, you can exclude extreme values (outliers) from the quantization bounds.
+- Pass data to an AI model from any store of data or end-user application.
-For example, if you set the quantile to 0.99, 1% of the extreme values will be excluded.
+- Plug the output of an AI model into any store of data or end-user application.
-By adjusting the quantile, you find an optimal value that will provide the best search quality for your collection.
+- Fully automate these workflows to build AI-powered features and applications.
-- **Enable rescore**: Having the original vectors available, Qdrant can re-evaluate top-k search results using the original vectors. On large collections, this can improve the search quality, with just minor performance impact.
+## Usage
+To get started with Qdrant and MindsDB, the following syntax can be used.
-#### Memory and speed tuning
+```sql
+CREATE DATABASE qdrant_test
-In this section, we will discuss how to tune the memory and speed of the search process with quantization.
+WITH ENGINE = ""qdrant"",
+PARAMETERS = {
+ ""location"": "":memory:"",
-There are 3 possible modes to place storage of vectors within the qdrant collection:
+ ""collection_config"": {
+ ""size"": 386,
+ ""distance"": ""Cosine""
-- **All in RAM** - all vector, original and quantized, are loaded and kept in RAM. This is the fastest mode, but requires a lot of RAM. Enabled by default.
+ }
+}
+```
-- **Original on Disk, quantized in RAM** - this is a hybrid mode, allows to obtain a good balance between speed and memory usage. Recommended scenario if you are aiming to shrink the memory footprint while keeping the search speed.
+The available arguments for instantiating Qdrant can be found [here](https://github.com/mindsdb/mindsdb/blob/23a509cb26bacae9cc22475497b8644e3f3e23c3/mindsdb/integrations/handlers/qdrant_handler/qdrant_handler.py#L408-L468).
-This mode is enabled by setting `always_ram` to `true` in the quantization config while using memmap storage:
+## Creating a new table
-```http
-PUT /collections/{collection_name}
-{
+- Qdrant options for creating a collection can be specified as `collection_config` in the `CREATE DATABASE` parameters.
- ""vectors"": {
+- By default, UUIDs are generated as the record IDs. You can provide your own IDs under the `id` column.
- ""size"": 768,
- ""distance"": ""Cosine""
- },
+```sql
- ""optimizers_config"": {
+CREATE TABLE qdrant_test.test_table (
- ""memmap_threshold"": 20000
+ SELECT embeddings, '{""source"": ""bbc""}' as metadata FROM mysql_demo_db.test_embeddings
- },
+);
- ""quantization_config"": {
+```
- ""scalar"": {
- ""type"": ""int8"",
- ""always_ram"": true
+## Querying the database
- }
- }
-}
+#### Perform a full retrieval using the following syntax.
+
+
+
+```sql
+
+SELECT * FROM qdrant_test.test_table
```
-```python
+By default, the `LIMIT` is set to 10 and the `OFFSET` is set to 0.
-from qdrant_client import QdrantClient, models
+#### Perform a similarity search using your embeddings
-client = QdrantClient(""localhost"", port=6333)
+
-client.create_collection(
- collection_name=""{collection_name}"",
- vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+```sql
- optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
+SELECT * FROM qdrant_test.test_table
- quantization_config=models.ScalarQuantization(
+WHERE search_vector = (select embeddings from mysql_demo_db.test_embeddings limit 1)
- scalar=models.ScalarQuantizationConfig(
+```
- type=models.ScalarType.INT8,
- always_ram=True,
- ),
+#### Perform a search using filters
- ),
-)
+
+```sql
+
+SELECT * FROM qdrant_test.test_table
+
+WHERE `metadata.source` = 'bbc';
```
-```typescript
+#### Delete entries using IDs
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+```sql
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+DELETE FROM qdrant_test.test_table
+WHERE id = 2
+```
-client.createCollection(""{collection_name}"", {
- vectors: {
- size: 768,
+#### Delete entries using filters
- distance: ""Cosine"",
- },
- optimizers_config: {
+```sql
- memmap_threshold: 20000,
+DELETE FROM qdrant_test.test_table
- },
+WHERE `metadata.source` = 'bbc';
- quantization_config: {
+```
- scalar: {
- type: ""int8"",
- always_ram: true,
+#### Drop a table
- },
- },
-});
+```sql
+
+ DROP TABLE qdrant_test.test_table;
```
-```rust
+## Next steps
-use qdrant_client::{
- client::QdrantClient,
- qdrant::{
+- You can find more information pertaining to MindsDB and its datasources [here](https://docs.mindsdb.com/).
- quantization_config::Quantization, vectors_config::Config, CreateCollection, Distance,
+- [Source Code](https://github.com/mindsdb/mindsdb/tree/main/mindsdb/integrations/handlers/qdrant_handler)
+",documentation/data-management/mindsdb.md
+"---
- OptimizersConfigDiff, QuantizationConfig, QuantizationType, ScalarQuantization,
+title: Apache NiFi
- VectorParams, VectorsConfig,
+aliases: [ ../frameworks/nifi/ ]
- },
+---
-};
+# Apache NiFi
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+[NiFi](https://nifi.apache.org/) is a real-time data ingestion platform that can manage data transfer between numerous sources and destination systems. It supports many protocols and offers a web-based user interface for developing and monitoring data flows.
-client
- .create_collection(&CreateCollection {
- collection_name: ""{collection_name}"".to_string(),
+NiFi supports ingesting and querying data in Qdrant via its processor modules.
- vectors_config: Some(VectorsConfig {
- config: Some(Config::Params(VectorParams {
- size: 768,
+## Configuration
- distance: Distance::Cosine.into(),
- ..Default::default()
- })),
+![NiFi Qdrant configuration](/documentation/frameworks/nifi/nifi-conifg.png)
- }),
- optimizers_config: Some(OptimizersConfigDiff {
- memmap_threshold: Some(20000),
+You can configure the Qdrant NiFi processors with your Qdrant credentials and query/upload configurations. The processors offer two built-in embedding providers to encode data into vector embeddings: HuggingFace and OpenAI.
- ..Default::default()
- }),
- quantization_config: Some(QuantizationConfig {
+## Put Qdrant
- quantization: Some(Quantization::Scalar(ScalarQuantization {
- r#type: QuantizationType::Int8.into(),
- always_ram: Some(true),
+![NiFI Put Qdrant](/documentation/frameworks/nifi/nifi-put-qdrant.png)
- ..Default::default()
- })),
- }),
+The `Put Qdrant` processor can ingest NiFi [FlowFile](https://nifi.apache.org/docs/nifi-docs/html/nifi-in-depth.html#intro) data into a Qdrant collection.
- ..Default::default()
- })
- .await?;
+## Query Qdrant
-```
+![NiFI Query Qdrant](/documentation/frameworks/nifi/nifi-query-qdrant.png)
-```java
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
+The `Query Qdrant` processor can perform a similarity search across a Qdrant collection and return a [FlowFile](https://nifi.apache.org/docs/nifi-docs/html/nifi-in-depth.html#intro) result.
-import io.qdrant.client.grpc.Collections.CreateCollection;
-import io.qdrant.client.grpc.Collections.Distance;
-import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
+## Further Reading
-import io.qdrant.client.grpc.Collections.QuantizationConfig;
-import io.qdrant.client.grpc.Collections.QuantizationType;
-import io.qdrant.client.grpc.Collections.ScalarQuantization;
+- [NiFi Documentation](https://nifi.apache.org/documentation/v2/).
-import io.qdrant.client.grpc.Collections.VectorParams;
+- [Source Code](https://github.com/apache/nifi-python-extensions)
+",documentation/data-management/nifi.md
+"---
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+title: InfinyOn Fluvio
+---
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+![Fluvio Logo](/documentation/data-management/fluvio/fluvio-logo.png)
-client
+[InfinyOn Fluvio](https://www.fluvio.io/) is an open-source platform written in Rust for high speed, real-time data processing. It is cloud native, designed to work with any infrastructure type, from bare metal hardware to containerized platforms.
- .createCollectionAsync(
- CreateCollection.newBuilder()
- .setCollectionName(""{collection_name}"")
+## Usage with Qdrant
- .setVectorsConfig(
- VectorsConfig.newBuilder()
- .setParams(
+With the [Qdrant Fluvio Connector](https://github.com/qdrant/qdrant-fluvio), you can stream records from Fluvio topics to Qdrant collections, leveraging Fluvio's delivery guarantees and high throughput.
- VectorParams.newBuilder()
- .setSize(768)
- .setDistance(Distance.Cosine)
+### Pre-requisites
- .build())
- .build())
- .setOptimizersConfig(
+- A Fluvio installation. You can refer to the [Fluvio Quickstart](https://www.fluvio.io/docs/fluvio/quickstart/) for instructions.
- OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build())
+- A Qdrant server to connect to. You can set up a [local instance](/documentation/quickstart/) or a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).
- .setQuantizationConfig(
- QuantizationConfig.newBuilder()
- .setScalar(
+### Downloading the connector
- ScalarQuantization.newBuilder()
- .setType(QuantizationType.Int8)
- .setAlwaysRam(true)
+Run the following command after [setting up Fluvio](https://www.fluvio.io/docs/fluvio/quickstart).
- .build())
- .build())
- .build())
+```console
- .get();
+cdk hub download qdrant/qdrant-sink@0.1.0
```
-```csharp
-
-using Qdrant.Client;
-
-using Qdrant.Client.Grpc;
+### Example Config
-var client = new QdrantClient(""localhost"", 6334);
+> _config.yaml_
-await client.CreateCollectionAsync(
+```yaml
- collectionName: ""{collection_name}"",
+apiVersion: 0.1.0
- vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
+meta:
- optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 },
+ version: 0.1.0
- quantizationConfig: new QuantizationConfig
+ name: my-qdrant-connector
- {
+ type: qdrant-sink
- Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true }
+ topic: topic-name
- }
+ secrets:
-);
+ - name: QDRANT_API_KEY
-```
+qdrant:
-In this scenario, the number of disk reads may play a significant role in the search speed.
+ url: https://xyz-example.eu-central.aws.cloud.qdrant.io:6334
-In a system with high disk latency, the re-scoring step may become a bottleneck.
+ api_key: ""${{ secrets.QDRANT_API_KEY }}""
+```
-Consider disabling `rescore` to improve the search speed:
+> _secrets.txt_
-```http
-POST /collections/{collection_name}/points/search
+```text
-{
+QDRANT_API_KEY=
- ""params"": {
+```
- ""quantization"": {
- ""rescore"": false
- }
+### Running
- },
- ""vector"": [0.2, 0.1, 0.9, 0.7],
- ""limit"": 10
+```console
-}
+cdk deploy start --ipkg qdrant-qdrant-sink-0.1.0.ipkg -c config.yaml --secrets secrets.txt
```
-```python
+### Produce Messages
-from qdrant_client import QdrantClient, models
+You can now run the following to generate messages to be written into Qdrant.
-client = QdrantClient(""localhost"", port=6333)
+```console
-client.search(
+fluvio produce topic-name
- collection_name=""{collection_name}"",
+```
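+Alternatively, here is a minimal sketch using the Fluvio Python client (the `fluvio` PyPI package is assumed to be installed); the record must follow one of the formats described in the next section:
+
+```python
+import json
+
+from fluvio import Fluvio
+
+# Connect to the Fluvio cluster and create a producer for the topic
+fluvio = Fluvio.connect()
+producer = fluvio.topic_producer(""topic-name"")
+
+record = {
+    ""collection_name"": ""{collection_name}"",
+    ""id"": 1,
+    ""vector"": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8],
+    ""payload"": {""name"": ""fluvio""},
+}
+producer.send_string(json.dumps(record))
+```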
- query_vector=[0.2, 0.1, 0.9, 0.7],
- search_params=models.SearchParams(
- quantization=models.QuantizationSearchParams(rescore=False)
+### Message Formats
- ),
-)
-```
+This sink connector supports messages with dense/sparse/multi vectors.
-```typescript
+_The supported formats are shown below._
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+**Unnamed/Default vector**
-client.search(""{collection_name}"", {
+Reference: [Creating a collection with a default vector](https://qdrant.tech/documentation/concepts/collections/#create-a-collection).
- vector: [0.2, 0.1, 0.9, 0.7],
- params: {
- quantization: {
+```json
- rescore: false,
+{
- },
+ ""collection_name"": ""{collection_name}"",
- },
+ ""id"": 1,
-});
+ ""vector"": [
-```
+ 0.1,
+ 0.2,
+ 0.3,
-```rust
+ 0.4,
-use qdrant_client::{
+ 0.5,
- client::QdrantClient,
+ 0.6,
- qdrant::{QuantizationSearchParams, SearchParams, SearchPoints},
+ 0.7,
-};
+ 0.8
+ ],
+ ""payload"": {
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ ""name"": ""fluvio"",
+ ""description"": ""Solution for distributed stream processing"",
+ ""url"": ""https://www.fluvio.io/""
-client
+ }
- .search_points(&SearchPoints {
+}
- collection_name: ""{collection_name}"".to_string(),
+```
- vector: vec![0.2, 0.1, 0.9, 0.7],
- params: Some(SearchParams {
- quantization: Some(QuantizationSearchParams {
+
- rescore: Some(false),
- ..Default::default()
- }),
+
- ..Default::default()
+**Named multiple vectors**
- }),
- limit: 3,
- ..Default::default()
+Reference: [Creating a collection with multiple vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors).
- })
- .await?;
-```
+```json
+
+{
+ ""collection_name"": ""{collection_name}"",
+ ""id"": 1,
-```java
+ ""vector"": {
-import java.util.List;
+ ""some-dense"": [
+ 0.1,
+ 0.2,
-import io.qdrant.client.QdrantClient;
+ 0.3,
-import io.qdrant.client.QdrantGrpcClient;
+ 0.4,
-import io.qdrant.client.grpc.Points.QuantizationSearchParams;
+ 0.5,
-import io.qdrant.client.grpc.Points.SearchParams;
+ 0.6,
-import io.qdrant.client.grpc.Points.SearchPoints;
+ 0.7,
+ 0.8
+ ],
-QdrantClient client =
+ ""some-other-dense"": [
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ 0.1,
+ 0.2,
+ 0.3,
-client
+ 0.4,
- .searchAsync(
+ 0.5,
- SearchPoints.newBuilder()
+ 0.6,
- .setCollectionName(""{collection_name}"")
+ 0.7,
- .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
+ 0.8
- .setParams(
+ ]
- SearchParams.newBuilder()
+ },
- .setQuantization(
+ ""payload"": {
- QuantizationSearchParams.newBuilder().setRescore(false).build())
+ ""name"": ""fluvio"",
- .build())
+ ""description"": ""Solution for distributed stream processing"",
- .setLimit(3)
+ ""url"": ""https://www.fluvio.io/""
- .build())
+ }
- .get();
+}
```
-```csharp
+
-using Qdrant.Client;
-using Qdrant.Client.Grpc;
+
+**Sparse vectors**
-var client = new QdrantClient(""localhost"", 6334);
+Reference: [Creating a collection with sparse vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-sparse-vectors).
-await client.SearchAsync(
- collectionName: ""{collection_name}"",
- vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
+```json
- searchParams: new SearchParams
+{
- {
+ ""collection_name"": ""{collection_name}"",
- Quantization = new QuantizationSearchParams { Rescore = false }
+ ""id"": 1,
- },
+ ""vector"": {
- limit: 3
+ ""some-sparse"": {
-);
+ ""indices"": [
-```
+ 0,
+ 1,
+ 2,
-- **All on Disk** - all vectors, original and quantized, are stored on disk. This mode allows to achieve the smallest memory footprint, but at the cost of the search speed.
+ 3,
+ 4,
+ 5,
-It is recommended to use this mode if you have a large collection and fast storage (e.g. SSD or NVMe).
+ 6,
+ 7,
+ 8,
-This mode is enabled by setting `always_ram` to `false` in the quantization config while using mmap storage:
+ 9
+ ],
+ ""values"": [
-```http
+ 0.1,
-PUT /collections/{collection_name}
+ 0.2,
-{
+ 0.3,
- ""vectors"": {
+ 0.4,
- ""size"": 768,
+ 0.5,
- ""distance"": ""Cosine""
+ 0.6,
- },
+ 0.7,
- ""optimizers_config"": {
+ 0.8,
- ""memmap_threshold"": 20000
+ 0.9,
- },
+ 1.0
- ""quantization_config"": {
+ ]
- ""scalar"": {
+ }
- ""type"": ""int8"",
+ },
- ""always_ram"": false
+ ""payload"": {
- }
+ ""name"": ""fluvio"",
+
+ ""description"": ""Solution for distributed stream processing"",
+
+ ""url"": ""https://www.fluvio.io/""
}
@@ -13124,4393 +12827,4323 @@ PUT /collections/{collection_name}
-```python
+
-from qdrant_client import QdrantClient, models
+
-client = QdrantClient(""localhost"", port=6333)
+**Multi-vector**
-client.create_collection(
+```json
- collection_name=""{collection_name}"",
+{
- vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+ ""collection_name"": ""{collection_name}"",
- optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
+ ""id"": 1,
- quantization_config=models.ScalarQuantization(
+ ""vector"": {
- scalar=models.ScalarQuantizationConfig(
+ ""some-multi"": [
- type=models.ScalarType.INT8,
+ [
- always_ram=False,
+ 0.1,
- ),
+ 0.2,
- ),
+ 0.3,
-)
+ 0.4,
-```
+ 0.5,
+ 0.6,
+ 0.7,
-```typescript
+ 0.8,
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+ 0.9,
+ 1.0
+ ],
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ [
+ 1.0,
+ 0.9,
-client.createCollection(""{collection_name}"", {
+ 0.8,
- vectors: {
+ 0.5,
- size: 768,
+ 0.4,
- distance: ""Cosine"",
+ 0.8,
- },
+ 0.6,
- optimizers_config: {
+ 0.4,
- memmap_threshold: 20000,
+ 0.2,
- },
+ 0.1
- quantization_config: {
+ ]
- scalar: {
+ ]
- type: ""int8"",
+ },
- always_ram: false,
+ ""payload"": {
- },
+ ""name"": ""fluvio"",
- },
+ ""description"": ""Solution for distributed stream processing"",
-});
+ ""url"": ""https://www.fluvio.io/""
-```
+ }
+}
+```
-```rust
-use qdrant_client::{
- client::QdrantClient,
+
- qdrant::{
- quantization_config::Quantization, vectors_config::Config, CreateCollection, Distance,
- OptimizersConfigDiff, QuantizationConfig, QuantizationType, ScalarQuantization,
+
- VectorParams, VectorsConfig,
+**Combination of named dense and sparse vectors**
- },
-};
+Reference:
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+- [Creating a collection with multiple vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors).
-client
- .create_collection(&CreateCollection {
+- [Creating a collection with sparse vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-sparse-vectors).
- collection_name: ""{collection_name}"".to_string(),
- vectors_config: Some(VectorsConfig {
- config: Some(Config::Params(VectorParams {
+```json
- size: 768,
+{
- distance: Distance::Cosine.into(),
+ ""collection_name"": ""{collection_name}"",
- ..Default::default()
+ ""id"": ""a10435b5-2a58-427a-a3a0-a5d845b147b7"",
- })),
+ ""vector"": {
- }),
+ ""some-other-dense"": [
- optimizers_config: Some(OptimizersConfigDiff {
+ 0.1,
- memmap_threshold: Some(20000),
+ 0.2,
- ..Default::default()
+ 0.3,
- }),
+ 0.4,
- quantization_config: Some(QuantizationConfig {
+ 0.5,
- quantization: Some(Quantization::Scalar(ScalarQuantization {
+ 0.6,
- r#type: QuantizationType::Int8.into(),
+ 0.7,
- always_ram: Some(false),
+ 0.8
- ..Default::default()
+ ],
- })),
+ ""some-sparse"": {
- }),
+ ""indices"": [
- ..Default::default()
+ 0,
- })
+ 1,
- .await?;
+ 2,
-```
+ 3,
+ 4,
+ 5,
-```java
+ 6,
-import io.qdrant.client.QdrantClient;
+ 7,
-import io.qdrant.client.QdrantGrpcClient;
+ 8,
-import io.qdrant.client.grpc.Collections.CreateCollection;
+ 9
-import io.qdrant.client.grpc.Collections.Distance;
+ ],
-import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
+ ""values"": [
-import io.qdrant.client.grpc.Collections.QuantizationConfig;
+ 0.1,
-import io.qdrant.client.grpc.Collections.QuantizationType;
+ 0.2,
-import io.qdrant.client.grpc.Collections.ScalarQuantization;
+ 0.3,
-import io.qdrant.client.grpc.Collections.VectorParams;
+ 0.4,
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+ 0.5,
+ 0.6,
+ 0.7,
-QdrantClient client =
+ 0.8,
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ 0.9,
+ 1.0
+ ]
-client
+ }
- .createCollectionAsync(
+ },
- CreateCollection.newBuilder()
+ ""payload"": {
- .setCollectionName(""{collection_name}"")
+ ""name"": ""fluvio"",
- .setVectorsConfig(
+ ""description"": ""Solution for distributed stream processing"",
- VectorsConfig.newBuilder()
+ ""url"": ""https://www.fluvio.io/""
- .setParams(
+ }
- VectorParams.newBuilder()
+}
- .setSize(768)
+```
- .setDistance(Distance.Cosine)
- .build())
- .build())
+
- .setOptimizersConfig(
- OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build())
- .setQuantizationConfig(
+### Further Reading
- QuantizationConfig.newBuilder()
- .setScalar(
- ScalarQuantization.newBuilder()
+- [Fluvio Quickstart](https://www.fluvio.io/docs/fluvio/quickstart)
- .setType(QuantizationType.Int8)
+- [Fluvio Tutorials](https://www.fluvio.io/docs/fluvio/tutorials/)
- .setAlwaysRam(false)
+- [Connector Source](https://github.com/qdrant/qdrant-fluvio)
+",documentation/data-management/fluvio.md
+"---
- .build())
+title: Unstructured
- .build())
+aliases: [ ../frameworks/unstructured/ ]
- .build())
+---
- .get();
-```
+# Unstructured
-```csharp
-using Qdrant.Client;
+[Unstructured](https://unstructured.io/) is a library designed to help preprocess and structure unstructured text documents for downstream machine learning tasks.
-using Qdrant.Client.Grpc;
+Qdrant can be used as an ingestion destination in Unstructured.
-var client = new QdrantClient(""localhost"", 6334);
+## Setup
-await client.CreateCollectionAsync(
- collectionName: ""{collection_name}"",
- vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
+Install Unstructured with the `qdrant` extra.
- optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 },
- quantizationConfig: new QuantizationConfig
- {
+```bash
- Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = false }
+pip install ""unstructured[qdrant]""
- }
+```
-);
-```",documentation/guides/quantization.md
-"---
-title: Monitoring
+## Usage
-weight: 155
-aliases:
- - ../monitoring
----
+Depending on your use case, you can use the command-line interface or call Unstructured from within your application.
-# Monitoring
+### CLI
-Qdrant exposes its metrics in a Prometheus format, so you can integrate them easily
-with the compatible tools and monitor Qdrant with your own monitoring system. You can
+```bash
-use the `/metrics` endpoint and configure it as a scrape target.
+EMBEDDING_PROVIDER=${EMBEDDING_PROVIDER:-""langchain-huggingface""}
-Metrics endpoint:
+unstructured-ingest \
+ local \
+ --input-path example-docs/book-war-and-peace-1225p.txt \
-The integration with Qdrant is easy to
+ --output-dir local-output-to-qdrant \
-[configure](https://prometheus.io/docs/prometheus/latest/getting_started/#configure-prometheus-to-monitor-the-sample-targets)
+ --strategy fast \
-with Prometheus and Grafana.
+ --chunk-elements \
+ --embedding-provider ""$EMBEDDING_PROVIDER"" \
+ --num-processes 2 \
-## Exposed metric
+ --verbose \
+ qdrant \
+ --collection-name ""test"" \
-Each Qdrant server will expose the following metrics.
+ --url ""http://localhost:6333"" \
+ --batch-size 80
+```
-| Name | Type | Meaning |
-|-------------------------------------|---------|---------------------------------------------------|
-| app_info | counter | Information about Qdrant server |
+For a full list of the options the CLI accepts, run `unstructured-ingest qdrant --help`.
-| app_status_recovery_mode | counter | If Qdrant is currently started in recovery mode |
-| collections_total | gauge | Number of collections |
-| collections_vector_total | gauge | Total number of vectors in all collections |
+### Programmatic usage
-| collections_full_total | gauge | Number of full collections |
-| collections_aggregated_total | gauge | Number of aggregated collections |
-| rest_responses_total | counter | Total number of responses through REST API |
+```python
-| rest_responses_fail_total | counter | Total number of failed responses through REST API |
+from unstructured.ingest.connector.local import SimpleLocalConfig
-| rest_responses_avg_duration_seconds | gauge | Average response duration in REST API |
+from unstructured.ingest.connector.qdrant import (
-| rest_responses_min_duration_seconds | gauge | Minimum response duration in REST API |
+ QdrantWriteConfig,
-| rest_responses_max_duration_seconds | gauge | Maximum response duration in REST API |
+ SimpleQdrantConfig,
-| grpc_responses_total | counter | Total number of responses through gRPC API |
+)
-| grpc_responses_fail_total | counter | Total number of failed responses through REST API |
+from unstructured.ingest.interfaces import (
-| grpc_responses_avg_duration_seconds | gauge | Average response duration in gRPC API |
+ ChunkingConfig,
-| grpc_responses_min_duration_seconds | gauge | Minimum response duration in gRPC API |
+ EmbeddingConfig,
-| grpc_responses_max_duration_seconds | gauge | Maximum response duration in gRPC API |
+ PartitionConfig,
-| cluster_enabled | gauge | Whether the cluster support is enabled |
+ ProcessorConfig,
+ ReadConfig,
+)
-### Cluster related metrics
+from unstructured.ingest.runner import LocalRunner
+from unstructured.ingest.runner.writers.base_writer import Writer
+from unstructured.ingest.runner.writers.qdrant import QdrantWriter
-There are also some metrics which are exposed in distributed mode only.
+def get_writer() -> Writer:
-| Name | Type | Meaning |
+ return QdrantWriter(
-|----------------------------------|---------|------------------------------------------------------------------------|
+ connector_config=SimpleQdrantConfig(
-| cluster_peers_total | gauge | Total number of cluster peers |
+ url=""http://localhost:6333"",
-| cluster_term | counter | Current cluster term |
+ collection_name=""test"",
-| cluster_commit | counter | Index of last committed (finalized) operation cluster peer is aware of |
+ ),
-| cluster_pending_operations_total | gauge | Total number of pending operations for cluster peer |
+ write_config=QdrantWriteConfig(batch_size=80),
-| cluster_voter | gauge | Whether the cluster peer is a voter or learner |
+ )
-## Kubernetes health endpoints
+if __name__ == ""__main__"":
+ writer = get_writer()
+ runner = LocalRunner(
-*Available as of v1.5.0*
+ processor_config=ProcessorConfig(
+ verbose=True,
+ output_dir=""local-output-to-qdrant"",
-Qdrant exposes three endpoints, namely
+ num_processes=2,
-[`/healthz`](http://localhost:6333/healthz),
+ ),
-[`/livez`](http://localhost:6333/livez) and
+ connector_config=SimpleLocalConfig(
-[`/readyz`](http://localhost:6333/readyz), to indicate the current status of the
+ input_path=""example-docs/book-war-and-peace-1225p.txt"",
-Qdrant server.
+ ),
+ read_config=ReadConfig(),
+ partition_config=PartitionConfig(),
-These currently provide the most basic status response, returning HTTP 200 if
+ chunking_config=ChunkingConfig(chunk_elements=True),
-Qdrant is started and ready to be used.
+ embedding_config=EmbeddingConfig(provider=""langchain-huggingface""),
+ writer=writer,
+ writer_kwargs={},
-Regardless of whether an [API key](../security#authentication) is configured,
+ )
-the endpoints are always accessible.
+ runner.run()
+```
-You can read more about Kubernetes health endpoints
-[here](https://kubernetes.io/docs/reference/using-api/health-checks/).
-",documentation/guides/monitoring.md
-"---
+## Next steps
-title: Guides
-weight: 22
-# If the index.md file is empty, the link to the section will be hidden from the sidebar
+- Unstructured API [reference](https://unstructured-io.github.io/unstructured/api.html).
-is_empty: true
+- Qdrant ingestion destination [reference](https://unstructured-io.github.io/unstructured/ingest/destination_connectors/qdrant.html).
----",documentation/guides/_index.md
+- [Source Code](https://github.com/Unstructured-IO/unstructured/blob/main/unstructured/ingest/connector/qdrant.py)
+",documentation/data-management/unstructured.md
"---
-title: Security
-
-weight: 165
-
-aliases:
+title: Data Management
- - ../security
+weight: 15
---
-# Security
-
-
+## Data Management Integrations
+| Integration | Description |
+| ------------------------------- | -------------------------------------------------------------------------------------------------- |
-Please read this page carefully. Although there are various ways to secure your Qdrant instances, **they are unsecured by default**.
+| [Airbyte](./airbyte/) | Data integration platform specialising in ELT pipelines. |
-You need to enable security measures before production use. Otherwise, they are completely open to anyone
+| [Airflow](./airflow/) | Platform designed for developing, scheduling, and monitoring batch-oriented workflows. |
+| [Connect](./redpanda/) | Declarative data-agnostic streaming service for efficient, stateless processing. |
+| [Confluent](./confluent/) | Fully-managed data streaming platform with a cloud-native Apache Kafka engine. |
-## Authentication
+| [DLT](./dlt/) | Python library to simplify data loading processes between several sources and destinations. |
+| [Fluvio](./fluvio/) | Rust-based platform for high speed, real-time data processing. |
+| [Fondant](./fondant/) | Framework for developing datasets, sharing reusable operations and data processing trees. |
-*Available as of v1.2.0*
+| [MindsDB](./mindsdb/) | Platform to deploy, serve, and fine-tune models with numerous data source integrations. |
+| [NiFi](./nifi/) | Data ingestion platform to manage data transfer between different sources and destination systems. |
+| [Spark](./spark/) | A unified analytics engine for large-scale data processing. |
-Qdrant supports a simple form of client authentication using a static API key.
+| [Unstructured](./unstructured/) | Python library with components for ingesting and pre-processing data from numerous sources. |
+",documentation/data-management/_index.md
+"---
-This can be used to secure your instance.
+title: Fondant
+aliases: [ ../integrations/fondant/, ../frameworks/fondant/ ]
+---
-To enable API key based authentication in your own Qdrant instance you must
-specify a key in the configuration:
+# Fondant
-```yaml
-service:
+[Fondant](https://fondant.ai/en/stable/) is an open-source framework that aims to simplify and speed
- # Set an api-key.
+up large-scale data processing by making containerized components reusable across pipelines and
- # If set, all requests must include a header with the api-key.
+execution environments. Benefit from built-in features such as autoscaling, data lineage, and
- # example header: `api-key: `
+pipeline caching, and deploy to (managed) platforms such as Vertex AI, Sagemaker, and Kubeflow
- #
+Pipelines.
- # If you enable this you should also enable TLS.
- # (Either above or via an external service like nginx.)
- # Sending an api-key over an unencrypted channel is insecure.
+Fondant comes with a library of reusable components that you can leverage to compose your own
- api_key: your_secret_api_key_here
+pipeline, including a Qdrant component for writing embeddings to Qdrant.
-```
+## Usage
-Or alternatively, you can use the environment variable:
+
-```
+**A data load pipeline for RAG using Qdrant**.
-
+A simple ingestion pipeline could look like the following:
-For using API key based authentication in Qdrant cloud see the cloud
-[Authentication](https://qdrant.tech/documentation/cloud/authentication)
-section.
+```python
+import pyarrow as pa
+from fondant.pipeline import Pipeline
-The API key then needs to be present in all REST or gRPC requests to your instance.
-All official Qdrant clients for Python, Go, Rust, .NET and Java support the API key parameter.
+indexing_pipeline = Pipeline(
+ name=""ingestion-pipeline"",
-
+)
-```bash
+# A custom implementation of a read component.
-curl \
+text = indexing_pipeline.read(
- -X GET https://localhost:6333 \
+ ""path/to/data-source-component"",
- --header 'api-key: your_secret_api_key_here'
+ arguments={
-```
+ # your custom arguments
+ }
+)
-```python
-from qdrant_client import QdrantClient
+chunks = text.apply(
+ ""chunk_text"",
-client = QdrantClient(
+ arguments={
- url=""https://localhost"",
+ ""chunk_size"": 512,
- port=6333,
+ ""chunk_overlap"": 32,
- api_key=""your_secret_api_key_here"",
+ },
)
-```
-
-
-
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+embeddings = chunks.apply(
+ ""embed_text"",
-const client = new QdrantClient({
+ arguments={
- url: ""http://localhost"",
+ ""model_provider"": ""huggingface"",
- port: 6333,
+ ""model"": ""all-MiniLM-L6-v2"",
- apiKey: ""your_secret_api_key_here"",
+ },
-});
+)
-```
+embeddings.write(
-```rust
+ ""index_qdrant"",
-use qdrant_client::client::QdrantClient;
+ arguments={
+ ""url"": ""http:localhost:6333"",
+ ""collection_name"": ""some-collection-name"",
-let client = QdrantClient::from_url(""https://xyz-example.eu-central.aws.cloud.qdrant.io:6334"")
+ },
- .with_api_key("""")
+ cache=False,
- .build()?;
+)
```
-```java
+Once you have a pipeline, you can easily run it using the built-in CLI. Fondant allows
-import io.qdrant.client.QdrantClient;
+you to run the pipeline in production across different clouds.
-import io.qdrant.client.QdrantGrpcClient;
+The first component is a custom read module that needs to be implemented and cannot be used off the
-QdrantClient client =
+shelf. A detailed tutorial on how to rebuild this
- new QdrantClient(
+pipeline [is provided on GitHub](https://github.com/ml6team/fondant-usecase-RAG/tree/main).
- QdrantGrpcClient.newBuilder(
- ""xyz-example.eu-central.aws.cloud.qdrant.io"",
- 6334,
+## Next steps
- true)
- .withApiKey("""")
- .build());
+More information about creating your own pipelines and components can be found in the [Fondant
-```
+documentation](https://fondant.ai/en/stable/).
+",documentation/data-management/fondant.md
+"---
+title: Working with ColBERT
+weight: 6
-```csharp
+---
-using Qdrant.Client;
+# How to Generate ColBERT Multivectors with FastEmbed
-var client = new QdrantClient(
- host: ""xyz-example.eu-central.aws.cloud.qdrant.io"",
- https: true,
+With FastEmbed, you can use ColBERT to generate multivector embeddings. ColBERT is a powerful retrieval model that combines the strength of BERT embeddings with efficient late interaction techniques. FastEmbed will provide you with an optimized pipeline to utilize these embeddings in your search tasks.
- apiKey: """"
-);
-```
+Please note that ColBERT requires more resources than no-interaction models. We recommend using ColBERT as a re-ranker rather than a first-stage retriever.
-
+The first-stage retriever, typically a simpler model, can fetch 100-500 candidates. You can then re-rank those candidates with ColBERT.
-### Read-only API key
+## Setup
-*Available as of v1.7.0*
+This import gives you access to the late interaction text embedding models.
-In addition to the regular API key, Qdrant also supports a read-only API key.
+```python
-This key can be used to access read-only operations on the instance.
+from fastembed import LateInteractionTextEmbedding
+```
+You can list which models are supported in your version of FastEmbed.
-```yaml
-service:
- read_only_api_key: your_secret_read_only_api_key_here
+```python
-```
+LateInteractionTextEmbedding.list_supported_models()
+```
+This command displays the available models. The output shows details about the ColBERT model, including its dimensions, description, size, sources, and model file.
-Or with the environment variable:
+```python
-```bash
+[{'model': 'colbert-ir/colbertv2.0',
-export QDRANT__SERVICE__READ_ONLY_API_KEY=your_secret_read_only_api_key_here
+ 'dim': 128,
-```
+ 'description': 'Late interaction model',
+ 'size_in_GB': 0.44,
+ 'sources': {'hf': 'colbert-ir/colbertv2.0'},
-Both API keys can be used simultaneously.
+ 'model_file': 'model.onnx'}]
+```
+Now, load the model.
-## TLS
+```python
+embedding_model = LateInteractionTextEmbedding(""colbert-ir/colbertv2.0"")
+```
-*Available as of v1.2.0*
+The model files will be fetched and downloaded, with progress showing.
-TLS for encrypted connections can be enabled on your Qdrant instance to secure
+## Embed data
-connections.
+First, you need to define both documents and queries.
-
+```python
-First make sure you have a certificate and private key for TLS, usually in
+documents = [
-`.pem` format. On your local machine you may use
+ ""ColBERT is a late interaction text embedding model, however, there are also other models such as TwinBERT."",
-[mkcert](https://github.com/FiloSottile/mkcert#readme) to generate a self signed
+ ""On the contrary to the late interaction models, the early interaction models contains interaction steps at embedding generation process"",
-certificate.
+]
+queries = [
+ ""Are there any other late interaction text embedding models except ColBERT?"",
-To enable TLS, set the following properties in the Qdrant configuration with the
+ ""What is the difference between late interaction and early interaction text embedding models?"",
-correct paths and restart:
+]
+```
-```yaml
-service:
+**Note:** ColBERT computes document and query embeddings differently. Make sure to use the corresponding methods.
- # Enable HTTPS for the REST and gRPC API
- enable_tls: true
+Now, create embeddings from both documents and queries.
-# TLS configuration.
-# Required if either service.enable_tls or cluster.p2p.enable_tls is true.
+```python
-tls:
+document_embeddings = list(
- # Server certificate chain file
+ embedding_model.embed(documents)
- cert: ./tls/cert.pem
+) # embed and query_embed return generators,
+# which we need to evaluate by writing them to a list
+query_embeddings = list(embedding_model.query_embed(queries))
- # Server private key file
- key: ./tls/key.pem
```
+Display the shapes of document and query embeddings.
-For internal communication when running cluster mode, TLS can be enabled with:
+```python
+document_embeddings[0].shape, query_embeddings[0].shape
-```yaml
+```
-cluster:
- # Configuration of the inter-cluster communication
- p2p:
+You should get something like this:
- # Use TLS for communication between peers
- enable_tls: true
+
+```python
+
+((26, 128), (32, 128))
```
-With TLS enabled, you must start using HTTPS connections. For example:
+Don't worry about the query embeddings having the bigger shape in this case. ColBERT's authors recommend padding queries with [MASK] tokens up to 32 tokens. They also recommend truncating longer queries to 32 tokens; FastEmbed does not truncate, so longer queries are passed through unchanged.
-```bash
+## Compute similarity
-curl -X GET https://localhost:6333
-```
+
+This function calculates the relevance scores using the MaxSim operator, sorts the documents based on these scores, and returns the indices of the top-k documents.
```python
-from qdrant_client import QdrantClient
+import numpy as np
-client = QdrantClient(
- url=""https://localhost"",
- port=6333,
+def compute_relevance_scores(query_embedding: np.array, document_embeddings: np.array, k: int):
-)
+ """"""
-```
+ Compute relevance scores for top-k documents given a query.
-```typescript
+ :param query_embedding: Numpy array representing the query embedding, shape: [num_query_terms, embedding_dim]
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+ :param document_embeddings: Numpy array representing embeddings for documents, shape: [num_documents, max_doc_length, embedding_dim]
+ :param k: Number of top documents to return
+ :return: Indices of the top-k documents based on their relevance scores
-const client = new QdrantClient({ url: ""https://localhost"", port: 6333 });
+ """"""
-```
+ # Compute batch dot-product of query_embedding and document_embeddings
+ # Resulting shape: [num_documents, num_query_terms, max_doc_length]
+ scores = np.matmul(query_embedding, document_embeddings.transpose(0, 2, 1))
-```rust
-use qdrant_client::client::QdrantClient;
+ # Apply max-pooling across document terms (axis=2) to find the max similarity per query term
+ # Shape after max-pool: [num_documents, num_query_terms]
-let client = QdrantClient::from_url(""https://localhost:6334"").build()?;
+ max_scores_per_query_term = np.max(scores, axis=2)
-```
+ # Sum the scores across query terms to get the total score for each document
-Certificate rotation is enabled with a default refresh time of one hour. This
+ # Shape after sum: [num_documents]
-reloads certificate files every hour while Qdrant is running. This way changed
+ total_scores = np.sum(max_scores_per_query_term, axis=1)
-certificates are picked up when they get updated externally. The refresh time
-can be tuned by changing the `tls.cert_ttl` setting. You can leave this on, even
-if you don't plan to update your certificates. Currently this is only supported
+ # Sort the documents based on their total scores and get the indices of the top-k documents
-for the REST API.
+ sorted_indices = np.argsort(total_scores)[::-1][:k]
-Optionally, you can enable client certificate validation on the server against a
+ return sorted_indices
-local certificate authority. Set the following properties and restart:
+```
+Calculate sorted indices.
-```yaml
-service:
+```python
- # Check user HTTPS client certificate against CA file specified in tls config
+sorted_indices = compute_relevance_scores(
- verify_https_client_certificate: false
+ np.array(query_embeddings[0]), np.array(document_embeddings), k=3
+)
+print(""Sorted document indices:"", sorted_indices)
-# TLS configuration.
+```
-# Required if either service.enable_tls or cluster.p2p.enable_tls is true.
+The output shows the sorted document indices based on the relevance to the query.
-tls:
- # Certificate authority certificate file.
- # This certificate will be used to validate the certificates
+```python
- # presented by other nodes during inter-cluster communication.
+Sorted document indices: [0 1]
- #
+```
- # If verify_https_client_certificate is true, it will verify
- # HTTPS client certificate
- #
+## Show results
- # Required if cluster.p2p.enable_tls is true.
- ca_cert: ./tls/cacert.pem
-```
-",documentation/guides/security.md
-"---
+```python
-title: Quickstart
+print(f""Query: {queries[0]}"")
-weight: 10
+for index in sorted_indices:
-aliases:
+ print(f""Document: {documents[index]}"")
- - ../cloud-quick-start
+```
- - cloud-quick-start
----
+The query and corresponding sorted documents are displayed, showing the relevance of each document to the query.
-# Quickstart
-This page shows you how to use the Qdrant Cloud Console to create a free tier cluster and then connect to it with Qdrant Client.
+```bash
+Query: Are there any other late interaction text embedding models except ColBERT?
+Document: ColBERT is a late interaction text embedding model, however, there are also other models such as TwinBERT.
-## Step 1: Create a Free Tier cluster
+Document: On the contrary to the late interaction models, the early interaction models contains interaction steps at embedding generation process
+```
+",documentation/fastembed/fastembed-colbert.md
+"---
+title: Working with SPLADE
-1. Start in the **Overview** section of the [Cloud Dashboard](https://cloud.qdrant.io).
+weight: 5
-2. Under **Set a Cluster Up** enter a **Cluster name**.
+---
-3. Click **Create Free Tier** and then **Continue**.
-4. Under **Get an API Key**, select the cluster and click **Get API Key**.
-5. Save the API key, as you won't be able to request it again. Click **Continue**.
+# How to Generate Sparse Vectors with SPLADE
-6. Save the code snippet provided to access your cluster. Click **Complete** to finish setup.
+SPLADE is a novel method for learning sparse text representation vectors, outperforming BM25 in tasks like information retrieval and document classification. Its main advantage is generating efficient and interpretable sparse vectors, making it effective for large-scale text data.
-![Embeddings](/docs/cloud/quickstart-cloud.png)
+## Setup
-## Step 2: Test cluster access
+First, install FastEmbed.
-After creation, you will receive a code snippet to access your cluster. Your generated request should look very similar to this one:
+```python
-```bash
+pip install -q fastembed
-curl \
+```
- -X GET 'https://xyz-example.eu-central.aws.cloud.qdrant.io:6333' \
- --header 'api-key: '
-```
+Next, import the required modules for sparse embeddings and Python’s typing module.
-Open Terminal and run the request. You should get a response that looks like this:
+```python
-```bash
+from fastembed import SparseTextEmbedding, SparseEmbedding
-{""title"":""qdrant - vector search engine"",""version"":""1.4.1""}
+from typing import List
```
-> **Note:** The API key needs to be present in the request header every time you make a request via Rest or gRPC interface.
+You may always check the list of all supported sparse embedding models.
-## Step 3: Authenticate via SDK
+```python
-Now that you have created your first cluster and key, you might want to access Qdrant Cloud from within your application.
-
-Our official Qdrant clients for Python, TypeScript, Go, Rust, and .NET all support the API key parameter.
+SparseTextEmbedding.list_supported_models()
+```
+This will return a list of models, each with its details such as model name, vocabulary size, description, and sources.
-```python
-from qdrant_client import QdrantClient
+```python
+[{'model': 'prithivida/Splade_PP_en_v1',
-qdrant_client = QdrantClient(
+ 'vocab_size': 30522,
- ""xyz-example.eu-central.aws.cloud.qdrant.io"",
+ 'description': 'Independent Implementation of SPLADE++ Model for English',
- api_key="""",
+ 'size_in_GB': 0.532,
-)
+ 'sources': {'hf': 'Qdrant/SPLADE_PP_en_v1'}}]
```
-```typescript
-
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+Now, load the model.
-const client = new QdrantClient({
+```python
- host: ""xyz-example.eu-central.aws.cloud.qdrant.io"",
+model_name = ""prithvida/Splade_PP_en_v1""
- apiKey: """",
+# This triggers the model download
-});
+model = SparseTextEmbedding(model_name=model_name)
```
+## Embed data
-```rust
-use qdrant_client::client::QdrantClient;
+You need to define a list of documents to be embedded.
+```python
+documents: List[str] = [
-let client = QdrantClient::from_url(""xyz-example.eu-central.aws.cloud.qdrant.io:6334"")
+ ""Chandrayaan-3 is India's third lunar mission"",
- .with_api_key("""")
+ ""It aimed to land a rover on the Moon's surface - joining the US, China and Russia"",
- .build()
+ ""The mission is a follow-up to Chandrayaan-2, which had partial success"",
- .unwrap();
+ ""Chandrayaan-3 will be launched by the Indian Space Research Organisation (ISRO)"",
-```
+ ""The estimated cost of the mission is around $35 million"",
+ ""It will carry instruments to study the lunar surface and atmosphere"",
+ ""Chandrayaan-3 landed on the Moon's surface on 23rd August 2023"",
-```java
+ ""It consists of a lander named Vikram and a rover named Pragyan similar to Chandrayaan-2. Its propulsion module would act like an orbiter."",
-import io.qdrant.client.QdrantClient;
+ ""The propulsion module carries the lander and rover configuration until the spacecraft is in a 100-kilometre (62 mi) lunar orbit"",
-import io.qdrant.client.QdrantGrpcClient;
+ ""The mission used GSLV Mk III rocket for its launch"",
+ ""Chandrayaan-3 was launched from the Satish Dhawan Space Centre in Sriharikota"",
+ ""Chandrayaan-3 was launched earlier in the year 2023"",
-QdrantClient client =
+]
- new QdrantClient(
+```
- QdrantGrpcClient.newBuilder(
+Then, generate sparse embeddings for each document.
- ""xyz-example.eu-central.aws.cloud.qdrant.io"",
+Here, `batch_size` is optional and helps to process documents in batches.
- 6334,
- true)
- .withApiKey("""")
+```python
- .build());
+sparse_embeddings_list: List[SparseEmbedding] = list(
-```
+ model.embed(documents, batch_size=6)
+)
+```
-```csharp
+## Retrieve embeddings
-using Qdrant.Client;
+`sparse_embeddings_list` contains sparse embeddings for the documents provided earlier. Each element in this list is a `SparseEmbedding` object that contains the sparse vector representation of a document.
-var client = new QdrantClient(
- host: ""xyz-example.eu-central.aws.cloud.qdrant.io"",
- https: true,
+```python
- apiKey: """"
+index = 0
-);
+sparse_embeddings_list[index]
```
-",documentation/cloud/quickstart-cloud.md
-"---
-title: Authentication
-weight: 30
----
+This output is a `SparseEmbedding` object for the first document in our list. It contains two arrays: `values` and `indices`.
+
+- The `values` array represents the weights of the features (tokens) in the document.
+- The `indices` array represents the indices of these features in the model's vocabulary.
-# Authentication
+Each pair of corresponding `values` and `indices` represents a token and its weight in the document.
-This page shows you how to use the Qdrant Cloud Console to create a custom API key for a cluster. You will learn how to connect to your cluster using the new API key.
+```python
+SparseEmbedding(values=array([0.05297208, 0.01963477, 0.36459631, 1.38508618, 0.71776593,
+ 0.12667948, 0.46230844, 0.446771 , 0.26897505, 1.01519883,
-## Create API keys
+ 1.5655334 , 0.29412213, 1.53102326, 0.59785569, 1.1001817 ,
+ 0.02079751, 0.09955651, 0.44249091, 0.09747757, 1.53519952,
+ 1.36765671, 0.15740395, 0.49882549, 0.38629025, 0.76612782,
-The API key is only shown once after creation. If you lose it, you will need to create a new one.
+ 1.25805044, 0.39058095, 0.27236196, 0.45152301, 0.48262018,
-However, we recommend rotating the keys from time to time. To create additional API keys do the following.
+ 0.26085234, 1.35912788, 0.70710695, 1.71639752]), indices=array([ 1010, 1011, 1016, 1017, 2001, 2018, 2034, 2093, 2117,
+ 2319, 2353, 2509, 2634, 2686, 2796, 2817, 2922, 2959,
+ 3003, 3148, 3260, 3390, 3462, 3523, 3822, 4231, 4316,
-1. Go to the [Cloud Dashboard](https://qdrant.to/cloud).
+ 4774, 5590, 5871, 6416, 11926, 12076, 16469]))
-2. Select **Access Management** to display available API keys.
+```
-3. Click **Create** and choose a cluster name from the dropdown menu.
-> **Note:** You can create a key that provides access to multiple clusters. Select desired clusters in the dropdown box.
-4. Click **OK** and retrieve your API key.
+## Examine weights
-## Authenticate via SDK
+Now, print the first 5 features and their weights for better understanding.
-Now that you have created your first cluster and key, you might want to access Qdrant Cloud from within your application.
+```python
-Our official Qdrant clients for Python, TypeScript, Go, Rust, .NET and Java all support the API key parameter.
+for i in range(5):
+ print(f""Token at index {sparse_embeddings_list[0].indices[i]} has weight {sparse_embeddings_list[0].values[i]}"")
+```
-```bash
+The output will display the token indices and their corresponding weights for the first document.
-curl \
- -X GET https://xyz-example.eu-central.aws.cloud.qdrant.io:6333 \
- --header 'api-key: '
+```python
+Token at index 1010 has weight 0.05297207832336426
+Token at index 1011 has weight 0.01963476650416851
-# Alternatively, you can use the `Authorization` header with the `Bearer` prefix
+Token at index 1016 has weight 0.36459630727767944
-curl \
+Token at index 1017 has weight 1.385086178779602
- -X GET https://xyz-example.eu-central.aws.cloud.qdrant.io:6333 \
-
- --header 'Authorization: Bearer '
+Token at index 2001 has weight 0.7177659273147583
```
+## Analyze results
-```python
-from qdrant_client import QdrantClient
+Let's use the tokenizer vocab to make sense of these indices.
-qdrant_client = QdrantClient(
+```python
- ""xyz-example.eu-central.aws.cloud.qdrant.io"",
+import json
+
+from tokenizers import Tokenizer
- api_key="""",
-)
+
+tokenizer = Tokenizer.from_pretrained(SparseTextEmbedding.list_supported_models()[0][""sources""][""hf""])
```
-```typescript
+The `get_tokens_and_weights` function takes a `SparseEmbedding` object and a `tokenizer` as input. It will construct a dictionary where the keys are the decoded tokens, and the values are their corresponding weights.
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+```python
-const client = new QdrantClient({
+def get_tokens_and_weights(sparse_embedding, tokenizer):
- host: ""xyz-example.eu-central.aws.cloud.qdrant.io"",
+ token_weight_dict = {}
- apiKey: """",
+ for i in range(len(sparse_embedding.indices)):
-});
+ token = tokenizer.decode([sparse_embedding.indices[i]])
-```
+ weight = sparse_embedding.values[i]
+ token_weight_dict[token] = weight
-```rust
-use qdrant_client::client::QdrantClient;
+ # Sort the dictionary by weights
+ token_weight_dict = dict(sorted(token_weight_dict.items(), key=lambda item: item[1], reverse=True))
+ return token_weight_dict
-let client = QdrantClient::from_url(""xyz-example.eu-central.aws.cloud.qdrant.io:6334"")
- .with_api_key("""")
- .build()
+# Test the function with the first SparseEmbedding
- .unwrap();
+print(json.dumps(get_tokens_and_weights(sparse_embeddings_list[index], tokenizer), indent=4))
```
+## Dictionary output
-```java
-import io.qdrant.client.QdrantClient;
+The dictionary is then sorted by weights in descending order.
-import io.qdrant.client.QdrantGrpcClient;
+```python
+{
+ ""chandra"": 1.7163975238800049,
-QdrantClient client =
+ ""third"": 1.5655333995819092,
- new QdrantClient(
+ ""##ya"": 1.535199522972107,
- QdrantGrpcClient.newBuilder(
+ ""india"": 1.5310232639312744,
- ""xyz-example.eu-central.aws.cloud.qdrant.io"",
+ ""3"": 1.385086178779602,
- 6334,
+ ""mission"": 1.3676567077636719,
- true)
+ ""lunar"": 1.3591278791427612,
- .withApiKey("""")
+ ""moon"": 1.2580504417419434,
- .build());
+ ""indian"": 1.1001816987991333,
-```
+ ""##an"": 1.015198826789856,
+ ""3rd"": 0.7661278247833252,
+ ""was"": 0.7177659273147583,
-```csharp
+ ""spacecraft"": 0.7071069478988647,
-using Qdrant.Client;
+ ""space"": 0.5978556871414185,
+ ""flight"": 0.4988254904747009,
+ ""satellite"": 0.4826201796531677,
-var client = new QdrantClient(
+ ""first"": 0.46230843663215637,
- host: ""xyz-example.eu-central.aws.cloud.qdrant.io"",
+ ""expedition"": 0.4515230059623718,
- https: true,
+ ""three"": 0.4467709958553314,
- apiKey: """"
+ ""fourth"": 0.44249090552330017,
-);
+ ""vehicle"": 0.390580952167511,
-```
-",documentation/cloud/authentication.md
-"---
+ ""iii"": 0.3862902522087097,
-title: AWS Marketplace
+ ""2"": 0.36459630727767944,
-weight: 60
+ ""##3"": 0.2941221296787262,
----
+ ""planet"": 0.27236196398735046,
+ ""second"": 0.26897504925727844,
+ ""missions"": 0.2608523368835449,
-# Qdrant Cloud on AWS Marketplace
+ ""launched"": 0.15740394592285156,
+ ""had"": 0.12667948007583618,
+ ""largest"": 0.09955651313066483,
-## Overview
+ ""leader"": 0.09747757017612457,
+ "","": 0.05297207832336426,
+ ""study"": 0.02079751156270504,
-Our [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-rtphb42tydtzg) listing streamlines access to Qdrant for users who rely on Amazon Web Services for hosting and application development. Please note that, while Qdrant's clusters run on AWS, you will still use the Qdrant Cloud infrastructure.
+ ""-"": 0.01963476650416851
+}
+```
-## Billing
+## Observations
-You don't need to use a credit card to sign up for Qdrant Cloud. Instead, all billing is processed through the AWS Marketplace and the usage of Qdrant is added to your existing billing for AWS services. It is common for AWS to abstract usage based pricing in the AWS marketplace, as there are too many factors to model when calculating billing from the AWS side.
+- The relative order of importance is quite useful. The most important tokens in the sentence have the highest weights.
-![pricing](/docs/cloud/pricing.png)
+- **Term Expansion:** The model can expand the terms in the document. This means it can generate weights for tokens that are not present in the document but are related to the tokens that are. This is a powerful feature that allows the model to capture the context of the document. Here, you'll see that the model has added the tokens '3' from 'third' and 'moon' from 'lunar' to the sparse vector. A rough check of this is sketched below.
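+
+As a rough illustration of term expansion, you can list the weighted tokens that never appear verbatim in the first document. This sketch reuses the `documents`, `tokenizer`, `sparse_embeddings_list`, and `get_tokens_and_weights` names defined above, and assumes the tokenizer's `encode(...).tokens` output uses the same subword form as the decoded vocabulary entries:
+
+```python
+# Which weighted tokens are not literal subword tokens of the first document?
+# Those are the terms the model expanded into the sparse vector.
+doc_tokens = set(tokenizer.encode(documents[0]).tokens)
+expanded_terms = {
+    token: weight
+    for token, weight in get_tokens_and_weights(sparse_embeddings_list[0], tokenizer).items()
+    if token not in doc_tokens
+}
+print(expanded_terms)  # expect entries such as 'moon' that are not in the text itself
+```
+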
-The payment is carried out via your AWS Account. To get a clearer idea for the pricing structure, please use our [Billing Calculator](https://cloud.qdrant.io/calculator).
+## Design choices
-## How to subscribe
+- The weights are not normalized. This means that the sum of the weights is not 1 or 100. This is a common practice in sparse embeddings, as it allows the model to capture the importance of each token in the document.
+- Tokens are included in the sparse vector only if they are present in the model's vocabulary. This means that the model will not generate a weight for tokens that it has not seen during training.
+- Tokens do not map to words directly, which allows you to gracefully handle typos and out-of-vocabulary tokens.",documentation/fastembed/fastembed-splade.md
+"---
-1. Go to [Qdrant's AWS Marketplace listing](https://aws.amazon.com/marketplace/pp/prodview-rtphb42tydtzg).
+title: ""FastEmbed & Qdrant""
-2. Click the bright orange button - **View purchase options**.
+weight: 3
-3. On the next screen, under Purchase, click **Subscribe**.
+---
-4. Up top, on the green banner, click **Set up your account**.
-![setup](/docs/cloud/setup.png)
+# Using FastEmbed with Qdrant for Vector Search
-You will be transferred outside of AWS to [Qdrant Cloud](https://qdrant.to/cloud) via your unique AWS Offer ID.
+## Install Qdrant Client
+```bash
-The Billing Details screen will open in Qdrant Cloud Console. Stay in this console if you want to create your first Qdrant Cluster hosted on AWS.
+pip install qdrant-client
+```
-> **Note:** You do not have to return to the AWS Control Panel. All Qdrant infrastructure is provisioned from the Qdrant Cloud Console.
+## Install FastEmbed
+Installing FastEmbed will let you quickly turn data into vectors, so that Qdrant can search over them.
-## Next steps
+```bash
+pip install fastembed
+```
-Now that you have signed up via AWS Marketplace, please read our instructions to get started:
+## Initialize the client
-1. Learn more about [cluster creation and basic config](../../cloud/create-cluster/) in Qdrant Cloud.
+Qdrant Client has a simple in-memory mode that lets you try semantic search locally.
+```python
+from qdrant_client import QdrantClient
-2. Learn how to [authenticate and access your cluster](../../cloud/authentication/).
+client = QdrantClient("":memory:"") # Qdrant is running from RAM.
-3. Additional open source [documentation](../../troubleshooting/).
-",documentation/cloud/aws-marketplace.md
-"---
+```
-title: Create a cluster
-weight: 20
----
+## Add data
+Now you can add two sample documents, their associated metadata, and a point `id` for each.
-# Create a cluster
+```python
+docs = [""Qdrant has a LangChain integration for chatbots."", ""Qdrant has a LlamaIndex integration for agents.""]
-This page shows you how to use the Qdrant Cloud Console to create a custom Qdrant Cloud cluster.
+metadata = [
+ {""source"": ""langchain-docs""},
+ {""source"": ""llamaindex-docs""},
-> **Prerequisite:** Please make sure you have provided billing information before creating a custom cluster.
+]
+ids = [42, 2]
+```
-1. Start in the **Clusters** section of the [Cloud Dashboard](https://cloud.qdrant.io).
+## Load data to a collection
-2. Select **Clusters** and then click **+ Create**.
+Create a test collection and upsert your two documents to it.
-3. A window will open. Enter a cluster **Name**.
+```python
-4. Currently, you can deploy to AWS, GCP, or Azure.
+client.add(
-5. Choose your data center region. If you have latency concerns or other topology-related requirements, [**let us know**](mailto:cloud@qdrant.io).
+ collection_name=""test_collection"",
-6. Configure RAM size for each node (1GB to 64GB).
+ documents=docs,
-> Please read [**Capacity and Sizing**](../../cloud/capacity-sizing/) to make the right choice. If you need more capacity per node, [**let us know**](mailto:cloud@qdrant.io).
+ metadata=metadata,
-7. Choose the number of CPUs per node (0.5 core to 16 cores). The max/min number of CPUs is coupled to the chosen RAM size.
+ ids=ids
-8. Select the number of nodes you want the cluster to be deployed on.
+)
-> Each node is automatically attached with a disk space offering enough space for your data if you decide to put the metadata or even the index on the disk storage.
+```
-9. Click **Create** and wait for your cluster to be provisioned.
+## Run vector search
-Your cluster will be reachable on port 443 and 6333 (Rest) and 6334 (gRPC).
+Here, you will ask a dummy question that will allow you to retrieve a semantically relevant result.
-![Embeddings](/docs/cloud/create-cluster.png)
+```python
+search_result = client.query(
+ collection_name=""test_collection"",
-## Next steps
+ query_text=""Which integration is best for agents?""
+
+)
+
+print(search_result)
+
+```
+
+The semantic search engine will return the most similar results in order of relevance. In this case, the second statement about LlamaIndex is more relevant.
-You will need to connect to your new Qdrant Cloud cluster. Follow [**Authentication**](../../cloud/authentication/) to create one or more API keys.
+```bash
+
+[QueryResponse(id=2, embedding=None, sparse_embedding=None,
+metadata={'document': 'Qdrant has a LlamaIndex integration for agents',
+'source': 'llamaindex-docs'}, document='Qdrant has a LlamaIndex integration for agents.',
+score=0.8749180370667156),
+QueryResponse(id=42, embedding=None, sparse_embedding=None,
+metadata={'document': 'Qdrant has a LangChain integration for chatbots.',
-Your new cluster is highly available and responsive to your application requirements and resource load. Read more in [**Cluster Scaling**](../../cloud/cluster-scaling/).
+'source': 'langchain-docs'}, document='Qdrant has a LangChain integration for chatbots.',
+score=0.8351846822959111)]
-",documentation/cloud/create-cluster.md
+```",documentation/fastembed/fastembed-semantic-search.md
"---
-title: Backups
+title: ""Quickstart""
-weight: 70
+weight: 2
---
-# Cloud Backups
+# How to Generate Text Embeddings with FastEmbed
-Qdrant organizes cloud instances as clusters. On occasion, you may need to
+## Install FastEmbed
-restore your cluster because of application or system failure.
+```bash
+pip install fastembed
+```
-You may already have a source of truth for your data in a regular database. If you
+Just for demo purposes, you will use Lists and NumPy to work with sample data.
-have a problem, you could reindex the data into your Qdrant vector search cluster.
+```python
-However, this process can take time. For high availability critical projects we
+from typing import List
-recommend replication. It guarantees the proper cluster functionality as long as
+import numpy as np
-at least one replica is running.
+```
-For other use-cases such as disaster recovery, you can set up automatic or
+## Load default model
-self-service backups.
+In this example, you will use the default text embedding model, `BAAI/bge-small-en-v1.5`.
-## Prerequisites
+```python
+from fastembed import TextEmbedding
+```
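+
+Optionally, you can also list the other dense models FastEmbed ships with before settling on the default. This is only a sketch; the `model` and `dim` fields are assumed to follow FastEmbed's model description format:
+
+```python
+from fastembed import TextEmbedding
+
+# Print the name and output dimension of a few supported models.
+for model_info in TextEmbedding.list_supported_models()[:3]:
+    print(model_info[""model""], model_info[""dim""])
+```
+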
-You can back up your Qdrant clusters though the Qdrant Cloud
-Dashboard at https://cloud.qdrant.io. This section assumes that you've already
-set up your cluster, as described in the following sections:
+## Add sample data
-- [Create a cluster](/documentation/cloud/create-cluster/)
+Now, add two sample documents. Your documents must be in a list, and each document must be a string.
-- Set up [Authentication](/documentation/cloud/authentication/)
+```python
-- Configure one or more [Collections](/documentation/concepts/collections/)
+documents: List[str] = [
+ ""FastEmbed is lighter than Transformers & Sentence-Transformers."",
+ ""FastEmbed is supported by and maintained by Qdrant."",
-## Automatic backups
+]
+```
+Download and initialize the model. Print a message to verify the process.
-You can set up automatic backups of your clusters with our Cloud UI. With the
-procedures listed in this page, you can set up
-snapshots on a daily/weekly/monthly basis. You can keep as many snapshots as you
+```python
-need. You can restore a cluster from the snapshot of your choice.
+embedding_model = TextEmbedding()
+print(""The model BAAI/bge-small-en-v1.5 is ready to use."")
+```
-> Note: When you restore a snapshot, consider the following:
+## Embed data
-> - The affected cluster is not available while a snapshot is being restored.
-> - If you changed the cluster setup after the copy was created, the cluster
- resets to the previous configuration.
+Generate embeddings for both documents.
-> - The previous configuration includes:
+```python
-> - CPU
+embeddings_generator = embedding_model.embed(documents)
-> - Memory
+embeddings_list = list(embeddings_generator)
-> - Node count
+len(embeddings_list[0])
-> - Qdrant version
+```
+Here is the sample document list. The default model creates vectors with 384 dimensions.
-### Configure a backup
+```bash
+Document: FastEmbed is lighter than Transformers & Sentence-Transformers.
-After you have taken the prerequisite steps, you can configure a backup with the
+Vector of type: <class 'numpy.ndarray'> with shape: (384,)
-[Qdrant Cloud Dashboard](https://cloud.qdrant.io). To do so, take these steps:
+Document: fastembed is supported by and maintained by Qdrant.
+Vector of type: <class 'numpy.ndarray'> with shape: (384,)
+```
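+
+As a quick optional check (using the `embeddings_list` and the NumPy import from above), you can confirm the shape of the generated embeddings:
+
+```python
+import numpy as np
+
+# Two documents, 384 dimensions each.
+print(np.asarray(embeddings_list).shape)  # (2, 384)
+```
+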
-1. Sign in to the dashboard
-1. Select Clusters.
-1. Select the cluster that you want to back up.
+## Visualize embeddings
- ![Select a cluster](/documentation/cloud/select-cluster.png)
+```python
-1. Find and select the **Backups** tab.
+print(""Embeddings:\n"", embeddings_list)
-1. Now you can set up a backup schedule.
+```
- The **Days of Retention** is the number of days after a backup snapshot is
+The embeddings don't look too interesting, but here is a sample of the output.
- deleted.
-1. Alternatively, you can select **Backup now** to take an immediate snapshot.
+```bash
+Embeddings:
-![Configure a cluster backup](/documentation/cloud/backup-schedule.png)
+ [[-0.11154681 0.00976555 0.00524559 0.01951888 -0.01934952 0.02943449
+ -0.10519084 -0.00890122 0.01831438 0.01486796 -0.05642502 0.02561352
+ -0.00120165 0.00637456 0.02633459 0.0089221 0.05313658 0.03955453
-### Restore a backup
+ -0.04400245 -0.02929407 0.04691846 -0.02515868 0.00778646 -0.05410657
+...
+ -0.00243012 -0.01820582 0.02938612 0.02108984 -0.02178085 0.02971899
-If you have a backup, it appears in the list of **Available Backups**. You can
+ -0.00790564 0.03561783 0.0652488 -0.04371546 -0.05550042 0.02651665
-choose to restore or delete the backups of your choice.
+ -0.01116153 -0.01682246 -0.05976734 -0.03143916 0.06522726 0.01801389
+ -0.02611006 0.01627177 -0.0368538 0.03968835 0.027597 0.03305927]]
+```",documentation/fastembed/fastembed-quickstart.md
+"---
-![Restore or delete a cluster backup](/documentation/cloud/restore-delete.png)
+title: ""FastEmbed""
+weight: 6
+---
-
+# What is FastEmbed?
-## Backups with a snapshot
+FastEmbed is a lightweight Python library built for embedding generation. It supports popular embedding models and offers a user-friendly experience for embedding data into vector space.
-Qdrant also offers a snapshot API which allows you to create a snapshot
+By using FastEmbed, you can ensure that your embedding generation process is not only fast and efficient but also highly accurate, meeting the needs of various machine learning and natural language processing applications.
-of a specific collection or your entire cluster. For more information, see our
-[snapshot documentation](/documentation/concepts/snapshots/).
+FastEmbed easily integrates with Qdrant for a variety of multimodal search purposes.
-Here is how you can take a snapshot and recover a collection:
+## How to get started with FastEmbed
-1. Take a snapshot:
- - For a single node cluster, call the snapshot endpoint on the exposed URL.
+|Beginner|Advanced|
- - For a multi node cluster call a snapshot on each node of the collection.
+|:-:|:-:|
- Specifically, prepend `node-{num}-` to your cluster URL.
+|[Generate Text Embeddings with FastEmbed](fastembed-quickstart/)|[Combine FastEmbed with Qdrant for Vector Search](fastembed-semantic-search/)|
- Then call the [snapshot endpoint](../../concepts/snapshots/#create-snapshot) on the individual hosts. Start with node 0.
- - In the response, you'll see the name of the snapshot.
-2. Delete and recreate the collection.
+## Why is FastEmbed useful?
-3. Recover the snapshot:
- - Call the [recover endpoint](../../concepts/snapshots/#recover-in-cluster-deployment). Set a location which points to the snapshot file (`file:///qdrant/snapshots/{collection_name}/{snapshot_file_name}`) for each host.
+- Light: Unlike other inference frameworks, such as PyTorch, FastEmbed requires very few external dependencies. Because it uses the ONNX runtime, it is perfect for serverless environments like AWS Lambda.
+- Fast: By using ONNX, FastEmbed ensures high-performance inference across various hardware platforms.
-## Backup considerations
+- Accurate: FastEmbed aims for better accuracy and recall than models like OpenAI’s `Ada-002`. It always uses models that demonstrate strong results on the MTEB leaderboard.
+- Support: FastEmbed supports a wide range of models, including multilingual ones, to meet diverse use case needs.
-Backups are incremental. For example, if you have two backups, backup number 2
-contains only the data that changed since backup number 1. This reduces the
-total cost of your backups.
+",documentation/fastembed/_index.md
+"---
+title: OpenLIT
+weight: 3100
-You can create multiple backup schedules.
+aliases: [ ../frameworks/openlit/ ]
+---
-When you restore a snapshot, any changes made after the date of the snapshot
-are lost.
-",documentation/cloud/backups.md
-"---
+# OpenLIT
-title: Capacity and sizing
-weight: 40
-aliases:
+[OpenLIT](https://github.com/openlit/openlit) is an OpenTelemetry-native LLM application observability tool. It includes OpenTelemetry auto-instrumentation to monitor Qdrant and provides insights that help you improve database operations and application performance.
- - capacity
----
-# Capacity and sizing
+This page assumes you're using `qdrant-client` version 1.7.3 or above.
-We have been asked a lot about the optimal cluster configuration to serve a number of vectors.
+## Usage
-The only right answer is “It depends”.
+### Step 1: Install OpenLIT
-It depends on a number of factors and options you can choose for your collections.
+Open your command line or terminal and run:
-## Basic configuration
+```bash
-If you need to keep all vectors in memory for maximum performance, there is a very rough formula for estimating the needed memory size looks like this:
+pip install openlit
+```
-```text
-memory_size = number_of_vectors * vector_dimension * 4 bytes * 1.5
+### Step 2: Initialize OpenLIT in your Application
+
+Integrating OpenLIT into LLM applications is straightforward with just **two lines of code**:
+
+
+
+```python
+import openlit
+
+openlit.init()
```
-Extra 50% is needed for metadata (indexes, point versions, etc.) as well as for temporary segments constructed during the optimization process.
+OpenLIT directs the trace to your console by default. To forward telemetry data to an HTTP OTLP endpoint, configure the `otlp_endpoint` parameter or the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable.
-If you need to have payloads along with the vectors, it is recommended to store it on the disc, and only keep [indexed fields](../../concepts/indexing/#payload-index) in RAM.
+For OpenTelemetry backends requiring authentication, use the `otlp_headers` parameter or the `OTEL_EXPORTER_OTLP_HEADERS` environment variable with the required values.
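+
+As a minimal sketch (the endpoint URL and header value below are placeholders, not defaults), forwarding telemetry to an OTLP backend could look like this:
+
+```python
+import openlit
+
+# Send traces to an OTLP HTTP endpoint instead of the console.
+# Replace both placeholder values with your backend's settings.
+openlit.init(
+    otlp_endpoint=""http://localhost:4318"",
+    otlp_headers=""Authorization=Bearer <your-token>"",
+)
+```
+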
-Read more about the payload storage in the [Storage](../../concepts/storage/#payload-storage) section.
+## Further Reading
-## Storage focused configuration
+With the LLM Observability data now being collected by OpenLIT, the next step is to visualize and analyze this data to get insights into Qdrant's performance and behavior, and to identify areas of improvement.
-If your priority is to serve large amount of vectors with an average search latency, it is recommended to configure [mmap storage](../../concepts/storage/#configuring-memmap-storage).
+To begin exploring your LLM Application's performance data within the OpenLIT UI, please see the [Quickstart Guide](https://docs.openlit.io/latest/quickstart).
-In this case vectors will be stored on the disc in memory-mapped files, and only the most frequently used vectors will be kept in RAM.
+If you want to send the generated metrics and traces to your existing observability tools, such as Prometheus+Jaeger or Grafana, refer to the [Official Documentation for OpenLIT Connections](https://docs.openlit.io/latest/connections/intro) for detailed instructions.
-The amount of available RAM will significantly affect the performance of the search.
-As a rule of thumb, if you keep 2 times less vectors in RAM, the search latency will be 2 times lower.
+",documentation/observability/openlit.md
+"---
+title: Datadog
+---
-The speed of disks is also important. [Let us know](mailto:cloud@qdrant.io) if you have special requirements for a high-volume search.
+![Datadog Cover](/documentation/observability/datadog/datadog-cover.jpg)
-## Sub-groups oriented configuration
+[Datadog](https://www.datadoghq.com/) is a cloud-based monitoring and analytics platform that offers real-time monitoring of servers, databases, and numerous other tools and services. It provides visibility into the performance of applications and enables businesses to detect issues before they affect users.
-If your use case assumes that the vectors are split into multiple collections or sub-groups based on payload values,
+You can install the [Qdrant integration](https://docs.datadoghq.com/integrations/qdrant/) to get real-time metrics to monitor your Qdrant deployment within Datadog including:
-it is recommended to configure memory-map storage.
-For example, if you serve search for multiple users, but each of them has an subset of vectors which they use independently.
+- The performance of the REST and gRPC interfaces, with metrics such as total requests, total failures, and time to serve, to identify and mitigate potential bottlenecks.
-In this scenario only the active subset of vectors will be kept in RAM, which allows
-the fast search for the most active and recent users.
+- Information about the readiness of the cluster, and deployment (total peers, pending operations, etc.) to gain insights into your Qdrant deployment.
-In this case you can estimate required memory size as follows:
+### Usage
-```text
+- With the [Datadog Agent installed](https://docs.datadoghq.com/agent/basic_agent_usage), run the following command to add the Qdrant integration:
-memory_size = number_of_active_vectors * vector_dimension * 4 bytes * 1.5
+
+
+```shell
+
+datadog-agent integration install -t qdrant==1.0.0
```
-",documentation/cloud/capacity-sizing.md
-"---
-title: GCP Marketplace
-weight: 60
----
+- Edit the `qdrant.d/conf.yaml` file in the `conf.d/` folder at the root of your [Agent's configuration directory](https://docs.datadoghq.com/agent/guide/agent-configuration-files/#agent-configuration-directory) to start collecting your [Qdrant metrics](/documentation/guides/monitoring/).
-# Qdrant Cloud on GCP Marketplace
+Most importantly, set the `openmetrics_endpoint` value to the `/metrics` endpoint of your Qdrant instance.
-Our [GCP Marketplace](https://console.cloud.google.com/marketplace/product/qdrant-public/qdrant)
+```yaml
-listing streamlines access to Qdrant for users who rely on the Google Cloud Platform for
+instances:
-hosting and application development. While Qdrant's clusters run on GCP, you are using the
+ ## @param openmetrics_endpoint - string - optional
-Qdrant Cloud infrastructure.
+ ## The URL exposing metrics in the OpenMetrics format.
+ - openmetrics_endpoint: http://localhost:6333/metrics
+```
-## Billing
+If the Qdrant instance requires authentication, you can specify the token by configuring [`extra_headers`](https://github.com/DataDog/integrations-core/blob/26f9ae7660f042c43f5d771f0c937ff805cf442c/openmetrics/datadog_checks/openmetrics/data/conf.yaml.example#L553C1-L558C35).
-You don't need a credit card to sign up for Qdrant Cloud. Instead, all billing is
-processed through the GCP Marketplace. Usage is added to your existing billing
-for GCP.
+```yaml
+# @param extra_headers - mapping - optional
+# Additional headers to send with every request.
-Payment is made through your GCP Account. Our [Billing Calculator](https://cloud.qdrant.io/calculator)
+extra_headers:
-can provide more information about costs.
+ api-key:
+```
-Costs from cloud providers are based on usage. You can subscribe to Qdrant on
-the GCP Marketplace without paying more.
+- Restart the Datadog agent.
-## How to subscribe
+- You can now head over to the Datadog dashboard to view the [metrics](https://docs.datadoghq.com/integrations/qdrant/#data-collected) emitted by the Qdrant check.
-1. Go to the [GCP Marketplace listing for Qdrant](https://console.cloud.google.com/marketplace/product/qdrant-public/qdrant).
+## Further Reading
-1. Select **Subscribe**. (If you have already subscribed, select
- **Manage on Provider**.)
-1. On the next screen, choose options as required, and select **Subscribe**.
+- [Getting started with Datadog](https://docs.datadoghq.com/getting_started/)
-1. On the pop-up window that appers, select **Sign up with Qdrant**.
+- [Qdrant integration source](https://github.com/DataDog/integrations-extras/tree/master/qdrant)
+",documentation/observability/datadog.md
+"---
+title: OpenLLMetry
+weight: 2300
-GCP transfers you to the [Qdrant Cloud](https://cloud.qdrant.io/).
+aliases: [ ../frameworks/openllmetry/ ]
+---
-The Billing Details screen opens in the Qdrant Cloud Console. If you do not
-already see a menu, select the ""hamburger"" icon (with three short horizontal
+# OpenLLMetry
-lines) in the upper-left corner of the window.
+OpenLLMetry from [Traceloop](https://www.traceloop.com/) is a set of extensions built on top of [OpenTelemetry](https://opentelemetry.io/) that gives you complete observability over your LLM application.
-> **Note:** You do not have to return to GCP. All Qdrant infrastructure is provisioned from the Qdrant Cloud Console.
+OpenLLMetry supports instrumenting the `qdrant_client` Python library and exporting the traces to various observability platforms, as described in their [Integrations catalog](https://www.traceloop.com/docs/openllmetry/integrations/introduction#the-integrations-catalog).
-## Next steps
+This page assumes you're using `qdrant-client` version 1.7.3 or above.
-Now that you have signed up through GCP, please read our instructions to get started:
+## Usage
-1. Learn more about how you can [Create a cluster](/documentation/cloud/create-cluster/).
+To set up OpenLLMetry, follow these steps:
-1. Learn how to [Authenticate](/documentation/cloud/authentication/) and access your cluster.
-",documentation/cloud/gcp-marketplace.md
-"---
-title: Cluster scaling
-weight: 50
+1. Install the SDK:
----
+```console
-# Cluster scaling
+pip install traceloop-sdk
+```
-The amount of data is always growing and at some point you might need to upgrade the capacity of your cluster.
-There are different options for how it can be done.
+1. Instantiate the SDK:
-## Vertical scaling
+```python
+
+from traceloop.sdk import Traceloop
-Vertical scaling, also known as vertical expansion, is the process of increasing the capacity of a cluster by adding more resources, such as memory, storage, or processing power.
+Traceloop.init()
+```
-You can start with a minimal cluster configuration of 2GB of RAM and resize it up to 64GB of RAM (or even more if desired) over the time step by step with the growing amount of data in your application.
-If your cluster consists of several nodes each node will need to be scaled to the same size.
+You're now tracing your `qdrant_client` usage with OpenLLMetry!
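+
+Any `qdrant_client` calls made afterwards in the same process are instrumented automatically. Here is a minimal, self-contained sketch using the in-memory mode and a made-up collection name:
+
+```python
+from qdrant_client import QdrantClient, models
+
+# These calls are traced by the instrumentation initialized above.
+client = QdrantClient("":memory:"")
+client.create_collection(
+    collection_name=""demo_collection"",
+    vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
+)
+```
+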
-Please note that vertical cluster scaling will require a short downtime period to restart your cluster.
-In order to avoid a downtime you can make use of data replication, which can be configured on the collection level.
-Vertical scaling can be initiated on the cluster detail page via the button ""scale up"".
+## Without the SDK
-## Horizontal scaling
+Since Traceloop provides standard OpenTelemetry instrumentations, you can use them as standalone packages. To do so, follow these steps:
+
+
+
+1. Install the package:
-Vertical scaling can be an effective way to improve the performance of a cluster and extend the capacity, but it has some limitations.
+```console
-The main disadvantage of vertical scaling is that there are limits to how much a cluster can be expanded.
+pip install opentelemetry-instrumentation-qdrant
-At some point, adding more resources to a cluster can become impractical or cost-prohibitive.
+```
-In such cases, horizontal scaling may be a more effective solution.
-Horizontal scaling, also known as horizontal expansion, is the process of increasing the capacity of a cluster by adding more nodes and distributing the load and data among them.
-The horizontal scaling at Qdrant starts on the collection level.
+1. Instantiate the `QdrantInstrumentor`.
-You have to choose the number of shards you want to distribute your collection around while creating the collection.
-Please refer to the [sharding documentation](../../guides/distributed_deployment/#sharding) section for details.
+```python
+from opentelemetry.instrumentation.qdrant import QdrantInstrumentor
-Important: The number of shards means the maximum amount of nodes you can add to your cluster. In the beginning, all the shards can reside on one node.
+QdrantInstrumentor().instrument()
-With the growing amount of data you can add nodes to your cluster and move shards to the dedicated nodes using the [cluster setup API](../../guides/distributed_deployment/#cluster-scaling).
+```
-We will be glad to consult you on an optimal strategy for scaling.
+## Further Reading
-[Let us know](mailto:cloud@qdrant.io) your needs and decide together on a proper solution. We plan to introduce an auto-scaling functionality. Since it is one of most desired features, it has a high priority on our Cloud roadmap.
-",documentation/cloud/cluster-scaling.md
+
+
+- 📚 OpenLLMetry [API reference](https://www.traceloop.com/docs/api-reference/introduction)
+
+- 📄 [Source Code](https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-qdrant)
+",documentation/observability/openllmetry.md
"---
-title: Qdrant Cloud
+title: Observability
-weight: 20
+weight: 15
---
-# About Qdrant Cloud
+## Observability Integrations
-Qdrant Cloud is our SaaS (software-as-a-service) solution, providing managed Qdrant instances on the cloud.
+| Tool | Description |
-We provide you with the same fast and reliable similarity search engine, but without the need to maintain your own infrastructure.
+| ----------------------------- | -------------------------------------------------------------------------------------- |
+| [OpenLIT](./openlit/) | Platform for OpenTelemetry-native Observability & Evals for LLMs and Vector Databases. |
+| [OpenLLMetry](./openllmetry/) | Set of OpenTelemetry extensions to add Observability for your LLM application. |
-Transitioning from on-premise to the cloud version of Qdrant does not require changing anything in the way you interact with the service. All you have to do is [create a Qdrant Cloud account](https://qdrant.to/cloud) and [provide a new API key]({{< ref ""/documentation/cloud/authentication"" >}}) to each request.
+| [Datadog](./datadog/) | Cloud-based monitoring and analytics platform. |
+",documentation/observability/_index.md
+"---
+title: Setup Hybrid Cloud
+weight: 1
-The transition is even easier if you use the official client libraries. For example, the [Python Client](https://github.com/qdrant/qdrant-client) has the support of the API key already built-in, so you only need to provide it once, when the QdrantClient instance is created.
+---
-### Cluster configuration
+# Creating a Hybrid Cloud Environment
-Each instance comes pre-configured with the following tools, features and support services:
+The following instruction set will show you how to properly set up a **Qdrant cluster** in your **Hybrid Cloud Environment**.
-- Automatically created with the latest available version of Qdrant.
+To learn how Hybrid Cloud works, [read the overview document](/documentation/hybrid-cloud/).
-- Upgradeable to later versions of Qdrant as they are released.
-- Equipped with monitoring and logging to observe the health of each cluster.
-- Accessible through the Qdrant Cloud Console.
+## Prerequisites
-- Vertically scalable.
-- Offered on AWS and GCP, with Azure currently in development.
+- **Kubernetes cluster:** To create a Hybrid Cloud Environment, you need a [standards-compliant](https://www.cncf.io/training/certification/software-conformance/) Kubernetes cluster. You can run this cluster in any cloud, on-premises or edge environment, with distributions that range from AWS EKS to VMware vSphere.
+- **Storage:** For storage, you need to set up the Kubernetes cluster with a Container Storage Interface (CSI) driver that provides block storage. For vertical scaling, the CSI driver needs to support volume expansion. For backups and restores, the driver needs to support CSI snapshots and restores.
-### Getting started with Qdrant Cloud
+
-To use Qdrant Cloud, you will need to create at least one cluster. There are two ways to start:
-1. [**Create a Free Tier cluster**]({{< ref ""/documentation/cloud/quickstart-cloud"" >}}) with 1 node and a default configuration (1GB RAM, 0.5 CPU and 4GB Disk). This option is perfect for prototyping and you don't need a credit card to join.
-2. [**Configure a custom cluster**]({{< ref ""/documentation/cloud/create-cluster"" >}}) with additional nodes and more resources. For this option, you will have to provide billing information.
+- **Permissions:** To install the Qdrant Kubernetes Operator you need to have `cluster-admin` access in your Kubernetes cluster.
+- **Connection:** The Qdrant Kubernetes Operator in your cluster needs to be able to connect to Qdrant Cloud. It will create an outgoing connection to `cloud.qdrant.io` on port `443`.
+- **Locations:** By default, the Qdrant Cloud Agent and Operator pull Helm charts and container images from `registry.cloud.qdrant.io`. The Qdrant database container image is pulled from `docker.io`.
-We recommend that you use the Free Tier cluster for testing purposes. The capacity should be enough to serve up to 1M vectors of 768dim. To calculate your needs, refer to [capacity planning]({{< ref ""/documentation/cloud/capacity-sizing"" >}}).
+> **Note:** You can also mirror these images and charts into your own registry and pull them from there.
-### Support & Troubleshooting
+### CLI tools
-All Qdrant Cloud users are welcome to join our [Discord community](https://qdrant.to/discord). Our Support Engineers are available to help you anytime.
+During the onboarding, you will need to deploy the Qdrant Kubernetes Operator and Agent using Helm. Make sure you have the following tools installed:
-Additionally, paid customers can also contact support via channels provided during cluster creation and/or on-boarding.
-",documentation/cloud/_index.md
-"---
-title: Storage
-weight: 80
+* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
-aliases:
+* [helm](https://helm.sh/docs/intro/install/)
- - ../storage
----
+You will need to have access to the Kubernetes cluster with `kubectl` and `helm` configured to connect to it. Please refer to the documentation of your Kubernetes distribution for more information.
-# Storage
+### Required artifacts
-All data within one collection is divided into segments.
-Each segment has its independent vector and payload storage as well as indexes.
+Container images:
-Data stored in segments usually do not overlap.
+- `docker.io/qdrant/qdrant`
-However, storing the same point in different segments will not cause problems since the search contains a deduplication mechanism.
+- `registry.cloud.qdrant.io/qdrant/qdrant-cloud-agent`
+- `registry.cloud.qdrant.io/qdrant/qdrant-operator`
+- `registry.cloud.qdrant.io/qdrant/cluster-manager`
-The segments consist of vector and payload storages, vector and payload [indexes](../indexing), and id mapper, which stores the relationship between internal and external ids.
+- `registry.cloud.qdrant.io/qdrant/prometheus`
+- `registry.cloud.qdrant.io/qdrant/prometheus-config-reloader`
+- `registry.cloud.qdrant.io/qdrant/kube-state-metrics`
-A segment can be `appendable` or `non-appendable` depending on the type of storage and index used.
-You can freely add, delete and query data in the `appendable` segment.
-With `non-appendable` segment can only read and delete data.
+Open Containers Initiative (OCI) Helm charts:
-The configuration of the segments in the collection can be different and independent of one another, but at least one `appendable' segment must be present in a collection.
+- `registry.cloud.qdrant.io/qdrant-charts/qdrant-cloud-agent`
+- `registry.cloud.qdrant.io/qdrant-charts/qdrant-operator`
+- `registry.cloud.qdrant.io/qdrant-charts/qdrant-cluster-manager`
-## Vector storage
+- `registry.cloud.qdrant.io/qdrant-charts/prometheus`
-Depending on the requirements of the application, Qdrant can use one of the data storage options.
+## Installation
-The choice has to be made between the search speed and the size of the RAM used.
+1. To set up Hybrid Cloud, open the Qdrant Cloud Console at [cloud.qdrant.io](https://cloud.qdrant.io). On the dashboard, select **Hybrid Cloud**.
-**In-memory storage** - Stores all vectors in RAM, has the highest speed since disk access is required only for persistence.
+2. Before creating your first Hybrid Cloud Environment, you have to provide billing information and accept the Hybrid Cloud license agreement. The installation wizard will guide you through the process.
-**Memmap storage** - Creates a virtual address space associated with the file on disk. [Wiki](https://en.wikipedia.org/wiki/Memory-mapped_file).
-Mmapped files are not directly loaded into RAM. Instead, they use page cache to access the contents of the file.
-This scheme allows flexible use of available memory. With sufficient RAM, it is almost as fast as in-memory storage.
+> **Note:** You will only be charged for the Qdrant cluster you create in a Hybrid Cloud Environment, but not for the environment itself.
+3. Now you can specify the following:
-### Configuring Memmap storage
+- **Name:** A name for the Hybrid Cloud Environment
+- **Kubernetes Namespace:** The Kubernetes namespace for the operator and agent. Once you select a namespace, you can't change it.
-There are two ways to configure the usage of memmap(also known as on-disk) storage:
+You can also configure the StorageClass and VolumeSnapshotClass to use for the Qdrant databases, if you want to deviate from the default settings of your cluster.
-- Set up `on_disk` option for the vectors in the collection create API:
+4. You can then enter the YAML configuration for your Kubernetes operator. Qdrant supports a specific list of configuration options, as described in the [Qdrant Operator configuration](/documentation/hybrid-cloud/operator-configuration/) section.
-*Available as of v1.2.0*
+5. (Optional) If you have special requirements for any of the following, activate the **Show advanced configuration** option:
-```http
+- If you use a proxy to connect from your infrastructure to the Qdrant Cloud API, you can specify the proxy URL, credentials and certificates.
-PUT /collections/{collection_name}
+- Container registry URL for Qdrant Operator and Agent images. The default is `registry.cloud.qdrant.io`.
-{
+- Helm chart repository URL for the Qdrant Operator and Agent. The default is `registry.cloud.qdrant.io/qdrant-charts`.
- ""vectors"": {
+- Log level for the operator and agent
- ""size"": 768,
- ""distance"": ""Cosine"",
- ""on_disk"": true
+6. Once complete, click **Create**.
- }
-}
-```
+> **Note:** All settings but the Kubernetes namespace can be changed later.
-```python
+### Generate Installation Command
-from qdrant_client import QdrantClient, models
+After creating your Hybrid Cloud Environment, select **Generate Installation Command** to generate a script that you can run in your Kubernetes cluster to perform the initial installation of the Kubernetes operator and agent. It will:
-client = QdrantClient(""localhost"", port=6333)
+- Create the Kubernetes namespace, if not present
-client.create_collection(
+- Set up the necessary secrets with credentials to access the Qdrant container registry and the Qdrant Cloud API.
- collection_name=""{collection_name}"",
+- Sign in to the Helm registry at `registry.cloud.qdrant.io`
- vectors_config=models.VectorParams(
+- Install the Qdrant cloud agent and Kubernetes operator chart
- size=768, distance=models.Distance.COSINE, on_disk=True
- ),
-)
+You need this command only for the initial installation. After that, you can update the agent and operator using the Qdrant Cloud Console.
-```
+> **Note:** If you generate the installation command a second time, it will re-generate the included secrets, and you will have to apply the command again to update them.
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+## Deleting a Hybrid Cloud Environment
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+To delete a Hybrid Cloud Environment, first delete all Qdrant database clusters in it. Then you can delete the environment itself.
-client.createCollection(""{collection_name}"", {
- vectors: {
+After deleting the Hybrid Cloud Environment, you can use the following commands to clean up your Kubernetes cluster:
- size: 768,
- distance: ""Cosine"",
- on_disk: true,
+```shell
- },
+helm -n the-qdrant-namespace delete qdrant-cloud-agent
-});
+helm -n the-qdrant-namespace delete qdrant-prometheus
-```
+helm -n the-qdrant-namespace delete qdrant-operator
+kubectl -n the-qdrant-namespace patch HelmRelease.cd.qdrant.io qdrant-cloud-agent -p '{""metadata"":{""finalizers"":null}}' --type=merge
+kubectl -n the-qdrant-namespace patch HelmRelease.cd.qdrant.io qdrant-prometheus -p '{""metadata"":{""finalizers"":null}}' --type=merge
-```rust
+kubectl -n the-qdrant-namespace patch HelmRelease.cd.qdrant.io qdrant-operator -p '{""metadata"":{""finalizers"":null}}' --type=merge
-use qdrant_client::{
+kubectl -n the-qdrant-namespace patch HelmChart.cd.qdrant.io the-qdrant-namespace-qdrant-cloud-agent -p '{""metadata"":{""finalizers"":null}}' --type=merge
- client::QdrantClient,
+kubectl -n the-qdrant-namespace patch HelmChart.cd.qdrant.io the-qdrant-namespace-qdrant-prometheus -p '{""metadata"":{""finalizers"":null}}' --type=merge
- qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig},
+kubectl -n the-qdrant-namespace patch HelmChart.cd.qdrant.io the-qdrant-namespace-qdrant-operator -p '{""metadata"":{""finalizers"":null}}' --type=merge
-};
+kubectl -n the-qdrant-namespace patch HelmRepository.cd.qdrant.io qdrant-cloud -p '{""metadata"":{""finalizers"":null}}' --type=merge
+kubectl delete namespace the-qdrant-namespace
+kubectl get crd -o name | grep qdrant | xargs -n 1 kubectl delete
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+```
+",documentation/hybrid-cloud/hybrid-cloud-setup.md
+"---
+title: Configure the Qdrant Operator
+weight: 3
-client
+---
- .create_collection(&CreateCollection {
- collection_name: ""{collection_name}"".to_string(),
- vectors_config: Some(VectorsConfig {
+# Configuring Qdrant Operator: Advanced Options
- config: Some(Config::Params(VectorParams {
- size: 768,
- distance: Distance::Cosine.into(),
+The Qdrant Operator has several configuration options, which can be configured in the advanced section of your Hybrid Cloud Environment.
- on_disk: Some(true),
- ..Default::default()
- })),
+The following YAML shows all configuration options with their default values:
- }),
- ..Default::default()
- })
+```yaml
- .await?;
+# Retention for the backup history of Qdrant clusters
-```
+backupHistoryRetentionDays: 2
+# Timeout configuration for the Qdrant operator operations
+operationTimeout: 7200 # 2 hours
-```java
+handlerTimeout: 21600 # 6 hours
-import io.qdrant.client.QdrantClient;
+backupTimeout: 12600 # 3.5 hours
-import io.qdrant.client.QdrantGrpcClient;
+# Incremental backoff configuration for the Qdrant operator operations
-import io.qdrant.client.grpc.Collections.Distance;
+backOff:
-import io.qdrant.client.grpc.Collections.VectorParams;
+ minDelay: 5
+ maxDelay: 300
+ increment: 5
-QdrantClient client =
+# node_selector: {}
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+# tolerations: []
+# Default ingress configuration for a Qdrant cluster
+ingress:
-client
+ enabled: false
- .createCollectionAsync(
+ provider: KubernetesIngress # or NginxIngress
- ""{collection_name}"",
+# kubernetesIngress:
- VectorParams.newBuilder()
+# ingressClassName: """"
- .setSize(768)
+# Default storage configuration for a Qdrant cluster
- .setDistance(Distance.Cosine)
+#storage:
- .setOnDisk(true)
+# Default VolumeSnapshotClass for a Qdrant cluster
- .build())
+# snapshot_class: ""csi-snapclass""
- .get();
+# Default StorageClass for a Qdrant cluster, uses cluster default StorageClass if not set
-```
+# default_storage_class_names:
+# StorageClass for DB volumes
+# db: """"
-```csharp
+# StorageClass for snapshot volumes
-using Qdrant.Client;
+# snapshots: """"
-using Qdrant.Client.Grpc;
+# Default scheduling configuration for a Qdrant cluster
+#scheduling:
+# default_topology_spread_constraints: []
-var client = new QdrantClient(""localhost"", 6334);
+# default_pod_disruption_budget: {}
+qdrant:
+# Default security context for Qdrant cluster
-await client.CreateCollectionAsync(
+# securityContext:
- ""{collection_name}"",
+# enabled: false
- new VectorParams
+# user: """"
- {
+# fsGroup: """"
- Size = 768,
+# group: """"
- Distance = Distance.Cosine,
+# Default Qdrant image configuration
- OnDisk = true
+# image:
- }
+# pull_secret: """"
-);
+# pull_policy: IfNotPresent
-```
+# repository: qdrant/qdrant
+# Default Qdrant log_level
+# log_level: INFO
-This will create a collection with all vectors immediately stored in memmap storage.
+# Default network policies to create for a qdrant cluster
-This is the recommended way, in case your Qdrant instance operates with fast disks and you are working with large collections.
+ networkPolicies:
+ ingress:
+ - ports:
+ - protocol: TCP
+ port: 6333
-- Set up `memmap_threshold_kb` option. This option will set the threshold after which the segment will be converted to memmap storage.
+ - protocol: TCP
+ port: 6334
+# Allow DNS resolution from qdrant pods at Kubernetes internal DNS server
-There are two ways to do this:
+ egress:
+ - to:
+ - namespaceSelector:
-1. You can set the threshold globally in the [configuration file](../../guides/configuration/). The parameter is called `memmap_threshold_kb`.
+ matchLabels:
-2. You can set the threshold for each collection separately during [creation](../collections/#create-collection) or [update](../collections/#update-collection-parameters).
+ kubernetes.io/metadata.name: kube-system
+ ports:
+ - protocol: UDP
-```http
+ port: 53
-PUT /collections/{collection_name}
+```
+",documentation/hybrid-cloud/operator-configuration.md
+"---
-{
+title: Networking, Logging & Monitoring
- ""vectors"": {
+weight: 4
- ""size"": 768,
+---
- ""distance"": ""Cosine""
+# Configuring Networking, Logging & Monitoring in Qdrant Hybrid Cloud
- },
- ""optimizers_config"": {
- ""memmap_threshold"": 20000
+## Configure network policies
- }
-}
-```
+For security reasons, each database cluster is secured with network policies. By default, database pods only allow egress traffic between each other and allow ingress traffic to ports 6333 (REST) and 6334 (gRPC) from within the Kubernetes cluster.
-```python
+You can modify the default network policies in the Hybrid Cloud environment configuration:
-from qdrant_client import QdrantClient, models
+```yaml
-client = QdrantClient(""localhost"", port=6333)
+qdrant:
+ networkPolicies:
+ ingress:
-client.create_collection(
+ - from:
- collection_name=""{collection_name}"",
+ - ipBlock:
- vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+ cidr: 192.168.0.0/22
- optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
+ - podSelector:
-)
+ matchLabels:
-```
+ app: client-app
+ namespaceSelector:
+ matchLabels:
-```typescript
+ kubernetes.io/metadata.name: client-namespace
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+ - podSelector:
+ matchLabels:
+ app: traefik
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ namespaceSelector:
+ matchLabels:
+ kubernetes.io/metadata.name: kube-system
-client.createCollection(""{collection_name}"", {
+ ports:
- vectors: {
+ - port: 6333
- size: 768,
+ protocol: TCP
- distance: ""Cosine"",
+ - port: 6334
- },
+ protocol: TCP
- optimizers_config: {
+```
- memmap_threshold: 20000,
- },
-});
+## Logging
-```
+You can access the logs with kubectl or the Kubernetes log management tool of your choice. For example:
-```rust
-use qdrant_client::{
- client::QdrantClient,
+```bash
- qdrant::{
+kubectl -n qdrant-namespace logs -l app=qdrant,cluster-id=9a9f48c7-bb90-4fb2-816f-418a46a74b24
- vectors_config::Config, CreateCollection, Distance, OptimizersConfigDiff, VectorParams,
+```
- VectorsConfig,
- },
-};
+**Configuring log levels:** You can configure log levels for the databases individually in the configuration section of the Qdrant Cluster detail page. The log level for the **Qdrant Cloud Agent** and **Operator** can be set in the [Hybrid Cloud Environment configuration](/documentation/hybrid-cloud/operator-configuration/).
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+## Monitoring
-client
+The Qdrant Cloud console gives you access to basic metrics about CPU, memory and disk usage of your Qdrant clusters. You can also access the Prometheus metrics endpoint of your Qdrant databases. Finally, you can use a Kubernetes workload monitoring tool of your choice to monitor your Qdrant clusters.
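+
+For example, here is a minimal sketch of reading the Prometheus metrics endpoint over HTTP. The cluster URL and API key are placeholders for your own values:
+
+```python
+import requests
+
+# Qdrant exposes Prometheus-format metrics under /metrics.
+response = requests.get(
+    ""http://qdrant.my-namespace.svc.cluster.local:6333/metrics"",  # placeholder URL
+    headers={""api-key"": ""<your-api-key>""},  # only needed if an API key is configured
+)
+print(response.text[:500])
+```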
+",documentation/hybrid-cloud/networking-logging-monitoring.md
+"---
- .create_collection(&CreateCollection {
+title: Deployment Platforms
- collection_name: ""{collection_name}"".to_string(),
+weight: 5
- vectors_config: Some(VectorsConfig {
+---
- config: Some(Config::Params(VectorParams {
- size: 768,
- distance: Distance::Cosine.into(),
+# Qdrant Hybrid Cloud: Hosting Platforms & Deployment Options
- ..Default::default()
- })),
- }),
+This page provides an overview of how to deploy Qdrant Hybrid Cloud on various managed Kubernetes platforms.
- optimizers_config: Some(OptimizersConfigDiff {
- memmap_threshold: Some(20000),
- ..Default::default()
+For a general list of prerequisites and installation steps, see our [Hybrid Cloud setup guide](/documentation/hybrid-cloud/hybrid-cloud-setup/).
- }),
- ..Default::default()
- })
+![Akamai](/documentation/cloud/cloud-providers/akamai.jpg)
- .await?;
-```
+## Akamai (Linode)
-```java
-import io.qdrant.client.QdrantClient;
+[The Linode Kubernetes Engine (LKE)](https://www.linode.com/products/kubernetes/) is a managed container orchestration engine built on top of Kubernetes. LKE enables you to quickly deploy and manage your containerized applications without needing to build (and maintain) your own Kubernetes cluster. All LKE instances are equipped with a fully managed control plane at no additional cost.
-import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Collections.CreateCollection;
-import io.qdrant.client.grpc.Collections.Distance;
+First, consult Linode's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on LKE**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
-import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
-import io.qdrant.client.grpc.Collections.VectorParams;
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+### More on Linode Kubernetes Engine
-QdrantClient client =
+- [Getting Started with LKE](https://www.linode.com/docs/products/compute/kubernetes/get-started/)
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+- [LKE Guides](https://www.linode.com/docs/products/compute/kubernetes/guides/)
+- [LKE API Reference](https://www.linode.com/docs/api/)
-client
- .createCollectionAsync(
+At the time of writing, Linode [does not support CSI Volume Snapshots](https://github.com/linode/linode-blockstorage-csi-driver/issues/107).
- CreateCollection.newBuilder()
- .setCollectionName(""{collection_name}"")
- .setVectorsConfig(
+![AWS](/documentation/cloud/cloud-providers/aws.jpg)
- VectorsConfig.newBuilder()
- .setParams(
- VectorParams.newBuilder()
+## Amazon Web Services (AWS)
- .setSize(768)
- .setDistance(Distance.Cosine)
- .build())
+[Amazon Elastic Kubernetes Service (Amazon EKS)](https://aws.amazon.com/eks/) is a managed service to run Kubernetes in the AWS cloud and on-premises data centers which can then be paired with Qdrant's hybrid cloud. With Amazon EKS, you can take advantage of all the performance, scale, reliability, and availability of AWS infrastructure, as well as integrations with AWS networking and security services.
- .build())
- .setOptimizersConfig(
- OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build())
+First, consult AWS' managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on AWS**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
- .build())
- .get();
-```
+### More on Amazon Elastic Kubernetes Service
-```csharp
+- [Getting Started with Amazon EKS](https://docs.aws.amazon.com/eks/)
-using Qdrant.Client;
+- [Amazon EKS User Guide](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html)
-using Qdrant.Client.Grpc;
+- [Amazon EKS API Reference](https://docs.aws.amazon.com/eks/latest/APIReference/Welcome.html)
-var client = new QdrantClient(""localhost"", 6334);
+Your EKS cluster needs the EKS EBS CSI driver or a similar storage driver:
+- [Amazon EBS CSI Driver](https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html)
-await client.CreateCollectionAsync(
- collectionName: ""{collection_name}"",
+To allow vertical scaling, you need a StorageClass with volume expansion enabled:
- vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
+- [Amazon EBS CSI Volume Resizing](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/examples/kubernetes/resizing/README.md)
- optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 }
-);
-```
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
-The rule of thumb to set the memmap threshold parameter is simple:
+metadata:
+ annotations:
+  storageclass.kubernetes.io/is-default-class: ""true""
-- if you have a balanced use scenario - set memmap threshold the same as `indexing_threshold` (default is 20000). In this case the optimizer will not make any extra runs and will optimize all thresholds at once.
+ name: ebs-sc
-- if you have a high write load and low RAM - set memmap threshold lower than `indexing_threshold` to e.g. 10000. In this case the optimizer will convert the segments to memmap storage first and will only apply indexing after that.
+provisioner: ebs.csi.aws.com
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
-In addition, you can use memmap storage not only for vectors, but also for HNSW index.
+allowVolumeExpansion: true
-To enable this, you need to set the `hnsw_config.on_disk` parameter to `true` during collection [creation](../collections/#create-a-collection) or [updating](../collections/#update-collection-parameters).
+```
-```http
+To allow backups and restores, your EKS cluster needs the CSI snapshot controller:
-PUT /collections/{collection_name}
+- [Amazon EBS CSI Snapshot Controller](https://docs.aws.amazon.com/eks/latest/userguide/csi-snapshot-controller.html)
-{
- ""vectors"": {
- ""size"": 768,
+And you need to create a VolumeSnapshotClass:
- ""distance"": ""Cosine""
- },
- ""optimizers_config"": {
+```yaml
- ""memmap_threshold"": 20000
+apiVersion: snapshot.storage.k8s.io/v1
- },
+kind: VolumeSnapshotClass
- ""hnsw_config"": {
+metadata:
- ""on_disk"": true
+ name: csi-snapclass
- }
+deletionPolicy: Delete
-}
+driver: ebs.csi.aws.com
```
-```python
+![Civo](/documentation/cloud/cloud-providers/civo.jpg)
-from qdrant_client import QdrantClient, models
+## Civo
-client = QdrantClient(""localhost"", port=6333)
+[Civo Kubernetes](https://www.civo.com/kubernetes) is a robust, scalable, and managed Kubernetes service. Civo supplies a CNCF-compliant Kubernetes cluster and makes it easy to provide standard Kubernetes applications and containerized workloads. User-defined Kubernetes clusters can be created as self-service without complications using the Civo Portal.
-client.create_collection(
- collection_name=""{collection_name}"",
- vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+First, consult Civo's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Civo**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
- optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
- hnsw_config=models.HnswConfigDiff(on_disk=True),
-)
+### More on Civo Kubernetes
-```
+- [Getting Started with Civo Kubernetes](https://www.civo.com/docs/kubernetes)
-```typescript
+- [Civo Tutorials](https://www.civo.com/learn)
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+- [Frequently Asked Questions on Civo](https://www.civo.com/docs/faq)
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+To allow backups and restores, you need to create a VolumeSnapshotClass:
-client.createCollection(""{collection_name}"", {
+```yaml
- vectors: {
+apiVersion: snapshot.storage.k8s.io/v1
- size: 768,
+kind: VolumeSnapshotClass
- distance: ""Cosine"",
+metadata:
- },
+ name: csi-snapclass
- optimizers_config: {
+deletionPolicy: Delete
- memmap_threshold: 20000,
+driver: csi.civo.com
- },
+```
- hnsw_config: {
- on_disk: true,
- },
+![Digital Ocean](/documentation/cloud/cloud-providers/digital-ocean.jpg)
-});
-```
+## Digital Ocean
-```rust
-use qdrant_client::{
+[DigitalOcean Kubernetes (DOKS)](https://www.digitalocean.com/products/kubernetes) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure. Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and volumes.
- client::QdrantClient,
- qdrant::{
- vectors_config::Config, CreateCollection, Distance, HnswConfigDiff,
+First, consult Digital Ocean's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on DigitalOcean**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
- OptimizersConfigDiff, VectorParams, VectorsConfig,
- },
-};
+### More on DigitalOcean Kubernetes
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+- [Getting Started with DOKS](https://docs.digitalocean.com/products/kubernetes/getting-started/quickstart/)
+- [DOKS - How To Guides](https://docs.digitalocean.com/products/kubernetes/how-to/)
+- [DOKS - Reference Manual](https://docs.digitalocean.com/products/kubernetes/reference/)
-client
- .create_collection(&CreateCollection {
- collection_name: ""{collection_name}"".to_string(),
+![Gcore](/documentation/cloud/cloud-providers/gcore.svg)
- vectors_config: Some(VectorsConfig {
- config: Some(Config::Params(VectorParams {
- size: 768,
+## Gcore
- distance: Distance::Cosine.into(),
- ..Default::default()
- })),
+[Gcore Managed Kubernetes](https://gcore.com/cloud/managed-kubernetes) is a managed container orchestration engine built on top of Kubernetes. Gcore enables you to quickly deploy and manage your containerized applications without needing to build (and maintain) your own Kubernetes cluster. All Gcore instances are equipped with a fully managed control plane at no additional cost.
- }),
- optimizers_config: Some(OptimizersConfigDiff {
- memmap_threshold: Some(20000),
+First, consult Gcore's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Gcore**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
- ..Default::default()
- }),
- hnsw_config: Some(HnswConfigDiff {
+### More on Gcore Kubernetes Engine
- on_disk: Some(true),
- ..Default::default()
- }),
+- [Getting Started with Kubernetes on Gcore](https://gcore.com/docs/cloud/kubernetes/about-gcore-kubernetes)
- ..Default::default()
- })
- .await?;
+![Google Cloud Platform](/documentation/cloud/cloud-providers/gcp.jpg)
-```
+## Google Cloud Platform (GCP)
-```java
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
+[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) is a managed Kubernetes service that you can use to deploy and operate containerized applications at scale using Google's infrastructure. GKE provides the operational power of Kubernetes while managing many of the underlying components, such as the control plane and nodes, for you.
-import io.qdrant.client.grpc.Collections.CreateCollection;
-import io.qdrant.client.grpc.Collections.Distance;
-import io.qdrant.client.grpc.Collections.HnswConfigDiff;
+First, consult GCP's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on GCP**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
-import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
-import io.qdrant.client.grpc.Collections.VectorParams;
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+### More on the Google Kubernetes Engine
-QdrantClient client =
+- [Getting Started with GKE](https://cloud.google.com/kubernetes-engine/docs/quickstart)
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+- [GKE Tutorials](https://cloud.google.com/kubernetes-engine/docs/tutorials)
+- [GKE Documentation](https://cloud.google.com/kubernetes-engine/docs/)
-client
- .createCollectionAsync(
+To allow backups and restores, your GKE cluster needs the CSI VolumeSnapshot controller and class:
- CreateCollection.newBuilder()
+- [Google GKE Volume Snapshots](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/volume-snapshots)
- .setCollectionName(""{collection_name}"")
- .setVectorsConfig(
- VectorsConfig.newBuilder()
+```yaml
- .setParams(
+apiVersion: snapshot.storage.k8s.io/v1
- VectorParams.newBuilder()
+kind: VolumeSnapshotClass
- .setSize(768)
+metadata:
- .setDistance(Distance.Cosine)
+ name: csi-snapclass
- .build())
+deletionPolicy: Delete
- .build())
+driver: pd.csi.storage.gke.io
- .setOptimizersConfig(
+```
- OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build())
- .setHnswConfig(HnswConfigDiff.newBuilder().setOnDisk(true).build())
- .build())
+![Microsoft Azure](/documentation/cloud/cloud-providers/azure.jpg)
- .get();
-```
+## Microsoft Azure
-```csharp
-using Qdrant.Client;
+With [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-in/products/kubernetes-service), you can start developing and deploying cloud-native apps in Azure, data centres, or at the edge. Get unified management and governance for on-premises, edge, and multi-cloud Kubernetes clusters. Interoperate with Azure security, identity, cost management, and migration services.
-using Qdrant.Client.Grpc;
+First, consult Azure's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Azure**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
-var client = new QdrantClient(""localhost"", 6334);
+### More on Azure Kubernetes Service
-await client.CreateCollectionAsync(
- collectionName: ""{collection_name}"",
- vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
+- [Getting Started with AKS](https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks-start-here)
- optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 },
+- [AKS Documentation](https://learn.microsoft.com/en-in/azure/aks/)
- hnswConfig: new HnswConfigDiff { OnDisk = true }
+- [Best Practices with AKS](https://learn.microsoft.com/en-in/azure/aks/best-practices)
-);
-```
+To allow backups and restores, your AKS cluster needs the CSI VolumeSnapshot controller and class:
+- [Azure AKS Volume Snapshots](https://learn.microsoft.com/en-us/azure/aks/azure-disk-csi#create-a-volume-snapshot)
-## Payload storage
+```yaml
-Qdrant supports two types of payload storages: InMemory and OnDisk.
+apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshotClass
+metadata:
-InMemory payload storage is organized in the same way as in-memory vectors.
+ name: csi-snapclass
-The payload data is loaded into RAM at service startup while disk and [RocksDB](https://rocksdb.org/) are used for persistence only.
+deletionPolicy: Delete
-This type of storage works quite fast, but it may require a lot of space to keep all the data in RAM, especially if the payload has large values attached - abstracts of text or even images.
+driver: disk.csi.azure.com
+```
-In the case of large payload values, it might be better to use OnDisk payload storage.
-This type of storage will read and write payload directly to RocksDB, so it won't require any significant amount of RAM to store.
+![Oracle Cloud Infrastructure](/documentation/cloud/cloud-providers/oracle.jpg)
-The downside, however, is the access latency.
-If you need to query vectors with some payload-based conditions - checking values stored on disk might take too much time.
-In this scenario, we recommend creating a payload index for each field used in filtering conditions to avoid disk access.
+## Oracle Cloud Infrastructure
-Once you create the field index, Qdrant will preserve all values of the indexed field in RAM regardless of the payload storage type.
+[Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)](https://www.oracle.com/in/cloud/cloud-native/container-engine-kubernetes/) is a managed Kubernetes solution that enables you to deploy Kubernetes clusters while ensuring stable operations for both the control plane and the worker nodes through automatic scaling, upgrades, and security patching. Additionally, OKE offers a completely serverless Kubernetes experience with virtual nodes.
-You can specify the desired type of payload storage with [configuration file](../../guides/configuration/) or with collection parameter `on_disk_payload` during [creation](../collections/#create-collection) of the collection.
+First, consult OCI's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on OCI**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
-## Versioning
+### More on OCI Container Engine
-To ensure data integrity, Qdrant performs all data changes in 2 stages.
-In the first step, the data is written to the Write-ahead-log(WAL), which orders all operations and assigns them a sequential number.
+- [Getting Started with OCI](https://docs.oracle.com/en-us/iaas/Content/ContEng/home.htm)
+- [Frequently Asked Questions on OCI](https://www.oracle.com/in/cloud/cloud-native/container-engine-kubernetes/faq/)
-Once a change has been added to the WAL, it will not be lost even if a power loss occurs.
+- [OCI Product Updates](https://docs.oracle.com/en-us/iaas/releasenotes/services/conteng/)
-Then the changes go into the segments.
-Each segment stores the last version of the change applied to it as well as the version of each individual point.
-If the new change has a sequential number less than the current version of the point, the updater will ignore the change.
+To allow backups and restores, your OCI cluster needs the CSI VolumeSnapshot controller and class:
-This mechanism allows Qdrant to safely and efficiently restore the storage from the WAL in case of an abnormal shutdown.
-",documentation/concepts/storage.md
-"---
+- [Prerequisites for Creating Volume Snapshots](https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengcreatingpersistentvolumeclaim_topic-Provisioning_PVCs_on_BV.htm#contengcreatingpersistentvolumeclaim_topic-Provisioning_PVCs_on_BV-PV_From_Snapshot_CSI__section_volume-snapshot-prerequisites)
-title: Explore
-weight: 55
-aliases:
- - ../explore
+```yaml
----
+apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshotClass
+metadata:
-# Explore the data
+ name: csi-snapclass
+deletionPolicy: Delete
+driver: blockvolume.csi.oraclecloud.com
-After mastering the concepts in [search](../search), you can start exploring your data in other ways. Qdrant provides a stack of APIs that allow you to find similar vectors in a different fashion, as well as to find the most dissimilar ones. These are useful tools for recommendation systems, data exploration, and data cleaning.
+```
-## Recommendation API
+![OVHcloud](/documentation/cloud/cloud-providers/ovh.jpg)
-In addition to the regular search, Qdrant also allows you to search based on multiple positive and negative examples. The API is called ***recommend***, and the examples can be point IDs, so that you can leverage the already encoded objects; and, as of v1.6, you can also use raw vectors as input, so that you can create your vectors on the fly without uploading them as points.
+## OVHcloud
-REST API - API Schema definition is available [here](https://qdrant.github.io/qdrant/redoc/index.html#operation/recommend_points)
+[Service Managed Kubernetes](https://www.ovhcloud.com/en-in/public-cloud/kubernetes/) is powered by OVH Public Cloud Instances from OVHcloud, a leading European cloud provider, with OVHcloud Load Balancers and disks built in. OVHcloud Managed Kubernetes provides high availability, compliance, and CNCF conformance, allowing you to focus on your containerized software layers with total reversibility.
-```http
+First, consult OVHcloud's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on OVHcloud**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
-POST /collections/{collection_name}/points/recommend
-{
- ""positive"": [100, 231],
+### More on Service Managed Kubernetes by OVHcloud
- ""negative"": [718, [0.2, 0.3, 0.4, 0.5]],
- ""filter"": {
- ""must"": [
+- [Getting Started with OVH Managed Kubernetes](https://help.ovhcloud.com/csm/en-in-documentation-public-cloud-containers-orchestration-managed-kubernetes-k8s-getting-started)
- {
+- [OVH Managed Kubernetes Documentation](https://help.ovhcloud.com/csm/en-in-documentation-public-cloud-containers-orchestration-managed-kubernetes-k8s)
- ""key"": ""city"",
+- [OVH Managed Kubernetes Tutorials](https://help.ovhcloud.com/csm/en-in-documentation-public-cloud-containers-orchestration-managed-kubernetes-k8s-tutorials)
- ""match"": {
- ""value"": ""London""
- }
+![Red Hat](/documentation/cloud/cloud-providers/redhat.jpg)
- }
- ]
- },
+## Red Hat OpenShift
- ""strategy"": ""average_vector"",
- ""limit"": 3
-}
+[Red Hat OpenShift Kubernetes Engine](https://www.redhat.com/en/technologies/cloud-computing/openshift/kubernetes-engine) provides you with the basic functionality of Red Hat OpenShift. It offers a subset of the features that Red Hat OpenShift Container Platform offers, like full access to an enterprise-ready Kubernetes environment and an extensive compatibility test matrix with many of the software elements that you might use in your data centre.
-```
+First, consult Red Hat's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Red Hat OpenShift**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
-```python
-from qdrant_client import QdrantClient
-from qdrant_client.http import models
+### More on OpenShift Kubernetes Engine
-client = QdrantClient(""localhost"", port=6333)
+- [Getting Started with Red Hat OpenShift Kubernetes](https://docs.openshift.com/container-platform/4.15/getting_started/kubernetes-overview.html)
+- [Red Hat OpenShift Kubernetes Documentation](https://docs.openshift.com/container-platform/4.15/welcome/index.html)
+- [Installing on Container Platforms](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/html/installing/index)
-client.recommend(
- collection_name=""{collection_name}"",
- positive=[100, 231],
+Qdrant databases need a persistent storage solution. See [Openshift Storage Overview](https://docs.openshift.com/container-platform/4.15/storage/index.html).
- negative=[718, [0.2, 0.3, 0.4, 0.5]],
- strategy=models.RecommendStrategy.AVERAGE_VECTOR,
- query_filter=models.Filter(
+To allow vertical scaling, you need a StorageClass with [volume expansion enabled](https://docs.openshift.com/container-platform/4.15/storage/expanding-persistent-volumes.html).
- must=[
- models.FieldCondition(
- key=""city"",
+To allow backups and restores, your OpenShift cluster needs the [CSI snapshot controller](https://docs.openshift.com/container-platform/4.15/storage/container_storage_interface/persistent-storage-csi-snapshots.html), and you need to create a VolumeSnapshotClass.
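+As an illustration, such a VolumeSnapshotClass could look like the following sketch; the `driver` value is a placeholder and depends on the storage backend used in your OpenShift cluster:
+```yaml
+apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshotClass
+metadata:
+ name: csi-snapclass
+deletionPolicy: Delete
+# Placeholder: set this to the CSI driver of your storage backend
+driver: your-csi-driver.example.com
+```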
- match=models.MatchValue(
- value=""London"",
- ),
+![Scaleway](/documentation/cloud/cloud-providers/scaleway.jpg)
- )
- ]
- ),
+## Scaleway
- limit=3,
-)
-```
+[Scaleway Kapsule](https://www.scaleway.com/en/kubernetes-kapsule/) and [Kosmos](https://www.scaleway.com/en/kubernetes-kosmos/) are managed Kubernetes services from [Scaleway](https://www.scaleway.com/en/). They abstract away the complexities of managing and operating a Kubernetes cluster. The primary difference is that Kapsule clusters are composed solely of Scaleway Instances, whereas a Kosmos cluster is a managed multi-cloud Kubernetes engine that allows you to connect instances from any cloud provider to a single managed control plane.
-```typescript
+First, consult Scaleway's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Scaleway**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+### More on Scaleway Kubernetes
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+- [Getting Started with Scaleway Kubernetes](https://www.scaleway.com/en/docs/containers/kubernetes/quickstart/#how-to-add-a-scaleway-pool-to-a-kubernetes-cluster)
-client.recommend(""{collection_name}"", {
+- [Scaleway Kubernetes Documentation](https://www.scaleway.com/en/docs/containers/kubernetes/)
- positive: [100, 231],
+- [Frequently Asked Questions on Scaleway Kubernetes](https://www.scaleway.com/en/docs/faq/kubernetes/)
- negative: [718, [0.2, 0.3, 0.4, 0.5]],
- strategy: ""average_vector"",
- filter: {
+![STACKIT](/documentation/cloud/cloud-providers/stackit.jpg)
- must: [
- {
- key: ""city"",
+## STACKIT
- match: {
- value: ""London"",
- },
+[STACKIT Kubernetes Engine (SKE)](https://www.stackit.de/en/product/kubernetes/) is a robust, scalable, and managed Kubernetes service. SKE supplies a CNCF-compliant Kubernetes cluster and makes it easy to provide standard Kubernetes applications and containerized workloads. User-defined Kubernetes clusters can be created as self-service without complications using the STACKIT Portal.
- },
- ],
- },
+First, consult STACKIT's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on STACKIT**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
- limit: 3,
-});
-```
+### More on STACKIT Kubernetes Engine
-```rust
+- [Getting Started with SKE](https://docs.stackit.cloud/stackit/en/getting-started-ske-10125565.html)
-use qdrant_client::{
+- [SKE Tutorials](https://docs.stackit.cloud/stackit/en/tutorials-ske-66683162.html)
- client::QdrantClient,
+- [Frequently Asked Questions on SKE](https://docs.stackit.cloud/stackit/en/faq-known-issues-of-ske-28476393.html)
- qdrant::{Condition, Filter, RecommendPoints, RecommendStrategy},
-};
+To allow backups and restores, you need to create a VolumeSnapshotClass:
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+```yaml
+apiVersion: snapshot.storage.k8s.io/v1
-client
+kind: VolumeSnapshotClass
- .recommend(&RecommendPoints {
+metadata:
- collection_name: ""{collection_name}"".to_string(),
+ name: csi-snapclass
- positive: vec![100.into(), 200.into()],
+deletionPolicy: Delete
- positive_vectors: vec![vec![100.0, 231.0].into()],
+driver: cinder.csi.openstack.org
- negative: vec![718.into()],
+```
- negative_vectors: vec![vec![0.2, 0.3, 0.4, 0.5].into()],
- strategy: Some(RecommendStrategy::AverageVector.into()),
- filter: Some(Filter::must([Condition::matches(
+![Vultr](/documentation/cloud/cloud-providers/vultr.jpg)
- ""city"",
- ""London"".to_string(),
- )])),
+## Vultr
- limit: 3,
- ..Default::default()
- })
+[Vultr Kubernetes Engine (VKE)](https://www.vultr.com/kubernetes/) is a fully-managed product offering with predictable pricing that makes Kubernetes easy to use. Vultr manages the control plane and worker nodes and provides integration with other managed services such as Load Balancers, Block Storage, and DNS.
- .await?;
-```
+First, consult Vultr's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Vultr**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/).
-```java
-import java.util.List;
+### More on Vultr Kubernetes Engine
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+- [VKE Guide](https://docs.vultr.com/vultr-kubernetes-engine)
-import static io.qdrant.client.PointIdFactory.id;
+- [VKE Documentation](https://docs.vultr.com/)
-import static io.qdrant.client.VectorFactory.vector;
+- [Frequently Asked Questions on VKE](https://docs.vultr.com/vultr-kubernetes-engine#frequently-asked-questions)
-import io.qdrant.client.QdrantClient;
+At the time of writing, Vultr does not support CSI Volume Snapshots.
-import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Points.Filter;
-import io.qdrant.client.grpc.Points.RecommendPoints;
+![Kubernetes](/documentation/cloud/cloud-providers/kubernetes.jpg)
-import io.qdrant.client.grpc.Points.RecommendStrategy;
+## Generic Kubernetes Support (on-premises, cloud, edge)
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+Qdrant Hybrid Cloud works with any Kubernetes cluster that meets the [standard compliance](https://www.cncf.io/training/certification/software-conformance/) requirements.
-client
- .recommendAsync(
+This includes, for example:
- RecommendPoints.newBuilder()
- .setCollectionName(""{collection_name}"")
- .addAllPositive(List.of(id(100), id(200)))
+- [VMware Tanzu](https://tanzu.vmware.com/kubernetes-grid)
- .addAllPositiveVectors(List.of(vector(100.0f, 231.0f)))
+- [Red Hat OpenShift](https://www.openshift.com/)
- .addAllNegative(List.of(id(718)))
+- [SUSE Rancher](https://www.rancher.com/)
- .addAllPositiveVectors(List.of(vector(0.2f, 0.3f, 0.4f, 0.5f)))
+- [Canonical Kubernetes](https://ubuntu.com/kubernetes)
- .setStrategy(RecommendStrategy.AverageVector)
+- [RKE](https://rancher.com/docs/rke/latest/en/)
- .setFilter(Filter.newBuilder().addMust(matchKeyword(""city"", ""London"")))
+- [RKE2](https://docs.rke2.io/)
- .setLimit(3)
+- [K3s](https://k3s.io/)
- .build())
- .get();
-```
+Qdrant databases need persistent block storage. Most storage solutions provide a CSI driver that can be used with Kubernetes. See [CSI drivers](https://kubernetes-csi.github.io/docs/drivers.html) for more information.
-Example result of this API would be
+To allow vertical scaling, you need a StorageClass with volume expansion enabled. See [Volume Expansion](https://kubernetes.io/docs/concepts/storage/storage-classes/#allow-volume-expansion) for more information.
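+As a sketch, such a StorageClass could look like this; the name is arbitrary and the `provisioner` value is a placeholder for your CSI driver's provisioner:
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: qdrant-storage
+# Placeholder: set this to your CSI driver's provisioner name
+provisioner: your-csi-driver.example.com
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
+allowVolumeExpansion: true
+```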
-```json
+To allow backups and restores, your CSI driver needs to support volume snapshots, and your cluster needs the CSI VolumeSnapshot controller and a VolumeSnapshotClass. See [CSI Volume Snapshots](https://kubernetes-csi.github.io/docs/snapshot-controller.html) for more information.
-{
- ""result"": [
- { ""id"": 10, ""score"": 0.81 },
+## Next Steps
- { ""id"": 14, ""score"": 0.75 },
- { ""id"": 11, ""score"": 0.73 }
- ],
+Once you've got a Kubernetes cluster deployed on a platform of your choosing, you can begin setting up Qdrant Hybrid Cloud. Head to our Qdrant Hybrid Cloud [setup guide](/documentation/hybrid-cloud/hybrid-cloud-setup/) for instructions.
+",documentation/hybrid-cloud/platform-deployment-options.md
+"---
- ""status"": ""ok"",
+title: Create a Cluster
- ""time"": 0.001
+weight: 2
-}
+---
-```
+# Creating a Qdrant Cluster in Hybrid Cloud
-The algorithm used to get the recommendations is selected from the available `strategy` options. Each of them has its own strengths and weaknesses, so experiment and choose the one that works best for your case.
+Once you have created a Hybrid Cloud Environment, you can create a Qdrant cluster in that environment. Use the same process to [Create a cluster](/documentation/cloud/create-cluster/). Make sure to select your Hybrid Cloud Environment as the target.
-### Average vector strategy
+Note that in the ""Kubernetes Configuration"" section you can additionally configure:
-The default and first strategy added to Qdrant is called `average_vector`. It preprocesses the input examples to create a single vector that is used for the search. Since the preprocessing step happens very fast, the performance of this strategy is on-par with regular search. The intuition behind this kind of recommendation is that each vector component represents an independent feature of the data, so, by averaging the examples, we should get a good recommendation.
+* Node selectors for the Qdrant database pods
-The way to produce the searching vector is by first averaging all the positive and negative examples separately, and then combining them into a single vector using the following formula:
+* Tolerations for the Qdrant database pods
+* Additional labels for the Qdrant database pods
+* A service type and annotations for the Qdrant database service
-```rust
-avg_positive + avg_positive - avg_negative
-```
+These settings can also be changed after the cluster is created on the cluster detail page.
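+As an illustration, node selectors and tolerations follow the standard Kubernetes syntax; the keys, values, and labels below are placeholders only:
+```yaml
+nodeSelector:
+ kubernetes.io/arch: amd64
+tolerations:
+- key: dedicated
+  operator: Equal
+  value: qdrant
+  effect: NoSchedule
+```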
-In the case of not having any negative examples, the search vector will simply be equal to `avg_positive`.
+### Authentication to your Qdrant clusters
-This is the default strategy that's going to be set implicitly, but you can explicitly define it by setting `""strategy"": ""average_vector""` in the recommendation request.
+In Hybrid Cloud the authentication information is provided by Kubernetes secrets.
-### Best score strategy
+You can configure authentication for your Qdrant clusters in the ""Configuration"" section of the Qdrant Cluster detail page. There you can configure the Kubernetes secret name and key to be used as an API key and/or read-only API key.
-*Available as of v1.6.0*
+One way to create a secret is with kubectl:
-A new strategy introduced in v1.6, is called `best_score`. It is based on the idea that the best way to find similar vectors is to find the ones that are closer to a positive example, while avoiding the ones that are closer to a negative one.
+```shell
-The way it works is that each candidate is measured against every example, then we select the best positive and best negative scores. The final score is chosen with this step formula:
+kubectl create secret generic qdrant-api-key --from-literal=api-key=your-secret-api-key --namespace the-qdrant-namespace
+```
-```rust
-let score = if best_positive_score > best_negative_score {
+The resulting secret will look like this:
- best_positive_score;
-} else {
- -(best_negative_score * best_negative_score);
+```yaml
-};
+apiVersion: v1
-```
+data:
+ api-key: ...
+kind: Secret
-
+metadata:
+ name: qdrant-api-key
+ namespace: the-qdrant-namespace
+type: Opaque
+```
-Since we are computing similarities to every example at each step of the search, the performance of this strategy will be linearly impacted by the amount of examples. This means that the more examples you provide, the slower the search will be. However, this strategy can be very powerful and should be more embedding-agnostic.
+With this command the secret name would be `qdrant-api-key` and the key would be `api-key`.
-
+If you want to retrieve the secret again, you can also use `kubectl`:
-To use this algorithm, you need to set `""strategy"": ""best_score""` in the recommendation request.
+```shell
+kubectl get secret qdrant-api-key -o jsonpath=""{.data.api-key}"" --namespace the-qdrant-namespace | base64 --decode
+```
-#### Using only negative examples
+### Exposing Qdrant clusters to your client applications
-A beneficial side-effect of `best_score` strategy is that you can use it with only negative examples. This will allow you to find the most dissimilar vectors to the ones you provide. This can be useful for finding outliers in your data, or for finding the most dissimilar vectors to a given one.
+You can expose your Qdrant clusters to your client applications using Kubernetes services and ingresses. By default, a `ClusterIP` service is created for each Qdrant cluster.
-Combining negative-only examples with filtering can be a powerful tool for data exploration and cleaning.
+Within your Kubernetes cluster, you can access the Qdrant cluster using the service name and port:
-### Multiple vectors
+```
-*Available as of v0.10.0*
+http://qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24.qdrant-namespace.svc:6333
+```
-If the collection was created with multiple vectors, the name of the vector should be specified in the recommendation request:
+This endpoint is also visible on the cluster detail page.
-```http
-POST /collections/{collection_name}/points/recommend
+If you want to access the database from your local developer machine, you can use `kubectl port-forward` to forward the service port to your local machine:
-{
- ""positive"": [100, 231],
- ""negative"": [718],
+```
- ""using"": ""image"",
+kubectl --namespace your-qdrant-namespace port-forward service/qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24 6333:6333
- ""limit"": 10
+```
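+With the port forwarded, you can verify the connection from your local machine, for example with `curl`, assuming the API key from the secret created above:
+```shell
+curl http://localhost:6333/collections --header ""api-key: your-secret-api-key""
+```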
- }
-```
+You can also expose the database outside the Kubernetes cluster with a `LoadBalancer` service (if supported in your Kubernetes environment), a `NodePort` service, or an ingress.
-```python
-client.recommend(
+The service type and necessary annotations can be configured in the ""Kubernetes Configuration"" section during cluster creation, or on the cluster detail page.
- collection_name=""{collection_name}"",
- positive=[100, 231],
- negative=[718],
+Especially if you create a LoadBalancer service, you may need to provide annotations for the load balancer configuration (see the sketch after the examples below). Please refer to the documentation of your cloud provider for more details.
- using=""image"",
- limit=10,
-)
+Examples:
-```
+* [AWS EKS LoadBalancer annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/)
-```typescript
+* [Azure AKS Public LoadBalancer annotations](https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard)
-client.recommend(""{collection_name}"", {
+* [Azure AKS Internal LoadBalancer annotations](https://learn.microsoft.com/en-us/azure/aks/internal-lb)
- positive: [100, 231],
+* [GCP GKE LoadBalancer annotations](https://cloud.google.com/kubernetes-engine/docs/concepts/service-load-balancer-parameters)
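+As a sketch, such annotations go into the service metadata. For example, on AWS with the AWS Load Balancer Controller, an internet-facing load balancer could be requested roughly like this; treat the exact annotations as provider-specific and check the links above:
+```yaml
+metadata:
+ annotations:
+  service.beta.kubernetes.io/aws-load-balancer-type: external
+  service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
+```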
- negative: [718],
- using: ""image"",
- limit: 10,
+You could also create a LoadBalancer service manually like this:
-});
-```
+```yaml
+apiVersion: v1
-```rust
+kind: Service
-use qdrant_client::qdrant::RecommendPoints;
+metadata:
+ name: qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24-lb
+ namespace: qdrant-namespace
-client
+spec:
- .recommend(&RecommendPoints {
+ type: LoadBalancer
- collection_name: ""{collection_name}"".to_string(),
+ ports:
- positive: vec![100.into(), 231.into()],
+ - name: http
- negative: vec![718.into()],
+   port: 6333
- using: Some(""image"".to_string()),
+ - name: grpc
- limit: 10,
+   port: 6334
- ..Default::default()
+ selector:
- })
+   app: qdrant
- .await?;
+   cluster-id: 9a9f48c7-bb90-4fb2-816f-418a46a74b24
```
-```java
+An ingress could look like this:
-import java.util.List;
+```yaml
-import static io.qdrant.client.PointIdFactory.id;
-
-
-
-import io.qdrant.client.grpc.Points.RecommendPoints;
-
-
-
-client
+apiVersion: networking.k8s.io/v1
- .recommendAsync(
+kind: Ingress
- RecommendPoints.newBuilder()
+metadata:
- .setCollectionName(""{collection_name}"")
+ name: qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24
- .addAllPositive(List.of(id(100), id(231)))
+ namespace: qdrant-namespace
- .addAllNegative(List.of(id(718)))
+spec:
- .setUsing(""image"")
+ rules:
- .setLimit(10)
+ - host: qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24.your-domain.com
- .build())
+   http:
- .get();
+    paths:
-```
+    - path: /
+      pathType: Prefix
+      backend:
-```csharp
+        service:
-using Qdrant.Client;
+          name: qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24
+          port:
+            number: 6333
-var client = new QdrantClient(""localhost"", 6334);
+```
-await client.RecommendAsync(
+Please refer to the Kubernetes, ingress controller and cloud provider documentation for more details.
- collectionName: ""{collection_name}"",
- positive: new ulong[] { 100, 231 },
- negative: new ulong[] { 718 },
+If you expose the database like this, it will also be reflected as an endpoint on the cluster detail page, and the Qdrant database dashboard link will point to it.
- usingVector: ""image"",
- limit: 10
-);
+### Configuring TLS
-```
+If you want to configure TLS for accessing your Qdrant database in Hybrid Cloud, there are two options:
-Parameter `using` specifies which stored vectors to use for the recommendation.
+* You can offload TLS at the ingress or loadbalancer level.
-### Lookup vectors from another collection
+* You can configure TLS directly in the Qdrant database.
-*Available as of v0.11.6*
+If you want to configure TLS directly in the Qdrant database, you can reference a secret containing the TLS certificate and key in the ""Configuration"" section of the Qdrant Cluster detail page.
-If you have collections with vectors of the same dimensionality,
+To create such a secret, you can use `kubectl`:
-and you want to look for recommendations in one collection based on the vectors of another collection,
-you can use the `lookup_from` parameter.
+```shell
+ kubectl create secret tls qdrant-tls --cert=mydomain.com.crt --key=mydomain.com.key --namespace the-qdrant-namespace
-It might be useful, e.g. in the item-to-user recommendations scenario.
+```
-Where user and item embeddings, although having the same vector parameters (distance type and dimensionality), are usually stored in different collections.
+The resulting secret will look like this:
-```http
-POST /collections/{collection_name}/points/recommend
+```yaml
+apiVersion: v1
-{
+data:
- ""positive"": [100, 231],
+ tls.crt: ...
- ""negative"": [718],
+ tls.key: ...
- ""using"": ""image"",
+kind: Secret
- ""limit"": 10,
+metadata:
- ""lookup_from"": {
+ name: qdrant-tls
- ""collection"":""{external_collection_name}"",
+ namespace: the-qdrant-namespace
- ""vector"":""{external_vector_name}""
+type: kubernetes.io/tls
- }
+```
-}
-```
+With this command the secret name to enter into the UI would be `qdrant-tls` and the keys would be `tls.crt` and `tls.key`.",documentation/hybrid-cloud/hybrid-cloud-cluster-creation.md
+"---
+title: Hybrid Cloud
-```python
+weight: 9
-client.recommend(
+---
- collection_name=""{collection_name}"",
- positive=[100, 231],
- negative=[718],
+# Qdrant Hybrid Cloud
- using=""image"",
- limit=10,
- lookup_from=models.LookupLocation(
+Seamlessly deploy and manage your vector database across diverse environments, ensuring performance, security, and cost efficiency for AI-driven applications.
- collection=""{external_collection_name}"",
- vector=""{external_vector_name}""
- ),
+[Qdrant Hybrid Cloud](/hybrid-cloud/) integrates Kubernetes clusters from any setting - cloud, on-premises, or edge - into a unified, enterprise-grade managed service.
-)
-```
+You can use [Qdrant Cloud's UI](/documentation/cloud/create-cluster/) to create and manage your database clusters, while they still remain within your infrastructure. **All Qdrant databases will operate solely within your network, using your storage and compute resources. All user data will stay securely within your environment and won't be accessible by the Qdrant Cloud platform, or anyone else outside your organization.**
-```typescript
-client.recommend(""{collection_name}"", {
+Qdrant Hybrid Cloud ensures data privacy, deployment flexibility, low latency, and delivers cost savings, elevating standards for vector search and AI applications.
- positive: [100, 231],
- negative: [718],
- using: ""image"",
+**How it works:** Qdrant Hybrid Cloud relies on Kubernetes and works with any standards-compliant Kubernetes distribution. When you onboard a Kubernetes cluster as a Hybrid Cloud Environment, you can deploy the Qdrant Kubernetes Operator and Cloud Agent into this cluster. These will manage Qdrant databases within your Kubernetes cluster and establish an outgoing connection to Qdrant Cloud to transport telemetry and receive management instructions. You can then benefit from the same cloud management features and telemetry that are available with any managed Qdrant Cloud cluster.
- limit: 10,
- lookup_from: {
- ""collection"" : ""{external_collection_name}"",
+
- ""vector"" : ""{external_vector_name}""
- },
-});
+**Setup instructions:** To begin using Qdrant Hybrid Cloud, [read our installation guide](/documentation/hybrid-cloud/hybrid-cloud-setup/).
-```
+## Hybrid Cloud architecture
-```rust
-use qdrant_client::qdrant::{LookupLocation, RecommendPoints};
+The Hybrid Cloud onboarding will install a Kubernetes Operator and Cloud Agent into your Kubernetes cluster.
-client
- .recommend(&RecommendPoints {
+The Cloud Agent will establish an outgoing connection to `cloud.qdrant.io` on port `443` to transport telemetry and receive management instructions. It will also interact with the Kubernetes API through a ServiceAccount to create, read, update and delete the necessary Qdrant CRs (Custom Resources) based on the configuration setup in the Qdrant Cloud Console.
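+If your environment restricts egress traffic, you can verify that outgoing HTTPS connections to the platform are possible from within the cluster. This is only a sketch; the pod name and image are arbitrary choices:
+```shell
+# Runs a temporary pod that sends a HEAD request to the Qdrant Cloud endpoint
+kubectl run egress-test --rm -i --restart=Never --image=curlimages/curl -- curl -sSI https://cloud.qdrant.io
+```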
- collection_name: ""{collection_name}"".to_string(),
- positive: vec![100.into(), 231.into()],
- negative: vec![718.into()],
+The Qdrant Kubernetes Operator will manage the Qdrant databases within your Kubernetes cluster. Based on the Qdrant CRs, it will interact with the Kubernetes API through a ServiceAccount to create and manage the necessary resources to deploy and run Qdrant databases, such as Pods, Services, ConfigMaps, and Secrets.
- using: Some(""image"".to_string()),
- limit: 10,
- lookup_from: Some(LookupLocation {
+Both components' access is limited to the Kubernetes namespace that you chose during the onboarding process.
- collection_name: ""{external_collection_name}"".to_string(),
- vector_name: Some(""{external_vector_name}"".to_string()),
- ..Default::default()
+After the initial onboarding, the lifecycle of these components will be controlled by the Qdrant Cloud platform via the built-in Helm controller.
- }),
- ..Default::default()
- })
+You don't need to expose your Kubernetes Cluster to the Qdrant Cloud platform, you don't need to open any ports for incoming traffic, and you don't need to provide any Kubernetes or cloud provider credentials to the Qdrant Cloud platform.
- .await?;
-```
+![hybrid-cloud-architecture](/blog/hybrid-cloud/hybrid-cloud-architecture.png)
+",documentation/hybrid-cloud/_index.md
+"---
+title: Multitenancy
-```java
+weight: 12
-import java.util.List;
+aliases:
+ - ../tutorials/multiple-partitions
+ - /tutorials/multiple-partitions/
-import static io.qdrant.client.PointIdFactory.id;
+---
+# Configure Multitenancy
-import io.qdrant.client.grpc.Points.LookupLocation;
-import io.qdrant.client.grpc.Points.RecommendPoints;
+**How many collections should you create?** In most cases, you should only use a single collection with payload-based partitioning. This approach is called multitenancy. It is efficient for most users, but it requires additional configuration. This document will show you how to set it up.
-client
+**When should you create multiple collections?** When you have a limited number of users and you need isolation. This approach is flexible, but it may be more costly, since creating numerous collections may result in resource overhead. Also, you need to ensure that they do not affect each other in any way, including performance-wise.
- .recommendAsync(
- RecommendPoints.newBuilder()
- .setCollectionName(""{collection_name}"")
+## Partition by payload
- .addAllPositive(List.of(id(100), id(231)))
- .addAllNegative(List.of(id(718)))
- .setUsing(""image"")
+When an instance is shared between multiple users, you may need to partition vectors by user. This is done so that each user can only access their own vectors and can't see the vectors of other users.
- .setLimit(10)
- .setLookupFrom(
- LookupLocation.newBuilder()
+> ### NOTE
- .setCollectionName(""{external_collection_name}"")
+>
- .setVectorName(""{external_vector_name}"")
+> The key doesn't necessarily need to be named `group_id`. You can choose a name that best suits your data structure and naming conventions.
- .build())
- .build())
- .get();
+1. Add a `group_id` field to each vector in the collection.
-```
+```http
-```csharp
+PUT /collections/{collection_name}/points
-using Qdrant.Client;
+{
-using Qdrant.Client.Grpc;
+ ""points"": [
+ {
+ ""id"": 1,
-var client = new QdrantClient(""localhost"", 6334);
+ ""payload"": {""group_id"": ""user_1""},
+ ""vector"": [0.9, 0.1, 0.1]
+ },
-await client.RecommendAsync(
+ {
- collectionName: ""{collection_name}"",
+ ""id"": 2,
- positive: new ulong[] { 100, 231 },
+ ""payload"": {""group_id"": ""user_1""},
- negative: new ulong[] { 718 },
+ ""vector"": [0.1, 0.9, 0.1]
- usingVector: ""image"",
+ },
- limit: 10,
+ {
- lookupFrom: new LookupLocation
+ ""id"": 3,
- {
+ ""payload"": {""group_id"": ""user_2""},
- CollectionName = ""{external_collection_name}"",
+ ""vector"": [0.1, 0.1, 0.9]
- VectorName = ""{external_vector_name}"",
+ }
- }
+ ]
-);
+}
```
-Vectors are retrieved from the external collection by ids provided in the `positive` and `negative` lists.
-
-These vectors then used to perform the recommendation in the current collection, comparing against the ""using"" or default vector.
+```python
+client.upsert(
+ collection_name=""{collection_name}"",
+ points=[
+ models.PointStruct(
-## Batch recommendation API
+ id=1,
+ payload={""group_id"": ""user_1""},
+ vector=[0.9, 0.1, 0.1],
-*Available as of v0.10.0*
+ ),
+ models.PointStruct(
+ id=2,
-Similar to the batch search API in terms of usage and advantages, it enables the batching of recommendation requests.
+ payload={""group_id"": ""user_1""},
+ vector=[0.1, 0.9, 0.1],
+ ),
-```http
+ models.PointStruct(
-POST /collections/{collection_name}/points/recommend/batch
+ id=3,
-{
+ payload={""group_id"": ""user_2""},
- ""searches"": [
+ vector=[0.1, 0.1, 0.9],
- {
+ ),
- ""filter"": {
+ ],
- ""must"": [
+)
- {
+```
- ""key"": ""city"",
- ""match"": {
- ""value"": ""London""
+```typescript
- }
+import { QdrantClient } from ""@qdrant/js-client-rest"";
- }
- ]
- },
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
- ""negative"": [718],
- ""positive"": [100, 231],
- ""limit"": 10
+client.upsert(""{collection_name}"", {
- },
+ points: [
- {
+ {
- ""filter"": {
+ id: 1,
- ""must"": [
+ payload: { group_id: ""user_1"" },
- {
+ vector: [0.9, 0.1, 0.1],
- ""key"": ""city"",
+ },
- ""match"": {
+ {
- ""value"": ""London""
+ id: 2,
- }
+ payload: { group_id: ""user_1"" },
- }
+ vector: [0.1, 0.9, 0.1],
- ]
+ },
- },
+ {
- ""negative"": [300],
+ id: 3,
- ""positive"": [200, 67],
+ payload: { group_id: ""user_2"" },
- ""limit"": 10
+ vector: [0.1, 0.1, 0.9],
- }
+ },
- ]
+ ],
-}
+});
```
-```python
+```rust
-from qdrant_client import QdrantClient
+use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder};
-from qdrant_client.http import models
+use qdrant_client::Qdrant;
-client = QdrantClient(""localhost"", port=6333)
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-filter = models.Filter(
+client
- must=[
+ .upsert_points(UpsertPointsBuilder::new(
- models.FieldCondition(
+ ""{collection_name}"",
- key=""city"",
+ vec![
- match=models.MatchValue(
+ PointStruct::new(1, vec![0.9, 0.1, 0.1], [(""group_id"", ""user_1"".into())]),
- value=""London"",
+ PointStruct::new(2, vec![0.1, 0.9, 0.1], [(""group_id"", ""user_1"".into())]),
- ),
+ PointStruct::new(3, vec![0.1, 0.1, 0.9], [(""group_id"", ""user_2"".into())]),
- )
+ ],
- ]
+ ))
-)
+ .await?;
+```
-recommend_queries = [
- models.RecommendRequest(
+```java
- positive=[100, 231], negative=[718], filter=filter, limit=3
+import java.util.List;
- ),
+import java.util.Map;
- models.RecommendRequest(positive=[200, 67], negative=[300], filter=filter, limit=3),
-]
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
-client.recommend_batch(collection_name=""{collection_name}"", requests=recommend_queries)
+import io.qdrant.client.grpc.Points.PointStruct;
-```
+QdrantClient client =
-```typescript
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+client
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ .upsertAsync(
+ ""{collection_name}"",
+ List.of(
-const filter = {
+ PointStruct.newBuilder()
- must: [
+ .setId(id(1))
- {
+ .setVectors(vectors(0.9f, 0.1f, 0.1f))
- key: ""city"",
+ .putAllPayload(Map.of(""group_id"", value(""user_1"")))
- match: {
+ .build(),
- value: ""London"",
+ PointStruct.newBuilder()
- },
+ .setId(id(2))
- },
+ .setVectors(vectors(0.1f, 0.9f, 0.1f))
- ],
+ .putAllPayload(Map.of(""group_id"", value(""user_1"")))
-};
+ .build(),
+ PointStruct.newBuilder()
+ .setId(id(3))
-const searches = [
+ .setVectors(vectors(0.1f, 0.1f, 0.9f))
- {
+ .putAllPayload(Map.of(""group_id"", value(""user_2"")))
- positive: [100, 231],
+ .build()))
- negative: [718],
+ .get();
- filter,
+```
- limit: 3,
- },
- {
+```csharp
- positive: [200, 67],
+using Qdrant.Client;
- negative: [300],
+using Qdrant.Client.Grpc;
- filter,
- limit: 3,
- },
+var client = new QdrantClient(""localhost"", 6334);
-];
+await client.UpsertAsync(
-client.recommend_batch(""{collection_name}"", {
+ collectionName: ""{collection_name}"",
- searches,
+ points: new List<PointStruct>
-});
+ {
-```
+ new()
+ {
+ Id = 1,
-```rust
+ Vectors = new[] { 0.9f, 0.1f, 0.1f },
-use qdrant_client::{
+ Payload = { [""group_id""] = ""user_1"" }
- client::QdrantClient,
+ },
- qdrant::{Condition, Filter, RecommendBatchPoints, RecommendPoints},
+ new()
-};
+ {
+ Id = 2,
+ Vectors = new[] { 0.1f, 0.9f, 0.1f },
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ Payload = { [""group_id""] = ""user_1"" }
+ },
+ new()
-let filter = Filter::must([Condition::matches(""city"", ""London"".to_string())]);
+ {
+ Id = 3,
+ Vectors = new[] { 0.1f, 0.1f, 0.9f },
-let recommend_queries = vec![
+ Payload = { [""group_id""] = ""user_2"" }
- RecommendPoints {
+ }
- collection_name: ""{collection_name}"".to_string(),
+ }
- positive: vec![100.into(), 231.into()],
+);
- negative: vec![718.into()],
+```
- filter: Some(filter.clone()),
- limit: 3,
- ..Default::default()
+```go
- },
+import (
- RecommendPoints {
+ ""context""
- collection_name: ""{collection_name}"".to_string(),
- positive: vec![200.into(), 67.into()],
- negative: vec![300.into()],
+ ""github.com/qdrant/go-client/qdrant""
- filter: Some(filter),
+)
- limit: 3,
- ..Default::default()
- },
+client, err := qdrant.NewClient(&qdrant.Config{
-];
+ Host: ""localhost"",
+ Port: 6334,
+})
-client
- .recommend_batch(&RecommendBatchPoints {
- collection_name: ""{collection_name}"".to_string(),
+client.Upsert(context.Background(), &qdrant.UpsertPoints{
- recommend_points: recommend_queries,
+ CollectionName: ""{collection_name}"",
- ..Default::default()
+ Points: []*qdrant.PointStruct{
- })
+ {
- .await?;
+ Id: qdrant.NewIDNum(1),
-```
+ Vectors: qdrant.NewVectors(0.9, 0.1, 0.1),
+ Payload: qdrant.NewValueMap(map[string]any{""group_id"": ""user_1""}),
+ },
-```java
+ {
-import java.util.List;
+ Id: qdrant.NewIDNum(2),
+ Vectors: qdrant.NewVectors(0.1, 0.9, 0.1),
+ Payload: qdrant.NewValueMap(map[string]any{""group_id"": ""user_1""}),
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+ },
-import static io.qdrant.client.PointIdFactory.id;
+ {
+ Id: qdrant.NewIDNum(3),
+ Vectors: qdrant.NewVectors(0.1, 0.1, 0.9),
-import io.qdrant.client.QdrantClient;
+ Payload: qdrant.NewValueMap(map[string]any{""group_id"": ""user_2""}),
-import io.qdrant.client.QdrantGrpcClient;
+ },
-import io.qdrant.client.grpc.Points.Filter;
+ },
-import io.qdrant.client.grpc.Points.RecommendPoints;
+})
+```
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+2. Use a filter along with `group_id` to filter vectors for each user.
-Filter filter = Filter.newBuilder().addMust(matchKeyword(""city"", ""London"")).build();
+```http
+POST /collections/{collection_name}/points/query
+{
-List recommendQueries =
+ ""query"": [0.1, 0.1, 0.9],
- List.of(
+ ""filter"": {
- RecommendPoints.newBuilder()
+ ""must"": [
- .addAllPositive(List.of(id(100), id(231)))
+ {
- .addAllNegative(List.of(id(718)))
+ ""key"": ""group_id"",
- .setFilter(filter)
+ ""match"": {
- .setLimit(3)
+ ""value"": ""user_1""
- .build(),
+ }
- RecommendPoints.newBuilder()
+ }
- .addAllPositive(List.of(id(200), id(67)))
+ ]
- .addAllNegative(List.of(id(300)))
+ },
- .setFilter(filter)
+ ""limit"": 10
- .setLimit(3)
+}
- .build());
+```
-client.recommendBatchAsync(""{collection_name}"", recommendQueries, null).get();
+```python
-```
+from qdrant_client import QdrantClient, models
-```csharp
+client = QdrantClient(url=""http://localhost:6333"")
-using Qdrant.Client;
-using Qdrant.Client.Grpc;
-using static Qdrant.Client.Grpc.Conditions;
+client.query_points(
+ collection_name=""{collection_name}"",
+ query=[0.1, 0.1, 0.9],
-var client = new QdrantClient(""localhost"", 6334);
+ query_filter=models.Filter(
+ must=[
+ models.FieldCondition(
-var filter = MatchKeyword(""city"", ""london"");
+ key=""group_id"",
+ match=models.MatchValue(
+ value=""user_1"",
-await client.RecommendBatchAsync(
+ ),
- collectionName: ""{collection_name}"",
+ )
- recommendSearches:
+ ]
- [
+ ),
- new()
+ limit=10,
- {
+)
- CollectionName = ""{collection_name}"",
+```
- Positive = { new PointId[] { 100, 231 } },
- Negative = { new PointId[] { 718 } },
- Limit = 3,
+```typescript
- Filter = filter,
+import { QdrantClient } from ""@qdrant/js-client-rest"";
- },
- new()
- {
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
- CollectionName = ""{collection_name}"",
- Positive = { new PointId[] { 200, 67 } },
- Negative = { new PointId[] { 300 } },
+client.query(""{collection_name}"", {
- Limit = 3,
+ query: [0.1, 0.1, 0.9],
- Filter = filter,
+ filter: {
- }
+ must: [{ key: ""group_id"", match: { value: ""user_1"" } }],
- ]
+ },
-);
+ limit: 10,
-```
+});
+```
-The result of this API contains one array per recommendation requests.
+```rust
+use qdrant_client::qdrant::{Condition, Filter, QueryPointsBuilder};
-```json
+use qdrant_client::Qdrant;
-{
- ""result"": [
- [
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- { ""id"": 10, ""score"": 0.81 },
- { ""id"": 14, ""score"": 0.75 },
- { ""id"": 11, ""score"": 0.73 }
+client
- ],
+ .query(
- [
+ QueryPointsBuilder::new(""{collection_name}"")
- { ""id"": 1, ""score"": 0.92 },
+ .query(vec![0.1, 0.1, 0.9])
- { ""id"": 3, ""score"": 0.89 },
+ .limit(10)
- { ""id"": 9, ""score"": 0.75 }
+ .filter(Filter::must([Condition::matches(
- ]
+ ""group_id"",
- ],
+ ""user_1"".to_string(),
- ""status"": ""ok"",
+ )])),
- ""time"": 0.001
+ )
-}
+ .await?;
```
-## Discovery API
+```java
+import java.util.List;
-*Available as of v1.7*
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
-REST API Schema definition available [here](https://qdrant.github.io/qdrant/redoc/index.html#tag/points/operation/discover_points)
+import io.qdrant.client.grpc.Points.Filter;
+import io.qdrant.client.grpc.Points.QueryPoints;
-In this API, Qdrant introduces the concept of `context`, which is used for splitting the space. Context is a set of positive-negative pairs, and each pair divides the space into positive and negative zones. In that mode, the search operation prefers points based on how many positive zones they belong to (or how much they avoid negative zones).
+import static io.qdrant.client.QueryFactory.nearest;
+import static io.qdrant.client.ConditionFactory.matchKeyword;
-The interface for providing context is similar to the recommendation API (ids or raw vectors). Still, in this case, they need to be provided in the form of positive-negative pairs.
+QdrantClient client =
-Discovery API lets you do two new types of search:
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-- **Discovery search**: Uses the context (the pairs of positive-negative vectors) and a target to return the points more similar to the target, but constrained by the context.
-- **Context search**: Using only the context pairs, get the points that live in the best zone, where loss is minimized
+client.queryAsync(
+ QueryPoints.newBuilder()
-The way positive and negative examples should be arranged in the context pairs is completely up to you. So you can have the flexibility of trying out different permutation techniques based on your model and data.
+ .setCollectionName(""{collection_name}"")
+ .setFilter(
+ Filter.newBuilder().addMust(matchKeyword(""group_id"", ""user_1"")).build())
-
+ .setQuery(nearest(0.1f, 0.1f, 0.9f))
+ .setLimit(10)
+ .build())
-### Discovery search
+ .get();
+```
-This type of search works specially well for combining multimodal, vector-constrained searches. Qdrant already has extensive support for filters, which constrain the search based on its payload, but using discovery search, you can also constrain the vector space in which the search is performed.
+```csharp
+using Qdrant.Client;
-![Discovery search](/docs/discovery-search.png)
+using Qdrant.Client.Grpc;
+using static Qdrant.Client.Grpc.Conditions;
-The formula for the discovery score can be expressed as:
+var client = new QdrantClient(""localhost"", 6334);
-$$
-\text{rank}(v^+, v^-) = \begin{cases}
+await client.QueryAsync(
- 1, &\quad s(v^+) \geq s(v^-) \\\\
+ collectionName: ""{collection_name}"",
- -1, &\quad s(v^+) < s(v^-)
+ query: new float[] { 0.1f, 0.1f, 0.9f },
-\end{cases}
+ filter: MatchKeyword(""group_id"", ""user_1""),
-$$
+ limit: 10
-where $v^+$ represents a positive example, $v^-$ represents a negative example, and $s(v)$ is the similarity score of a vector $v$ to the target vector. The discovery score is then computed as:
+);
-$$
+```
- \text{discovery score} = \text{sigmoid}(s(v_t))+ \sum \text{rank}(v_i^+, v_i^-),
-$$
-where $s(v)$ is the similarity function, $v_t$ is the target vector, and again $v_i^+$ and $v_i^-$ are the positive and negative examples, respectively. The sigmoid function is used to normalize the score between 0 and 1 and the sum of ranks is used to penalize vectors that are closer to the negative examples than to the positive ones. In other words, the sum of individual ranks determines how many positive zones a point is in, while the closeness hierarchy comes second.
+```go
+import (
+ ""context""
-Example:
+ ""github.com/qdrant/go-client/qdrant""
-```http
+)
-POST /collections/{collection_name}/points/discover
+client, err := qdrant.NewClient(&qdrant.Config{
-{
+ Host: ""localhost"",
- ""target"": [0.2, 0.1, 0.9, 0.7],
+ Port: 6334,
- ""context"": [
+})
- {
- ""positive"": 100,
- ""negative"": 718
+client.Query(context.Background(), &qdrant.QueryPoints{
- },
+ CollectionName: ""{collection_name}"",
- {
+ Query: qdrant.NewQuery(0.1, 0.1, 0.9),
- ""positive"": 200,
+ Filter: &qdrant.Filter{
- ""negative"": 300
+ Must: []*qdrant.Condition{
- }
+ qdrant.NewMatch(""group_id"", ""user_1""),
- ],
+ },
- ""limit"": 10
+ },
-}
+})
```
-```python
+## Calibrate performance
-from qdrant_client import QdrantClient
-from qdrant_client.http import models
+Indexing speed may become a bottleneck in this case, as each user's vectors will be indexed into the same collection.
-client = QdrantClient(""localhost"", port=6333)
+By adopting this strategy, Qdrant will index vectors for each user independently, significantly accelerating the process.
-discover_queries = [
- models.DiscoverRequest(
+To implement this approach, you should:
- target=[0.2, 0.1, 0.9, 0.7],
- context=[
- models.ContextExamplePair(
+1. Set `payload_m` in the HNSW configuration to a non-zero value, such as 16.
- positive=100,
+2. Set `m` in the HNSW config to 0. This disables building a global index for the whole collection.
- negative=718,
- ),
- models.ContextExamplePair(
+```http
- positive=200,
+PUT /collections/{collection_name}
- negative=300,
+{
- ),
+ ""vectors"": {
- ],
+ ""size"": 768,
- limit=10,
-
- ),
-
-]
+ ""distance"": ""Cosine""
-```
+ },
+ ""hnsw_config"": {
+ ""payload_m"": 16,
-```typescript
+ ""m"": 0
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+ }
+}
+```
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+```python
-client.discover(""{collection_name}"", {
+from qdrant_client import QdrantClient, models
- target: [0.2, 0.1, 0.9, 0.7],
- context: [
- {
+client = QdrantClient(url=""http://localhost:6333"")
- positive: 100,
- negative: 718,
- },
+client.create_collection(
- {
+ collection_name=""{collection_name}"",
- positive: 200,
+ vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
- negative: 300,
+ hnsw_config=models.HnswConfigDiff(
- },
+ payload_m=16,
- ],
+ m=0,
- limit: 10,
+ ),
-});
+)
```
-```rust
-
-use qdrant_client::{
-
- client::QdrantClient,
-
- qdrant::{
-
- target_vector::Target, vector_example::Example, ContextExamplePair, DiscoverPoints,
-
- TargetVector, VectorExample,
-
- },
+```typescript
-};
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-client
+client.createCollection(""{collection_name}"", {
- .discover(&DiscoverPoints {
+ vectors: {
- collection_name: ""{collection_name}"".to_string(),
+ size: 768,
- target: Some(TargetVector {
+ distance: ""Cosine"",
- target: Some(Target::Single(VectorExample {
+ },
- example: Some(Example::Vector(vec![0.2, 0.1, 0.9, 0.7].into())),
+ hnsw_config: {
- })),
+ payload_m: 16,
- }),
+ m: 0,
- context: vec![
+ },
- ContextExamplePair {
+});
- positive: Some(VectorExample {
+```
- example: Some(Example::Id(100.into())),
- }),
- negative: Some(VectorExample {
+```rust
- example: Some(Example::Id(718.into())),
+use qdrant_client::qdrant::{
- }),
+ CreateCollectionBuilder, Distance, HnswConfigDiffBuilder, VectorParamsBuilder,
- },
+};
- ContextExamplePair {
+use qdrant_client::Qdrant;
- positive: Some(VectorExample {
- example: Some(Example::Id(200.into())),
- }),
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- negative: Some(VectorExample {
- example: Some(Example::Id(300.into())),
- }),
+client
- },
+ .create_collection(
- ],
+ CreateCollectionBuilder::new(""{collection_name}"")
- limit: 10,
+ .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
- ..Default::default()
+ .hnsw_config(HnswConfigDiffBuilder::default().payload_m(16).m(0)),
- })
+ )
.await?;
@@ -17520,27 +17153,19 @@ client
```java
-import java.util.List;
-
-
-
-import static io.qdrant.client.PointIdFactory.id;
-
-import static io.qdrant.client.VectorFactory.vector;
-
-
-
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Points.ContextExamplePair;
+import io.qdrant.client.grpc.Collections.CreateCollection;
+
+import io.qdrant.client.grpc.Collections.Distance;
-import io.qdrant.client.grpc.Points.DiscoverPoints;
+import io.qdrant.client.grpc.Collections.HnswConfigDiff;
-import io.qdrant.client.grpc.Points.TargetVector;
+import io.qdrant.client.grpc.Collections.VectorParams;
-import io.qdrant.client.grpc.Points.VectorExample;
+import io.qdrant.client.grpc.Collections.VectorsConfig;
@@ -17552,45 +17177,29 @@ QdrantClient client =
client
- .discoverAsync(
+ .createCollectionAsync(
- DiscoverPoints.newBuilder()
+ CreateCollection.newBuilder()
.setCollectionName(""{collection_name}"")
- .setTarget(
-
- TargetVector.newBuilder()
-
- .setSingle(
-
- VectorExample.newBuilder()
-
- .setVector(vector(0.2f, 0.1f, 0.9f, 0.7f))
-
- .build()))
-
- .addAllContext(
-
- List.of(
-
- ContextExamplePair.newBuilder()
+ .setVectorsConfig(
- .setPositive(VectorExample.newBuilder().setId(id(100)))
+ VectorsConfig.newBuilder()
- .setNegative(VectorExample.newBuilder().setId(id(718)))
+ .setParams(
- .build(),
+ VectorParams.newBuilder()
- ContextExamplePair.newBuilder()
+ .setSize(768)
- .setPositive(VectorExample.newBuilder().setId(id(200)))
+ .setDistance(Distance.Cosine)
- .setNegative(VectorExample.newBuilder().setId(id(300)))
+ .build())
- .build()))
+ .build())
- .setLimit(10)
+ .setHnswConfig(HnswConfigDiff.newBuilder().setPayloadM(16).setM(0).build())
.build())
@@ -17612,1552 +17221,1551 @@ var client = new QdrantClient(""localhost"", 6334);
-await client.DiscoverAsync(
+await client.CreateCollectionAsync(
collectionName: ""{collection_name}"",
- target: new TargetVector
+ vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
- {
+ hnswConfig: new HnswConfigDiff { PayloadM = 16, M = 0 }
- Single = new VectorExample { Vector = new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, }
+);
- },
+```
- context:
- [
- new()
+```go
- {
+import (
- Positive = new VectorExample { Id = 100 },
+ ""context""
- Negative = new VectorExample { Id = 718 }
- },
- new()
+ ""github.com/qdrant/go-client/qdrant""
- {
+)
- Positive = new VectorExample { Id = 200 },
- Negative = new VectorExample { Id = 300 }
- }
+client, err := qdrant.NewClient(&qdrant.Config{
- ],
+ Host: ""localhost"",
- limit: 10
+ Port: 6334,
-);
+})
-```
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
+ CollectionName: ""{collection_name}"",
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
+ Size: 768,
+ Distance: qdrant.Distance_Cosine,
+ }),
+ // Disable the global HNSW graph (m=0) and build per-group graphs via payload_m
+ HnswConfig: &qdrant.HnswConfigDiff{
+ PayloadM: qdrant.PtrOf(uint64(16)),
+ M: qdrant.PtrOf(uint64(0)),
+ },
+})
+```
-### Context search
+3. Create a keyword payload index for the `group_id` field.
-Conversely, in the absence of a target, a rigid integer-by-integer function doesn't provide much guidance for the search when utilizing a proximity graph like HNSW. Instead, context search employs a function derived from the [triplet-loss](/articles/triplet-loss/) concept, which is usually applied during model training. For context search, this function is adapted to steer the search towards areas with fewer negative examples.
+
-We can directly associate the score function to a loss function, where 0.0 is the maximum score a point can have, which means it is only in positive areas. As soon as a point exists closer to a negative example, its loss will simply be the difference of the positive and negative similarities.
+```http
-$$
+PUT /collections/{collection_name}/index
-\text{context score} = \sum \min(s(v^+_i) - s(v^-_i), 0.0)
+{
-$$
+ ""field_name"": ""group_id"",
+ ""field_schema"": {
+ ""type"": ""keyword"",
-Where $v^+_i$ and $v^-_i$ are the positive and negative examples of each pair, and $s(v)$ is the similarity function.
+ ""is_tenant"": true
+ }
+}
-Using this kind of search, you can expect the output to not necessarily be around a single point, but rather, to be any point that isn’t closer to a negative example, which creates a constrained diverse result. So, even when the API is not called [`recommend`](#recommendation-api), recommendation systems can also use this approach and adapt it for their specific use-cases.
+```
-Example:
+```python
+client.create_payload_index(
+ collection_name=""{collection_name}"",
-```http
+ field_name=""group_id"",
-POST /collections/{collection_name}/points/discover
+ field_schema=models.KeywordIndexParams(
+ type=""keywprd"",
+ is_tenant=True,
-{
+ ),
- ""context"": [
+)
- {
+```
- ""positive"": 100,
- ""negative"": 718
- },
+```typescript
- {
+client.createPayloadIndex(""{collection_name}"", {
- ""positive"": 200,
+ field_name: ""group_id"",
- ""negative"": 300
+ field_schema: {
- }
+ type: ""keyword"",
- ],
+ is_tenant: true,
- ""limit"": 10
+ },
-}
+});
```
-```python
-
-from qdrant_client import QdrantClient
+```rust
-from qdrant_client.http import models
+use qdrant_client::qdrant::{
+ CreateFieldIndexCollectionBuilder,
+ KeywordIndexParamsBuilder,
-client = QdrantClient(""localhost"", port=6333)
+ FieldType
+};
+use qdrant_client::{Qdrant, QdrantError};
-discover_queries = [
- models.DiscoverRequest(
- context=[
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- models.ContextExamplePair(
- positive=100,
- negative=718,
+client.create_field_index(
- ),
+ CreateFieldIndexCollectionBuilder::new(
- models.ContextExamplePair(
+ ""{collection_name}"",
- positive=200,
+ ""group_id"",
- negative=300,
+ FieldType::Keyword,
- ),
+ ).field_index_params(
- ],
+ KeywordIndexParamsBuilder::default()
- limit=10,
+ .is_tenant(true)
- ),
+ )
-]
+ ).await?;
```
-```typescript
-
-import { QdrantClient } from ""@qdrant/js-client-rest"";
-
-
-
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+```java
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
-client.discover(""{collection_name}"", {
+import io.qdrant.client.grpc.Collections.PayloadIndexParams;
- context: [
+import io.qdrant.client.grpc.Collections.PayloadSchemaType;
- {
+import io.qdrant.client.grpc.Collections.KeywordIndexParams;
- positive: 100,
- negative: 718,
- },
+QdrantClient client =
- {
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- positive: 200,
- negative: 300,
- },
+client
- ],
+ .createPayloadIndexAsync(
- limit: 10,
+ ""{collection_name}"",
-});
+ ""group_id"",
-```
+ PayloadSchemaType.Keyword,
+ PayloadIndexParams.newBuilder()
+ .setKeywordIndexParams(
-```rust
+ KeywordIndexParams.newBuilder()
-use qdrant_client::{
+ .setIsTenant(true)
- client::QdrantClient,
+ .build())
- qdrant::{vector_example::Example, ContextExamplePair, DiscoverPoints, VectorExample},
+ .build(),
-};
+ null,
+ null,
+ null)
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ .get();
+```
-client
- .discover(&DiscoverPoints {
+```csharp
- collection_name: ""{collection_name}"".to_string(),
+using Qdrant.Client;
- context: vec![
- ContextExamplePair {
- positive: Some(VectorExample {
+var client = new QdrantClient(""localhost"", 6334);
- example: Some(Example::Id(100.into())),
- }),
- negative: Some(VectorExample {
+await client.CreatePayloadIndexAsync(
- example: Some(Example::Id(718.into())),
+ collectionName: ""{collection_name}"",
- }),
+ fieldName: ""group_id"",
- },
+ schemaType: PayloadSchemaType.Keyword,
- ContextExamplePair {
+ indexParams: new PayloadIndexParams
- positive: Some(VectorExample {
+ {
- example: Some(Example::Id(200.into())),
+ KeywordIndexParams = new KeywordIndexParams
- }),
+ {
- negative: Some(VectorExample {
+ IsTenant = true
- example: Some(Example::Id(300.into())),
+ }
- }),
+ }
- },
+);
- ],
+```
- limit: 10,
- ..Default::default()
- })
+```go
- .await?;
+import (
-```
+ ""context""
-```java
+ ""github.com/qdrant/go-client/qdrant""
-import java.util.List;
+)
-import static io.qdrant.client.PointIdFactory.id;
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
+ Port: 6334,
-import io.qdrant.client.QdrantClient;
+})
-import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Points.ContextExamplePair;
-import io.qdrant.client.grpc.Points.DiscoverPoints;
+client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{
-import io.qdrant.client.grpc.Points.VectorExample;
+ CollectionName: ""{collection_name}"",
+ FieldName: ""group_id"",
+ FieldType: qdrant.FieldType_FieldTypeKeyword.Enum(),
-QdrantClient client =
+ FieldIndexParams: qdrant.NewPayloadIndexParams(
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ &qdrant.KeywordIndexParams{
+ IsTenant: qdrant.PtrOf(true),
+ }),
-client
+})
- .discoverAsync(
+```
- DiscoverPoints.newBuilder()
- .setCollectionName(""{collection_name}"")
- .addAllContext(
+The `is_tenant=true` parameter is optional, but specifying it provides the storage layer with additional information about how the collection is going to be used.
- List.of(
+When specified, the storage structure is organized to co-locate vectors of the same tenant, which can significantly improve performance in some cases.
- ContextExamplePair.newBuilder()
- .setPositive(VectorExample.newBuilder().setId(id(100)))
- .setNegative(VectorExample.newBuilder().setId(id(718)))
- .build(),
- ContextExamplePair.newBuilder()
+## Limitations
- .setPositive(VectorExample.newBuilder().setId(id(200)))
- .setNegative(VectorExample.newBuilder().setId(id(300)))
- .build()))
+One downside to this approach is that global requests (without the `group_id` filter) will be slower since they will necessitate scanning all groups to identify the nearest neighbors.
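+
+As a rough sketch (using `curl` against the REST API on the default port 6333; the collection name, vector, and group value are placeholders), the difference is simply whether the `group_id` filter is present:
+
+```bash
+# Tenant-scoped query: only the requested group's index is used
+curl -X POST 'http://localhost:6333/collections/{collection_name}/points/query' \
+  -H 'Content-Type: application/json' \
+  -d '{""query"": [0.1, 0.1, 0.9], ""filter"": {""must"": [{""key"": ""group_id"", ""match"": {""value"": ""user_1""}}]}, ""limit"": 10}'
+
+# Global query: no group_id filter, so every group has to be scanned
+curl -X POST 'http://localhost:6333/collections/{collection_name}/points/query' \
+  -H 'Content-Type: application/json' \
+  -d '{""query"": [0.1, 0.1, 0.9], ""limit"": 10}'
+```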
+",documentation/guides/multiple-partitions.md
+"---
- .setLimit(10)
+title: Administration
- .build())
+weight: 10
- .get();
+aliases:
-```
+ - ../administration
+---
-
-",documentation/concepts/explore.md
-"---
-title: Optimizer
+A locking API enables users to restrict the possible operations on a Qdrant process.
-weight: 70
+It is important to mention that:
-aliases:
+- The configuration is not persistent, so it is necessary to lock again following a restart.
- - ../optimizer
+- Locking applies to a single node only. It is necessary to call lock on all the desired nodes in a distributed deployment setup.
----
+Lock request sample:
-# Optimizer
+```http
-It is much more efficient to apply changes in batches than perform each change individually, as many other databases do. Qdrant here is no exception. Since Qdrant operates with data structures that are not always easy to change, it is sometimes necessary to rebuild those structures completely.
+POST /locks
+{
+ ""error_message"": ""write is forbidden"",
-Storage optimization in Qdrant occurs at the segment level (see [storage](../storage)).
+ ""write"": true
-In this case, the segment to be optimized remains readable for the time of the rebuild.
+}
+```
-![Segment optimization](/docs/optimization.svg)
+The `write` flag enables/disables the write lock.
+If the write lock is set to true, Qdrant doesn't allow creating new collections or adding new data to the existing storage.
-The availability is achieved by wrapping the segment into a proxy that transparently handles data changes.
+However, deletion operations or updates are not forbidden under the write lock.
-Changed data is placed in the copy-on-write segment, which has priority for retrieval and subsequent updates.
+This feature enables administrators to prevent a Qdrant process from using more disk space while permitting users to search and delete unnecessary data.
-## Vacuum Optimizer
+You can optionally provide the error message that should be used for error responses to users.
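+
+As a sketch, the same request can be issued with `curl`; in a distributed deployment you would repeat it for every node you want to lock (the node addresses below are placeholders):
+
+```bash
+# Apply the write lock on each node of the cluster
+for node in node-1.example.com:6333 node-2.example.com:6333; do
+  curl -X POST ""http://$node/locks"" \
+    -H 'Content-Type: application/json' \
+    -d '{""error_message"": ""write is forbidden"", ""write"": true}'
+done
+```
+
+Sending the same request with `""write"": false` disables the write lock again.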
-The simplest example of a case where you need to rebuild a segment repository is to remove points.
+## Recovery mode
-Like many other databases, Qdrant does not delete entries immediately after a query.
-Instead, it marks records as deleted and ignores them for future queries.
+*Available as of v1.2.0*
-This strategy allows us to minimize disk access - one of the slowest operations.
-However, a side effect of this strategy is that, over time, deleted records accumulate, occupy memory and slow down the system.
+Recovery mode can help in situations where Qdrant fails to start repeatedly.
+When starting in recovery mode, Qdrant only loads collection metadata to prevent
+going out of memory. This allows you to resolve out of memory situations, for
-To avoid these adverse effects, Vacuum Optimizer is used.
+example, by deleting a collection. After resolving Qdrant can be restarted
-It is used if the segment has accumulated too many deleted records.
+normally to continue operation.
-The criteria for starting the optimizer are defined in the configuration file.
+In recovery mode, collection operations are limited to
+[deleting](../../concepts/collections/#delete-collection) a
+collection. That is because only collection metadata is loaded during recovery.
-Here is an example of parameter values:
+To enable recovery mode with the Qdrant Docker image you must set the
-```yaml
+environment variable `QDRANT_ALLOW_RECOVERY_MODE=true`. The container will try
-storage:
+to start normally first, and restart in recovery mode if initialization fails
- optimizers:
+due to an out of memory error. This behavior is disabled by default.
- # The minimal fraction of deleted vectors in a segment, required to perform segment optimization
- deleted_threshold: 0.2
- # The minimal number of vectors in a segment, required to perform segment optimization
+If using a Qdrant binary, recovery mode can be enabled by setting a recovery
- vacuum_min_vector_number: 1000
+message in an environment variable, such as
-```
+`QDRANT__STORAGE__RECOVERY_MODE=""My recovery message""`.
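+
+As a sketch of both options (the image tag, port mapping, and recovery message are illustrative):
+
+```bash
+# Docker: allow the container to restart into recovery mode after an out-of-memory failure
+docker run -e QDRANT_ALLOW_RECOVERY_MODE=true -p 6333:6333 qdrant/qdrant:latest
+
+# Binary: start directly in recovery mode with a custom recovery message
+QDRANT__STORAGE__RECOVERY_MODE=""out of memory, delete a collection to recover"" ./qdrant
+```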
+",documentation/guides/administration.md
+"---
+title: Troubleshooting
+weight: 170
-## Merge Optimizer
+aliases:
+ - ../tutorials/common-errors
+ - /documentation/troubleshooting/
-The service may require the creation of temporary segments.
+---
-Such segments, for example, are created as copy-on-write segments during optimization itself.
+# Solving common errors
-It is also essential to have at least one small segment that Qdrant will use to store frequently updated data.
-On the other hand, too many small segments lead to suboptimal search performance.
+## Too many files open (OS error 24)
-There is the Merge Optimizer, which combines the smallest segments into one large segment. It is used if too many segments are created.
+Each collection segment needs some files to be open. At some point you may encounter the following errors in your server log:
-The criteria for starting the optimizer are defined in the configuration file.
+```text
+Error: Too many files open (OS error 24)
-Here is an example of parameter values:
+```
-```yaml
+In such a case you may need to increase the open file limit. This can be done, for example, when you launch the Docker container:
-storage:
- optimizers:
- # If the number of segments exceeds this value, the optimizer will merge the smallest segments.
+```bash
- max_segment_number: 5
+docker run --ulimit nofile=10000:10000 qdrant/qdrant:latest
```
-## Indexing Optimizer
+The command above will set both soft and hard limits to `10000`.
-Qdrant allows you to choose the type of indexes and data storage methods used depending on the number of records.
+If you are not using Docker, the following command will change the limit for the current user session:
-So, for example, if the number of points is less than 10000, using any index would be less efficient than a brute force scan.
+```bash
-The Indexing Optimizer is used to implement the enabling of indexes and memmap storage when the minimal amount of records is reached.
+ulimit -n 10000
+```
-The criteria for starting the optimizer are defined in the configuration file.
+Please note that the command should be executed before you run the Qdrant server.
-Here is an example of parameter values:
+## Can't open Collections meta Wal
-```yaml
-storage:
+When starting a Qdrant instance as part of a distributed deployment, you may
- optimizers:
+come across an error message similar to this:
- # Maximum size (in kilobytes) of vectors to store in-memory per segment.
- # Segments larger than this threshold will be stored as read-only memmaped file.
- # Memmap storage is disabled by default, to enable it, set this threshold to a reasonable value.
+```bash
- # To disable memmap storage, set this to `0`.
+Can't open Collections meta Wal: Os { code: 11, kind: WouldBlock, message: ""Resource temporarily unavailable"" }
- # Note: 1Kb = 1 vector of size 256
+```
- memmap_threshold_kb: 200000
+It means that Qdrant cannot start because a collection cannot be loaded. Its
- # Maximum size (in kilobytes) of vectors allowed for plain index, exceeding this threshold will enable vector indexing
+associated [WAL](../../concepts/storage/#versioning) files are currently
- # Default value is 20,000, based on .
+unavailable, likely because the same files are already being used by another
- # To disable vector indexing, set to `0`.
+Qdrant instance.
- # Note: 1kB = 1 vector of size 256.
- indexing_threshold_kb: 20000
-```
+Each node must have their own separate storage directory, volume or mount.
-In addition to the configuration file, you can also set optimizer parameters separately for each [collection](../collections).
+The formed cluster will take care of sharing all data with each node, putting it
+all in the correct places for you. If using Kubernetes, each node must have
+their own volume. If using Docker, each node must have their own storage mount
-Dynamic parameter updates may be useful, for example, for more efficient initial loading of points. You can disable indexing during the upload process with these settings and enable it immediately after it is finished. As a result, you will not waste extra computation resources on rebuilding the index.",documentation/concepts/optimizer.md
+or volume. If using Qdrant directly, each node must have their own storage
+directory.
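+
+For example, a minimal Docker sketch with a separate host directory per node (the host paths are placeholders, and `/qdrant/storage` is assumed to be the storage location inside the container):
+
+```bash
+# Each node mounts its own storage directory
+docker run -d --name qdrant-node-1 -v /data/qdrant-node-1:/qdrant/storage qdrant/qdrant:latest
+docker run -d --name qdrant-node-2 -v /data/qdrant-node-2:/qdrant/storage qdrant/qdrant:latest
+```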
+",documentation/guides/common-errors.md
"---
-title: Search
+title: Configuration
-weight: 50
+weight: 160
aliases:
- - ../search
+ - ../configuration
+ - /guides/configuration/
---
-# Similarity search
+# Configuration
-Searching for the nearest vectors is at the core of many representational learning applications.
+To change or correct Qdrant's behavior, default collection settings, and network interface parameters, you can use configuration files.
-Modern neural networks are trained to transform objects into vectors so that objects close in the real world appear close in vector space.
-It could be, for example, texts with similar meanings, visually similar pictures, or songs of the same genre.
+The default configuration file is located at [config/config.yaml](https://github.com/qdrant/qdrant/blob/master/config/config.yaml).
-![Embeddings](/docs/encoders.png)
+To change the default configuration, add a new configuration file and specify
+the path with `--config-path path/to/custom_config.yaml`. If running in
-## Metrics
+production mode, you could also choose to overwrite `config/production.yaml`.
+See [ordering](#order-and-priority) for details on how configurations are
+loaded.
-There are many ways to estimate the similarity of vectors with each other.
-In Qdrant terms, these ways are called metrics.
-The choice of metric depends on vectors obtaining and, in particular, on the method of neural network encoder training.
+The [Installation](../installation/) guide contains examples of how to set up Qdrant with a custom configuration for the different deployment methods.
-Qdrant supports these most popular types of metrics:
+## Order and priority
-* Dot product: `Dot` - https://en.wikipedia.org/wiki/Dot_product
+*Effective as of v1.2.1*
-* Cosine similarity: `Cosine` - https://en.wikipedia.org/wiki/Cosine_similarity
-* Euclidean distance: `Euclid` - https://en.wikipedia.org/wiki/Euclidean_distance
-* Manhattan distance: `Manhattan`* - https://en.wikipedia.org/wiki/Taxicab_geometry *Available as of v1.7
+Multiple configurations may be loaded on startup. All of them are merged into a
+single effective configuration that is used by Qdrant.
-The most typical metric used in similarity learning models is the cosine metric.
+Configurations are loaded in the following order, if present:
-![Embeddings](/docs/cos.png)
+1. Embedded base configuration ([source](https://github.com/qdrant/qdrant/blob/master/config/config.yaml))
+2. File `config/config.yaml`
-Qdrant counts this metric in 2 steps, due to which a higher search speed is achieved.
+3. File `config/{RUN_MODE}.yaml` (such as `config/production.yaml`)
-The first step is to normalize the vector when adding it to the collection.
+4. File `config/local.yaml`
-It happens only once for each vector.
+5. Config provided with `--config-path PATH` (if set)
+6. [Environment variables](#environment-variables)
-The second step is the comparison of vectors.
-In this case, it becomes equivalent to dot production - a very fast operation due to SIMD.
+This list is from least to most significant. Properties in later configurations
+will overwrite those loaded before it. For example, a property set with
+`--config-path` will overwrite those in other files.
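+
+For instance, a sketch of starting the binary with a custom configuration file (the path is a placeholder); its properties override the files listed above, while environment variables still take precedence:
+
+```bash
+./qdrant --config-path path/to/custom_config.yaml
+```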
-## Query planning
+Most of these files are included by default in the Docker container. But it is
-Depending on the filter used in the search - there are several possible scenarios for query execution.
+likely that they are absent on your local machine if you run the `qdrant` binary
-Qdrant chooses one of the query execution options depending on the available indexes, the complexity of the conditions and the cardinality of the filtering result.
+manually.
-This process is called query planning.
+If file 2 or 3 is not found, a warning is shown on startup.
-The strategy selection process relies heavily on heuristics and can vary from release to release.
+If file 5 is provided but not found, an error is shown on startup.
-However, the general principles are:
+Other supported configuration file formats and extensions include: `.toml`, `.json`, `.ini`.
-* planning is performed for each segment independently (see [storage](../storage) for more information about segments)
-* prefer a full scan if the amount of points is below a threshold
-* estimate the cardinality of a filtered result before selecting a strategy
+## Environment variables
-* retrieve points using payload index (see [indexing](../indexing)) if cardinality is below threshold
-* use filterable vector index if the cardinality is above a threshold
+It is possible to set configuration properties using environment variables.
+Environment variables are always the most significant and cannot be overwritten
-You can adjust the threshold using a [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml), as well as independently for each collection.
+(see [ordering](#order-and-priority)).
-## Search API
+All environment variables are prefixed with `QDRANT__` and are separated with
+`__`.
-Let's look at an example of a search query.
+These variables:
-REST API - API Schema definition is available [here](https://qdrant.github.io/qdrant/redoc/index.html#operation/search_points)
+```bash
+QDRANT__LOG_LEVEL=INFO
-```http
+QDRANT__SERVICE__HTTP_PORT=6333
-POST /collections/{collection_name}/points/search
+QDRANT__SERVICE__ENABLE_TLS=1
-{
+QDRANT__TLS__CERT=./tls/cert.pem
- ""filter"": {
+QDRANT__TLS__CERT_TTL=3600
- ""must"": [
+```
- {
- ""key"": ""city"",
- ""match"": {
+result in this configuration:
- ""value"": ""London""
- }
- }
+```yaml
- ]
+log_level: INFO
- },
+service:
- ""params"": {
+ http_port: 6333
- ""hnsw_ef"": 128,
+ enable_tls: true
- ""exact"": false
+tls:
- },
+ cert: ./tls/cert.pem
- ""vector"": [0.2, 0.1, 0.9, 0.7],
+ cert_ttl: 3600
- ""limit"": 3
+```
-}
-```
+To run Qdrant locally with a different HTTP port you could use:
-```python
-from qdrant_client import QdrantClient
+```bash
-from qdrant_client.http import models
+QDRANT__SERVICE__HTTP_PORT=1234 ./qdrant
+```
-client = QdrantClient(""localhost"", port=6333)
+## Configuration file example
-client.search(
- collection_name=""{collection_name}"",
+```yaml
- query_filter=models.Filter(
+log_level: INFO
- must=[
- models.FieldCondition(
- key=""city"",
+storage:
- match=models.MatchValue(
+ # Where to store all the data
- value=""London"",
+ storage_path: ./storage
- ),
- )
- ]
+ # Where to store snapshots
- ),
+ snapshots_path: ./snapshots
- search_params=models.SearchParams(hnsw_ef=128, exact=False),
- query_vector=[0.2, 0.1, 0.9, 0.7],
- limit=3,
+ snapshots_config:
-)
+ # ""local"" or ""s3"" - where to store snapshots
-```
+ snapshots_storage: local
+ # s3_config:
+ # bucket: """"
-```typescript
+ # region: """"
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+ # access_key: """"
+ # secret_key: """"
+ # endpoint_url: """"
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ # Where to store temporary files
-client.search(""{collection_name}"", {
+ # If null, temporary snapshot are stored in: storage/snapshots_temp/
- filter: {
+ temp_path: null
- must: [
- {
- key: ""city"",
+ # If true - point's payload will not be stored in memory.
- match: {
+ # It will be read from the disk every time it is requested.
- value: ""London"",
+ # This setting saves RAM by (slightly) increasing the response time.
- },
+ # Note: those payload values that are involved in filtering and are indexed - remain in RAM.
- },
+ on_disk_payload: true
- ],
- },
- params: {
+ # Maximum number of concurrent updates to shard replicas
- hnsw_ef: 128,
+ # If `null` - maximum concurrency is used.
- exact: false,
+ update_concurrency: null
- },
- vector: [0.2, 0.1, 0.9, 0.7],
- limit: 3,
+ # Write-ahead-log related configuration
-});
+ wal:
-```
+ # Size of a single WAL segment
+ wal_capacity_mb: 32
-```rust
-use qdrant_client::{
+ # Number of WAL segments to create ahead of actual data requirement
- client::QdrantClient,
+ wal_segments_ahead: 0
- qdrant::{Condition, Filter, SearchParams, SearchPoints},
-};
+ # Normal node - receives all updates and answers all queries
+ node_type: ""Normal""
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ # Listener node - receives all updates, but does not answer search/read queries
-client
+ # Useful for setting up a dedicated backup node
- .search_points(&SearchPoints {
+ # node_type: ""Listener""
- collection_name: ""{collection_name}"".to_string(),
- filter: Some(Filter::must([Condition::matches(
- ""city"",
+ performance:
- ""London"".to_string(),
+ # Number of parallel threads used for search operations. If 0 - auto selection.
- )])),
+ max_search_threads: 0
- params: Some(SearchParams {
- hnsw_ef: Some(128),
- exact: Some(false),
+ # Max number of threads (jobs) for running optimizations across all collections, each thread runs one job.
- ..Default::default()
+ # If 0 - have no limit and choose dynamically to saturate CPU.
- }),
+ # Note: each optimization job will also use `max_indexing_threads` threads by itself for index building.
- vector: vec![0.2, 0.1, 0.9, 0.7],
+ max_optimization_threads: 0
- limit: 3,
- ..Default::default()
- })
+ # CPU budget, how many CPUs (threads) to allocate for an optimization job.
- .await?;
+ # If 0 - auto selection, keep 1 or more CPUs unallocated depending on CPU size
-```
+ # If negative - subtract this number of CPUs from the available CPUs.
+ # If positive - use this exact number of CPUs.
+ optimizer_cpu_budget: 0
-```java
-import java.util.List;
+ # Prevent DDoS of too many concurrent updates in distributed mode.
+ # One external update usually triggers multiple internal updates, which breaks internal
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+ # timings. For example, the health check timing and consensus timing.
+ # If null - auto selection.
+ update_rate_limit: null
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Points.Filter;
+ # Limit for number of incoming automatic shard transfers per collection on this node, does not affect user-requested transfers.
-import io.qdrant.client.grpc.Points.SearchParams;
+ # The same value should be used on all nodes in a cluster.
-import io.qdrant.client.grpc.Points.SearchPoints;
+ # Default is to allow 1 transfer.
+ # If null - allow unlimited transfers.
+ #incoming_shard_transfers_limit: 1
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ # Limit for number of outgoing automatic shard transfers per collection on this node, does not affect user-requested transfers.
+ # The same value should be used on all nodes in a cluster.
-client
+ # Default is to allow 1 transfer.
- .searchAsync(
+ # If null - allow unlimited transfers.
- SearchPoints.newBuilder()
+ #outgoing_shard_transfers_limit: 1
- .setCollectionName(""{collection_name}"")
+
- .setFilter(Filter.newBuilder().addMust(matchKeyword(""city"", ""London"")).build())
+ # Enable async scorer which uses io_uring when rescoring.
- .setParams(SearchParams.newBuilder().setExact(false).setHnswEf(128).build())
+ # Only supported on Linux, must be enabled in your kernel.
- .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
+ # See:
- .setLimit(3)
+ #async_scorer: false
- .build())
+
- .get();
+ optimizers:
-```
+ # The minimal fraction of deleted vectors in a segment, required to perform segment optimization
+ deleted_threshold: 0.2
-```csharp
-using Qdrant.Client;
+ # The minimal number of vectors in a segment, required to perform segment optimization
-using Qdrant.Client.Grpc;
+ vacuum_min_vector_number: 1000
-using static Qdrant.Client.Grpc.Conditions;
+ # Target amount of segments optimizer will try to keep.
-var client = new QdrantClient(""localhost"", 6334);
+ # Real amount of segments may vary depending on multiple parameters:
+ # - Amount of stored points
+ # - Current write RPS
-await client.SearchAsync(
+ #
- collectionName: ""{collection_name}"",
+ # It is recommended to select default number of segments as a factor of the number of search threads,
- vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
+ # so that each segment would be handled evenly by one of the threads.
- filter: MatchKeyword(""city"", ""London""),
+ # If `default_segment_number = 0`, will be automatically selected by the number of available CPUs
- searchParams: new SearchParams { Exact = false, HnswEf = 128 },
+ default_segment_number: 0
- limit: 3
-);
-```
+ # Do not create segments larger this size (in KiloBytes).
+ # Large segments might require disproportionately long indexation times,
+ # therefore it makes sense to limit the size of segments.
-In this example, we are looking for vectors similar to vector `[0.2, 0.1, 0.9, 0.7]`.
+ #
-Parameter `limit` (or its alias - `top`) specifies the amount of most similar results we would like to retrieve.
+ # If indexing speed is a higher priority for you - make this parameter lower.
+ # If search speed is more important - make this parameter higher.
+ # Note: 1Kb = 1 vector of size 256
-Values under the key `params` specify custom parameters for the search.
+ # If not set, will be automatically selected considering the number of available CPUs.
-Currently, it could be:
+ max_segment_size_kb: null
-* `hnsw_ef` - value that specifies `ef` parameter of the HNSW algorithm.
+ # Maximum size (in KiloBytes) of vectors to store in-memory per segment.
-* `exact` - option to not use the approximate search (ANN). If set to true, the search may run for a long as it performs a full scan to retrieve exact results.
+ # Segments larger than this threshold will be stored as read-only memmaped file.
-* `indexed_only` - With this option you can disable the search in those segments where vector index is not built yet. This may be useful if you want to minimize the impact to the search performance whilst the collection is also being updated. Using this option may lead to a partial result if the collection is not fully indexed yet, consider using it only if eventual consistency is acceptable for your use case.
+ # To enable memmap storage, lower the threshold
+ # Note: 1Kb = 1 vector of size 256
+ # To explicitly disable mmap optimization, set to `0`.
-Since the `filter` parameter is specified, the search is performed only among those points that satisfy the filter condition.
+ # If not set, will be disabled by default.
-See details of possible filters and their work in the [filtering](../filtering) section.
+ memmap_threshold_kb: null
-Example result of this API would be
+ # Maximum size (in KiloBytes) of vectors allowed for plain index.
+ # Default value based on https://github.com/google-research/google-research/blob/master/scann/docs/algorithms.md
+ # Note: 1Kb = 1 vector of size 256
-```json
+ # To explicitly disable vector indexing, set to `0`.
-{
+ # If not set, the default value will be used.
- ""result"": [
+ indexing_threshold_kb: 20000
- { ""id"": 10, ""score"": 0.81 },
- { ""id"": 14, ""score"": 0.75 },
- { ""id"": 11, ""score"": 0.73 }
+ # Interval between forced flushes.
- ],
+ flush_interval_sec: 5
- ""status"": ""ok"",
- ""time"": 0.001
-}
+ # Max number of threads (jobs) for running optimizations per shard.
-```
+ # Note: each optimization job will also use `max_indexing_threads` threads by itself for index building.
+ # If null - have no limit and choose dynamically to saturate CPU.
+ # If 0 - no optimization threads, optimizations will be disabled.
-The `result` contains ordered by `score` list of found point ids.
+ max_optimization_threads: null
-Note that payload and vector data is missing in these results by default.
+ # This section has the same options as 'optimizers' above. All values specified here will overwrite the collections
-See [payload and vector in the result](#payload-and-vector-in-the-result) on how
+ # optimizers configs regardless of the config above and the options specified at collection creation.
-to include it.
+ #optimizers_overwrite:
+ # deleted_threshold: 0.2
+ # vacuum_min_vector_number: 1000
-*Available as of v0.10.0*
+ # default_segment_number: 0
+ # max_segment_size_kb: null
+ # memmap_threshold_kb: null
-If the collection was created with multiple vectors, the name of the vector to use for searching should be provided:
+ # indexing_threshold_kb: 20000
+ # flush_interval_sec: 5
+ # max_optimization_threads: null
-```http
-POST /collections/{collection_name}/points/search
-{
+ # Default parameters of HNSW Index. Could be overridden for each collection or named vector individually
- ""vector"": {
+ hnsw_index:
- ""name"": ""image"",
+ # Number of edges per node in the index graph. Larger the value - more accurate the search, more space required.
- ""vector"": [0.2, 0.1, 0.9, 0.7]
+ m: 16
- },
- ""limit"": 3
-}
+ # Number of neighbours to consider during the index building. Larger the value - more accurate the search, more time required to build index.
-```
+ ef_construct: 100
-```python
+ # Minimal size (in KiloBytes) of vectors for additional payload-based indexing.
-from qdrant_client import QdrantClient
+ # If payload chunk is smaller than `full_scan_threshold_kb` additional indexing won't be used -
-from qdrant_client.http import models
+ # in this case full-scan search should be preferred by query planner and additional indexing is not required.
+ # Note: 1Kb = 1 vector of size 256
+ full_scan_threshold_kb: 10000
-client = QdrantClient(""localhost"", port=6333)
+ # Number of parallel threads used for background index building.
-client.search(
+ # If 0 - automatically select.
- collection_name=""{collection_name}"",
+ # Best to keep between 8 and 16 to prevent likelihood of building broken/inefficient HNSW graphs.
- query_vector=(""image"", [0.2, 0.1, 0.9, 0.7]),
+ # On small CPUs, less threads are used.
- limit=3,
+ max_indexing_threads: 0
-)
-```
+ # Store HNSW index on disk. If set to false, index will be stored in RAM. Default: false
+ on_disk: false
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+ # Custom M param for hnsw graph built for payload index. If not set, default M will be used.
+ payload_m: null
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ # Default shard transfer method to use if none is defined.
-client.search(""{collection_name}"", {
+ # If null - don't have a shard transfer preference, choose automatically.
- vector: {
+ # If stream_records, snapshot or wal_delta - prefer this specific method.
- name: ""image"",
+ # More info: https://qdrant.tech/documentation/guides/distributed_deployment/#shard-transfer-method
- vector: [0.2, 0.1, 0.9, 0.7],
+ shard_transfer_method: null
- },
- limit: 3,
-});
+ # Default parameters for collections
-```
+ collection:
+ # Number of replicas of each shard that network tries to maintain
+ replication_factor: 1
-```rust
-use qdrant_client::{client::QdrantClient, qdrant::SearchPoints};
+ # How many replicas should apply the operation for us to consider it successful
+ write_consistency_factor: 1
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ # Default parameters for vectors.
-client
+ vectors:
- .search_points(&SearchPoints {
+ # Whether vectors should be stored in memory or on disk.
- collection_name: ""{collection_name}"".to_string(),
+ on_disk: null
- vector: vec![0.2, 0.1, 0.9, 0.7],
- vector_name: Some(""image"".to_string()),
- limit: 3,
+ # shard_number_per_node: 1
- ..Default::default()
- })
- .await?;
+ # Default quantization configuration.
-```
+ # More info: https://qdrant.tech/documentation/guides/quantization
+ quantization: null
-```java
-import java.util.List;
+service:
+ # Maximum size of POST data in a single request in megabytes
+ max_request_size_mb: 32
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Points.SearchPoints;
+ # Number of parallel workers used for serving the api. If 0 - equal to the number of available cores.
+ # If missing - Same as storage.max_search_threads
+ max_workers: 0
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ # Host to bind the service on
+ host: 0.0.0.0
-client
- .searchAsync(
- SearchPoints.newBuilder()
+ # HTTP(S) port to bind the service on
- .setCollectionName(""{collection_name}"")
+ http_port: 6333
- .setVectorName(""image"")
- .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
- .setLimit(3)
+ # gRPC port to bind the service on.
- .build())
+ # If `null` - gRPC is disabled. Default: null
- .get();
+ # Comment to disable gRPC:
-```
+ grpc_port: 6334
-```csharp
+ # Enable CORS headers in REST API.
-using Qdrant.Client;
+ # If enabled, browsers would be allowed to query REST endpoints regardless of query origin.
+ # More info: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
+ # Default: true
-var client = new QdrantClient(""localhost"", 6334);
+ enable_cors: true
-await client.SearchAsync(
+ # Enable HTTPS for the REST and gRPC API
- collectionName: ""{collection_name}"",
+ enable_tls: false
- vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
- vectorName: ""image"",
- limit: 3
+ # Check user HTTPS client certificate against CA file specified in tls config
-);
+ verify_https_client_certificate: false
-```
+ # Set an api-key.
-Search is processing only among vectors with the same name.
+ # If set, all requests must include a header with the api-key.
+ # example header: `api-key: `
+ #
-*Available as of v1.7.0*
+ # If you enable this you should also enable TLS.
+ # (Either above or via an external service like nginx.)
+ # Sending an api-key over an unencrypted channel is insecure.
-If the collection was created with sparse vectors, the name of the sparse vector to use for searching should be provided:
+ #
+ # Uncomment to enable.
+ # api_key: your_secret_api_key_here
-You can still use payload filtering and other features of the search API with sparse vectors.
+ # Set an api-key for read-only operations.
-There are however important differences between dense and sparse vector search:
+ # If set, all requests must include a header with the api-key.
+ # example header: `api-key: `
+ #
-| Index| Sparse Query | Dense Query |
+ # If you enable this you should also enable TLS.
-| --- | --- | --- |
+ # (Either above or via an external service like nginx.)
-| Scoring Metric | Default is `Dot product`, no need to specify it | `Distance` has supported metrics e.g. Dot, Cosine |
+ # Sending an api-key over an unencrypted channel is insecure.
-| Search Type | Always exact in Qdrant | HNSW is an approximate NN |
+ #
-| Return Behaviour | Returns only vectors with non-zero values in the same indices as the query vector | Returns `limit` vectors |
+ # Uncomment to enable.
+ # read_only_api_key: your_secret_read_only_api_key_here
-In general, the speed of the search is proportional to the number of non-zero values in the query vector.
+ # Uncomment to enable JWT Role Based Access Control (RBAC).
+ # If enabled, you can generate JWT tokens with fine-grained rules for access control.
-```http
+ # Use generated token instead of API key.
-POST /collections/{collection_name}/points/search
+ #
-{
+ # jwt_rbac: true
- ""vector"": {
- ""name"": ""text"",
- ""vector"": {
+cluster:
- ""indices"": [6, 7],
+ # Use `enabled: true` to run Qdrant in distributed deployment mode
- ""values"": [1.0, 2.0]
+ enabled: false
- }
- },
- ""limit"": 3
+ # Configuration of the inter-cluster communication
-}
+ p2p:
-```
+ # Port for internal communication between peers
+ port: 6335
-```python
-from qdrant_client import QdrantClient
+ # Use TLS for communication between peers
-from qdrant_client.http import models
+ enable_tls: false
-client = QdrantClient(""localhost"", port=6333)
+ # Configuration related to distributed consensus algorithm
+ consensus:
+ # How frequently peers should ping each other.
-client.search(
+ # Setting this parameter to lower value will allow consensus
- collection_name=""{collection_name}"",
+ # to detect disconnected nodes earlier, but too frequent
- query_vector=models.NamedSparseVector(
+ # tick period may create significant network and CPU overhead.
- name=""text"",
+ # We encourage you NOT to change this parameter unless you know what you are doing.
- vector=models.SparseVector(
+ tick_period_ms: 100
- indices=[1, 7],
- values=[2.0, 1.0],
- ),
- ),
- limit=3,
+# Set to true to prevent service from sending usage statistics to the developers.
-)
+# Read more: https://qdrant.tech/documentation/guides/telemetry
-```
+telemetry_disabled: false
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+# TLS configuration.
+# Required if either service.enable_tls or cluster.p2p.enable_tls is true.
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+tls:
+ # Server certificate chain file
+ cert: ./tls/cert.pem
-client.search(""{collection_name}"", {
- vector: {
- name: ""text"",
+ # Server private key file
- vector: {
+ key: ./tls/key.pem
- indices: [1, 7],
- values: [2.0, 1.0]
- },
+ # Certificate authority certificate file.
- },
+ # This certificate will be used to validate the certificates
- limit: 3,
+ # presented by other nodes during inter-cluster communication.
-});
+ #
-```
+ # If verify_https_client_certificate is true, it will verify
+ # HTTPS client certificate
+ #
-```rust
+ # Required if cluster.p2p.enable_tls is true.
-use qdrant_client::{client::QdrantClient, client::Vector, qdrant::SearchPoints};
+ ca_cert: ./tls/cacert.pem
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ # TTL in seconds to reload certificate from disk, useful for certificate rotations.
+ # Only works for HTTPS endpoints. Does not support gRPC (and intra-cluster communication).
+ # If `null` - TTL is disabled.
-let sparse_vector: Vector = vec![(1, 2.0), (7, 1.0)].into();
+ cert_ttl: 3600
+```
-client
- .search_points(&SearchPoints {
+## Validation
- collection_name: ""{collection_name}"".to_string(),
- vector_name: Some(""text"".to_string()),
- sparse_indices: sparse_vector.indices,
+*Available since v1.1.1*
- vector: sparse_vector.data,
- limit: 3,
- ..Default::default()
+The configuration is validated on startup. If a configuration is loaded but
- })
+validation fails, a warning is logged. E.g.:
- .await?;
-```
+```text
+WARN Settings configuration file has validation errors:
-```java
+WARN - storage.optimizers.memmap_threshold: value 123 invalid, must be 1000 or larger
-import java.util.List;
+WARN - storage.hnsw_index.m: value 1 invalid, must be from 4 to 10000
+```
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
+The server will continue to operate. Any validation errors should be fixed as
-import io.qdrant.client.grpc.Points.SearchPoints;
+soon as possible though to prevent problematic behavior.
+",documentation/guides/configuration.md
+"---
-import io.qdrant.client.grpc.Points.SparseIndices;
+title: Optimize Resources
+weight: 11
+aliases:
-QdrantClient client =
+ - ../tutorials/optimize
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+---
-client
+# Optimize Qdrant
-.searchAsync(
- SearchPoints.newBuilder()
- .setCollectionName(""{collection_name}"")
+Different use cases have different requirements for balancing between memory, speed, and precision.
- .setVectorName(""text"")
+Qdrant is designed to be flexible and customizable so you can tune it to your needs.
- .addAllVector(List.of(2.0f, 1.0f))
- .setSparseIndices(SparseIndices.newBuilder().addAllData(List.of(1, 7)).build())
- .setLimit(3)
+![Tradeoff](/docs/tradeoff.png)
- .build())
-.get();
-```
+Let's look deeper into each of those possible optimization scenarios.
-```csharp
+## Prefer low memory footprint with high speed search
-using Qdrant.Client;
+The main way to achieve high speed search with low memory footprint is to keep vectors on disk while at the same time minimizing the number of disk reads.
-var client = new QdrantClient(""localhost"", 6334);
+Vector quantization is one way to achieve this. Quantization converts vectors into a more compact representation, which can be stored in memory and used for search. With smaller vectors you can cache more in RAM and reduce the number of disk reads.
-await client.SearchAsync(
- collectionName: ""{collection_name}"",
- vector: new float[] { 2.0f, 1.0f },
+To configure in-memory quantization, with on-disk original vectors, you need to create a collection with the following configuration:
- vectorName: ""text"",
- limit: 3,
- sparseIndices: new uint[] { 1, 7 }
+```http
-);
+PUT /collections/{collection_name}
-```
+{
+ ""vectors"": {
+ ""size"": 768,
-### Filtering results by score
+ ""distance"": ""Cosine"",
+ ""on_disk"": true
+ },
-In addition to payload filtering, it might be useful to filter out results with a low similarity score.
+ ""quantization_config"": {
-For example, if you know the minimal acceptance score for your model and do not want any results which are less similar than the threshold.
+ ""scalar"": {
-In this case, you can use `score_threshold` parameter of the search query.
+ ""type"": ""int8"",
-It will exclude all results with a score worse than the given.
+ ""always_ram"": true
+ }
+ }
-
+}
+```
-### Payload and vector in the result
+```python
+from qdrant_client import QdrantClient, models
-By default, retrieval methods do not return any stored information such as
-payload and vectors. Additional parameters `with_vectors` and `with_payload`
-alter this behavior.
+client = QdrantClient(url=""http://localhost:6333"")
-Example:
+client.create_collection(
+ collection_name=""{collection_name}"",
+ vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE, on_disk=True),
-```http
+ quantization_config=models.ScalarQuantization(
-POST /collections/{collection_name}/points/search
+ scalar=models.ScalarQuantizationConfig(
-{
+ type=models.ScalarType.INT8,
- ""vector"": [0.2, 0.1, 0.9, 0.7],
+ always_ram=True,
- ""with_vectors"": true,
+ ),
- ""with_payload"": true
+ ),
-}
+)
```
-```python
+```typescript
-client.search(
+import { QdrantClient } from ""@qdrant/js-client-rest"";
- collection_name=""{collection_name}"",
- query_vector=[0.2, 0.1, 0.9, 0.7],
- with_vectors=True,
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
- with_payload=True,
-)
-```
+client.createCollection(""{collection_name}"", {
+ vectors: {
+ size: 768,
-```typescript
+ distance: ""Cosine"",
-client.search(""{collection_name}"", {
+ on_disk: true,
- vector: [0.2, 0.1, 0.9, 0.7],
+ },
- with_vector: true,
+ quantization_config: {
- with_payload: true,
+ scalar: {
+ type: ""int8"",
+ always_ram: true,
+ },
+ },
});
@@ -19167,31 +18775,41 @@ client.search(""{collection_name}"", {
```rust
-use qdrant_client::{client::QdrantClient, qdrant::SearchPoints};
+use qdrant_client::qdrant::{
+ CreateCollectionBuilder, Distance, QuantizationType, ScalarQuantizationBuilder,
+ VectorParamsBuilder,
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+};
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
client
- .search_points(&SearchPoints {
+ .create_collection(
- collection_name: ""{collection_name}"".to_string(),
+ CreateCollectionBuilder::new(""{collection_name}"")
- vector: vec![0.2, 0.1, 0.9, 0.7],
+ .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
- with_payload: Some(true.into()),
+ .quantization_config(
- with_vectors: Some(true.into()),
+ ScalarQuantizationBuilder::default()
- limit: 3,
+ .r#type(QuantizationType::Int8.into())
- ..Default::default()
+ .always_ram(true),
- })
+ ),
+
+ )
.await?;
@@ -19201,21 +18819,25 @@ client
```java
-import java.util.List;
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Collections.CreateCollection;
-import static io.qdrant.client.WithPayloadSelectorFactory.enable;
+import io.qdrant.client.grpc.Collections.Distance;
+import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
+import io.qdrant.client.grpc.Collections.QuantizationConfig;
-import io.qdrant.client.QdrantClient;
+import io.qdrant.client.grpc.Collections.QuantizationType;
-import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Collections.ScalarQuantization;
-import io.qdrant.client.WithVectorsSelectorFactory;
+import io.qdrant.client.grpc.Collections.VectorParams;
-import io.qdrant.client.grpc.Points.SearchPoints;
+import io.qdrant.client.grpc.Collections.VectorsConfig;
@@ -19227,19 +18849,45 @@ QdrantClient client =
client
- .searchAsync(
+ .createCollectionAsync(
- SearchPoints.newBuilder()
+ CreateCollection.newBuilder()
.setCollectionName(""{collection_name}"")
- .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
+ .setVectorsConfig(
- .setWithPayload(enable(true))
+ VectorsConfig.newBuilder()
+ .setParams(
+ VectorParams.newBuilder()
+ .setSize(768)
+ .setDistance(Distance.Cosine)
+ .setOnDisk(true)
+ .build())
+ .build())
+ .setQuantizationConfig(
+ QuantizationConfig.newBuilder()
- .setWithVectors(WithVectorsSelectorFactory.enable(true))
+ .setScalar(
+ ScalarQuantization.newBuilder()
+ .setType(QuantizationType.Int8)
+ .setAlwaysRam(true)
+ .build())
- .setLimit(3)
+ .build())
.build())
@@ -19253,23 +18901,27 @@ client
using Qdrant.Client;
+using Qdrant.Client.Grpc;
+
var client = new QdrantClient(""localhost"", 6334);
-await client.SearchAsync(
+await client.CreateCollectionAsync(
collectionName: ""{collection_name}"",
- vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
+ vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine, OnDisk = true },
- payloadSelector: true,
+ quantizationConfig: new QuantizationConfig
- vectorsSelector: true,
+ {
- limit: 3
+ Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true }
+
+ }
);
@@ -19277,23 +18929,85 @@ await client.SearchAsync(
-You can use `with_payload` to scope to or filter a specific payload subset.
+```go
-You can even specify an array of items to include, such as `city`,
+import (
-`village`, and `town`:
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
+
+ CollectionName: ""{collection_name}"",
+
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
+
+ Size: 768,
+
+ Distance: qdrant.Distance_Cosine,
+
+ OnDisk: qdrant.PtrOf(true),
+
+ }),
+
+ QuantizationConfig: qdrant.NewQuantizationScalar(&qdrant.ScalarQuantization{
+
+ Type: qdrant.QuantizationType_Int8,
+
+ AlwaysRam: qdrant.PtrOf(true),
+
+ }),
+
+})
+
+```
+
+
+
+`on_disk` ensures that the original vectors are stored on disk, while `always_ram` keeps the quantized vectors in RAM.
+
+
+
+Optionally, you can disable rescoring with the search `params`, which reduces the number of disk reads even further at the cost of a slight potential drop in precision.
```http
-POST /collections/{collection_name}/points/search
+POST /collections/{collection_name}/points/query
{
- ""vector"": [0.2, 0.1, 0.9, 0.7],
+ ""query"": [0.2, 0.1, 0.9, 0.7],
- ""with_payload"": [""city"", ""village"", ""town""]
+ ""params"": {
+
+ ""quantization"": {
+
+ ""rescore"": false
+
+ }
+
+ },
+
+ ""limit"": 10
}
@@ -19303,23 +19017,25 @@ POST /collections/{collection_name}/points/search
```python
-from qdrant_client import QdrantClient
-
-from qdrant_client.http import models
+from qdrant_client import QdrantClient, models
-client = QdrantClient(""localhost"", port=6333)
+client = QdrantClient(url=""http://localhost:6333"")
-client.search(
+client.query_points(
collection_name=""{collection_name}"",
- query_vector=[0.2, 0.1, 0.9, 0.7],
+ query=[0.2, 0.1, 0.9, 0.7],
- with_payload=[""city"", ""village"", ""town""],
+ search_params=models.SearchParams(
+
+ quantization=models.QuantizationSearchParams(rescore=False)
+
+ ),
)
@@ -19337,11 +19053,19 @@ const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-client.search(""{collection_name}"", {
+client.query(""{collection_name}"", {
- vector: [0.2, 0.1, 0.9, 0.7],
+ query: [0.2, 0.1, 0.9, 0.7],
- with_payload: [""city"", ""village"", ""town""],
+ params: {
+
+ quantization: {
+
+ rescore: false,
+
+ },
+
+ },
});
@@ -19351,29 +19075,39 @@ client.search(""{collection_name}"", {
```rust
-use qdrant_client::{client::QdrantClient, qdrant::SearchPoints};
+use qdrant_client::qdrant::{
+
+ QuantizationSearchParamsBuilder, QueryPointsBuilder, SearchParamsBuilder,
+
+};
+
+use qdrant_client::Qdrant;
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
client
- .search_points(&SearchPoints {
+ .query(
- collection_name: ""{collection_name}"".to_string(),
+ QueryPointsBuilder::new(""{collection_name}"")
- vector: vec![0.2, 0.1, 0.9, 0.7],
+ .query(vec![0.2, 0.1, 0.9, 0.7])
- with_payload: Some(vec![""city"", ""village"", ""town""].into()),
+ .limit(3)
- limit: 3,
+ .params(
- ..Default::default()
+ SearchParamsBuilder::default()
- })
+ .quantization(QuantizationSearchParamsBuilder::default().rescore(false)),
+
+ ),
+
+ )
.await?;
@@ -19383,19 +19117,19 @@ client
```java
-import java.util.List;
-
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
-import static io.qdrant.client.WithPayloadSelectorFactory.include;
+import io.qdrant.client.grpc.Points.QuantizationSearchParams;
+import io.qdrant.client.grpc.Points.QueryPoints;
+import io.qdrant.client.grpc.Points.SearchParams;
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Points.SearchPoints;
+import static io.qdrant.client.QueryFactory.nearest;
@@ -19405,23 +19139,29 @@ QdrantClient client =
-client
+client.queryAsync(
- .searchAsync(
+ QueryPoints.newBuilder()
- SearchPoints.newBuilder()
+ .setCollectionName(""{collection_name}"")
- .setCollectionName(""{collection_name}"")
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
- .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
+ .setParams(
- .setWithPayload(include(List.of(""city"", ""village"", ""town"")))
+ SearchParams.newBuilder()
- .setLimit(3)
+ .setQuantization(
- .build())
+ QuantizationSearchParams.newBuilder().setRescore(false).build())
- .get();
+ .build())
+
+ .setLimit(3)
+
+ .build())
+
+ .get();
```
@@ -19439,23 +19179,17 @@ var client = new QdrantClient(""localhost"", 6334);
-await client.SearchAsync(
+await client.QueryAsync(
collectionName: ""{collection_name}"",
- vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
+ query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
- payloadSelector: new WithPayloadSelector
+ searchParams: new SearchParams
{
- Include = new PayloadIncludeSelector
-
- {
-
- Fields = { new string[] { ""city"", ""village"", ""town"" } }
-
- }
+ Quantization = new QuantizationSearchParams { Rescore = false }
},
@@ -19467,445 +19201,459 @@ await client.SearchAsync(
-Or use `include` or `exclude` explicitly. For example, to exclude `city`:
+```go
+import (
+ ""context""
-```http
-POST /collections/{collection_name}/points/search
-{
+ ""github.com/qdrant/go-client/qdrant""
- ""vector"": [0.2, 0.1, 0.9, 0.7],
+)
- ""with_payload"": {
- ""exclude"": [""city""]
- }
+client, err := qdrant.NewClient(&qdrant.Config{
-}
+ Host: ""localhost"",
-```
+ Port: 6334,
+})
-```python
-from qdrant_client import QdrantClient
+client.Query(context.Background(), &qdrant.QueryPoints{
-from qdrant_client.http import models
+ CollectionName: ""{collection_name}"",
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
+ Params: &qdrant.SearchParams{
-client = QdrantClient(""localhost"", port=6333)
+ Quantization: &qdrant.QuantizationSearchParams{
+ Rescore: qdrant.PtrOf(true),
+ },
-client.search(
+ },
- collection_name=""{collection_name}"",
+})
- query_vector=[0.2, 0.1, 0.9, 0.7],
+```
- with_payload=models.PayloadSelectorExclude(
- exclude=[""city""],
- ),
+## Prefer high precision with low memory footprint
-)
-```
+In case you need high precision, but don't have enough RAM to store vectors in memory, you can enable on-disk storage for both the vectors and the HNSW index.
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+```http
+PUT /collections/{collection_name}
+{
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ ""vectors"": {
+ ""size"": 768,
+ ""distance"": ""Cosine"",
-client.search(""{collection_name}"", {
+ ""on_disk"": true
- vector: [0.2, 0.1, 0.9, 0.7],
+ },
- with_payload: {
+ ""hnsw_config"": {
- exclude: [""city""],
+ ""on_disk"": true
- },
+ }
-});
+}
```
-```rust
+```python
-use qdrant_client::{
+from qdrant_client import QdrantClient, models
- client::QdrantClient,
- qdrant::{
- with_payload_selector::SelectorOptions, PayloadExcludeSelector, SearchPoints,
+client = QdrantClient(url=""http://localhost:6333"")
- WithPayloadSelector,
- },
-};
+client.create_collection(
+ collection_name=""{collection_name}"",
+ vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE, on_disk=True),
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ hnsw_config=models.HnswConfigDiff(on_disk=True),
+)
+```
-client
- .search_points(&SearchPoints {
- collection_name: ""{collection_name}"".to_string(),
+```typescript
- vector: vec![0.2, 0.1, 0.9, 0.7],
+import { QdrantClient } from ""@qdrant/js-client-rest"";
- with_payload: Some(WithPayloadSelector {
- selector_options: Some(SelectorOptions::Exclude(PayloadExcludeSelector {
- fields: vec![""city"".to_string()],
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
- })),
- }),
- limit: 3,
+client.createCollection(""{collection_name}"", {
- ..Default::default()
+ vectors: {
- })
+ size: 768,
- .await?;
+ distance: ""Cosine"",
-```
+ on_disk: true,
+ },
+ hnsw_config: {
-```java
+ on_disk: true,
-import java.util.List;
+ },
+});
+```
-import static io.qdrant.client.WithPayloadSelectorFactory.exclude;
+```rust
-import io.qdrant.client.QdrantClient;
+use qdrant_client::qdrant::{
-import io.qdrant.client.QdrantGrpcClient;
+ CreateCollectionBuilder, Distance, HnswConfigDiffBuilder, VectorParamsBuilder,
-import io.qdrant.client.grpc.Points.SearchPoints;
+};
+use qdrant_client::{Qdrant, QdrantError};
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
client
- .searchAsync(
+ .create_collection(
- SearchPoints.newBuilder()
+ CreateCollectionBuilder::new(""{collection_name}"")
- .setCollectionName(""{collection_name}"")
+ .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine).on_disk(true))
- .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
+ .hnsw_config(HnswConfigDiffBuilder::default().on_disk(true)),
- .setWithPayload(exclude(List.of(""city"")))
+ )
- .setLimit(3)
+ .await?;
- .build())
+```
- .get();
-```
+```java
+import io.qdrant.client.QdrantClient;
-```csharp
+import io.qdrant.client.QdrantGrpcClient;
-using Qdrant.Client;
+import io.qdrant.client.grpc.Collections.CreateCollection;
-using Qdrant.Client.Grpc;
+import io.qdrant.client.grpc.Collections.Distance;
+import io.qdrant.client.grpc.Collections.HnswConfigDiff;
+import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
-var client = new QdrantClient(""localhost"", 6334);
+import io.qdrant.client.grpc.Collections.VectorParams;
+import io.qdrant.client.grpc.Collections.VectorsConfig;
-await client.SearchAsync(
- collectionName: ""{collection_name}"",
+QdrantClient client =
- vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- payloadSelector: new WithPayloadSelector
- {
- Exclude = new PayloadExcludeSelector { Fields = { new string[] { ""city"" } } }
+client
- },
+ .createCollectionAsync(
- limit: 3
+ CreateCollection.newBuilder()
-);
+ .setCollectionName(""{collection_name}"")
-```
+ .setVectorsConfig(
+ VectorsConfig.newBuilder()
+ .setParams(
-It is possible to target nested fields using a dot notation:
+ VectorParams.newBuilder()
-- `payload.nested_field` - for a nested field
+ .setSize(768)
-- `payload.nested_array[].sub_field` - for projecting nested fields within an array
+ .setDistance(Distance.Cosine)
+ .setOnDisk(true)
+ .build())
-Accessing array elements by index is currently not supported.
+ .build())
+ .setHnswConfig(HnswConfigDiff.newBuilder().setOnDisk(true).build())
+ .build())
-## Batch search API
+ .get();
+
+```
-*Available as of v0.10.0*
+```csharp
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-The batch search API enables to perform multiple search requests via a single request.
+var client = new QdrantClient(""localhost"", 6334);
-Its semantic is straightforward, `n` batched search requests are equivalent to `n` singular search requests.
+await client.CreateCollectionAsync(
-This approach has several advantages. Logically, fewer network connections are required which can be very beneficial on its own.
+ collectionName: ""{collection_name}"",
+ vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine, OnDisk = true},
+ hnswConfig: new HnswConfigDiff { OnDisk = true }
-More importantly, batched requests will be efficiently processed via the query planner which can detect and optimize requests if they have the same `filter`.
+);
+```
-This can have a great effect on latency for non trivial filters as the intermediary results can be shared among the request.
+```go
+import (
-In order to use it, simply pack together your search requests. All the regular attributes of a search request are of course available.
+ ""context""
-```http
+ ""github.com/qdrant/go-client/qdrant""
-POST /collections/{collection_name}/points/search/batch
+)
-{
- ""searches"": [
- {
+client, err := qdrant.NewClient(&qdrant.Config{
- ""filter"": {
+ Host: ""localhost"",
- ""must"": [
+ Port: 6334,
- {
+})
- ""key"": ""city"",
- ""match"": {
- ""value"": ""London""
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
- }
+ CollectionName: ""{collection_name}"",
- }
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
- ]
+ Size: 768,
- },
+ Distance: qdrant.Distance_Cosine,
- ""vector"": [0.2, 0.1, 0.9, 0.7],
+ OnDisk: qdrant.PtrOf(true),
- ""limit"": 3
+ }),
- },
+ HnswConfig: &qdrant.HnswConfigDiff{
- {
+ OnDisk: qdrant.PtrOf(true),
- ""filter"": {
+ },
- ""must"": [
+})
- {
+```
- ""key"": ""city"",
- ""match"": {
- ""value"": ""London""
+In this scenario, you can increase the precision of the search by increasing the `m` and `ef_construct` parameters of the HNSW index, even with limited RAM.
- }
- }
- ]
+```json
- },
+...
- ""vector"": [0.5, 0.3, 0.2, 0.3],
+""hnsw_config"": {
- ""limit"": 3
+ ""m"": 64,
- }
+ ""ef_construct"": 512,
- ]
+ ""on_disk"": true
}
+...
+
```
-```python
+Disk IOPS is a critical factor in this scenario: it determines how fast you can perform search.
-from qdrant_client import QdrantClient
+You can use [fio](https://gist.github.com/superboum/aaa45d305700a7873a8ebbab1abddf2b) to measure disk IOPS.
-from qdrant_client.http import models
+## Prefer high precision with high speed search
-client = QdrantClient(""localhost"", port=6333)
+For high-speed and high-precision search, it is critical to keep as much data in RAM as possible.
-filter = models.Filter(
+By default, Qdrant follows this approach, but you can tune it to your needs.
- must=[
- models.FieldCondition(
- key=""city"",
+It is possible to achieve high search speed and tunable accuracy by applying quantization with re-scoring.
- match=models.MatchValue(
- value=""London"",
- ),
+```http
- )
+PUT /collections/{collection_name}
- ]
+{
-)
+ ""vectors"": {
+
+ ""size"": 768,
+ ""distance"": ""Cosine""
+ },
-search_queries = [
+ ""quantization_config"": {
- models.SearchRequest(vector=[0.2, 0.1, 0.9, 0.7], filter=filter, limit=3),
+ ""scalar"": {
- models.SearchRequest(vector=[0.5, 0.3, 0.2, 0.3], filter=filter, limit=3),
+ ""type"": ""int8"",
-]
+ ""always_ram"": true
+ }
+ }
-client.search_batch(collection_name=""{collection_name}"", requests=search_queries)
+}
```
-```typescript
+```python
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+from qdrant_client import QdrantClient, models
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+client = QdrantClient(url=""http://localhost:6333"")
-const filter = {
+client.create_collection(
- must: [
+ collection_name=""{collection_name}"",
- {
+ vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
- key: ""city"",
+ quantization_config=models.ScalarQuantization(
- match: {
+ scalar=models.ScalarQuantizationConfig(
- value: ""London"",
+ type=models.ScalarType.INT8,
- },
+ always_ram=True,
- },
+ ),
- ],
+ ),
-};
+)
+```
-const searches = [
- {
+```typescript
- vector: [0.2, 0.1, 0.9, 0.7],
+import { QdrantClient } from ""@qdrant/js-client-rest"";
- filter,
- limit: 3,
- },
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
- {
- vector: [0.5, 0.3, 0.2, 0.3],
- filter,
+client.createCollection(""{collection_name}"", {
- limit: 3,
+ vectors: {
+
+ size: 768,
+
+ distance: ""Cosine"",
},
-];
+ quantization_config: {
+
+ scalar: {
+ type: ""int8"",
+ always_ram: true,
-client.searchBatch(""{collection_name}"", {
+ },
- searches,
+ },
});
@@ -19915,131 +19663,121 @@ client.searchBatch(""{collection_name}"", {
```rust
-use qdrant_client::{
+use qdrant_client::qdrant::{
- client::QdrantClient,
+ CreateCollectionBuilder, Distance, QuantizationType, ScalarQuantizationBuilder,
- qdrant::{Condition, Filter, SearchBatchPoints, SearchPoints},
+ VectorParamsBuilder,
};
+use qdrant_client::Qdrant;
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-let filter = Filter::must([Condition::matches(""city"", ""London"".to_string())]);
+client
+ .create_collection(
-let searches = vec![
+ CreateCollectionBuilder::new(""{collection_name}"")
- SearchPoints {
+ .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
- collection_name: ""{collection_name}"".to_string(),
+ .quantization_config(
- vector: vec![0.2, 0.1, 0.9, 0.7],
+ ScalarQuantizationBuilder::default()
- filter: Some(filter.clone()),
+ .r#type(QuantizationType::Int8.into())
- limit: 3,
+ .always_ram(true),
- ..Default::default()
+ ),
- },
+ )
- SearchPoints {
+ .await?;
- collection_name: ""{collection_name}"".to_string(),
+```
- vector: vec![0.5, 0.3, 0.2, 0.3],
- filter: Some(filter),
- limit: 3,
+```java
- ..Default::default()
+import io.qdrant.client.QdrantClient;
- },
+import io.qdrant.client.QdrantGrpcClient;
-];
+import io.qdrant.client.grpc.Collections.CreateCollection;
+import io.qdrant.client.grpc.Collections.Distance;
+import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
-client
+import io.qdrant.client.grpc.Collections.QuantizationConfig;
- .search_batch_points(&SearchBatchPoints {
+import io.qdrant.client.grpc.Collections.QuantizationType;
- collection_name: ""{collection_name}"".to_string(),
+import io.qdrant.client.grpc.Collections.ScalarQuantization;
- search_points: searches,
+import io.qdrant.client.grpc.Collections.VectorParams;
- read_consistency: None,
+import io.qdrant.client.grpc.Collections.VectorsConfig;
- ..Default::default()
- })
- .await?;
+QdrantClient client =
-```
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-```java
+client
-import java.util.List;
+ .createCollectionAsync(
+ CreateCollection.newBuilder()
+ .setCollectionName(""{collection_name}"")
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+ .setVectorsConfig(
+ VectorsConfig.newBuilder()
+ .setParams(
-import io.qdrant.client.QdrantClient;
+ VectorParams.newBuilder()
-import io.qdrant.client.QdrantGrpcClient;
+ .setSize(768)
-import io.qdrant.client.grpc.Points.Filter;
+ .setDistance(Distance.Cosine)
-import io.qdrant.client.grpc.Points.SearchPoints;
+ .build())
+ .build())
+ .setQuantizationConfig(
-QdrantClient client =
-
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-
-
-
-Filter filter = Filter.newBuilder().addMust(matchKeyword(""city"", ""London"")).build();
-
-List searches =
-
- List.of(
-
- SearchPoints.newBuilder()
-
- .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
-
- .setFilter(filter)
+ QuantizationConfig.newBuilder()
- .setLimit(3)
+ .setScalar(
- .build(),
+ ScalarQuantization.newBuilder()
- SearchPoints.newBuilder()
+ .setType(QuantizationType.Int8)
- .addAllVector(List.of(0.5f, 0.3f, 0.2f, 0.3f))
+ .setAlwaysRam(true)
- .setFilter(filter)
+ .build())
- .setLimit(3)
+ .build())
- .build());
+ .build())
-client.searchBatchAsync(""{collection_name}"", searches, null).get();
+ .get();
```
@@ -20051,129 +19789,103 @@ using Qdrant.Client;
using Qdrant.Client.Grpc;
-using static Qdrant.Client.Grpc.Conditions;
-
var client = new QdrantClient(""localhost"", 6334);
-var filter = MatchKeyword(""city"", ""London"");
-
-
-
-var searches = new List
-
-{
-
- new()
-
- {
-
- Vector = { new float[] { 0.2f, 0.1f, 0.9f, 0.7f } },
-
- Filter = filter,
+await client.CreateCollectionAsync(
- Limit = 3
+ collectionName: ""{collection_name}"",
- },
+ vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine},
- new()
+ quantizationConfig: new QuantizationConfig
{
- Vector = { new float[] { 0.5f, 0.3f, 0.2f, 0.3f } },
-
- Filter = filter,
-
- Limit = 3
+ Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true }
}
-};
-
-
-
-await client.SearchBatchAsync(collectionName: ""{collection_name}"", searches: searches);
+);
```
-The result of this API contains one array per search requests.
-
-
-
-```json
+```go
-{
+import (
- ""result"": [
+ ""context""
- [
- { ""id"": 10, ""score"": 0.81 },
- { ""id"": 14, ""score"": 0.75 },
+ ""github.com/qdrant/go-client/qdrant""
- { ""id"": 11, ""score"": 0.73 }
+)
- ],
- [
- { ""id"": 1, ""score"": 0.92 },
+client, err := qdrant.NewClient(&qdrant.Config{
- { ""id"": 3, ""score"": 0.89 },
+ Host: ""localhost"",
- { ""id"": 9, ""score"": 0.75 }
+ Port: 6334,
- ]
+})
- ],
- ""status"": ""ok"",
- ""time"": 0.001
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
-}
+ CollectionName: ""{collection_name}"",
-```
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
+ Size: 768,
+ Distance: qdrant.Distance_Cosine,
-## Pagination
+ }),
+ QuantizationConfig: qdrant.NewQuantizationScalar(&qdrant.ScalarQuantization{
+ Type: qdrant.QuantizationType_Int8,
-*Available as of v0.8.3*
+ AlwaysRam: qdrant.PtrOf(true),
+ }),
+})
-Search and [recommendation](../explore/#recommendation-api) APIs allow to skip first results of the search and return only the result starting from some specified offset:
+```
-Example:
+There are also some search-time parameters you can use to tune the search accuracy and speed:
```http
-POST /collections/{collection_name}/points/search
+POST /collections/{collection_name}/points/query
{
- ""vector"": [0.2, 0.1, 0.9, 0.7],
+ ""query"": [0.2, 0.1, 0.9, 0.7],
- ""with_vectors"": true,
+ ""params"": {
- ""with_payload"": true,
+ ""hnsw_ef"": 128,
- ""limit"": 10,
+ ""exact"": false
- ""offset"": 100
+ },
+
+ ""limit"": 3
}
@@ -20183,27 +19895,23 @@ POST /collections/{collection_name}/points/search
```python
-from qdrant_client import QdrantClient
+from qdrant_client import QdrantClient, models
-client = QdrantClient(""localhost"", port=6333)
+client = QdrantClient(url=""http://localhost:6333"")
-client.search(
+client.query_points(
collection_name=""{collection_name}"",
- query_vector=[0.2, 0.1, 0.9, 0.7],
-
- with_vectors=True,
-
- with_payload=True,
+ query=[0.2, 0.1, 0.9, 0.7],
- limit=10,
+ search_params=models.SearchParams(hnsw_ef=128, exact=False),
- offset=100,
+ limit=3,
)
@@ -20221,17 +19929,19 @@ const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-client.search(""{collection_name}"", {
+client.query(""{collection_name}"", {
- vector: [0.2, 0.1, 0.9, 0.7],
+ query: [0.2, 0.1, 0.9, 0.7],
- with_vector: true,
+ params: {
- with_payload: true,
+ hnsw_ef: 128,
- limit: 10,
+ exact: false,
- offset: 100,
+ },
+
+ limit: 3,
});
@@ -20241,33 +19951,29 @@ client.search(""{collection_name}"", {
```rust
-use qdrant_client::{client::QdrantClient, qdrant::SearchPoints};
-
+use qdrant_client::qdrant::{QueryPointsBuilder, SearchParamsBuilder};
+use qdrant_client::Qdrant;
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-client
-
- .search_points(&SearchPoints {
- collection_name: ""{collection_name}"".to_string(),
- vector: vec![0.2, 0.1, 0.9, 0.7],
+client
- with_vectors: Some(true.into()),
+ .query(
- with_payload: Some(true.into()),
+ QueryPointsBuilder::new(""{collection_name}"")
- limit: 10,
+ .query(vec![0.2, 0.1, 0.9, 0.7])
- offset: Some(100),
+ .limit(3)
- ..Default::default()
+ .params(SearchParamsBuilder::default().hnsw_ef(128).exact(false)),
- })
+ )
.await?;
@@ -20277,51 +19983,41 @@ client
```java
-import java.util.List;
-
-
-
-import static io.qdrant.client.WithPayloadSelectorFactory.enable;
-
-
-
import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.WithVectorsSelectorFactory;
+import io.qdrant.client.grpc.Points.QueryPoints;
-import io.qdrant.client.grpc.Points.SearchPoints;
+import io.qdrant.client.grpc.Points.SearchParams;
-QdrantClient client =
+import static io.qdrant.client.QueryFactory.nearest;
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+QdrantClient client =
-client
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- .searchAsync(
- SearchPoints.newBuilder()
- .setCollectionName(""{collection_name}"")
+client.queryAsync(
- .addAllVector(List.of(0.2f, 0.1f, 0.9f, 0.7f))
+ QueryPoints.newBuilder()
- .setWithPayload(enable(true))
+ .setCollectionName(""{collection_name}"")
- .setWithVectors(WithVectorsSelectorFactory.enable(true))
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
- .setLimit(10)
+ .setParams(SearchParams.newBuilder().setHnswEf(128).setExact(false).build())
- .setOffset(100)
+ .setLimit(3)
- .build())
+ .build())
- .get();
+ .get();
```
@@ -20331,25 +20027,23 @@ client
using Qdrant.Client;
+using Qdrant.Client.Grpc;
-var client = new QdrantClient(""localhost"", 6334);
-
+var client = new QdrantClient(""localhost"", 6334);
-await client.SearchAsync(
- ""{collection_name}"",
- new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
+await client.QueryAsync(
- payloadSelector: true,
+ collectionName: ""{collection_name}"",
- vectorsSelector: true,
+ query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
- limit: 10,
+ searchParams: new SearchParams { HnswEf = 128, Exact = false },
- offset: 100
+ limit: 3
);
@@ -20357,1031 +20051,996 @@ await client.SearchAsync(
-Is equivalent to retrieving the 11th page with 10 records per page.
-
+```go
+import (
-
+ ""context""
-Vector-based retrieval in general and HNSW index in particular, are not designed to be paginated.
+ ""github.com/qdrant/go-client/qdrant""
-It is impossible to retrieve Nth closest vector without retrieving the first N vectors first.
+)
-However, using the offset parameter saves the resources by reducing network traffic and the number of times the storage is accessed.
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
+ Port: 6334,
-Using an `offset` parameter, will require to internally retrieve `offset + limit` points, but only access payload and vector from the storage those points which are going to be actually returned.
+})
-## Grouping API
+client.Query(context.Background(), &qdrant.QueryPoints{
+ CollectionName: ""{collection_name}"",
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
-*Available as of v1.2.0*
+ Params: &qdrant.SearchParams{
+ HnswEf: qdrant.PtrOf(uint64(128)),
+ Exact: qdrant.PtrOf(false),
-It is possible to group results by a certain field. This is useful when you have multiple points for the same item, and you want to avoid redundancy of the same item in the results.
+ },
+})
+```
-For example, if you have a large document split into multiple chunks, and you want to search or [recommend](../explore/#recommendation-api) on a per-document basis, you can group the results by the document ID.
+- `hnsw_ef` - controls the number of neighbors to visit during search. The higher the value, the more accurate, but slower, the search will be. The recommended range is 32-512.
-Consider having points with the following payloads:
+- `exact` - if set to `true`, performs an exact search, which is slower but more accurate. You can use it to compare the results of searches with different `hnsw_ef` values against the ground truth.
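+For example, to obtain the ground truth for such a comparison, you can run the same query with `exact` enabled (a minimal sketch, reusing the query endpoint shown above):
+
+```http
+POST /collections/{collection_name}/points/query
+{
+    ""query"": [0.2, 0.1, 0.9, 0.7],
+    ""params"": {
+        ""exact"": true
+    },
+    ""limit"": 3
+}
+```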
-```json
+## Latency vs Throughput
-[
- {
- ""id"": 0,
+There are two main approaches to measure the speed of search:
- ""payload"": {
+ - latency of the request - the time from the moment a request is submitted to the moment a response is received
- ""chunk_part"": 0,
+ - throughput - the number of requests per second the system can handle
- ""document_id"": ""a""
- },
- ""vector"": [0.91]
+Those approaches are not mutually exclusive, but in some cases it might be preferable to optimize for one or the other.
- },
- {
- ""id"": 1,
+To prefer minimizing latency, you can set up Qdrant to use as many cores as possible for a single request.
- ""payload"": {
+You can do this by setting the number of segments in the collection to be equal to the number of cores in the system. In this case, each segment will be processed in parallel, and the final result will be obtained faster.
- ""chunk_part"": 1,
- ""document_id"": [""a"", ""b""]
- },
+```http
- ""vector"": [0.8]
+PUT /collections/{collection_name}
- },
+{
- {
+ ""vectors"": {
- ""id"": 2,
+ ""size"": 768,
- ""payload"": {
+ ""distance"": ""Cosine""
- ""chunk_part"": 2,
+ },
- ""document_id"": ""a""
+ ""optimizers_config"": {
- },
+ ""default_segment_number"": 16
- ""vector"": [0.2]
+ }
- },
+}
- {
+```
- ""id"": 3,
- ""payload"": {
- ""chunk_part"": 0,
+```python
- ""document_id"": 123
+from qdrant_client import QdrantClient, models
- },
- ""vector"": [0.79]
- },
+client = QdrantClient(url=""http://localhost:6333"")
- {
- ""id"": 4,
- ""payload"": {
+client.create_collection(
- ""chunk_part"": 1,
+ collection_name=""{collection_name}"",
- ""document_id"": 123
+ vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
- },
+ optimizers_config=models.OptimizersConfigDiff(default_segment_number=16),
- ""vector"": [0.75]
+)
- },
+```
- {
- ""id"": 5,
- ""payload"": {
+```typescript
- ""chunk_part"": 0,
+import { QdrantClient } from ""@qdrant/js-client-rest"";
- ""document_id"": -10
- },
- ""vector"": [0.6]
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
- }
-]
-```
+client.createCollection(""{collection_name}"", {
+ vectors: {
+ size: 768,
-With the ***groups*** API, you will be able to get the best *N* points for each document, assuming that the payload of the points contains the document ID. Of course there will be times where the best *N* points cannot be fulfilled due to lack of points or a big distance with respect to the query. In every case, the `group_size` is a best-effort parameter, akin to the `limit` parameter.
+ distance: ""Cosine"",
+ },
+ optimizers_config: {
-### Search groups
+ default_segment_number: 16,
+ },
+});
-REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#tag/points/operation/search_point_groups)):
+```
-```http
+```rust
-POST /collections/{collection_name}/points/search/groups
+use qdrant_client::qdrant::{
-{
+ CreateCollectionBuilder, Distance, OptimizersConfigDiffBuilder, VectorParamsBuilder,
- // Same as in the regular search API
+};
- ""vector"": [1.1],
+use qdrant_client::Qdrant;
- // Grouping parameters
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- ""group_by"": ""document_id"", // Path of the field to group by
- ""limit"": 4, // Max amount of groups
- ""group_size"": 2, // Max amount of points per group
+client
-}
+ .create_collection(
-```
+ CreateCollectionBuilder::new(""{collection_name}"")
+ .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
+ .optimizers_config(
-```python
+ OptimizersConfigDiffBuilder::default().default_segment_number(16),
-client.search_groups(
+ ),
- collection_name=""{collection_name}"",
+ )
- # Same as in the regular search() API
+ .await?;
- query_vector=g,
+```
- # Grouping parameters
- group_by=""document_id"", # Path of the field to group by
- limit=4, # Max amount of groups
+```java
- group_size=2, # Max amount of points per group
+import io.qdrant.client.QdrantClient;
-)
+import io.qdrant.client.QdrantGrpcClient;
-```
+import io.qdrant.client.grpc.Collections.CreateCollection;
+import io.qdrant.client.grpc.Collections.Distance;
+import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
-```typescript
+import io.qdrant.client.grpc.Collections.VectorParams;
-client.searchPointGroups(""{collection_name}"", {
+import io.qdrant.client.grpc.Collections.VectorsConfig;
- vector: [1.1],
- group_by: ""document_id"",
- limit: 4,
+QdrantClient client =
- group_size: 2,
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-});
-```
+client
+ .createCollectionAsync(
-```rust
+ CreateCollection.newBuilder()
-use qdrant_client::qdrant::SearchPointGroups;
+ .setCollectionName(""{collection_name}"")
+ .setVectorsConfig(
+ VectorsConfig.newBuilder()
-client
+ .setParams(
- .search_groups(&SearchPointGroups {
+ VectorParams.newBuilder()
- collection_name: ""{collection_name}"".to_string(),
+ .setSize(768)
- vector: vec![1.1],
+ .setDistance(Distance.Cosine)
- group_by: ""document_id"".to_string(),
+ .build())
- limit: 4,
+ .build())
- group_size: 2,
+ .setOptimizersConfig(
- ..Default::default()
+ OptimizersConfigDiff.newBuilder().setDefaultSegmentNumber(16).build())
- })
+ .build())
- .await?;
+ .get();
```
-```java
+```csharp
-import java.util.List;
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-import io.qdrant.client.grpc.Points.SearchPointGroups;
+var client = new QdrantClient(""localhost"", 6334);
-client
- .searchGroupsAsync(
+await client.CreateCollectionAsync(
- SearchPointGroups.newBuilder()
+ collectionName: ""{collection_name}"",
- .setCollectionName(""{collection_name}"")
+ vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
- .addAllVector(List.of(1.1f))
+ optimizersConfig: new OptimizersConfigDiff { DefaultSegmentNumber = 16 }
- .setGroupBy(""document_id"")
+);
- .setLimit(4)
+```
- .setGroupSize(2)
- .build())
- .get();
+```go
-```
+import (
+ ""context""
-```csharp
-using Qdrant.Client;
+ ""github.com/qdrant/go-client/qdrant""
+)
-var client = new QdrantClient(""localhost"", 6334);
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
-await client.SearchGroupsAsync(
+ Port: 6334,
- collectionName: ""{collection_name}"",
+})
- vector: new float[] { 1.1f },
- groupBy: ""document_id"",
- limit: 4,
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
- groupSize: 2
+ CollectionName: ""{collection_name}"",
-);
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
-```
+ Size: 768,
+ Distance: qdrant.Distance_Cosine,
+ }),
-The output of a ***groups*** call looks like this:
+ OptimizersConfig: &qdrant.OptimizersConfigDiff{
+ DefaultSegmentNumber: qdrant.PtrOf(uint64(16)),
+ },
-```json
+})
-{
+```
- ""result"": {
- ""groups"": [
- {
+To prefer throughput, you can set up Qdrant to use as many cores as possible for processing multiple requests in parallel.
- ""id"": ""a"",
+To do that, you can configure Qdrant to use a minimal number of segments, which is usually 2.
- ""hits"": [
+Large segments benefit from a larger index and an overall smaller number of vector comparisons required to find the nearest neighbors, but at the same time they require more time to build the index.
- { ""id"": 0, ""score"": 0.91 },
- { ""id"": 1, ""score"": 0.85 }
- ]
+```http
- },
+PUT /collections/{collection_name}
- {
+{
- ""id"": ""b"",
+ ""vectors"": {
- ""hits"": [
+ ""size"": 768,
- { ""id"": 1, ""score"": 0.85 }
+ ""distance"": ""Cosine""
- ]
+ },
- },
+ ""optimizers_config"": {
- {
+ ""default_segment_number"": 2
- ""id"": 123,
+ }
- ""hits"": [
+}
- { ""id"": 3, ""score"": 0.79 },
+```
- { ""id"": 4, ""score"": 0.75 }
- ]
- },
+			Rescore: qdrant.PtrOf(false),
- {
+from qdrant_client import QdrantClient, models
- ""id"": -10,
- ""hits"": [
- { ""id"": 5, ""score"": 0.6 }
+client = QdrantClient(url=""http://localhost:6333"")
- ]
- }
- ]
+client.create_collection(
- },
+ collection_name=""{collection_name}"",
- ""status"": ""ok"",
+ vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
- ""time"": 0.001
+ optimizers_config=models.OptimizersConfigDiff(default_segment_number=2),
-}
+)
```
-The groups are ordered by the score of the top point in the group. Inside each group the points are sorted too.
+```typescript
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-If the `group_by` field of a point is an array (e.g. `""document_id"": [""a"", ""b""]`), the point can be included in multiple groups (e.g. `""document_id"": ""a""` and `document_id: ""b""`).
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-
+client.createCollection(""{collection_name}"", {
+ vectors: {
-**Limitations**:
+ size: 768,
+ distance: ""Cosine"",
+ },
-* Only [keyword](../payload/#keyword) and [integer](../payload/#integer) payload values are supported for the `group_by` parameter. Payload values with other types will be ignored.
+ optimizers_config: {
-* At the moment, pagination is not enabled when using **groups**, so the `offset` parameter is not allowed.
+ default_segment_number: 2,
+ },
+});
-### Lookup in groups
+```
-*Available as of v1.3.0*
+```rust
+use qdrant_client::qdrant::{
+ CreateCollectionBuilder, Distance, OptimizersConfigDiffBuilder, VectorParamsBuilder,
-Having multiple points for parts of the same item often introduces redundancy in the stored data. Which may be fine if the information shared by the points is small, but it can become a problem if the payload is large, because it multiplies the storage space needed to store the points by a factor of the amount of points we have per group.
+};
+use qdrant_client::Qdrant;
-One way of optimizing storage when using groups is to store the information shared by the points with the same group id in a single point in another collection. Then, when using the [**groups** API](#grouping-api), add the `with_lookup` parameter to bring the information from those points into each group.
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-![Group id matches point id](/docs/lookup_id_linking.png)
+client
+ .create_collection(
-This has the extra benefit of having a single point to update when the information shared by the points in a group changes.
+ CreateCollectionBuilder::new(""{collection_name}"")
+ .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
+ .optimizers_config(
-For example, if you have a collection of documents, you may want to chunk them and store the points for the chunks in a separate collection, making sure that you store the point id from the document it belongs in the payload of the chunk point.
+ OptimizersConfigDiffBuilder::default().default_segment_number(2),
+ ),
+ )
-In this case, to bring the information from the documents into the chunks grouped by the document id, you can use the `with_lookup` parameter:
+ .await?;
+```
-```http
-POST /collections/chunks/points/search/groups
+```java
-{
+import io.qdrant.client.QdrantClient;
- // Same as in the regular search API
+import io.qdrant.client.QdrantGrpcClient;
- ""vector"": [1.1],
+import io.qdrant.client.grpc.Collections.CreateCollection;
+import io.qdrant.client.grpc.Collections.Distance;
+import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
- // Grouping parameters
+import io.qdrant.client.grpc.Collections.VectorParams;
- ""group_by"": ""document_id"",
+import io.qdrant.client.grpc.Collections.VectorsConfig;
- ""limit"": 2,
- ""group_size"": 2,
+QdrantClient client =
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- // Lookup parameters
- ""with_lookup"": {
- // Name of the collection to look up points in
+client
- ""collection"": ""documents"",
+ .createCollectionAsync(
+ CreateCollection.newBuilder()
+ .setCollectionName(""{collection_name}"")
- // Options for specifying what to bring from the payload
+ .setVectorsConfig(
- // of the looked up point, true by default
+ VectorsConfig.newBuilder()
- ""with_payload"": [""title"", ""text""],
+ .setParams(
+ VectorParams.newBuilder()
+ .setSize(768)
- // Options for specifying what to bring from the vector(s)
+ .setDistance(Distance.Cosine)
- // of the looked up point, true by default
+ .build())
- ""with_vectors: false
+ .build())
- }
+ .setOptimizersConfig(
-}
+ OptimizersConfigDiff.newBuilder().setDefaultSegmentNumber(2).build())
-```
+ .build())
+ .get();
+```
-```python
-client.search_groups(
- collection_name=""chunks"",
+```csharp
- # Same as in the regular search() API
+using Qdrant.Client;
- query_vector=[1.1],
+using Qdrant.Client.Grpc;
- # Grouping parameters
- group_by=""document_id"", # Path of the field to group by
- limit=2, # Max amount of groups
+var client = new QdrantClient(""localhost"", 6334);
- group_size=2, # Max amount of points per group
- # Lookup parameters
- with_lookup=models.WithLookup(
+await client.CreateCollectionAsync(
- # Name of the collection to look up points in
+ collectionName: ""{collection_name}"",
- collection=""documents"",
+ vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
- # Options for specifying what to bring from the payload
+ optimizersConfig: new OptimizersConfigDiff { DefaultSegmentNumber = 2 }
- # of the looked up point, True by default
+);
- with_payload=[""title"", ""text""],
+```
- # Options for specifying what to bring from the vector(s)
- # of the looked up point, True by default
- with_vectors=False,
+```go
- ),
+import (
-)
+ ""context""
-```
+ ""github.com/qdrant/go-client/qdrant""
-```typescript
+)
-client.searchPointGroups(""{collection_name}"", {
- vector: [1.1],
- group_by: ""document_id"",
+client, err := qdrant.NewClient(&qdrant.Config{
- limit: 2,
+ Host: ""localhost"",
- group_size: 2,
+ Port: 6334,
- with_lookup: {
+})
- collection: w,
- with_payload: [""title"", ""text""],
- with_vectors: false,
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
- },
+ CollectionName: ""{collection_name}"",
-});
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
-```
+ Size: 768,
+ Distance: qdrant.Distance_Cosine,
+ }),
-```rust
+ OptimizersConfig: &qdrant.OptimizersConfigDiff{
-use qdrant_client::qdrant::{SearchPointGroups, WithLookup};
+ DefaultSegmentNumber: qdrant.PtrOf(uint64(2)),
+ },
+})
-client
+```",documentation/guides/optimize.md
+"---
- .search_groups(&SearchPointGroups {
+title: Telemetry
- collection_name: ""{collection_name}"".to_string(),
+weight: 150
- vector: vec![1.1],
+aliases:
- group_by: ""document_id"".to_string(),
+ - ../telemetry
- limit: 2,
+---
- group_size: 2,
- with_lookup: Some(WithLookup {
- collection: ""documents"".to_string(),
+# Telemetry
- with_payload: Some(vec![""title"", ""text""].into()),
- with_vectors: Some(false.into()),
- }),
+Qdrant collects anonymized usage statistics from users in order to improve the engine.
- ..Default::default()
+You can [deactivate telemetry](#deactivate-telemetry) at any time, and any data that has already been collected can be [deleted on request](#request-information-deletion).
- })
- .await?;
-```
+## Why do we collect telemetry?
-```java
+We want to make Qdrant fast and reliable. To do this, we need to understand how it performs in real-world scenarios.
-import java.util.List;
+We do a lot of benchmarking internally, but it is impossible to cover all possible use cases, hardware, and configurations.
-import static io.qdrant.client.WithPayloadSelectorFactory.include;
+In order to identify bottlenecks and improve Qdrant, we need to collect information about how it is used.
-import static io.qdrant.client.WithVectorsSelectorFactory.enable;
+Additionally, Qdrant uses a number of internal heuristics to optimize performance.
-import io.qdrant.client.grpc.Points.SearchPointGroups;
+To better set up parameters for these heuristics, we need to collect timings and counters of various pieces of code.
-import io.qdrant.client.grpc.Points.WithLookup;
+With this information, we can make Qdrant faster for everyone.
-client
- .searchGroupsAsync(
- SearchPointGroups.newBuilder()
+## What information is collected?
- .setCollectionName(""{collection_name}"")
- .addAllVector(List.of(1.0f))
- .setGroupBy(""document_id"")
+There are 3 types of information that we collect:
- .setLimit(2)
- .setGroupSize(2)
- .setWithLookup(
+* System information - general information about the system, such as CPU, RAM, and disk type, as well as the configuration of the Qdrant instance.
- WithLookup.newBuilder()
+* Performance - information about timings and counters of various pieces of code.
- .setCollection(""documents"")
+* Critical error reports - information about critical errors, such as backtraces, that occurred in Qdrant. This information allows us to identify problems nobody has reported to us yet.
- .setWithPayload(include(List.of(""title"", ""text"")))
- .setWithVectors(enable(false))
- .build())
+### We **never** collect the following information:
- .build())
- .get();
-```
+- User's IP address
+- Any data that can be used to identify the user or the user's organization
+- Any data stored in the collections
-```csharp
+- Any names of the collections
-using Qdrant.Client;
+- Any URLs
-using Qdrant.Client.Grpc;
+## How do we anonymize data?
-var client = new QdrantClient(""localhost"", 6334);
+We understand that some users may be concerned about the privacy of their data.
-await client.SearchGroupsAsync(
+That is why we make an extra effort to ensure your privacy.
- collectionName: ""{collection_name}"",
- vector: new float[] { 1.0f },
- groupBy: ""document_id"",
+There are several different techniques that we use to anonymize the data:
- limit: 2,
- groupSize: 2,
- withLookup: new WithLookup
+- We use a random UUID to identify instances. This UUID is generated on each startup and is not stored anywhere. There are no other ways to distinguish between different instances.
- {
+- We round all big numbers, so that the last digits are always 0. For example, if the number is 123456789, we will store 123456000.
- Collection = ""documents"",
+- We replace all names with irreversibly hashed values. So no collection or field names will leak into the telemetry.
- WithPayload = new WithPayloadSelector
+- All URLs are hashed as well.
- {
- Include = new PayloadIncludeSelector { Fields = { new string[] { ""title"", ""text"" } } }
- },
+You can see the exact version of the anonymized collected data by accessing the [telemetry API](https://api.qdrant.tech/master/api-reference/service/telemetry) with the `anonymize=true` parameter.
- WithVectors = false
- }
-);
+For example,
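+you can fetch the anonymized report from your own instance (a minimal sketch using the `anonymize=true` parameter mentioned above; the exact response shape depends on your Qdrant version):
+
+```http
+GET /telemetry?anonymize=true
+```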
-```
-For the `with_lookup` parameter, you can also use the shorthand `with_lookup=""documents""` to bring the whole payload and vector(s) without explicitly specifying it.
+## Deactivate telemetry
-The looked up result will show up under `lookup` in each group.
+You can deactivate telemetry by:
-```json
-{
+- setting the `QDRANT__TELEMETRY_DISABLED` environment variable to `true`
- ""result"": {
+- setting the config option `telemetry_disabled` to `true` in the `config/production.yaml` or `config/config.yaml` files
- ""groups"": [
+- using the CLI option `--disable-telemetry`
- {
- ""id"": 1,
- ""hits"": [
+Any of these options will prevent Qdrant from sending any telemetry data.
- { ""id"": 0, ""score"": 0.91 },
- { ""id"": 1, ""score"": 0.85 }
- ],
+If you decide to deactivate telemetry, we kindly ask you to share your feedback with us in the [Discord community](https://qdrant.to/discord) or GitHub [discussions](https://github.com/qdrant/qdrant/discussions).
- ""lookup"": {
- ""id"": 1,
- ""payload"": {
+## Request information deletion
- ""title"": ""Document A"",
- ""text"": ""This is document A""
- }
+We provide an email address so that users can request the complete removal of their data from all of our tools.
- }
- },
- {
+To do so, send an email to privacy@qdrant.com containing the unique identifier generated for your Qdrant installation.
- ""id"": 2,
+You can find this identifier in the telemetry API response (`""id""` field), or in the logs of your Qdrant instance.
- ""hits"": [
- { ""id"": 1, ""score"": 0.85 }
- ],
+Any questions regarding the management of the data we collect can also be sent to this email address.
+",documentation/guides/telemetry.md
+"---
- ""lookup"": {
+title: Distributed Deployment
- ""id"": 2,
+weight: 100
- ""payload"": {
+aliases:
- ""title"": ""Document B"",
+ - ../distributed_deployment
- ""text"": ""This is document B""
+ - /guides/distributed_deployment
- }
+---
- }
- }
- ]
+# Distributed deployment
- },
- ""status"": ""ok"",
- ""time"": 0.001
+Since version v0.8.0, Qdrant supports a distributed deployment mode.
-}
+In this mode, multiple Qdrant services communicate with each other to distribute the data across the peers to extend the storage capabilities and increase stability.
-```
+## How many Qdrant nodes should I run?
-Since the lookup is done by matching directly with the point id, any group id that is not an existing (and valid) point id in the lookup collection will be ignored, and the `lookup` field will be empty.
-",documentation/concepts/search.md
-"---
-title: Payload
-weight: 40
+The ideal number of Qdrant nodes depends on how much you value cost-saving, resilience, and performance/scalability in relation to each other.
-aliases:
- - ../payload
----
+- **Prioritizing cost-saving**: If cost is most important to you, run a single Qdrant node. This is not recommended for production environments. Drawbacks:
+ - Resilience: Users will experience downtime during node restarts, and recovery is not possible unless you have backups or snapshots.
+ - Performance: Limited to the resources of a single server.
-# Payload
+- **Prioritizing resilience**: If resilience is most important to you, run a Qdrant cluster with three or more nodes and two or more shard replicas. Clusters with three or more nodes and replication can perform all operations even while one node is down. Additionally, they gain performance benefits from load-balancing and they can recover from the permanent loss of one node without the need for backups or snapshots (but backups are still strongly recommended). This is most recommended for production environments. Drawbacks:
-One of the significant features of Qdrant is the ability to store additional information along with vectors.
+ - Cost: Larger clusters are more costly than smaller clusters, which is the only drawback of this configuration.
-This information is called `payload` in Qdrant terminology.
+- **Balancing cost, resilience, and performance**: Running a two-node Qdrant cluster with replicated shards allows the cluster to respond to most read/write requests even when one node is down, such as during maintenance events. Having two nodes also means greater performance than a single-node cluster while still being cheaper than a three-node cluster. Drawbacks:
-Qdrant allows you to store any information that can be represented using JSON.
+ - Resilience (uptime): The cluster cannot perform operations on collections when one node is down. Those operations require >50% of nodes to be running, so this is only possible in a 3+ node cluster. Since creating, editing, and deleting collections are usually rare operations, many users find this drawback to be negligible.
+ - Resilience (data integrity): If the data on one of the two nodes is permanently lost or corrupted, it cannot be recovered aside from snapshots or backups. Only 3+ node clusters can recover from the permanent loss of a single node since recovery operations require >50% of the cluster to be healthy.
+ - Cost: Replicating your shards requires storing two copies of your data.
-Here is an example of a typical payload:
+ - Performance: The maximum performance of a Qdrant cluster increases as you add more nodes.
-```json
+In summary, single-node clusters are best for non-production workloads, replicated 3+ node clusters are the gold standard, and replicated 2-node clusters strike a good balance.
-{
- ""name"": ""jacket"",
- ""colors"": [""red"", ""blue""],
+## Enabling distributed mode in self-hosted Qdrant
- ""count"": 10,
- ""price"": 11.99,
- ""locations"": [
+To enable distributed deployment, enable cluster mode in the [configuration](../configuration/) or via the environment variable `QDRANT__CLUSTER__ENABLED=true`.
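+For example, when running the official Docker image, the environment variable alone is enough to enable cluster mode (a sketch; the published ports and image tag depend on your setup):
+
+```bash
+docker run -p 6333:6333 -e QDRANT__CLUSTER__ENABLED=true qdrant/qdrant
+```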
- {
- ""lon"": 52.5200,
- ""lat"": 13.4050
+```yaml
- }
+cluster:
- ],
+ # Use `enabled: true` to run Qdrant in distributed deployment mode
- ""reviews"": [
+ enabled: true
- {
+ # Configuration of the inter-cluster communication
- ""user"": ""alice"",
+ p2p:
- ""score"": 4
+ # Port for internal communication between peers
- },
+ port: 6335
- {
- ""user"": ""bob"",
- ""score"": 5
+ # Configuration related to distributed consensus algorithm
- }
+ consensus:
- ]
+ # How frequently peers should ping each other.
-}
+ # Setting this parameter to lower value will allow consensus
-```
+ # to detect disconnected node earlier, but too frequent
+ # tick period may create significant network and CPU overhead.
+ # We encourage you NOT to change this parameter unless you know what you are doing.
-## Payload types
+ tick_period_ms: 100
+```
-In addition to storing payloads, Qdrant also allows you search based on certain kinds of values.
-This feature is implemented as additional filters during the search and will enable you to incorporate custom logic on top of semantic similarity.
+By default, Qdrant will use port `6335` for its internal communication.
+All peers should be accessible on this port from within the cluster, but make sure to isolate this port from outside access, as it might be used to perform write operations.
-During the filtering, Qdrant will check the conditions over those values that match the type of the filtering condition. If the stored value type does not fit the filtering condition - it will be considered not satisfied.
+Additionally, you must provide the `--uri` flag to the first peer so it can tell other nodes how it should be reached:
-For example, you will get an empty output if you apply the [range condition](../filtering/#range) on the string data.
+```bash
+./qdrant --uri 'http://qdrant_node_1:6335'
-However, arrays (multiple values of the same type) are treated a little bit different. When we apply a filter to an array, it will succeed if at least one of the values inside the array meets the condition.
+```
-The filtering process is discussed in detail in the section [Filtering](../filtering).
+Subsequent peers in a cluster must know at least one node of the existing cluster to synchronize through it with the rest of the cluster.
-Let's look at the data types that Qdrant supports for searching:
+To do this, they need to be provided with a bootstrap URL:
-### Integer
+```bash
+./qdrant --bootstrap 'http://qdrant_node_1:6335'
+```
-`integer` - 64-bit integer in the range from `-9223372036854775808` to `9223372036854775807`.
+The URL of the new peers themselves will be calculated automatically from the IP address of their request.
-Example of single and multiple `integer` values:
+But it is also possible to provide the URI explicitly using the `--uri` argument.
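+For instance, a second node can be bootstrapped from the first one while announcing its own URI (the hostnames here are placeholders):
+
+```bash
+./qdrant --bootstrap 'http://qdrant_node_1:6335' --uri 'http://qdrant_node_2:6335'
+```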
-```json
+```text
-{
+USAGE:
- ""count"": 10,
+ qdrant [OPTIONS]
- ""sizes"": [35, 36, 38]
-}
-```
+OPTIONS:
+ --bootstrap
+ Uri of the peer to bootstrap from in case of multi-peer deployment. If not specified -
-### Float
+ this peer will be considered as a first in a new deployment
-`float` - 64-bit floating point number.
+ --uri
+ Uri of this peer. Other peers should be able to reach it by this uri.
-Example of single and multiple `float` values:
+ This value has to be supplied if this is the first peer in a new deployment.
-```json
-{
+ In case this is not the first peer and it bootstraps the value is optional. If not
- ""price"": 11.99,
+ supplied then qdrant will take internal grpc port from config and derive the IP address
+
+ of this peer on bootstrap peer (receiving side)
- ""ratings"": [9.1, 9.2, 9.4]
-}
```
-### Bool
+After a successful synchronization you can observe the state of the cluster through the [REST API](https://api.qdrant.tech/master/api-reference/distributed/cluster-status):
-Bool - binary value. Equals to `true` or `false`.
+```http
+GET /cluster
+```
-Example of single and multiple `bool` values:
+Example result:
-```json
-{
- ""is_delivered"": true,
+```json
- ""responses"": [false, false, true, false]
+{
-}
+ ""result"": {
-```
+ ""status"": ""enabled"",
+ ""peer_id"": 11532566549086892000,
+ ""peers"": {
-### Keyword
+ ""9834046559507417430"": {
+ ""uri"": ""http://172.18.0.3:6335/""
+ },
-`keyword` - string value.
+ ""11532566549086892528"": {
+ ""uri"": ""http://qdrant_node_1:6335/""
+ }
-Example of single and multiple `keyword` values:
+ },
+ ""raft_info"": {
+ ""term"": 1,
-```json
+ ""commit"": 4,
-{
+ ""pending_operations"": 1,
- ""name"": ""Alice"",
+ ""leader"": 11532566549086892000,
- ""friends"": [
+ ""role"": ""Leader""
- ""bob"",
+ }
- ""eva"",
+ },
- ""jack""
+ ""status"": ""ok"",
- ]
+ ""time"": 5.731e-06
}
@@ -21389,179 +21048,145 @@ Example of single and multiple `keyword` values:
-### Geo
-
-
+Note that enabling distributed mode does not automatically replicate your data. See the section on [making use of a new distributed Qdrant cluster](#making-use-of-a-new-distributed-qdrant-cluster) for the next steps.
-`geo` is used to represent geographical coordinates.
+## Enabling distributed mode in Qdrant Cloud
-Example of single and multiple `geo` values:
+For best results, first ensure your cluster is running Qdrant v1.7.4 or higher. Older versions of Qdrant do support distributed mode, but improvements in v1.7.4 make distributed clusters more resilient during outages.
-```json
-{
- ""location"": {
+In the [Qdrant Cloud console](https://cloud.qdrant.io/), click ""Scale Up"" to increase your cluster size to >1. Qdrant Cloud configures the distributed mode settings automatically.
- ""lon"": 52.5200,
- ""lat"": 13.4050
- },
+After the scale-up process completes, you will have a new empty node running alongside your existing node(s). To replicate data into this new empty node, see the next section.
- ""cities"": [
- {
- ""lon"": 51.5072,
+## Making use of a new distributed Qdrant cluster
- ""lat"": 0.1276
- },
- {
+When you enable distributed mode and scale up to two or more nodes, your data does not move to the new node automatically; it starts out empty. To make use of your new empty node, do one of the following:
- ""lon"": 40.7128,
- ""lat"": 74.0060
- }
+* Create a new replicated collection by setting the [replication_factor](#replication-factor) to 2 or more and setting the [number of shards](#choosing-the-right-number-of-shards) to a multiple of your number of nodes.
- ]
+* If you have an existing collection which does not contain enough shards for each node, you must create a new collection as described in the previous bullet point.
-}
+* If you already have enough shards for each node and you merely need to replicate your data, follow the directions for [creating new shard replicas](#creating-new-shard-replicas).
-```
+* If you already have enough shards for each node and your data is already replicated, you can move data (without replicating it) onto the new node(s) by [moving shards](#moving-shards).
-Coordinate should be described as an object containing two fields: `lon` - for longitude, and `lat` - for latitude.
+## Raft
-## Create point with payload
+Qdrant uses the [Raft](https://raft.github.io/) consensus protocol to maintain consistency regarding the cluster topology and the collections structure.
-REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#tag/points/operation/upsert_points))
+Operations on points, on the other hand, do not go through the consensus infrastructure.
-```http
+Qdrant is not intended to have strong transaction guarantees, which allows it to perform point operations with low overhead.
-PUT /collections/{collection_name}/points
+In practice, it means that Qdrant does not guarantee atomic distributed updates but allows you to wait until the [operation is complete](../../concepts/points/#awaiting-result) to see the results of your writes.
-{
- ""points"": [
- {
+Operations on collections, by contrast, are part of the consensus, which guarantees that all operations are durable and eventually executed by all nodes.
- ""id"": 1,
+In practice, this means that a majority of nodes must agree on which operations to apply before the service performs them.
- ""vector"": [0.05, 0.61, 0.76, 0.74],
- ""payload"": {""city"": ""Berlin"", ""price"": 1.99}
- },
+Practically, this means that if the cluster is in a transition state (either electing a new leader after a failure or starting up), collection update operations will be denied.
- {
- ""id"": 2,
- ""vector"": [0.19, 0.81, 0.75, 0.11],
+You may use the cluster [REST API](https://api.qdrant.tech/master/api-reference/distributed/cluster-status) to check the state of the consensus.
- ""payload"": {""city"": [""Berlin"", ""London""], ""price"": 1.99}
- },
- {
+## Sharding
- ""id"": 3,
- ""vector"": [0.36, 0.55, 0.47, 0.94],
- ""payload"": {""city"": [""Berlin"", ""Moscow""], ""price"": [1.99, 2.99]}
+A Collection in Qdrant is made of one or more shards.
- }
+A shard is an independent store of points which is able to perform all operations provided by collections.
- ]
+There are two methods of distributing points across shards:
-}
-```
+- **Automatic sharding**: Points are distributed among shards by using a [consistent hashing](https://en.wikipedia.org/wiki/Consistent_hashing) algorithm, so that shards are managing non-intersecting subsets of points. This is the default behavior.
-```python
-from qdrant_client import QdrantClient
+- **User-defined sharding**: _Available as of v1.7.0_ - Each point is uploaded to a specific shard, so that operations can hit only the shard or shards they need. Even with this distribution, shards still ensure having non-intersecting subsets of points. [See more...](#user-defined-sharding)
-from qdrant_client.http import models
+Each node knows where all parts of the collection are stored through the [consensus protocol](./#raft), so when you send a search request to one Qdrant node, it automatically queries all other nodes to obtain the full search result.
-client = QdrantClient(host=""localhost"", port=6333)
+### Choosing the right number of shards
-client.upsert(
- collection_name=""{collection_name}"",
- points=[
+When you create a collection, Qdrant splits the collection into `shard_number` shards. If left unset, `shard_number` is set to the number of nodes in your cluster when the collection was created. The `shard_number` cannot be changed without recreating the collection.
- models.PointStruct(
- id=1,
- vector=[0.05, 0.61, 0.76, 0.74],
+```http
- payload={
+PUT /collections/{collection_name}
- ""city"": ""Berlin"",
+{
- ""price"": 1.99,
+ ""vectors"": {
- },
+ ""size"": 300,
- ),
+ ""distance"": ""Cosine""
- models.PointStruct(
+ },
- id=2,
+ ""shard_number"": 6
- vector=[0.19, 0.81, 0.75, 0.11],
+}
- payload={
+```
- ""city"": [""Berlin"", ""London""],
- ""price"": 1.99,
- },
+```python
- ),
+from qdrant_client import QdrantClient, models
- models.PointStruct(
- id=3,
- vector=[0.36, 0.55, 0.47, 0.94],
+client = QdrantClient(url=""http://localhost:6333"")
- payload={
- ""city"": [""Berlin"", ""Moscow""],
- ""price"": [1.99, 2.99],
+client.create_collection(
- },
+ collection_name=""{collection_name}"",
- ),
+ vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE),
- ],
+ shard_number=6,
)
@@ -21579,389 +21204,361 @@ const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-client.upsert(""{collection_name}"", {
+client.createCollection(""{collection_name}"", {
- points: [
+ vectors: {
- {
+ size: 300,
- id: 1,
+ distance: ""Cosine"",
- vector: [0.05, 0.61, 0.76, 0.74],
+ },
- payload: {
+ shard_number: 6,
- city: ""Berlin"",
+});
- price: 1.99,
+```
- },
- },
- {
+```rust
- id: 2,
+use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder};
- vector: [0.19, 0.81, 0.75, 0.11],
+use qdrant_client::Qdrant;
- payload: {
- city: [""Berlin"", ""London""],
- price: 1.99,
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- },
- },
- {
+client
- id: 3,
+ .create_collection(
- vector: [0.36, 0.55, 0.47, 0.94],
+ CreateCollectionBuilder::new(""{collection_name}"")
- payload: {
+ .vectors_config(VectorParamsBuilder::new(300, Distance::Cosine))
- city: [""Berlin"", ""Moscow""],
+ .shard_number(6),
- price: [1.99, 2.99],
+ )
- },
+ .await?;
- },
+```
- ],
-});
-```
+```java
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
-```rust
+import io.qdrant.client.grpc.Collections.CreateCollection;
-use qdrant_client::{client::QdrantClient, qdrant::PointStruct};
+import io.qdrant.client.grpc.Collections.Distance;
-use serde_json::json;
+import io.qdrant.client.grpc.Collections.VectorParams;
+import io.qdrant.client.grpc.Collections.VectorsConfig;
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+QdrantClient client =
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-let points = vec![
- PointStruct::new(
- 1,
+client
- vec![0.05, 0.61, 0.76, 0.74],
+ .createCollectionAsync(
- json!(
+ CreateCollection.newBuilder()
- {""city"": ""Berlin"", ""price"": 1.99}
+ .setCollectionName(""{collection_name}"")
- )
+ .setVectorsConfig(
- .try_into()
+ VectorsConfig.newBuilder()
- .unwrap(),
+ .setParams(
- ),
+ VectorParams.newBuilder()
- PointStruct::new(
+ .setSize(300)
- 2,
+ .setDistance(Distance.Cosine)
- vec![0.19, 0.81, 0.75, 0.11],
+ .build())
- json!(
+ .build())
- {""city"": [""Berlin"", ""London""]}
+ .setShardNumber(6)
- )
+ .build())
- .try_into()
+ .get();
- .unwrap(),
+```
- ),
- PointStruct::new(
- 3,
+```csharp
- vec![0.36, 0.55, 0.47, 0.94],
+using Qdrant.Client;
- json!(
+using Qdrant.Client.Grpc;
- {""city"": [""Berlin"", ""Moscow""], ""price"": [1.99, 2.99]}
- )
- .try_into()
+var client = new QdrantClient(""localhost"", 6334);
- .unwrap(),
- ),
-];
+await client.CreateCollectionAsync(
+ collectionName: ""{collection_name}"",
+ vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine },
-client
-
- .upsert_points(""{collection_name}"".to_string(), None, points, None)
+ shardNumber: 6
- .await?;
+);
```
-```java
+```go
-import java.util.List;
+import (
-import java.util.Map;
+ ""context""
-import static io.qdrant.client.PointIdFactory.id;
+ ""github.com/qdrant/go-client/qdrant""
-import static io.qdrant.client.ValueFactory.value;
+)
-import static io.qdrant.client.VectorsFactory.vectors;
+client, err := qdrant.NewClient(&qdrant.Config{
-import io.qdrant.client.QdrantClient;
+ Host: ""localhost"",
-import io.qdrant.client.QdrantGrpcClient;
+ Port: 6334,
-import io.qdrant.client.grpc.Points.PointStruct;
+})
-QdrantClient client =
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ CollectionName: ""{collection_name}"",
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
+ Size: 300,
-client
+ Distance: qdrant.Distance_Cosine,
- .upsertAsync(
+ }),
- ""{collection_name}"",
+ ShardNumber: qdrant.PtrOf(uint32(6)),
- List.of(
+})
- PointStruct.newBuilder()
+```
- .setId(id(1))
- .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f))
- .putAllPayload(Map.of(""city"", value(""Berlin""), ""price"", value(1.99)))
+To ensure all nodes in your cluster are evenly utilized, the number of shards must be a multiple of the number of nodes you are currently running in your cluster.
- .build(),
- PointStruct.newBuilder()
- .setId(id(2))
+> Aside: Advanced use cases such as multitenancy may require an uneven distribution of shards. See [Multitenancy](/articles/multitenancy/).
- .setVectors(vectors(0.19f, 0.81f, 0.75f, 0.11f))
- .putAllPayload(
- Map.of(""city"", list(List.of(value(""Berlin""), value(""London"")))))
+We recommend creating at least 2 shards per node to allow future expansion without having to re-shard. Re-sharding should be avoided since it requires creating a new collection. In-place re-sharding is planned for a future version of Qdrant.
- .build(),
- PointStruct.newBuilder()
- .setId(id(3))
+If you anticipate a lot of growth, we recommend 12 shards since you can expand from 1 node up to 2, 3, 6, and 12 nodes without having to re-shard. Having more than 12 shards in a small cluster may not be worth the performance overhead.
- .setVectors(vectors(0.36f, 0.55f, 0.47f, 0.94f))
- .putAllPayload(
- Map.of(
+Shards are evenly distributed across all existing nodes when a collection is first created, but Qdrant does not automatically rebalance shards if your cluster size or replication factor changes (since this is an expensive operation on large clusters). See the next section for how to move shards after scaling operations.
- ""city"",
- list(List.of(value(""Berlin""), value(""London""))),
- ""price"",
+### Moving shards
- list(List.of(value(1.99), value(2.99)))))
- .build()))
- .get();
+*Available as of v0.9.0*
-```
+Qdrant allows moving shards between nodes in the cluster and removing nodes from the cluster. This functionality unlocks the ability to dynamically scale the cluster size without downtime. It also allows you to upgrade or migrate nodes without downtime.
-```csharp
-using Qdrant.Client;
-using Qdrant.Client.Grpc;
+Qdrant provides the information regarding the current shard distribution in the cluster with the [Collection Cluster info API](https://api.qdrant.tech/master/api-reference/distributed/collection-cluster-info).
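+
+For example, you can inspect the current placement of a collection's shards and the state of their replicas like this:
+
+```http
+GET /collections/{collection_name}/cluster
+```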
-var client = new QdrantClient(""localhost"", 6334);
+Use the [Update collection cluster setup API](https://api.qdrant.tech/master/api-reference/distributed/update-collection-cluster) to initiate the shard transfer:
-await client.UpsertAsync(
+```http
- collectionName: ""{collection_name}"",
+POST /collections/{collection_name}/cluster
- points: new List
+{
- {
+ ""move_shard"": {
- new PointStruct
+ ""shard_id"": 0,
- {
+ ""from_peer_id"": 381894127,
- Id = 1,
+ ""to_peer_id"": 467122995
- Vectors = new[] { 0.05f, 0.61f, 0.76f, 0.74f },
+ }
- Payload = { [""city""] = ""Berlin"", [""price""] = 1.99 }
+}
- },
+```
- new PointStruct
- {
- Id = 2,
+
- Vectors = new[] { 0.19f, 0.81f, 0.75f, 0.11f },
- Payload = { [""city""] = new[] { ""Berlin"", ""London"" } }
- },
+After the transfer is initiated, the service will process it based on the used
- new PointStruct
+[transfer method](#shard-transfer-method) keeping both shards in sync. Once the
- {
+transfer is completed, the old shard is deleted from the source node.
- Id = 3,
- Vectors = new[] { 0.36f, 0.55f, 0.47f, 0.94f },
- Payload =
+In case you want to downscale the cluster, you can move all shards away from a peer and then remove the peer using the [remove peer API](https://api.qdrant.tech/master/api-reference/distributed/remove-peer).
- {
- [""city""] = new[] { ""Berlin"", ""Moscow"" },
- [""price""] = new Value
+```http
- {
+DELETE /cluster/peer/{peer_id}
- ListValue = new ListValue { Values = { new Value[] { 1.99, 2.99 } } }
+```
- }
- }
- }
+After that, Qdrant will exclude the node from the consensus, and the instance will be ready for shutdown.
- }
-);
-```
+### User-defined sharding
-## Update payload
+*Available as of v1.7.0*
-### Set payload
+Qdrant allows you to specify the shard for each point individually. This feature is useful if you want to control the shard placement of your data, so that operations can hit only the subset of shards they actually need. In big clusters, this can significantly improve the performance of operations that do not require the whole collection to be scanned.
-Set only the given payload values on a point.
+A clear use case for this feature is managing a multi-tenant collection, where each tenant (be it a user or an organization) is assumed to be segregated, so their data can be stored in separate shards.
-REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/set_payload)):
+To enable user-defined sharding, set `sharding_method` to `custom` during collection creation:
```http
-POST /collections/{collection_name}/points/payload
+PUT /collections/{collection_name}
{
- ""payload"": {
+ ""shard_number"": 1,
- ""property1"": ""string"",
+ ""sharding_method"": ""custom""
- ""property2"": ""string""
+ // ... other collection parameters
- },
+}
- ""points"": [
+```
- 0, 3, 100
- ]
-}
+```python
-```
+from qdrant_client import QdrantClient, models
-```python
+client = QdrantClient(url=""http://localhost:6333"")
-client.set_payload(
- collection_name=""{collection_name}"",
- payload={
+client.create_collection(
- ""property1"": ""string"",
+ collection_name=""{collection_name}"",
- ""property2"": ""string"",
+ shard_number=1,
- },
+ sharding_method=models.ShardingMethod.CUSTOM,
- points=[0, 3, 10],
+ # ... other collection parameters
)
+client.create_shard_key(""{collection_name}"", ""{shard_key}"")
+
```
```typescript
-client.setPayload(""{collection_name}"", {
+import { QdrantClient } from ""@qdrant/js-client-rest"";
- payload: {
- property1: ""string"",
- property2: ""string"",
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
- },
- points: [0, 3, 10],
+
+client.createCollection(""{collection_name}"", {
+  shard_number: 1,
+  sharding_method: ""custom"",
+  // ... other collection parameters
+});
+
+client.createShardKey(""{collection_name}"", {
+  shard_key: ""{shard_key}""
});
@@ -21973,45 +21570,45 @@ client.setPayload(""{collection_name}"", {
use qdrant_client::qdrant::{
- points_selector::PointsSelectorOneOf, PointsIdsList, PointsSelector,
+ CreateCollectionBuilder, CreateShardKeyBuilder, CreateShardKeyRequestBuilder, Distance,
+
+ ShardingMethod, VectorParamsBuilder,
};
-use serde_json::json;
+use qdrant_client::Qdrant;
-client
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- .set_payload_blocking(
- ""{collection_name}"",
- None,
+client
+
+ .create_collection(
- &PointsSelector {
+ CreateCollectionBuilder::new(""{collection_name}"")
- points_selector_one_of: Some(PointsSelectorOneOf::Points(PointsIdsList {
+ .vectors_config(VectorParamsBuilder::new(300, Distance::Cosine))
- ids: vec![0.into(), 3.into(), 10.into()],
+ .shard_number(1)
- })),
+ .sharding_method(ShardingMethod::Custom.into()),
- },
+ )
- json!({
+ .await?;
- ""property1"": ""string"",
- ""property2"": ""string"",
- })
+client
- .try_into()
+ .create_shard_key(
- .unwrap(),
+ CreateShardKeyRequestBuilder::new(""{collection_name}"")
- None,
+            .request(CreateShardKeyBuilder::default().shard_key(""{shard_key}"".to_string())),
)
@@ -22023,36 +21620,62 @@ client
```java
-import java.util.List;
+import static io.qdrant.client.ShardKeyFactory.shardKey;
-import java.util.Map;
+import io.qdrant.client.QdrantClient;
-import static io.qdrant.client.PointIdFactory.id;
+import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Collections.CreateCollection;
+import io.qdrant.client.grpc.Collections.ShardingMethod;
+import io.qdrant.client.grpc.Collections.CreateShardKey;
+import io.qdrant.client.grpc.Collections.CreateShardKeyRequest;
-import static io.qdrant.client.ValueFactory.value;
+
+QdrantClient client =
+    new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
client
- .setPayloadAsync(
+ .createCollectionAsync(
- ""{collection_name}"",
+ CreateCollection.newBuilder()
- Map.of(""property1"", value(""string""), ""property2"", value(""string"")),
+ .setCollectionName(""{collection_name}"")
- List.of(id(0), id(3), id(10)),
+ // ... other collection parameters
- true,
+ .setShardNumber(1)
- null,
+ .setShardingMethod(ShardingMethod.Custom)
- null)
+ .build())
.get();
+
+client.createShardKeyAsync(CreateShardKeyRequest.newBuilder()
+        .setCollectionName(""{collection_name}"")
+        .setRequest(CreateShardKey.newBuilder()
+            .setShardKey(shardKey(""{shard_key}""))
+            .build())
+        .build()).get();
```
@@ -22069,983 +21692,947 @@ var client = new QdrantClient(""localhost"", 6334);
-await client.SetPayloadAsync(
+await client.CreateCollectionAsync(
collectionName: ""{collection_name}"",
- payload: new Dictionary { { ""property1"", ""string"" }, { ""property2"", ""string"" } },
+ // ... other collection parameters
- ids: new ulong[] { 0, 3, 10 }
+ shardNumber: 1,
+
+ shardingMethod: ShardingMethod.Custom
);
-```
+await client.CreateShardKeyAsync(
-You don't need to know the ids of the points you want to modify. The alternative
+ ""{collection_name}"",
-is to use filters.
+ new CreateShardKey { ShardKey = new ShardKey { Keyword = ""{shard_key}"", } }
+ );
+```
-```http
-POST /collections/{collection_name}/points/payload
-{
+```go
- ""payload"": {
+import (
- ""property1"": ""string"",
+ ""context""
- ""property2"": ""string""
- },
- ""filter"": {
+ ""github.com/qdrant/go-client/qdrant""
- ""must"": [
+)
- {
- ""key"": ""color"",
- ""match"": {
+client, err := qdrant.NewClient(&qdrant.Config{
- ""value"": ""red""
+ Host: ""localhost"",
- }
+ Port: 6334,
- }
+})
- ]
- }
-}
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
-```
+ CollectionName: ""{collection_name}"",
+ // ... other collection parameters
+ ShardNumber: qdrant.PtrOf(uint32(1)),
-```python
+ ShardingMethod: qdrant.ShardingMethod_Custom.Enum(),
-client.set_payload(
+})
- collection_name=""{collection_name}"",
- payload={
- ""property1"": ""string"",
+client.CreateShardKey(context.Background(), ""{collection_name}"", &qdrant.CreateShardKey{
- ""property2"": ""string"",
+ ShardKey: qdrant.NewShardKey(""{shard_key}""),
- },
+})
- points=models.Filter(
+```
- must=[
- models.FieldCondition(
- key=""color"",
+In this mode, the `shard_number` means the number of shards per shard key, where points will be distributed evenly. For example, if you have 10 shard keys and a collection config with these settings:
- match=models.MatchValue(value=""red""),
- ),
- ],
+```json
- ),
+{
-)
+ ""shard_number"": 1,
-```
+ ""sharding_method"": ""custom"",
+ ""replication_factor"": 2
+}
-```typescript
+```
-client.setPayload(""{collection_name}"", {
- payload: {
- property1: ""string"",
+Then you will have `1 * 10 * 2 = 20` total physical shards in the collection.
- property2: ""string"",
- },
- filter: {
+Physical shards require a large amount of resources, so make sure your custom sharding key has a low cardinality.
- must: [
- {
- key: ""color"",
+For large cardinality keys, it is recommended to use [partition by payload](/documentation/guides/multiple-partitions/#partition-by-payload) instead.
- match: {
- value: ""red"",
- },
+To specify the shard for each point, you need to provide the `shard_key` field in the upsert request:
- },
- ],
- },
+```http
-});
+PUT /collections/{collection_name}/points
-```
+{
+ ""points"": [
+ {
-```rust
+ ""id"": 1111,
-use qdrant_client::qdrant::{
+ ""vector"": [0.1, 0.2, 0.3]
- points_selector::PointsSelectorOneOf, Condition, Filter, PointsSelector,
+    }
-};
+  ],
-use serde_json::json;
+ ""shard_key"": ""user_1""
+}
+```
-client
- .set_payload_blocking(
- ""{collection_name}"",
+```python
- None,
+from qdrant_client import QdrantClient, models
- &PointsSelector {
- points_selector_one_of: Some(PointsSelectorOneOf::Filter(Filter::must([
- Condition::matches(""color"", ""red"".to_string()),
+client = QdrantClient(url=""http://localhost:6333"")
- ]))),
- },
- json!({
+client.upsert(
- ""property1"": ""string"",
+ collection_name=""{collection_name}"",
- ""property2"": ""string"",
+ points=[
- })
+ models.PointStruct(
- .try_into()
+ id=1111,
- .unwrap(),
+ vector=[0.1, 0.2, 0.3],
- None,
+ ),
- )
+ ],
- .await?;
+ shard_key_selector=""user_1"",
-```
+)
+```
-```java
-import java.util.Map;
+```typescript
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+client.upsertPoints(""{collection_name}"", {
-import static io.qdrant.client.ValueFactory.value;
+ points: [
+ {
+ id: 1111,
-client
+ vector: [0.1, 0.2, 0.3],
- .setPayloadAsync(
+ },
- ""{collection_name}"",
+ ],
- Map.of(""property1"", value(""string""), ""property2"", value(""string"")),
+ shard_key: ""user_1"",
- Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build(),
+});
- true,
+```
- null,
- null)
- .get();
+```rust
-```
+use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder};
+use qdrant_client::Payload;
-```csharp
-using Qdrant.Client;
+client
-using Qdrant.Client.Grpc;
+ .upsert_points(
-using static Qdrant.Client.Grpc.Conditions;
+ UpsertPointsBuilder::new(
+ ""{collection_name}"",
+ vec![PointStruct::new(
-var client = new QdrantClient(""localhost"", 6334);
+ 111,
+ vec![0.1, 0.2, 0.3],
+ Payload::default(),
-await client.SetPayloadAsync(
+ )],
- collectionName: ""{collection_name}"",
+ )
- payload: new Dictionary { { ""property1"", ""string"" }, { ""property2"", ""string"" } },
+ .shard_key_selector(""user_1"".to_string()),
- filter: MatchKeyword(""color"", ""red"")
+ )
-);
+ .await?;
```
-### Overwrite payload
+```java
+import java.util.List;
-Fully replace any existing payload with the given one.
+import static io.qdrant.client.PointIdFactory.id;
+import static io.qdrant.client.ShardKeySelectorFactory.shardKeySelector;
-REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/overwrite_payload)):
+import static io.qdrant.client.VectorsFactory.vectors;
-```http
+import io.qdrant.client.QdrantClient;
-PUT /collections/{collection_name}/points/payload
+import io.qdrant.client.QdrantGrpcClient;
-{
+import io.qdrant.client.grpc.Points.PointStruct;
- ""payload"": {
+import io.qdrant.client.grpc.Points.UpsertPoints;
- ""property1"": ""string"",
- ""property2"": ""string""
- },
+QdrantClient client =
- ""points"": [
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- 0, 3, 100
- ]
-}
+client
-```
+ .upsertAsync(
+ UpsertPoints.newBuilder()
+ .setCollectionName(""{collection_name}"")
-```python
+ .addAllPoints(
-client.overwrite_payload(
+ List.of(
- collection_name=""{collection_name}"",
+ PointStruct.newBuilder()
- payload={
+ .setId(id(111))
- ""property1"": ""string"",
+ .setVectors(vectors(0.1f, 0.2f, 0.3f))
- ""property2"": ""string"",
+ .build()))
- },
+ .setShardKeySelector(shardKeySelector(""user_1""))
- points=[0, 3, 10],
+ .build())
-)
+ .get();
```
-```typescript
+```csharp
-client.overwritePayload(""{collection_name}"", {
+using Qdrant.Client;
- payload: {
+using Qdrant.Client.Grpc;
- property1: ""string"",
- property2: ""string"",
- },
+var client = new QdrantClient(""localhost"", 6334);
- points: [0, 3, 10],
-});
-```
+await client.UpsertAsync(
+ collectionName: ""{collection_name}"",
+    points: new List<PointStruct>
-```rust
+ {
-use qdrant_client::qdrant::{
+ new() { Id = 111, Vectors = new[] { 0.1f, 0.2f, 0.3f } }
- points_selector::PointsSelectorOneOf, PointsIdsList, PointsSelector,
+ },
-};
+    shardKeySelector: new ShardKeySelector { ShardKeys = { new List<ShardKey> { ""user_1"" } } }
-use serde_json::json;
+);
+```
-client
- .overwrite_payload_blocking(
+```go
- ""{collection_name}"",
+import (
- None,
+ ""context""
- &PointsSelector {
- points_selector_one_of: Some(PointsSelectorOneOf::Points(PointsIdsList {
- ids: vec![0.into(), 3.into(), 10.into()],
+ ""github.com/qdrant/go-client/qdrant""
- })),
+)
- },
- json!({
- ""property1"": ""string"",
+client, err := qdrant.NewClient(&qdrant.Config{
- ""property2"": ""string"",
+ Host: ""localhost"",
- })
+ Port: 6334,
- .try_into()
+})
- .unwrap(),
- None,
- )
+client.Upsert(context.Background(), &qdrant.UpsertPoints{
- .await?;
+ CollectionName: ""{collection_name}"",
-```
+ Points: []*qdrant.PointStruct{
+ {
+ Id: qdrant.NewIDNum(111),
-```java
+ Vectors: qdrant.NewVectors(0.1, 0.2, 0.3),
-import java.util.List;
+ },
+ },
+ ShardKeySelector: &qdrant.ShardKeySelector{
-import static io.qdrant.client.PointIdFactory.id;
+ ShardKeys: []*qdrant.ShardKey{
-import static io.qdrant.client.ValueFactory.value;
+ qdrant.NewShardKey(""user_1""),
+ },
+ },
-client
+})
- .overwritePayloadAsync(
+```
- ""{collection_name}"",
- Map.of(""property1"", value(""string""), ""property2"", value(""string"")),
- List.of(id(0), id(3), id(10)),
+
- null)
+
- .get();
+* When using custom sharding, IDs are only enforced to be unique within a shard key. This means that you can have multiple points with the same ID, if they have different shard keys.
-```
+This is a limitation of the current implementation and an anti-pattern that should be avoided, because it can lead to points with the same ID having different contents. In the future, we plan to add a global ID uniqueness check.
+
-```csharp
-using Qdrant.Client;
+Now you can target the operations to specific shard(s) by specifying the `shard_key` on any operation you do. Operations that do not specify the shard key will be executed on __all__ shards.
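+
+For example, a search limited to a single tenant's shard might look like the following sketch (vector values, shard key, and limit are illustrative):
+
+```http
+POST /collections/{collection_name}/points/search
+{
+    ""vector"": [0.2, 0.1, 0.9, 0.7],
+    ""shard_key"": ""user_1"",
+    ""limit"": 10
+}
+```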
-using Qdrant.Client.Grpc;
+Another use case would be to have shards that track the data chronologically, so that you can implement more complex workflows, such as uploading live data into one shard and archiving it once it reaches a certain age.
-var client = new QdrantClient(""localhost"", 6334);
+
-await client.OverwritePayloadAsync(
- collectionName: ""{collection_name}"",
- payload: new Dictionary { { ""property1"", ""string"" }, { ""property2"", ""string"" } },
+### Shard transfer method
- ids: new ulong[] { 0, 3, 10 }
-);
-```
+*Available as of v1.7.0*
-Like [set payload](#set-payload), you don't need to know the ids of the points
+There are different methods for transferring a shard to another node, such as
-you want to modify. The alternative is to use filters.
+moving or replicating it. Depending on what performance and guarantees you'd
+like to have and how you'd like to manage your cluster, you likely want to
+choose a specific method. Each method has its own pros and cons. Which is
-### Clear payload
+fastest depends on the size and state of a shard.
-This method removes all payload keys from specified points
+Available shard transfer methods are:
-REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/clear_payload)):
+- `stream_records`: _(default)_ transfer by streaming just its records to the target node in batches.
+- `snapshot`: transfer including its index and quantized data by utilizing a [snapshot](../../concepts/snapshots/) automatically.
+- `wal_delta`: _(auto recovery default)_ transfer by resolving the [WAL] difference, i.e. the operations that the target shard missed.
-```http
-
-POST /collections/{collection_name}/points/payload/clear
-
-{
- ""points"": [0, 3, 100]
-}
+Each has pros, cons and specific requirements, some of which are:
-```
+| Method: | Stream records | Snapshot | WAL delta |
-```python
+|:---|:---|:---|:---|
-client.clear_payload(
+| **Version** | v0.8.0+ | v1.7.0+ | v1.8.0+ |
- collection_name=""{collection_name}"",
+| **Target** | New/existing shard | New/existing shard | Existing shard |
- points_selector=models.PointIdsList(
+| **Connectivity** | Internal gRPC API (6335) | REST API (6333), Internal gRPC API (6335) | Internal gRPC API (6335) |
- points=[0, 3, 100],
+| **HNSW index** | Doesn't transfer, will reindex on target. | Does transfer, immediately ready on target. | Doesn't transfer, may index on target. |
- ),
+| **Quantization** | Doesn't transfer, will requantize on target. | Does transfer, immediately ready on target. | Doesn't transfer, may quantize on target. |
-)
+| **Ordering** | Unordered updates on target[^unordered] | Ordered updates on target[^ordered] | Ordered updates on target[^ordered] |
-```
+| **Disk space** | No extra required | Extra required for snapshot on both nodes | No extra required |
-```typescript
+[^unordered]: Weak ordering for updates: All records are streamed to the target node in order.
-client.clearPayload(""{collection_name}"", {
+ New updates are received on the target node in parallel, while the transfer
- points: [0, 3, 100],
+ of records is still happening. We therefore have `weak` ordering, regardless
-});
+ of what [ordering](#write-ordering) is used for updates.
-```
+[^ordered]: Strong ordering for updates: A snapshot of the shard
+ is created, it is transferred and recovered on the target node. That ensures
+ the state of the shard is kept consistent. New updates are queued on the
-```rust
+ source node, and transferred in order to the target node. Updates therefore
-use qdrant_client::qdrant::{
+ have the same [ordering](#write-ordering) as the user selects, making
- points_selector::PointsSelectorOneOf, PointsIdsList, PointsSelector,
+ `strong` ordering possible.
-};
+To select a shard transfer method, specify the `method` like:
-client
- .clear_payload(
- ""{collection_name}"",
+```http
- None,
+POST /collections/{collection_name}/cluster
- Some(PointsSelector {
+{
- points_selector_one_of: Some(PointsSelectorOneOf::Points(PointsIdsList {
+ ""move_shard"": {
- ids: vec![0.into(), 3.into(), 100.into()],
+ ""shard_id"": 0,
- })),
+ ""from_peer_id"": 381894127,
- }),
+ ""to_peer_id"": 467122995,
- None,
+ ""method"": ""snapshot""
- )
+ }
- .await?;
+}
```
-```java
-
-import java.util.List;
-
+The `stream_records` transfer method is the simplest available. It simply
+transfers all shard records in batches to the target node until it has
-import static io.qdrant.client.PointIdFactory.id;
+transferred all of them, keeping both shards in sync. It will also make sure the
+transferred shard indexing process is keeping up before performing a final
+switch. The method has two common disadvantages: 1. It does not transfer index
-client
+or quantization data, meaning that the shard has to be optimized again on the
- .clearPayloadAsync(""{collection_name}"", List.of(id(0), id(3), id(100)), true, null, null)
+new node, which can be very expensive. 2. The ordering guarantees are
- .get();
+`weak`[^unordered], which is not suitable for some applications. Because it is
-```
+so simple, it's also very robust, making it a reliable choice if the above cons
+are acceptable in your use case. If your cluster is unstable and out of
+resources, it's probably best to use the `stream_records` transfer method,
-```csharp
+because it is unlikely to fail.
-using Qdrant.Client;
+The `snapshot` transfer method utilizes [snapshots](../../concepts/snapshots/)
-var client = new QdrantClient(""localhost"", 6334);
+to transfer a shard. A snapshot is created automatically. It is then transferred
+and restored on the target node. After this is done, the snapshot is removed
+from both nodes. While the snapshot/transfer/restore operation is happening, the
-await client.ClearPayloadAsync(collectionName: ""{collection_name}"", ids: new ulong[] { 0, 3, 100 });
+source node queues up all new operations. All queued updates are then sent in
-```
+order to the target shard to bring it into the same state as the source. There
+are two important benefits: 1. It transfers index and quantization data, so that
+the shard does not have to be optimized again on the target node, making it
-
+immediately available. For big shards, this can give a huge performance improvement. 2. The ordering guarantees
+can be `strong`[^ordered], required for some applications.
-### Delete payload keys
+The `wal_delta` transfer method only transfers the difference between two
+shards. More specifically, it transfers all operations that were missed to the
-Delete specific payload keys from points.
+target shard. The [WAL] of both shards is used to resolve this. There are two
+benefits: 1. It will be very fast because it only transfers the difference
+rather than all data. 2. The ordering guarantees can be `strong`[^ordered],
-REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/delete_payload)):
+required for some applications. Two disadvantages are: 1. It can only be used to
+transfer to a shard that already exists on the other node. 2. Applicability is
+limited because the WALs normally don't hold more than 64MB of recent
-```http
+operations. But that should be enough for a node that restarts quickly, for
-POST /collections/{collection_name}/points/payload/delete
+example during an upgrade. If a delta cannot be resolved, this method automatically
-{
+falls back to `stream_records` which equals transferring the full shard.
- ""keys"": [""color"", ""price""],
- ""points"": [0, 3, 100]
-}
+The `stream_records` method is currently used as the default. This may change in the
-```
+future. As of Qdrant 1.9.0 `wal_delta` is used for automatic shard replications
+to recover dead shards.
-```python
-client.delete_payload(
+[WAL]: ../../concepts/storage/#versioning
- collection_name=""{collection_name}"",
- keys=[""color"", ""price""],
- points=[0, 3, 100],
+## Replication
-)
-```
+*Available as of v0.11.0*
-```typescript
-client.deletePayload(""{collection_name}"", {
+Qdrant allows you to replicate shards between nodes in the cluster.
- keys: [""color"", ""price""],
- points: [0, 3, 100],
-});
+Shard replication increases the reliability of the cluster by keeping several copies of a shard spread across the cluster.
-```
+This ensures the availability of the data in case of node failures, except if all replicas are lost.
-```rust
+### Replication factor
-use qdrant_client::qdrant::{
- points_selector::PointsSelectorOneOf, PointsIdsList, PointsSelector,
-};
+When you create a collection, you can control how many shard replicas you'd like to store by changing the `replication_factor`. By default, `replication_factor` is set to ""1"", meaning no additional copy is maintained automatically. You can change that by setting the `replication_factor` when you create a collection.
-client
+Currently, the replication factor of a collection can only be configured at creation time.
- .delete_payload_blocking(
- ""{collection_name}"",
- None,
+```http
- &PointsSelector {
+PUT /collections/{collection_name}
- points_selector_one_of: Some(PointsSelectorOneOf::Points(PointsIdsList {
+{
- ids: vec![0.into(), 3.into(), 100.into()],
+ ""vectors"": {
- })),
+ ""size"": 300,
- },
+ ""distance"": ""Cosine""
- vec![""color"".to_string(), ""price"".to_string()],
+ },
- None,
+ ""shard_number"": 6,
- )
+  ""replication_factor"": 2
- .await?;
+}
```
-```java
-
-import java.util.List;
-
-
+```python
-import static io.qdrant.client.PointIdFactory.id;
+from qdrant_client import QdrantClient, models
-client
+client = QdrantClient(url=""http://localhost:6333"")
- .deletePayloadAsync(
- ""{collection_name}"",
- List.of(""color"", ""price""),
+client.create_collection(
- List.of(id(0), id(3), id(100)),
+ collection_name=""{collection_name}"",
- true,
+ vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE),
- null,
+ shard_number=6,
- null)
+ replication_factor=2,
- .get();
+)
```
-```csharp
-
-using Qdrant.Client;
+```typescript
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-var client = new QdrantClient(""localhost"", 6334);
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-await client.DeletePayloadAsync(
- collectionName: ""{collection_name}"",
+client.createCollection(""{collection_name}"", {
- keys: [""color"", ""price""],
+ vectors: {
- ids: new ulong[] { 0, 3, 100 }
+ size: 300,
-);
+ distance: ""Cosine"",
+ },
+ shard_number: 6,
-```
+ replication_factor: 2,
+});
+```
-Alternatively, you can use filters to delete payload keys from the points.
+```rust
-```http
+use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder};
-POST /collections/{collection_name}/points/payload/delete
+use qdrant_client::Qdrant;
-{
- ""keys"": [""color"", ""price""],
- ""filter"": {
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- ""must"": [
- {
- ""key"": ""color"",
+client
- ""match"": {
+ .create_collection(
- ""value"": ""red""
+ CreateCollectionBuilder::new(""{collection_name}"")
- }
+ .vectors_config(VectorParamsBuilder::new(300, Distance::Cosine))
- }
+ .shard_number(6)
- ]
+ .replication_factor(2),
- }
+ )
-}
+ .await?;
```
-```python
-
-client.delete_payload(
+```java
- collection_name=""{collection_name}"",
+import io.qdrant.client.QdrantClient;
- keys=[""color"", ""price""],
+import io.qdrant.client.QdrantGrpcClient;
- points=models.Filter(
+import io.qdrant.client.grpc.Collections.CreateCollection;
- must=[
+import io.qdrant.client.grpc.Collections.Distance;
- models.FieldCondition(
+import io.qdrant.client.grpc.Collections.VectorParams;
- key=""color"",
+import io.qdrant.client.grpc.Collections.VectorsConfig;
- match=models.MatchValue(value=""red""),
- ),
- ],
+QdrantClient client =
- ),
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-)
-```
+client
+ .createCollectionAsync(
-```typescript
+ CreateCollection.newBuilder()
-client.deletePayload(""{collection_name}"", {
+ .setCollectionName(""{collection_name}"")
- keys: [""color"", ""price""],
+ .setVectorsConfig(
- filter: {
+ VectorsConfig.newBuilder()
- must: [
+ .setParams(
- {
+ VectorParams.newBuilder()
- key: ""color"",
+ .setSize(300)
- match: {
+ .setDistance(Distance.Cosine)
- value: ""red"",
+ .build())
- },
+ .build())
- },
+ .setShardNumber(6)
- ],
+ .setReplicationFactor(2)
- },
+ .build())
-});
+ .get();
```
-```rust
-
-use qdrant_client::qdrant::{
-
- points_selector::PointsSelectorOneOf, Condition, Filter, PointsSelector,
+```csharp
-};
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-client
- .delete_payload_blocking(
+var client = new QdrantClient(""localhost"", 6334);
- ""{collection_name}"",
- None,
- &PointsSelector {
+await client.CreateCollectionAsync(
- points_selector_one_of: Some(PointsSelectorOneOf::Filter(Filter::must([
+ collectionName: ""{collection_name}"",
- Condition::matches(""color"", ""red"".to_string()),
+ vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine },
- ]))),
+ shardNumber: 6,
- },
+ replicationFactor: 2
- vec![""color"".to_string(), ""price"".to_string()],
+);
- None,
+```
- )
- .await?;
-```
+```go
+import (
+ ""context""
-```java
-import java.util.List;
+ ""github.com/qdrant/go-client/qdrant""
+)
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+client, err := qdrant.NewClient(&qdrant.Config{
-client
+ Host: ""localhost"",
- .deletePayloadAsync(
+ Port: 6334,
- ""{collection_name}"",
+})
- List.of(""color"", ""price""),
- Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build(),
- true,
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
- null,
+ CollectionName: ""{collection_name}"",
- null)
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
- .get();
+ Size: 300,
-```
+ Distance: qdrant.Distance_Cosine,
+ }),
+ ShardNumber: qdrant.PtrOf(uint32(6)),
-```csharp
+ ReplicationFactor: qdrant.PtrOf(uint32(2)),
-using Qdrant.Client;
+})
-using static Qdrant.Client.Grpc.Conditions;
+```
-var client = new QdrantClient(""localhost"", 6334);
+This code sample creates a collection with a total of 6 logical shards backed by a total of 12 physical shards.
-await client.DeletePayloadAsync(
+Since a replication factor of ""2"" would require twice as much storage space, it is advised to make sure the hardware can host the additional shard replicas beforehand.
- collectionName: ""{collection_name}"",
- keys: [""color"", ""price""],
- filter: MatchKeyword(""color"", ""red"")
+### Creating new shard replicas
-);
-```
+It is possible to create or delete replicas manually on an existing collection using the [Update collection cluster setup API](https://api.qdrant.tech/master/api-reference/distributed/update-collection-cluster).
-## Payload indexing
+A replica can be added on a specific peer by specifying the peer from which to replicate.
-To search more efficiently with filters, Qdrant allows you to create indexes for payload fields by specifying the name and type of field it is intended to be.
+```http
+POST /collections/{collection_name}/cluster
-The indexed fields also affect the vector index. See [Indexing](../indexing) for details.
+{
+ ""replicate_shard"": {
+ ""shard_id"": 0,
-In practice, we recommend creating an index on those fields that could potentially constrain the results the most.
+ ""from_peer_id"": 381894127,
-For example, using an index for the object ID will be much more efficient, being unique for each record, than an index by its color, which has only a few possible values.
+ ""to_peer_id"": 467122995
+ }
+}
-In compound queries involving multiple fields, Qdrant will attempt to use the most restrictive index first.
+```
-To create index for the field, you can use the following:
+
-REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#tag/collections/operation/create_field_index))
+And a replica can be removed on a specific peer.
```http
-PUT /collections/{collection_name}/index
+POST /collections/{collection_name}/cluster
{
- ""field_name"": ""name_of_the_field_to_index"",
+ ""drop_replica"": {
- ""field_schema"": ""keyword""
+ ""shard_id"": 0,
+
+ ""peer_id"": 381894127
+
+ }
}
@@ -23053,263 +22640,239 @@ PUT /collections/{collection_name}/index
-```python
+Keep in mind that a collection must contain at least one active replica of a shard.
-client.create_payload_index(
- collection_name=""{collection_name}"",
- field_name=""name_of_the_field_to_index"",
+### Error handling
- field_schema=""keyword"",
-)
-```
+Replicas can be in different states:
-```typescript
+- Active: healthy and ready to serve traffic
-client.createPayloadIndex(""{collection_name}"", {
+- Dead: unhealthy and not ready to serve traffic
- field_name: ""name_of_the_field_to_index"",
+- Partial: currently under resynchronization before activation
- field_schema: ""keyword"",
-});
-```
+A replica is marked as dead if it does not respond to internal healthchecks or if it fails to serve traffic.
-```rust
+A dead replica will not receive traffic from other peers and might require a manual intervention if it does not recover automatically.
-use qdrant_client::qdrant::FieldType;
+This mechanism ensures data consistency and availability if a subset of the replicas fail during an update operation.
-client
- .create_field_index(
- ""{collection_name}"",
+### Node Failure Recovery
- ""name_of_the_field_to_index"",
- FieldType::Keyword,
- None,
+Sometimes hardware malfunctions might render some nodes of the Qdrant cluster unrecoverable.
- None,
+No system is immune to this.
- )
- .await?;
-```
+But several recovery scenarios allow Qdrant to stay available for requests and even avoid performance degradation.
+Let's walk through them from best to worst.
-```java
-import io.qdrant.client.grpc.Collections.PayloadSchemaType;
+**Recover with replicated collection**
-client.createPayloadIndexAsync(
+If the number of failed nodes is less than the replication factor of the collection, then your cluster should still be able to perform read, search and update queries.
- ""{collection_name}"",
- ""name_of_the_field_to_index"",
- PayloadSchemaType.Keyword,
+Now, if the failed node restarts, consensus will trigger the replication process to update the recovering node with the newest updates it has missed.
- null,
- true,
- null,
+If the failed node never restarts, you can recover the lost shards if you have a 3+ node cluster. You cannot recover lost shards in smaller clusters because recovery operations go through [raft](#raft) which requires >50% of the nodes to be healthy.
- null);
-```
+
+**Recreate node with replicated collections**
-```csharp
+If a node fails and it is impossible to recover it, you should exclude the dead node from the consensus and create an empty node.
-using Qdrant.Client;
+To exclude failed nodes from the consensus, use [remove peer](https://api.qdrant.tech/master/api-reference/distributed/remove-peer) API.
-var client = new QdrantClient(""localhost"", 6334);
+Apply the `force` flag if necessary.
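+
+For example, force-removing a peer that is permanently unreachable might look like this (the peer ID is a placeholder):
+
+```http
+DELETE /cluster/peer/{peer_id}?force=true
+```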
-await client.CreatePayloadIndexAsync(
+When you create a new node, make sure to attach it to the existing cluster by specifying `--bootstrap` CLI parameter with the URL of any of the running cluster nodes.
- collectionName: ""{collection_name}"",
- fieldName: ""name_of_the_field_to_index""
-);
+Once the new node is ready and synchronized with the cluster, you might want to ensure that the collection shards are replicated enough. Remember that Qdrant will not automatically balance shards since this is an expensive operation.
-```
+Use the [Replicate Shard Operation](https://api.qdrant.tech/master/api-reference/distributed/update-collection-cluster) to create another copy of the shard on the newly connected node.
-The index usage flag is displayed in the payload schema with the [collection info API](https://qdrant.github.io/qdrant/redoc/index.html#operation/get_collection).
+It's worth mentioning that Qdrant only provides the necessary building blocks for automated failure recovery.
+Building a completely automatic process of collection scaling would require control over the cluster machines themselves.
+Check out our [cloud solution](https://qdrant.to/cloud), where we did exactly that.
-Payload schema example:
-```json
-{
+**Recover from snapshot**
- ""payload_schema"": {
- ""property1"": {
- ""data_type"": ""keyword""
+If there are no copies of data in the cluster, it is still possible to recover from a snapshot.
- },
- ""property2"": {
- ""data_type"": ""integer""
+Follow the same steps to detach the failed node and create a new one in the cluster:
- }
- }
-}
+* To exclude failed nodes from the consensus, use [remove peer](https://api.qdrant.tech/master/api-reference/distributed/remove-peer) API. Apply the `force` flag if necessary.
-```
-",documentation/concepts/payload.md
-"---
+* Create a new node, making sure to attach it to the existing cluster by specifying the `--bootstrap` CLI parameter with the URL of any of the running cluster nodes.
-title: Collections
-weight: 30
-aliases:
+Snapshot recovery, as used in single-node deployments, is different from the cluster variant.
- - ../collections
+Consensus manages all metadata about all collections and does not require snapshots to recover it.
----
+But you can use snapshots to recover missing shards of the collections.
-# Collections
+Use the [Collection Snapshot Recovery API](../../concepts/snapshots/#recover-in-cluster-deployment) to do it.
+The service will download the specified snapshot of the collection and recover shards with data from it.
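+
+As a sketch, recovering a collection shard from a snapshot that is reachable over HTTP might look like this (the snapshot location is illustrative):
+
+```http
+PUT /collections/{collection_name}/snapshots/recover
+{
+    ""location"": ""http://qdrant_node_1:6333/collections/{collection_name}/snapshots/{snapshot_name}""
+}
+```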
-A collection is a named set of points (vectors with a payload) among which you can search. The vector of each point within the same collection must have the same dimensionality and be compared by a single metric. [Named vectors](#collection-with-multiple-vectors) can be used to have multiple vectors in a single point, each of which can have their own dimensionality and metric requirements.
+Once all shards of the collection are recovered, the collection will become operational again.
-Distance metrics are used to measure similarities among vectors.
-The choice of metric depends on the way vectors obtaining and, in particular, on the method of neural network encoder training.
+### Temporary node failure
-Qdrant supports these most popular types of metrics:
+If properly configured, running Qdrant in distributed mode can make your cluster resistant to outages when one node fails temporarily.
-* Dot product: `Dot` - [[wiki]](https://en.wikipedia.org/wiki/Dot_product)
+Here is how differently-configured Qdrant clusters respond:
-* Cosine similarity: `Cosine` - [[wiki]](https://en.wikipedia.org/wiki/Cosine_similarity)
-* Euclidean distance: `Euclid` - [[wiki]](https://en.wikipedia.org/wiki/Euclidean_distance)
-* Manhattan distance: `Manhattan` - [[wiki]](https://en.wikipedia.org/wiki/Taxicab_geometry)
+* 1-node clusters: All operations time out or fail for up to a few minutes. It depends on how long it takes to restart and load data from disk.
+* 2-node clusters where shards ARE NOT replicated: All operations will time out or fail for up to a few minutes. It depends on how long it takes to restart and load data from disk.
+* 2-node clusters where all shards ARE replicated to both nodes: All requests except for operations on collections continue to work during the outage.
-
+* 3+-node clusters where all shards are replicated to at least 2 nodes: All requests continue to work during the outage.
-In addition to metrics and vector size, each collection uses its own set of parameters that controls collection optimization, index construction, and vacuum.
+## Consistency guarantees
-These settings can be changed at any time by a corresponding request.
+By default, Qdrant focuses on availability and maximum throughput of search operations.
-## Setting up multitenancy
+For the majority of use cases, this is a preferable trade-off.
-**How many collections should you create?** In most cases, you should only use a single collection with payload-based partitioning. This approach is called [multitenancy](https://en.wikipedia.org/wiki/Multitenancy). It is efficient for most of users, but it requires additional configuration. [Learn how to set it up](../../tutorials/multiple-partitions/)
+During the normal state of operation, it is possible to search and modify data from any peer in the cluster.
-**When should you create multiple collections?** When you have a limited number of users and you need isolation. This approach is flexible, but it may be more costly, since creating numerous collections may result in resource overhead. Also, you need to ensure that they do not affect each other in any way, including performance-wise.
+Before responding to the client, the peer handling the request dispatches all operations according to the current topology in order to keep the data synchronized across the cluster.
-> Note: If you're running `curl` from the command line, the following commands
+- reads use a partial fan-out strategy to optimize latency and availability
-assume that you have a running instance of Qdrant on `http://localhost:6333`.
+- writes are executed in parallel on all active sharded replicas
-If needed, you can set one up as described in our
-[Quickstart](/documentation/quick-start/) guide. For convenience, these commands
-specify collections named `test_collection1` through `test_collection4`.
+![Embeddings](/docs/concurrent-operations-replicas.png)
-## Create a collection
+However, in some cases, it is necessary to ensure additional guarantees during possible hardware instabilities, mass concurrent updates of the same documents, etc.
+Qdrant provides a few options to control consistency guarantees:
-```http
-PUT /collections/{collection_name}
+- `write_consistency_factor` - defines the number of replicas that must acknowledge a write operation before responding to the client. Increasing this value will make write operations tolerant to network partitions in the cluster, but will require a higher number of replicas to be active to perform write operations.
-{
+- Read `consistency` param - can be used with search and retrieve operations to ensure that the results obtained from all replicas are the same. If this option is used, Qdrant will perform the read operation on multiple replicas and resolve the result according to the selected strategy. This helps avoid data inconsistency in case of concurrent updates of the same documents. It is preferred when update operations are frequent and the number of replicas is low (a usage sketch follows this list).
- ""vectors"": {
+- Write `ordering` param - can be used with update and delete operations to ensure that the operations are executed in the same order on all replicas. If this option is used, Qdrant will route the operation to the leader replica of the shard and wait for the response before responding to the client. This helps avoid data inconsistency in case of concurrent updates of the same documents. It is preferred when read operations are more frequent than updates and search performance is critical.
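+
+As a sketch, a read that must be confirmed by a majority of replicas can pass the `consistency` parameter on a search request like this (vector values and limit are illustrative):
+
+```http
+POST /collections/{collection_name}/points/search?consistency=majority
+{
+    ""vector"": [0.2, 0.1, 0.9, 0.7],
+    ""limit"": 3
+}
+```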
- ""size"": 300,
- ""distance"": ""Cosine""
- }
-}
-```
+### Write consistency factor
-```bash
+The `write_consistency_factor` represents the number of replicas that must acknowledge a write operation before responding to the client. It is set to one by default.
-curl -X PUT http://localhost:6333/collections/test_collection1 \
+It can be configured at the collection's creation time.
- -H 'Content-Type: application/json' \
- --data-raw '{
+
+```http
+
+PUT /collections/{collection_name}
+
+{
""vectors"": {
- ""size"": 300,
+ ""size"": 300,
- ""distance"": ""Cosine""
+ ""distance"": ""Cosine""
- }
+ },
- }'
+  ""shard_number"": 6,
+  ""replication_factor"": 2,
+  ""write_consistency_factor"": 2
+}
```
@@ -23317,13 +22880,11 @@ curl -X PUT http://localhost:6333/collections/test_collection1 \
```python
-from qdrant_client import QdrantClient
-
-from qdrant_client.http import models
+from qdrant_client import QdrantClient, models
-client = QdrantClient(""localhost"", port=6333)
+client = QdrantClient(url=""http://localhost:6333"")
@@ -23331,7 +22892,13 @@ client.create_collection(
collection_name=""{collection_name}"",
- vectors_config=models.VectorParams(size=100, distance=models.Distance.COSINE),
+ vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE),
+    shard_number=6,
+    replication_factor=2,
+    write_consistency_factor=2,
)
@@ -23351,55 +22918,53 @@ const client = new QdrantClient({ host: ""localhost"", port: 6333 });
client.createCollection(""{collection_name}"", {
- vectors: { size: 100, distance: ""Cosine"" },
+ vectors: {
-});
-
-```
+ size: 300,
+ distance: ""Cosine"",
+ },
-```rust
+ shard_number: 6,
-use qdrant_client::{
+ replication_factor: 2,
- client::QdrantClient,
+ write_consistency_factor: 2,
- qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig},
+});
-};
+```
-//The Rust client uses Qdrant's GRPC interface
+```rust
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder};
+use qdrant_client::Qdrant;
-client
- .create_collection(&CreateCollection {
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- collection_name: ""{collection_name}"".to_string(),
- vectors_config: Some(VectorsConfig {
- config: Some(Config::Params(VectorParams {
+client
- size: 100,
+ .create_collection(
- distance: Distance::Cosine.into(),
+ CreateCollectionBuilder::new(""{collection_name}"")
- ..Default::default()
+ .vectors_config(VectorParamsBuilder::new(300, Distance::Cosine))
- })),
+ .shard_number(6)
- }),
+ .replication_factor(2)
- ..Default::default()
+ .write_consistency_factor(2),
- })
+ )
.await?;
@@ -23409,25 +22974,59 @@ client
```java
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Collections.CreateCollection;
import io.qdrant.client.grpc.Collections.Distance;
import io.qdrant.client.grpc.Collections.VectorParams;
-import io.qdrant.client.QdrantClient;
+import io.qdrant.client.grpc.Collections.VectorsConfig;
+
-import io.qdrant.client.QdrantGrpcClient;
+QdrantClient client =
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-QdrantClient client = new QdrantClient(
- QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+client
+ .createCollectionAsync(
-client.createCollectionAsync(""{collection_name}"",
+ CreateCollection.newBuilder()
- VectorParams.newBuilder().setDistance(Distance.Cosine).setSize(100).build()).get();
+            .setCollectionName(""{collection_name}"")
+            .setVectorsConfig(
+                VectorsConfig.newBuilder()
+                    .setParams(
+                        VectorParams.newBuilder()
+                            .setSize(300)
+                            .setDistance(Distance.Cosine)
+                            .build())
+                    .build())
+            .setShardNumber(6)
+            .setReplicationFactor(2)
+            .setWriteConsistencyFactor(2)
+            .build())
+    .get();
```
@@ -23449,7 +23048,13 @@ await client.CreateCollectionAsync(
collectionName: ""{collection_name}"",
- vectorsConfig: new VectorParams { Size = 100, Distance = Distance.Cosine }
+ vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine },
+
+ shardNumber: 6,
+
+ replicationFactor: 2,
+
+ writeConsistencyFactor: 2
);
@@ -23457,145 +23062,157 @@ await client.CreateCollectionAsync(
-In addition to the required options, you can also specify custom values for the following collection options:
+```go
+import (
+ ""context""
-* `hnsw_config` - see [indexing](../indexing/#vector-index) for details.
-* `wal_config` - Write-Ahead-Log related configuration. See more details about [WAL](../storage/#versioning)
-* `optimizers_config` - see [optimizer](../optimizer) for details.
+ ""github.com/qdrant/go-client/qdrant""
-* `shard_number` - which defines how many shards the collection should have. See [distributed deployment](../../guides/distributed_deployment#sharding) section for details.
+)
-* `on_disk_payload` - defines where to store payload data. If `true` - payload will be stored on disk only. Might be useful for limiting the RAM usage in case of large payload.
-* `quantization_config` - see [quantization](../../guides/quantization/#setting-up-quantization-in-qdrant) for details.
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
-Default parameters for the optional collection parameters are defined in [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml).
+ Port: 6334,
+})
-See [schema definitions](https://qdrant.github.io/qdrant/redoc/index.html#operation/create_collection) and a [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml) for more information about collection and vector parameters.
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
+ CollectionName: ""{collection_name}"",
-*Available as of v1.2.0*
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
+ Size: 300,
+ Distance: qdrant.Distance_Cosine,
-Vectors all live in RAM for very quick access. The `on_disk` parameter can be
+ }),
-set in the vector configuration. If true, all vectors will live on disk. This
+ ShardNumber: qdrant.PtrOf(uint32(6)),
-will enable the use of
+ ReplicationFactor: qdrant.PtrOf(uint32(2)),
-[memmaps](../../concepts/storage/#configuring-memmap-storage),
+ WriteConsistencyFactor: qdrant.PtrOf(uint32(2)),
-which is suitable for ingesting a large amount of data.
+})
+```
-### Create collection from another collection
+Write operations will fail if the number of active replicas is less than the `write_consistency_factor`.
-*Available as of v1.0.0*
+### Read consistency
-It is possible to initialize a collection from another existing collection.
+Read `consistency` can be specified for most read requests and will ensure that the returned result
+is consistent across cluster nodes.
-This might be useful for experimenting quickly with different configurations for the same data set.
+- `all` will query all nodes and return points that are present on all of them
-Make sure the vectors have the same `size` and `distance` function when setting up the vectors configuration in the new collection. If you used the previous sample
+- `majority` will query all nodes and return points that are present on the majority of them
-code, `""size"": 300` and `""distance"": ""Cosine""`.
+- `quorum` will query a randomly selected majority of nodes and return points that are present on all of them
+- `1`/`2`/`3`/etc - will query the specified number of randomly selected nodes and return points that are present on all of them
+- the default `consistency` is `1`
```http
-PUT /collections/{collection_name}
+POST /collections/{collection_name}/points/query?consistency=majority
{
- ""vectors"": {
-
- ""size"": 100,
+ ""query"": [0.2, 0.1, 0.9, 0.7],
- ""distance"": ""Cosine""
+ ""filter"": {
- },
+ ""must"": [
- ""init_from"": {
+ {
- ""collection"": ""{from_collection_name}""
+ ""key"": ""city"",
- }
+ ""match"": {
-}
+ ""value"": ""London""
-```
+ }
+ }
+ ]
-```bash
+ },
-curl -X PUT http://localhost:6333/collections/test_collection2 \
+ ""params"": {
- -H 'Content-Type: application/json' \
+ ""hnsw_ef"": 128,
- --data-raw '{
+ ""exact"": false
- ""vectors"": {
+ },
- ""size"": 300,
+ ""limit"": 3
- ""distance"": ""Cosine""
+}
- },
+```
- ""init_from"": {
- ""collection"": ""test_collection1""
- }
+```python
- }'
+client.query_points(
-```
+ collection_name=""{collection_name}"",
+ query=[0.2, 0.1, 0.9, 0.7],
+ query_filter=models.Filter(
-```python
+ must=[
-from qdrant_client import QdrantClient
+ models.FieldCondition(
-from qdrant_client.http import models
+ key=""city"",
+ match=models.MatchValue(
+ value=""London"",
-client = QdrantClient(""localhost"", port=6333)
+ ),
+ )
+ ]
-client.create_collection(
+ ),
- collection_name=""{collection_name}"",
+ search_params=models.SearchParams(hnsw_ef=128, exact=False),
- vectors_config=models.VectorParams(size=100, distance=models.Distance.COSINE),
+ limit=3,
- init_from=models.InitFrom(collection=""{from_collection_name}""),
+ consistency=""majority"",
)
@@ -23605,19 +23222,27 @@ client.create_collection(
```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+client.query(""{collection_name}"", {
+ query: [0.2, 0.1, 0.9, 0.7],
+ filter: {
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ must: [{ key: ""city"", match: { value: ""London"" } }],
+
+ },
+ params: {
+ hnsw_ef: 128,
-client.createCollection(""{collection_name}"", {
+ exact: false,
- vectors: { size: 100, distance: ""Cosine"" },
+ },
- init_from: { collection: ""{from_collection_name}"" },
+ limit: 3,
+
+ consistency: ""majority"",
});
@@ -23627,45 +23252,45 @@ client.createCollection(""{collection_name}"", {
```rust
-use qdrant_client::{
+use qdrant_client::qdrant::{
- client::QdrantClient,
+ read_consistency::Value, Condition, Filter, QueryPointsBuilder, ReadConsistencyType,
- qdrant::{vectors_config::Config, CreateCollection, Distance, VectorParams, VectorsConfig},
+ SearchParamsBuilder,
};
+use qdrant_client::{Qdrant, QdrantError};
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-client
- .create_collection(&CreateCollection {
+client
- collection_name: ""{collection_name}"".to_string(),
+ .query(
- vectors_config: Some(VectorsConfig {
+ QueryPointsBuilder::new(""{collection_name}"")
- config: Some(Config::Params(VectorParams {
+ .query(vec![0.2, 0.1, 0.9, 0.7])
- size: 100,
+ .limit(3)
- distance: Distance::Cosine.into(),
+ .filter(Filter::must([Condition::matches(
- ..Default::default()
+ ""city"",
- })),
+ ""London"".to_string(),
- }),
+ )]))
- init_from_collection: Some(""{from_collection_name}"".to_string()),
+ .params(SearchParamsBuilder::default().hnsw_ef(128).exact(false))
- ..Default::default()
+ .read_consistency(Value::Type(ReadConsistencyType::Majority.into())),
- })
+ )
.await?;
@@ -23679,49 +23304,51 @@ import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Collections.CreateCollection;
+import io.qdrant.client.grpc.Points.Filter;
-import io.qdrant.client.grpc.Collections.Distance;
+import io.qdrant.client.grpc.Points.QueryPoints;
-import io.qdrant.client.grpc.Collections.VectorParams;
+import io.qdrant.client.grpc.Points.ReadConsistency;
-import io.qdrant.client.grpc.Collections.VectorsConfig;
+import io.qdrant.client.grpc.Points.ReadConsistencyType;
+
+import io.qdrant.client.grpc.Points.SearchParams;
-QdrantClient client =
+import static io.qdrant.client.QueryFactory.nearest;
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+import static io.qdrant.client.ConditionFactory.matchKeyword;
-client
+QdrantClient client =
- .createCollectionAsync(
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- CreateCollection.newBuilder()
- .setCollectionName(""{collection_name}"")
- .setVectorsConfig(
+client.queryAsync(
- VectorsConfig.newBuilder()
+ QueryPoints.newBuilder()
- .setParams(
+ .setCollectionName(""{collection_name}"")
- VectorParams.newBuilder()
+ .setFilter(Filter.newBuilder().addMust(matchKeyword(""city"", ""London"")).build())
- .setSize(100)
+ .setQuery(nearest(.2f, 0.1f, 0.9f, 0.7f))
- .setDistance(Distance.Cosine)
+ .setParams(SearchParams.newBuilder().setHnswEf(128).setExact(false).build())
- .build()))
+ .setLimit(3)
- .setInitFromCollection(""{from_collection_name}"")
+ .setReadConsistency(
- .build())
+ ReadConsistency.newBuilder().setType(ReadConsistencyType.Majority).build())
- .get();
+ .build())
+
+ .get();
```
@@ -23733,19 +23360,27 @@ using Qdrant.Client;
using Qdrant.Client.Grpc;
+using static Qdrant.Client.Grpc.Conditions;
+
var client = new QdrantClient(""localhost"", 6334);
-await client.CreateCollectionAsync(
+await client.QueryAsync(
collectionName: ""{collection_name}"",
- vectorsConfig: new VectorParams { Size = 100, Distance = Distance.Cosine },
+ query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
- initFromCollection: ""{from_collection_name}""
+ filter: MatchKeyword(""city"", ""London""),
+    searchParams: new SearchParams { HnswEf = 128, Exact = false },
+    limit: 3,
+    readConsistency: new ReadConsistency { Type = ReadConsistencyType.Majority }
);
@@ -23753,87 +23388,119 @@ await client.CreateCollectionAsync(
-### Collection with multiple vectors
+```go
+import (
+ ""context""
-*Available as of v0.10.0*
+ ""github.com/qdrant/go-client/qdrant""
-It is possible to have multiple vectors per record.
+)
-This feature allows for multiple vector storages per collection.
-To distinguish vectors in one record, they should have a unique name defined when creating the collection.
-Each named vector in this mode has its distance and size:
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
+ Port: 6334,
+})
-```http
-PUT /collections/{collection_name}
+client.Query(context.Background(), &qdrant.QueryPoints{
-{
+ CollectionName: ""{collection_name}"",
- ""vectors"": {
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
- ""image"": {
+ Filter: &qdrant.Filter{
- ""size"": 4,
+ Must: []*qdrant.Condition{
- ""distance"": ""Dot""
+ qdrant.NewMatch(""city"", ""London""),
- },
+ },
- ""text"": {
+ },
- ""size"": 8,
+ Params: &qdrant.SearchParams{
- ""distance"": ""Cosine""
+ HnswEf: qdrant.PtrOf(uint64(128)),
- }
+ },
- }
+ Limit: qdrant.PtrOf(uint64(3)),
-}
+ ReadConsistency: qdrant.NewReadConsistencyType(qdrant.ReadConsistencyType_Majority),
+
+})
```
-```bash
+### Write ordering
-curl -X PUT http://localhost:6333/collections/test_collection3 \
- -H 'Content-Type: application/json' \
- --data-raw '{
+Write `ordering` can be specified for any write request to serialize it through a single ""leader"" node,
- ""vectors"": {
+which ensures that all write operations (issued with the same `ordering`) are performed and observed
- ""image"": {
+sequentially.
- ""size"": 4,
- ""distance"": ""Dot""
- },
+- `weak` _(default)_ ordering does not provide any additional guarantees, so write operations can be freely reordered.
- ""text"": {
+- `medium` ordering serializes all write operations through a dynamically elected leader, which might cause minor inconsistencies in case of leader change.
- ""size"": 8,
+- `strong` ordering serializes all write operations through the permanent leader, which provides strong consistency, but write operations may be unavailable if the leader is down.
- ""distance"": ""Cosine""
- }
- }
+
- }'
+
+
+```http
+PUT /collections/{collection_name}/points?ordering=strong
+{
+  ""batch"": {
+    ""ids"": [1, 2, 3],
+    ""payloads"": [
+      {""color"": ""red""},
+      {""color"": ""green""},
+      {""color"": ""blue""}
+    ],
+    ""vectors"": [
+      [0.9, 0.1, 0.1],
+      [0.1, 0.9, 0.1],
+      [0.1, 0.1, 0.9]
+    ]
+  }
+}
```
@@ -23841,27 +23508,37 @@ curl -X PUT http://localhost:6333/collections/test_collection3 \
```python
-from qdrant_client import QdrantClient
+client.upsert(
-from qdrant_client.http import models
+ collection_name=""{collection_name}"",
+
+ points=models.Batch(
+ ids=[1, 2, 3],
+ payloads=[
-client = QdrantClient(""localhost"", port=6333)
+ {""color"": ""red""},
+ {""color"": ""green""},
+ {""color"": ""blue""},
-client.create_collection(
+ ],
- collection_name=""{collection_name}"",
+ vectors=[
- vectors_config={
+ [0.9, 0.1, 0.1],
- ""image"": models.VectorParams(size=4, distance=models.Distance.DOT),
+ [0.1, 0.9, 0.1],
- ""text"": models.VectorParams(size=8, distance=models.Distance.COSINE),
+ [0.1, 0.1, 0.9],
- },
+ ],
+
+ ),
+
+ ordering=models.WriteOrdering.STRONG,
)
@@ -23871,24 +23548,28 @@ client.create_collection(
```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
-
+client.upsert(""{collection_name}"", {
+ batch: {
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ ids: [1, 2, 3],
+ payloads: [{ color: ""red"" }, { color: ""green"" }, { color: ""blue"" }],
+ vectors: [
-client.createCollection(""{collection_name}"", {
+ [0.9, 0.1, 0.1],
- vectors: {
+ [0.1, 0.9, 0.1],
- image: { size: 4, distance: ""Dot"" },
+ [0.1, 0.1, 0.9],
- text: { size: 8, distance: ""Cosine"" },
+ ],
},
+ ordering: ""strong"",
+
});
```
@@ -23897,123 +23578,125 @@ client.createCollection(""{collection_name}"", {
```rust
-use qdrant_client::{
+use qdrant_client::qdrant::{
- client::QdrantClient,
+ PointStruct, UpsertPointsBuilder, WriteOrdering, WriteOrderingType
- qdrant::{
+};
- vectors_config::Config, CreateCollection, Distance, VectorParams, VectorParamsMap,
+use qdrant_client::Qdrant;
- VectorsConfig,
- },
-};
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+client
+ .upsert_points(
+ UpsertPointsBuilder::new(
-client
+ ""{collection_name}"",
- .create_collection(&CreateCollection {
+ vec![
- collection_name: ""{collection_name}"".to_string(),
+ PointStruct::new(1, vec![0.9, 0.1, 0.1], [(""color"", ""red"".into())]),
- vectors_config: Some(VectorsConfig {
+ PointStruct::new(2, vec![0.1, 0.9, 0.1], [(""color"", ""green"".into())]),
- config: Some(Config::ParamsMap(VectorParamsMap {
+ PointStruct::new(3, vec![0.1, 0.1, 0.9], [(""color"", ""blue"".into())]),
- map: [
+ ],
- (
+ )
- ""image"".to_string(),
+ .ordering(WriteOrdering {
- VectorParams {
+ r#type: WriteOrderingType::Strong.into(),
- size: 4,
+ }),
- distance: Distance::Dot.into(),
+ )
- ..Default::default()
+ .await?;
- },
+```
- ),
- (
- ""text"".to_string(),
+```java
- VectorParams {
+import java.util.List;
- size: 8,
+import java.util.Map;
- distance: Distance::Cosine.into(),
- ..Default::default()
- },
+import static io.qdrant.client.PointIdFactory.id;
- ),
+import static io.qdrant.client.ValueFactory.value;
- ]
+import static io.qdrant.client.VectorsFactory.vectors;
- .into(),
- })),
- }),
+import io.qdrant.client.grpc.Points.PointStruct;
- ..Default::default()
+import io.qdrant.client.grpc.Points.UpsertPoints;
- })
+import io.qdrant.client.grpc.Points.WriteOrdering;
- .await?;
+import io.qdrant.client.grpc.Points.WriteOrderingType;
-```
+client
-```java
+ .upsertAsync(
-import java.util.Map;
+ UpsertPoints.newBuilder()
+ .setCollectionName(""{collection_name}"")
+ .addAllPoints(
-import io.qdrant.client.QdrantClient;
+ List.of(
-import io.qdrant.client.QdrantGrpcClient;
+ PointStruct.newBuilder()
-import io.qdrant.client.grpc.Collections.Distance;
+ .setId(id(1))
-import io.qdrant.client.grpc.Collections.VectorParams;
+ .setVectors(vectors(0.9f, 0.1f, 0.1f))
+ .putAllPayload(Map.of(""color"", value(""red"")))
+ .build(),
-QdrantClient client =
+ PointStruct.newBuilder()
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ .setId(id(2))
+ .setVectors(vectors(0.1f, 0.9f, 0.1f))
+ .putAllPayload(Map.of(""color"", value(""green"")))
-client
+ .build(),
- .createCollectionAsync(
+ PointStruct.newBuilder()
- ""{collection_name}"",
+ .setId(id(3))
- Map.of(
+                .setVectors(vectors(0.1f, 0.1f, 0.9f))
- ""image"", VectorParams.newBuilder().setSize(4).setDistance(Distance.Dot).build(),
+ .putAllPayload(Map.of(""color"", value(""blue"")))
- ""text"",
+ .build()))
- VectorParams.newBuilder().setSize(8).setDistance(Distance.Cosine).build()))
+ .setOrdering(WriteOrdering.newBuilder().setType(WriteOrderingType.Strong).build())
+
+ .build())
.get();
@@ -24033,1723 +23716,1615 @@ var client = new QdrantClient(""localhost"", 6334);
-await client.CreateCollectionAsync(
+await client.UpsertAsync(
collectionName: ""{collection_name}"",
- vectorsConfig: new VectorParamsMap
+    points: new List<PointStruct>
{
- Map =
+ new()
{
- [""image""] = new VectorParams { Size = 4, Distance = Distance.Dot },
+ Id = 1,
- [""text""] = new VectorParams { Size = 8, Distance = Distance.Cosine },
+ Vectors = new[] { 0.9f, 0.1f, 0.1f },
- }
+ Payload = { [""color""] = ""red"" }
- }
+ },
-);
+ new()
-```
+ {
+ Id = 2,
+ Vectors = new[] { 0.1f, 0.9f, 0.1f },
-For rare use cases, it is possible to create a collection without any vector storage.
+ Payload = { [""color""] = ""green"" }
+ },
+ new()
-*Available as of v1.1.1*
+ {
+ Id = 3,
+ Vectors = new[] { 0.1f, 0.1f, 0.9f },
-For each named vector you can optionally specify
+ Payload = { [""color""] = ""blue"" }
-[`hnsw_config`](../indexing/#vector-index) or
+ }
-[`quantization_config`](../../guides/quantization/#setting-up-quantization-in-qdrant) to
+ },
-deviate from the collection configuration. This can be useful to fine-tune
+ ordering: WriteOrderingType.Strong
-search performance on a vector level.
+);
+```
-*Available as of v1.2.0*
+```go
+import (
-Vectors all live in RAM for very quick access. On a per-vector basis you can set
+ ""context""
-`on_disk` to true to store all vectors on disk at all times. This will enable
-the use of
-[memmaps](../../concepts/storage/#configuring-memmap-storage),
+ ""github.com/qdrant/go-client/qdrant""
-which is suitable for ingesting a large amount of data.
+)
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
-### Collection with sparse vectors
+ Port: 6334,
+})
-*Available as of v1.7.0*
+client.Upsert(context.Background(), &qdrant.UpsertPoints{
+ CollectionName: ""{collection_name}"",
-Qdrant supports sparse vectors as a first-class citizen.
+ Points: []*qdrant.PointStruct{
+
+ {
+ Id: qdrant.NewIDNum(1),
+ Vectors: qdrant.NewVectors(0.9, 0.1, 0.1),
-Sparse vectors are useful for text search, where each word is represented as a separate dimension.
+ Payload: qdrant.NewValueMap(map[string]any{""color"": ""red""}),
+ },
+ {
-Collections can contain sparse vectors as additional [named vectors](#collection-with-multiple-vectors) along side regular dense vectors in a single point.
+ Id: qdrant.NewIDNum(2),
+ Vectors: qdrant.NewVectors(0.1, 0.9, 0.1),
+ Payload: qdrant.NewValueMap(map[string]any{""color"": ""green""}),
-Unlike dense vectors, sparse vectors must be named.
+ },
-And additionally, sparse vectors and dense vectors must have different names within a collection.
+ {
+ Id: qdrant.NewIDNum(3),
+ Vectors: qdrant.NewVectors(0.1, 0.1, 0.9),
-```http
+ Payload: qdrant.NewValueMap(map[string]any{""color"": ""blue""}),
-PUT /collections/{collection_name}
+ },
-{
+ },
- ""sparse_vectors"": {
+ Ordering: &qdrant.WriteOrdering{
- ""text"": { },
+ Type: qdrant.WriteOrderingType_Strong,
- }
+ },
-}
+})
```
-```bash
+## Listener mode
-curl -X PUT http://localhost:6333/collections/test_collection4 \
- -H 'Content-Type: application/json' \
- --data-raw '{
+
- ""sparse_vectors"": {
- ""text"": { }
- }
+In some cases it might be useful to have a Qdrant node that only accumulates data and does not participate in search operations.
- }'
+There are several scenarios where this can be useful:
-```
+- The listener option can be used to store data on a separate node, for example for backup purposes or long-term storage.
+- A listener node can be used to synchronize data into another region, while search operations are still performed in the local region.
-```python
-from qdrant_client import QdrantClient
-from qdrant_client.http import models
+To enable listener mode, set `node_type` to `Listener` in the config file:
-client = QdrantClient(""localhost"", port=6333)
-client.create_collection(
+```yaml
- collection_name=""{collection_name}"",
+storage:
- sparse_vectors_config={
+ node_type: ""Listener""
- ""text"": models.SparseVectorParams(),
+```
- },
-)
-```
+A listener node will not participate in search operations, but it will still accept write operations and store the data in its local storage.
-```typescript
+All shards stored on the listener node will be converted to the `Listener` state.
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+Additionally, all write requests sent to the listener node will be processed with the `wait=false` option, which means that write operations are considered successful once they are written to the WAL.
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+This mechanism helps minimize upsert latency in the case of parallel snapshotting.
-client.createCollection(""{collection_name}"", {
+## Consensus Checkpointing
- sparse_vectors: {
- text: { },
- },
+Consensus checkpointing is a technique used in Raft to improve performance and simplify log management by periodically creating a consistent snapshot of the system state.
-});
+This snapshot represents a point in time where all nodes in the cluster have reached agreement on the state, and it can be used to truncate the log, reducing the amount of data that needs to be stored and transferred between nodes.
-```
+For example, if you attach a new node to the cluster, it should replay all the log entries to catch up with the current state.
-```rust
+In long-running clusters, this can take a long time, and the log can grow very large.
-use qdrant_client::{
- client::QdrantClient,
- qdrant::{
+To prevent this, you can use a special checkpointing mechanism that truncates the log and creates a snapshot of the current state.
- vectors_config::Config, CreateCollection, Distance, SparseVectorParams, VectorParamsMap,
- VectorsConfig,
- },
+To use this feature, simply call the `/cluster/recover` API on the required node:
-};
+```http
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+POST /cluster/recover
+```
-client
- .create_collection(&CreateCollection {
+This API can be triggered on any non-leader node; it will send a request to the current consensus leader to create a snapshot. The leader will, in turn, send the snapshot back to the requesting node for application.
- collection_name: ""{collection_name}"".to_string(),
- sparse_vectors_config: Some(SparseVectorsConfig {
- map: [
+In some cases, this API can be used to recover from an inconsistent cluster state by forcing a snapshot creation.
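+
+For illustration, the request can also be sent from Python with the `requests` library; the node URL and the optional `api-key` header below are assumptions about your own deployment.
+
+```python
+# Illustrative sketch: force a consensus snapshot on a follower node via the
+# HTTP endpoint described above. The URL and API key are placeholders.
+import requests
+
+response = requests.post(
+    ""http://node-3.example.internal:6333/cluster/recover"",
+    headers={""api-key"": ""<your-api-key>""},  # omit if authentication is disabled
+    timeout=30,
+)
+print(response.json())
+```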
+",documentation/guides/distributed_deployment.md
+"---
- (
+title: Installation
- ""text"".to_string(),
+weight: 5
- SparseVectorParams {},
+aliases:
- ),
+ - ../install
- ]
+ - ../installation
- .into(),
+---
- }),
- }),
- ..Default::default()
+## Installation requirements
- })
- .await?;
-```
+The following sections describe the requirements for deploying Qdrant.
-```java
+### CPU and memory
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Collections.CreateCollection;
+The CPU and RAM that you need depends on:
-import io.qdrant.client.grpc.Collections.SparseVectorConfig;
-import io.qdrant.client.grpc.Collections.SparseVectorParams;
+- Number of vectors
+- Vector dimensions
-QdrantClient client =
+- [Payloads](/documentation/concepts/payload/) and their indexes
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+- Storage
+- Replication
+- How you configure quantization
-client
- .createCollectionAsync(
- CreateCollection.newBuilder()
+Our [Cloud Pricing Calculator](https://cloud.qdrant.io/calculator) can help you estimate required resources without payload or index data.
- .setCollectionName(""{collection_name}"")
- .setSparseVectorsConfig(
- SparseVectorConfig.newBuilder()
+### Storage
- .putMap(""text"", SparseVectorParams.getDefaultInstance()))
- .build())
- .get();
+For persistent storage, Qdrant requires block-level access to storage devices with a [POSIX-compatible file system](https://www.quobyte.com/storage-explained/posix-filesystem/). Network systems such as [iSCSI](https://en.wikipedia.org/wiki/ISCSI) that provide block-level access are also acceptable.
-```
+Qdrant won't work with [Network file systems](https://en.wikipedia.org/wiki/File_system#Network_file_systems) such as NFS, or [Object storage](https://en.wikipedia.org/wiki/Object_storage) systems such as S3.
-```csharp
+If you offload vectors to a local disk, we recommend you use a solid-state (SSD or NVMe) drive.
-using Qdrant.Client;
-using Qdrant.Client.Grpc;
+### Networking
-var client = new QdrantClient(""localhost"", 6334);
+Each Qdrant instance requires three open ports:
-await client.CreateCollectionAsync(
- collectionName: ""{collection_name}"",
+* `6333` - For the HTTP API and the [Monitoring](/documentation/guides/monitoring/) health and metrics endpoints
- sparseVectorsConfig: (""text"", new SparseVectorParams())
+* `6334` - For the [gRPC](/documentation/interfaces/#grpc-interface) API
-);
+* `6335` - For [Distributed deployment](/documentation/guides/distributed_deployment/)
-```
+All Qdrant instances in a cluster must be able to:
-Outside of a unique name, there are no required configuration parameters for sparse vectors.
+- Communicate with each other over these ports
-The distance function for sparse vectors is always `Dot` and does not need to be specified.
+- Allow incoming connections to ports `6333` and `6334` from clients that use Qdrant.
-However, there are optional parameters to tune the underlying [sparse vector index](../indexing/#sparse-vector-index).
+### Security
-### Delete collection
+The default configuration of Qdrant might not be secure enough for every situation. Please see [our security documentation](/documentation/guides/security/) for more information.
-```http
+## Installation options
-DELETE http://localhost:6333/collections/test_collection4
-```
+Qdrant can be installed in different ways depending on your needs:
-```bash
-curl -X DELETE http://localhost:6333/collections/test_collection4
+For production, you can use our Qdrant Cloud to run Qdrant either fully managed in our infrastructure or with Hybrid Cloud in yours.
-```
+For testing or development setups, you can run Qdrant as a container or as a binary executable.
-```python
-client.delete_collection(collection_name=""{collection_name}"")
-```
+If you want to run Qdrant in your own infrastructure, without any cloud connection, we recommend installing Qdrant in a Kubernetes cluster with our Helm chart, or using our Qdrant Enterprise Operator.
-```typescript
+## Production
-client.deleteCollection(""{collection_name}"");
-```
+For production, we recommend that you configure Qdrant in the cloud, with Kubernetes, or with a Qdrant Enterprise Operator.
-```rust
-client.delete_collection(""{collection_name}"").await?;
+### Qdrant Cloud
-```
+You can set up production with the [Qdrant Cloud](https://qdrant.to/cloud), which provides fully managed Qdrant databases.
-```java
+It provides horizontal and vertical scaling, one click installation and upgrades, monitoring, logging, as well as backup and disaster recovery. For more information, see the [Qdrant Cloud documentation](/documentation/cloud/).
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
+### Kubernetes
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+You can use a ready-made [Helm Chart](https://helm.sh/docs/) to run Qdrant in your Kubernetes cluster:
-client.deleteCollectionAsync(""{collection_name}"").get();
+```bash
-```
+helm repo add qdrant https://qdrant.to/helm
+helm install qdrant qdrant/qdrant
+```
-```csharp
-using Qdrant.Client;
+For more information, see the [qdrant-helm](https://github.com/qdrant/qdrant-helm/tree/main/charts/qdrant) README.
-var client = new QdrantClient(""localhost"", 6334);
+### Qdrant Kubernetes Operator
-await client.DeleteCollectionAsync(""{collection_name}"");
-```
+We provide a Qdrant Enterprise Operator for Kubernetes installations. For more information, [use this form](https://qdrant.to/contact-us) to contact us.
-### Update collection parameters
+### Docker and Docker Compose
-Dynamic parameter updates may be helpful, for example, for more efficient initial loading of vectors.
+Usually, we recommend running Qdrant in Kubernetes or using Qdrant Cloud for production setups. This makes setting up highly available and scalable Qdrant clusters with backups and disaster recovery a lot easier.
-For example, you can disable indexing during the upload process, and enable it immediately after the upload is finished.
-As a result, you will not waste extra computation resources on rebuilding the index.
+However, you can also use Docker and Docker Compose to run Qdrant in production, by following the setup instructions in the [Docker](#docker) and [Docker Compose](#docker-compose) Development sections.
+In addition, you have to make sure:
-The following command enables indexing for segments that have more than 10000 kB of vectors stored:
+* To use a performant [persistent storage](#storage) for your data
+* To configure the [security settings](/documentation/guides/security/) for your deployment
+* To set up and configure Qdrant on multiple nodes for a highly available [distributed deployment](/documentation/guides/distributed_deployment/)
-```http
+* To set up a load balancer for your Qdrant cluster
-PATCH /collections/{collection_name}
+* To create a [backup and disaster recovery strategy](/documentation/concepts/snapshots/) for your data
-{
+* To integrate Qdrant with your [monitoring](/documentation/guides/monitoring/) and logging solutions
- ""optimizers_config"": {
- ""indexing_threshold"": 10000
- }
+## Development
-}
-```
+For development and testing, we recommend that you set up Qdrant in Docker. We also have different client libraries.
-```bash
-curl -X PATCH http://localhost:6333/collections/test_collection1 \
+### Docker
- -H 'Content-Type: application/json' \
- --data-raw '{
- ""optimizers_config"": {
+The easiest way to start using Qdrant for testing or development is to run the Qdrant container image.
- ""indexing_threshold"": 10000
+The latest versions are always available on [DockerHub](https://hub.docker.com/r/qdrant/qdrant/tags?page=1&ordering=last_updated).
- }
- }'
-```
+Make sure that [Docker](https://docs.docker.com/engine/install/), [Podman](https://podman.io/docs/installation) or the container runtime of your choice is installed and running. The following instructions use Docker.
-```python
+Pull the image:
-client.update_collection(
- collection_name=""{collection_name}"",
- optimizer_config=models.OptimizersConfigDiff(indexing_threshold=10000),
+```bash
-)
+docker pull qdrant/qdrant
```
-```typescript
+In the following command, revise `$(pwd)/path/to/data` for your Docker configuration. Then use the updated command to run the container:
-client.updateCollection(""{collection_name}"", {
- optimizers_config: {
- indexing_threshold: 10000,
+```bash
- },
+docker run -p 6333:6333 \
-});
+ -v $(pwd)/path/to/data:/qdrant/storage \
+    qdrant/qdrant
```
-```rust
+With this command, you start a Qdrant instance with the default configuration.
-use qdrant_client::qdrant::OptimizersConfigDiff;
+It stores all data in the `./path/to/data` directory.
-client
+By default, Qdrant uses port 6333, so at [localhost:6333](http://localhost:6333) you should see the welcome message.
- .update_collection(
- ""{collection_name}"",
- &OptimizersConfigDiff {
+To change the Qdrant configuration, you can overwrite the production configuration:
- indexing_threshold: Some(10000),
- ..Default::default()
- },
+```bash
- None,
+docker run -p 6333:6333 \
- None,
+ -v $(pwd)/path/to/data:/qdrant/storage \
- None,
+ -v $(pwd)/path/to/custom_config.yaml:/qdrant/config/production.yaml \
- None,
+ qdrant/qdrant
- None,
+```
- )
- .await?;
-```
+Alternatively, you can use your own `custom_config.yaml` configuration file:
-```java
+```bash
-import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
+docker run -p 6333:6333 \
-import io.qdrant.client.grpc.Collections.UpdateCollection;
+ -v $(pwd)/path/to/data:/qdrant/storage \
+ -v $(pwd)/path/to/custom_config.yaml:/qdrant/config/custom_config.yaml \
+ qdrant/qdrant \
-client.updateCollectionAsync(
+ ./qdrant --config-path config/custom_config.yaml
- UpdateCollection.newBuilder()
+```
- .setCollectionName(""{collection_name}"")
- .setOptimizersConfig(
- OptimizersConfigDiff.newBuilder().setIndexingThreshold(10000).build())
+For more information, see the [Configuration](/documentation/guides/configuration/) documentation.
- .build());
-```
+### Docker Compose
-```csharp
-using Qdrant.Client;
+You can also use [Docker Compose](https://docs.docker.com/compose/) to run Qdrant.
-using Qdrant.Client.Grpc;
+Here is an example customized compose file for a single node Qdrant cluster:
-var client = new QdrantClient(""localhost"", 6334);
+```yaml
-await client.UpdateCollectionAsync(
+services:
- collectionName: ""{collection_name}"",
+ qdrant:
- optimizersConfig: new OptimizersConfigDiff { IndexingThreshold = 10000 }
+ image: qdrant/qdrant:latest
-);
+ restart: always
-```
+ container_name: qdrant
+ ports:
+ - 6333:6333
-The following parameters can be updated:
+ - 6334:6334
+ expose:
+ - 6333
-* `optimizers_config` - see [optimizer](../optimizer/) for details.
+ - 6334
-* `hnsw_config` - see [indexing](../indexing/#vector-index) for details.
+ - 6335
-* `quantization_config` - see [quantization](../../guides/quantization/#setting-up-quantization-in-qdrant) for details.
+ configs:
-* `vectors` - vector-specific configuration, including individual `hnsw_config`, `quantization_config` and `on_disk` settings.
+ - source: qdrant_config
-* `params` - other collection parameters, including `write_consistency_factor` and `on_disk_payload`.
+ target: /qdrant/config/production.yaml
+ volumes:
+ - ./qdrant_data:/qdrant/storage
-Full API specification is available in [schema definitions](https://qdrant.github.io/qdrant/redoc/index.html#tag/collections/operation/update_collection).
+configs:
-Calls to this endpoint may be blocking as it waits for existing optimizers to
+ qdrant_config:
-finish. We recommended against using this in a production database as it may
+ content: |
-introduce huge overhead due to the rebuilding of the index.
+ log_level: INFO
+```
-#### Update vector parameters
+
-*Available as of v1.4.0*
+### From source
-
+Qdrant is written in Rust and can be compiled into a binary executable.
+This installation method can be helpful if you want to compile Qdrant for a specific processor architecture or if you do not want to use Docker.
-Qdrant 1.4 adds support for updating more collection parameters at runtime. HNSW
-index, quantization and disk configurations can now be changed without
-recreating a collection. Segments (with index and quantized data) will
+Before compiling, make sure that the necessary libraries and the [rust toolchain](https://www.rust-lang.org/tools/install) are installed.
-automatically be rebuilt in the background to match updated parameters.
+The current list of required libraries can be found in the [Dockerfile](https://github.com/qdrant/qdrant/blob/master/Dockerfile).
-To put vector data on disk for a collection that **does not have** named vectors,
+Build Qdrant with Cargo:
-use `""""` as name:
+```bash
+cargo build --release --bin qdrant
+```
-```http
-PATCH /collections/{collection_name}
-{
+After a successful build, you can find the binary at `./target/release/qdrant`.
- ""vectors"": {
- """": {
- ""on_disk"": true
+## Client libraries
- }
- }
-}
+In addition to the service, Qdrant provides a variety of client libraries for different programming languages. For a full list, see our [Client libraries](../../interfaces/#client-libraries) documentation.
+",documentation/guides/installation.md
+"---
-```
+title: Quantization
+weight: 120
+aliases:
-```bash
+    shardNumber: 6,
+    replicationFactor: 2,
+    writeConsistencyFactor: 2
+---
- ""vectors"": {
- """": {
- ""on_disk"": true
+# Quantization
- }
- }
- }'
+Quantization is an optional feature in Qdrant that enables efficient storage and search of high-dimensional vectors.
-```
+By transforming original vectors into new representations, quantization compresses data while keeping the relative distances between vectors close to the original ones.
+Different quantization methods have different mechanics and tradeoffs. We will cover them in this section.
+Quantization is primarily used to reduce the memory footprint and accelerate the search process in high-dimensional vector spaces.
-To put vector data on disk for a collection that **does have** named vectors:
+In the context of Qdrant, quantization allows you to optimize the search engine for specific use cases, striking a balance between accuracy, storage efficiency, and search speed.
-Note: To create a vector name, follow the procedure from our [Points](/documentation/concepts/points/#create-vector-name).
+There are tradeoffs associated with quantization.
+On the one hand, quantization allows for significant reductions in storage requirements and faster search times.
+This can be particularly beneficial in large-scale applications where minimizing the use of resources is a top priority.
+On the other hand, quantization introduces an approximation error, which can lead to a slight decrease in search quality.
+The level of this tradeoff depends on the quantization method and its parameters, as well as the characteristics of the data.
-```http
-PATCH /collections/{collection_name}
-{
+## Scalar Quantization
- ""vectors"": {
- ""my_vector"": {
- ""on_disk"": true
+*Available as of v1.1.0*
- }
- }
-}
+Scalar quantization, in the context of vector search engines, is a compression technique that compresses vectors by reducing the number of bits used to represent each vector component.
-```
+For instance, Qdrant uses 32-bit floating point numbers to represent the original vector components. Scalar quantization allows you to reduce the number of bits used to 8.
-```bash
+In other words, Qdrant performs `float32 -> uint8` conversion for each vector component.
-curl -X PATCH http://localhost:6333/collections/test_collection1 \
+Effectively, this means that the amount of memory required to store a vector is reduced by a factor of 4.
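+
+To make this concrete, here is an illustrative NumPy sketch of the `float32 -> uint8` mapping; it is a simplification for intuition only, not Qdrant's internal implementation.
+
+```python
+# Illustrative sketch only (not Qdrant's internal code): map float32 components
+# to uint8 using the vector's min/max as bounds, then check the round trip.
+import numpy as np
+
+rng = np.random.default_rng(42)
+vector = rng.normal(size=1536).astype(np.float32)
+
+lo, hi = float(vector.min()), float(vector.max())
+scale = (hi - lo) / 255.0
+
+quantized = np.clip(np.round((vector - lo) / scale), 0, 255).astype(np.uint8)
+restored = quantized.astype(np.float32) * scale + lo
+
+print(vector.nbytes, quantized.nbytes)          # 6144 vs 1536 bytes: ~4x smaller
+print(float(np.abs(vector - restored).mean()))  # small reconstruction error
+```
+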
- -H 'Content-Type: application/json' \
- --data-raw '{
- ""vectors"": {
+In addition to reducing the memory footprint, scalar quantization also speeds up the search process.
- ""my_vector"": {
+Qdrant uses a special SIMD CPU instruction to perform fast vector comparison.
- ""on_disk"": true
+This instruction works with 8-bit integers, so the conversion to `uint8` allows Qdrant to perform the comparison faster.
- }
- }
- }'
+The main drawback of scalar quantization is the loss of accuracy. The `float32 -> uint8` conversion introduces an error that can lead to a slight decrease in search quality.
-```
+However, this error is usually negligible, and tends to be less significant for high-dimensional vectors.
+In our experiments, we found that the error introduced by scalar quantization is usually less than 1%.
-In the following example the HNSW index and quantization parameters are updated,
-both for the whole collection, and for `my_vector` specifically:
+However, this value depends on the data and the quantization parameters.
+Please refer to the [Quantization Tips](#quantization-tips) section for more information on how to optimize the quantization parameters for your use case.
+## Binary Quantization
-```http
-PATCH /collections/{collection_name}
-{
+*Available as of v1.5.0*
- ""vectors"": {
- ""my_vector"": {
- ""hnsw_config"": {
+Binary quantization is an extreme case of scalar quantization.
- ""m"": 32,
+This feature lets you represent each vector component as a single bit, effectively reducing the memory footprint by a **factor of 32**.
- ""ef_construct"": 123
- },
- ""quantization_config"": {
+This is the fastest quantization method, since it lets you perform a vector comparison with a few CPU instructions.
- ""product"": {
- ""compression"": ""x32"",
- ""always_ram"": true
+Binary quantization can achieve up to a **40x** speedup compared to the original vectors.
- }
- },
- ""on_disk"": true
+However, binary quantization is only efficient for high-dimensional vectors and requires a centered distribution of vector components.
- }
- },
- ""hnsw_config"": {
+At the moment, binary quantization shows good accuracy results with the following models:
- ""ef_construct"": 123
- },
- ""quantization_config"": {
+- OpenAI `text-embedding-ada-002` - 1536d tested with [dbpedia dataset](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) achieving 0.98 recall@100 with 4x oversampling
- ""scalar"": {
+- Cohere AI `embed-english-v2.0` - 4096d tested on [wikipedia embeddings](https://huggingface.co/datasets/nreimers/wikipedia-22-12-large/tree/main) - 0.98 recall@50 with 2x oversampling
- ""type"": ""int8"",
- ""quantile"": 0.8,
- ""always_ram"": false
+Models with a lower dimensionality or a different distribution of vector components may require additional experiments to find the optimal quantization parameters.
- }
- }
-}
+We recommend using binary quantization only with rescoring enabled, as it can significantly improve the search quality
-```
+with just a minor performance impact.
+Additionally, oversampling can be used to tune the tradeoff between search speed and search quality at query time.
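+
+As an illustration of how this looks at query time with the Python client, the sketch below enables rescoring and 2x oversampling via `QuantizationSearchParams`; the exact values are placeholders and should be tuned for your data.
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+client.query_points(
+    collection_name=""{collection_name}"",
+    query=[0.2, 0.1, 0.9, 0.7],
+    search_params=models.SearchParams(
+        quantization=models.QuantizationSearchParams(
+            rescore=True,  # re-score top candidates with the original vectors
+            oversampling=2.0,  # fetch 2x more candidates from the quantized index
+        )
+    ),
+    limit=10,
+)
+```
+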
-```bash
-curl -X PATCH http://localhost:6333/collections/test_collection1 \
+### Binary Quantization as Hamming Distance
- -H 'Content-Type: application/json' \
- --data-raw '{
- ""vectors"": {
+The additional benefit of this method is that you can efficiently emulate the Hamming distance with the dot product.
- ""my_vector"": {
- ""hnsw_config"": {
- ""m"": 32,
+Specifically, if the original vectors contain `{-1, 1}` as possible values, then the dot product of two such vectors corresponds to the Hamming distance of their `{0, 1}` counterparts, obtained by replacing `-1` with `0`.
- ""ef_construct"": 123
- },
- ""quantization_config"": {
+
- ""product"": {
- ""compression"": ""x32"",
- ""always_ram"": true
+
- }
+ Sample truth table
- },
- ""on_disk"": true
- }
+
+| Vector 1 | Vector 2 | Dot product |
- },
+|----------|----------|-------------|
- ""hnsw_config"": {
+| 1 | 1 | 1 |
- ""ef_construct"": 123
+| 1 | -1 | -1 |
- },
+| -1 | 1 | -1 |
- ""quantization_config"": {
+| -1 | -1 | 1 |
- ""scalar"": {
- ""type"": ""int8"",
- ""quantile"": 0.8,
+
+| Vector 1 | Vector 2 | Hamming distance |
- ""always_ram"": false
+|----------|----------|------------------|
- }
+| 1 | 1 | 0 |
- }
+| 1 | 0 | 1 |
-}'
+| 0 | 1 | 1 |
-```
+| 0 | 0 | 0 |
-```python
+
-client.update_collection(
- collection_name=""{collection_name}"",
- vectors_config={
+As you can see, both functions are equal up to a constant factor, which makes similarity search equivalent.
- ""my_vector"": models.VectorParamsDiff(
+Binary quantization makes it efficient to compare vectors using this representation.
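+
+This relation can be verified with a small NumPy sketch (for illustration only, not Qdrant's internal code):
+
+```python
+# Illustrative sketch only: binarize two vectors and compare the {-1, 1} dot
+# product with the Hamming distance of the {0, 1} representation.
+import numpy as np
+
+rng = np.random.default_rng(0)
+a, b = rng.normal(size=(2, 1536)).astype(np.float32)
+
+a_pm, b_pm = np.where(a > 0, 1, -1), np.where(b > 0, 1, -1)          # {-1, 1} view
+a_bits, b_bits = (a > 0).astype(np.uint8), (b > 0).astype(np.uint8)  # {0, 1} view
+
+dot = int(np.dot(a_pm, b_pm))
+hamming = int(np.count_nonzero(a_bits != b_bits))
+
+# For d-dimensional vectors: dot = d - 2 * hamming, so ranking by the dot
+# product is equivalent to ranking by the Hamming distance.
+print(dot, 1536 - 2 * hamming)
+```
+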
- hnsw_config=models.HnswConfigDiff(
- m=32,
- ef_construct=123,
+## Product Quantization
- ),
- quantization_config=models.ProductQuantization(
- product=models.ProductQuantizationConfig(
+*Available as of v1.2.0*
- compression=models.CompressionRatio.X32,
- always_ram=True,
- ),
+Product quantization is a method of compressing vectors to minimize their memory usage by dividing them into
- ),
+chunks and quantizing each segment individually.
- on_disk=True,
+Each chunk is approximated by a centroid index that represents the original vector component.
- ),
+The positions of the centroids are determined with a clustering algorithm such as k-means.
- },
+For now, Qdrant uses only 256 centroids, so each centroid index can be represented by a single byte.
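+
+The following simplified sketch shows where the compression comes from; it uses random codebooks instead of k-means-fitted ones and is not Qdrant's internal implementation.
+
+```python
+# Illustrative sketch only: product-quantize one vector with random codebooks
+# (a real implementation fits the centroids with k-means).
+import numpy as np
+
+rng = np.random.default_rng(7)
+dim, chunks, centroids_per_chunk = 1024, 128, 256
+chunk_dim = dim // chunks  # 8 components per chunk
+
+vector = rng.normal(size=dim).astype(np.float32)
+codebooks = rng.normal(size=(chunks, centroids_per_chunk, chunk_dim)).astype(np.float32)
+
+codes = np.empty(chunks, dtype=np.uint8)
+for i, chunk in enumerate(vector.reshape(chunks, chunk_dim)):
+    distances = np.linalg.norm(codebooks[i] - chunk, axis=1)
+    codes[i] = np.argmin(distances)  # one byte per chunk
+
+print(vector.nbytes, codes.nbytes)  # 4096 bytes vs 128 bytes: 32x smaller
+```
+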
- hnsw_config=models.HnswConfigDiff(
- ef_construct=123,
- ),
+Product quantization can compress vectors by a larger factor than scalar quantization.
- quantization_config=models.ScalarQuantization(
+But there are some tradeoffs. Product quantization distance calculations are not SIMD-friendly, so it is slower than scalar quantization.
- scalar=models.ScalarQuantizationConfig(
-
- type=models.ScalarType.INT8,
-
- quantile=0.8,
+Also, product quantization introduces a loss of accuracy, so it is recommended to use it only for high-dimensional vectors.
- always_ram=False,
- ),
- ),
+Please refer to the [Quantization Tips](#quantization-tips) section for more information on how to optimize the quantization parameters for your use case.
-)
-```
+## How to choose the right quantization method
-```typescript
-client.updateCollection(""{collection_name}"", {
+Here is a brief table of the pros and cons of each quantization method:
- vectors: {
- my_vector: {
- hnsw_config: {
+| Quantization method | Accuracy | Speed | Compression |
- m: 32,
+|---------------------|----------|--------------|-------------|
- ef_construct: 123,
+| Scalar | 0.99 | up to x2 | 4 |
- },
+| Product | 0.7 | 0.5 | up to 64 |
- quantization_config: {
+| Binary | 0.95* | up to x40 | 32 |
- product: {
- compression: ""x32"",
- always_ram: true,
+`*` - for compatible models
- },
- },
- on_disk: true,
+- **Binary Quantization** is the fastest method and the most memory-efficient, but it requires a centered distribution of vector components. It is recommended for use with tested models only.
- },
+- **Scalar Quantization** is the most universal method, as it provides a good balance between accuracy, speed, and compression. It is recommended as the default quantization if binary quantization is not applicable.
- },
+- **Product Quantization** may provide a better compression ratio, but it has a significant loss of accuracy and is slower than scalar quantization. It is recommended if the memory footprint is the top priority and the search speed is not critical.
- hnsw_config: {
- ef_construct: 123,
- },
+## Setting up Quantization in Qdrant
- quantization_config: {
- scalar: {
- type: ""int8"",
+You can configure quantization for a collection by specifying the quantization parameters in the `quantization_config` section of the collection configuration.
- quantile: 0.8,
- always_ram: true,
- },
+Quantization will be automatically applied to all vectors during the indexation process.
- },
+Quantized vectors are stored alongside the original vectors in the collection, so you will still have access to the original vectors if you need them.
-});
-```
+*Available as of v1.1.1*
-```rust
-use qdrant_client::client::QdrantClient;
+The `quantization_config` can also be set on a per-vector basis by specifying it in a named vector.
-use qdrant_client::qdrant::{
- quantization_config_diff::Quantization, vectors_config_diff::Config, HnswConfigDiff,
- QuantizationConfigDiff, QuantizationType, ScalarQuantization, VectorParamsDiff,
+### Setting up Scalar Quantization
- VectorsConfigDiff,
-};
+To enable scalar quantization, you need to specify the quantization parameters in the `quantization_config` section of the collection configuration.
-client
- .update_collection(
+```http
- ""{collection_name}"",
+PUT /collections/{collection_name}
- None,
+{
- None,
+ ""vectors"": {
- None,
+ ""size"": 768,
- Some(&HnswConfigDiff {
+ ""distance"": ""Cosine""
- ef_construct: Some(123),
+ },
- ..Default::default()
+ ""quantization_config"": {
- }),
+ ""scalar"": {
- Some(&VectorsConfigDiff {
+ ""type"": ""int8"",
- config: Some(Config::ParamsMap(
+ ""quantile"": 0.99,
- qdrant_client::qdrant::VectorParamsDiffMap {
+ ""always_ram"": true
- map: HashMap::from([(
+ }
- (""my_vector"".into()),
+ }
- VectorParamsDiff {
+}
- hnsw_config: Some(HnswConfigDiff {
+```
- m: Some(32),
- ef_construct: Some(123),
- ..Default::default()
+```python
- }),
+from qdrant_client import QdrantClient, models
- ..Default::default()
- },
- )]),
+client = QdrantClient(url=""http://localhost:6333"")
- },
- )),
- }),
+client.create_collection(
- Some(&QuantizationConfigDiff {
+ collection_name=""{collection_name}"",
- quantization: Some(Quantization::Scalar(ScalarQuantization {
+ vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
- r#type: QuantizationType::Int8 as i32,
+ quantization_config=models.ScalarQuantization(
- quantile: Some(0.8),
+ scalar=models.ScalarQuantizationConfig(
- always_ram: Some(true),
+ type=models.ScalarType.INT8,
- ..Default::default()
+ quantile=0.99,
- })),
+ always_ram=True,
- }),
+ ),
- )
+ ),
- .await?;
+)
```
-```java
+```typescript
-import io.qdrant.client.grpc.Collections.HnswConfigDiff;
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-import io.qdrant.client.grpc.Collections.QuantizationConfigDiff;
-import io.qdrant.client.grpc.Collections.QuantizationType;
-import io.qdrant.client.grpc.Collections.ScalarQuantization;
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-import io.qdrant.client.grpc.Collections.UpdateCollection;
-import io.qdrant.client.grpc.Collections.VectorParamsDiff;
-import io.qdrant.client.grpc.Collections.VectorParamsDiffMap;
+client.createCollection(""{collection_name}"", {
-import io.qdrant.client.grpc.Collections.VectorsConfigDiff;
+ vectors: {
+ size: 768,
+ distance: ""Cosine"",
-client
+ },
- .updateCollectionAsync(
+ quantization_config: {
- UpdateCollection.newBuilder()
+ scalar: {
- .setCollectionName(""{collection_name}"")
+ type: ""int8"",
- .setHnswConfig(HnswConfigDiff.newBuilder().setEfConstruct(123).build())
+ quantile: 0.99,
- .setVectorsConfig(
+ always_ram: true,
- VectorsConfigDiff.newBuilder()
+ },
- .setParamsMap(
+ },
- VectorParamsDiffMap.newBuilder()
+});
- .putMap(
+```
- ""my_vector"",
- VectorParamsDiff.newBuilder()
- .setHnswConfig(
+```rust
- HnswConfigDiff.newBuilder()
+use qdrant_client::qdrant::{
- .setM(3)
+ CreateCollectionBuilder, Distance, QuantizationType, ScalarQuantizationBuilder,
- .setEfConstruct(123)
+ VectorParamsBuilder,
- .build())
+};
- .build())))
+use qdrant_client::Qdrant;
- .setQuantizationConfig(
- QuantizationConfigDiff.newBuilder()
- .setScalar(
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- ScalarQuantization.newBuilder()
- .setType(QuantizationType.Int8)
- .setQuantile(0.8f)
+client
- .setAlwaysRam(true)
+ .create_collection(
- .build()))
+ CreateCollectionBuilder::new(""{collection_name}"")
- .build())
+ .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
- .get();
+ .quantization_config(
-```
+ ScalarQuantizationBuilder::default()
+ .r#type(QuantizationType::Int8.into())
+ .quantile(0.99)
-```csharp
+ .always_ram(true),
-using Qdrant.Client;
+ ),
-using Qdrant.Client.Grpc;
+ )
+ .await?;
+```
-var client = new QdrantClient(""localhost"", 6334);
+```java
-await client.UpdateCollectionAsync(
+import io.qdrant.client.QdrantClient;
- collectionName: ""{collection_name}"",
+import io.qdrant.client.QdrantGrpcClient;
- hnswConfig: new HnswConfigDiff { EfConstruct = 123 },
+import io.qdrant.client.grpc.Collections.CreateCollection;
- vectorsConfig: new VectorParamsDiffMap
+import io.qdrant.client.grpc.Collections.Distance;
- {
+import io.qdrant.client.grpc.Collections.QuantizationConfig;
- Map =
+import io.qdrant.client.grpc.Collections.QuantizationType;
- {
+import io.qdrant.client.grpc.Collections.ScalarQuantization;
- {
+import io.qdrant.client.grpc.Collections.VectorParams;
- ""my_vector"",
+import io.qdrant.client.grpc.Collections.VectorsConfig;
- new VectorParamsDiff
- {
- HnswConfig = new HnswConfigDiff { M = 3, EfConstruct = 123 }
+QdrantClient client =
- }
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- }
- }
- },
+client
- quantizationConfig: new QuantizationConfigDiff
+ .createCollectionAsync(
- {
+ CreateCollection.newBuilder()
- Scalar = new ScalarQuantization
+ .setCollectionName(""{collection_name}"")
- {
+ .setVectorsConfig(
- Type = QuantizationType.Int8,
+ VectorsConfig.newBuilder()
- Quantile = 0.8f,
+ .setParams(
- AlwaysRam = true
+ VectorParams.newBuilder()
- }
+ .setSize(768)
- }
+ .setDistance(Distance.Cosine)
-);
+ .build())
-```
+ .build())
+ .setQuantizationConfig(
+ QuantizationConfig.newBuilder()
-## Collection info
+ .setScalar(
+ ScalarQuantization.newBuilder()
+ .setType(QuantizationType.Int8)
-Qdrant allows determining the configuration parameters of an existing collection to better understand how the points are
+ .setQuantile(0.99f)
-distributed and indexed.
+ .setAlwaysRam(true)
+ .build())
+ .build())
-```http
+ .build())
-GET /collections/test_collection1
+ .get();
```
-```bash
-
-curl -X GET http://localhost:6333/collections/test_collection1
+```csharp
-```
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-```python
-client.get_collection(collection_name=""{collection_name}"")
+var client = new QdrantClient(""localhost"", 6334);
-```
+await client.CreateCollectionAsync(
-```typescript
+ collectionName: ""{collection_name}"",
-client.getCollection(""{collection_name}"");
+ vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
-```
+ quantizationConfig: new QuantizationConfig
+ {
+ Scalar = new ScalarQuantization
-```rust
+ {
-client.collection_info(""{collection_name}"").await?;
+ Type = QuantizationType.Int8,
-```
+ Quantile = 0.99f,
+ AlwaysRam = true
+ }
-```java
+ }
-client.getCollectionInfoAsync(""{collection_name}"").get();
+);
```
-
+```go
-Expected result
+import (
+ ""context""
-```json
-{
+ ""github.com/qdrant/go-client/qdrant""
- ""result"": {
+)
- ""status"": ""green"",
- ""optimizer_status"": ""ok"",
- ""vectors_count"": 1068786,
+client, err := qdrant.NewClient(&qdrant.Config{
- ""indexed_vectors_count"": 1024232,
+ Host: ""localhost"",
- ""points_count"": 1068786,
+ Port: 6334,
- ""segments_count"": 31,
+})
- ""config"": {
- ""params"": {
- ""vectors"": {
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
- ""size"": 384,
+ CollectionName: ""{collection_name}"",
- ""distance"": ""Cosine""
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
- },
+ Size: 768,
- ""shard_number"": 1,
+ Distance: qdrant.Distance_Cosine,
- ""replication_factor"": 1,
+ }),
- ""write_consistency_factor"": 1,
+ QuantizationConfig: qdrant.NewQuantizationScalar(
- ""on_disk_payload"": false
+ &qdrant.ScalarQuantization{
- },
+ Type: qdrant.QuantizationType_Int8,
- ""hnsw_config"": {
+ Quantile: qdrant.PtrOf(float32(0.99)),
- ""m"": 16,
+ AlwaysRam: qdrant.PtrOf(true),
- ""ef_construct"": 100,
+ },
- ""full_scan_threshold"": 10000,
+ ),
- ""max_indexing_threads"": 0
+})
- },
+```
- ""optimizer_config"": {
- ""deleted_threshold"": 0.2,
- ""vacuum_min_vector_number"": 1000,
+There are 3 parameters that you can specify in the `quantization_config` section:
- ""default_segment_number"": 0,
- ""max_segment_size"": null,
- ""memmap_threshold"": null,
+`type` - the type of the quantized vector components. Currently, Qdrant supports only `int8`.
- ""indexing_threshold"": 20000,
- ""flush_interval_sec"": 5,
- ""max_optimization_threads"": 1
+`quantile` - the quantile of the quantized vector components.
- },
+The quantile is used to calculate the quantization bounds.
- ""wal_config"": {
+For instance, if you specify `0.99` as the quantile, 1% of extreme values will be excluded from the quantization bounds.
- ""wal_capacity_mb"": 32,
- ""wal_segments_ahead"": 0
- }
+Using quantiles lower than `1.0` might be useful if there are outliers in your vector components.
- },
+This parameter only affects the resulting precision and not the memory footprint.
- ""payload_schema"": {}
+It might be worth tuning this parameter if you experience a significant decrease in search quality.
- },
- ""status"": ""ok"",
- ""time"": 0.00010143
+`always_ram` - whether to keep quantized vectors always cached in RAM or not. By default, quantized vectors are loaded in the same way as the original vectors.
-}
+However, in some setups you might want to keep quantized vectors in RAM to speed up the search process.
-```
+In this case, you can set `always_ram` to `true` to store quantized vectors in RAM.
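+
+If you decide to tune the `quantile` later, you do not necessarily have to recreate the collection. As a rough sketch (assuming your client version exposes `quantization_config` in the collection update API), the Python client could apply a new scalar quantization configuration to an existing collection like this:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Re-quantize with a lower quantile to cut off more outliers.
+client.update_collection(
+    collection_name=""{collection_name}"",
+    quantization_config=models.ScalarQuantization(
+        scalar=models.ScalarQuantizationConfig(
+            type=models.ScalarType.INT8,
+            quantile=0.95,
+            always_ram=True,
+        ),
+    ),
+)
+```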
-
-
+### Setting up Binary Quantization
+To enable binary quantization, you need to specify the quantization parameters in the `quantization_config` section of the collection configuration.
-```csharp
+```http
-await client.GetCollectionInfoAsync(""{collection_name}"");
+PUT /collections/{collection_name}
-```
+{
+ ""vectors"": {
+ ""size"": 1536,
-If you insert the vectors into the collection, the `status` field may become
+ ""distance"": ""Cosine""
-`yellow` whilst it is optimizing. It will become `green` once all the points are
+ },
-successfully processed.
+ ""quantization_config"": {
+ ""binary"": {
+ ""always_ram"": true
-The following color statuses are possible:
+ }
+ }
+}
-- 🟢 `green`: collection is ready
+```
-- 🟡 `yellow`: collection is optimizing
-- 🔴 `red`: an error occurred which the engine could not recover from
+```python
+from qdrant_client import QdrantClient, models
-### Approximate point and vector counts
+client = QdrantClient(url=""http://localhost:6333"")
-You may be interested in the count attributes:
+client.create_collection(
-- `points_count` - total number of objects (vectors and their payloads) stored in the collection
+ collection_name=""{collection_name}"",
-- `vectors_count` - total number of vectors in a collection, useful if you have multiple vectors per point
+ vectors_config=models.VectorParams(size=1536, distance=models.Distance.COSINE),
-- `indexed_vectors_count` - total number of vectors stored in the HNSW or sparse index. Qdrant does not store all the vectors in the index, but only if an index segment might be created for a given configuration.
+ quantization_config=models.BinaryQuantization(
+ binary=models.BinaryQuantizationConfig(
+ always_ram=True,
-The above counts are not exact, but should be considered approximate. Depending
+ ),
-on how you use Qdrant these may give very different numbers than what you may
+ ),
-expect. It's therefore important **not** to rely on them.
+)
+```
-More specifically, these numbers represent the count of points and vectors in
-Qdrant's internal storage. Internally, Qdrant may temporarily duplicate points
+```typescript
-as part of automatic optimizations. It may keep changed or deleted points for a
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-bit. And it may delay indexing of new points. All of that is for optimization
-reasons.
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-Updates you do are therefore not directly reflected in these numbers. If you see
-a wildly different count of points, it will likely resolve itself once a new
+client.createCollection(""{collection_name}"", {
-round of automatic optimizations has completed.
+ vectors: {
+ size: 1536,
+ distance: ""Cosine"",
-To clarify: these numbers don't represent the exact amount of points or vectors
+ },
-you have inserted, nor does it represent the exact number of distinguishable
+ quantization_config: {
-points or vectors you can query. If you want to know exact counts, refer to the
+ binary: {
-[count API](../points/#counting-points).
+ always_ram: true,
+ },
+ },
-_Note: these numbers may be removed in a future version of Qdrant._
+});
+```
-### Indexing vectors in HNSW
+```rust
+use qdrant_client::qdrant::{
-In some cases, you might be surprised the value of `indexed_vectors_count` is lower than `vectors_count`. This is an intended behaviour and
+ BinaryQuantizationBuilder, CreateCollectionBuilder, Distance, VectorParamsBuilder,
-depends on the [optimizer configuration](../optimizer). A new index segment is built if the size of non-indexed vectors is higher than the
+};
-value of `indexing_threshold`(in kB). If your collection is very small or the dimensionality of the vectors is low, there might be no HNSW segment
+use qdrant_client::Qdrant;
-created and `indexed_vectors_count` might be equal to `0`.
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-It is possible to reduce the `indexing_threshold` for an existing collection by [updating collection parameters](#update-collection-parameters).
+client
-## Collection aliases
+ .create_collection(
+ CreateCollectionBuilder::new(""{collection_name}"")
+ .vectors_config(VectorParamsBuilder::new(1536, Distance::Cosine))
-In a production environment, it is sometimes necessary to switch different versions of vectors seamlessly.
+ .quantization_config(BinaryQuantizationBuilder::new(true)),
-For example, when upgrading to a new version of the neural network.
+ )
+ .await?;
+```
-There is no way to stop the service and rebuild the collection with new vectors in these situations.
-Aliases are additional names for existing collections.
-All queries to the collection can also be done identically, using an alias instead of the collection name.
+```java
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
-Thus, it is possible to build a second collection in the background and then switch alias from the old to the new collection.
+import io.qdrant.client.grpc.Collections.BinaryQuantization;
-Since all changes of aliases happen atomically, no concurrent requests will be affected during the switch.
+import io.qdrant.client.grpc.Collections.CreateCollection;
+import io.qdrant.client.grpc.Collections.Distance;
+import io.qdrant.client.grpc.Collections.QuantizationConfig;
-### Create alias
+import io.qdrant.client.grpc.Collections.VectorParams;
+import io.qdrant.client.grpc.Collections.VectorsConfig;
-```http
-POST /collections/aliases
+QdrantClient client =
-{
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- ""actions"": [
- {
- ""create_alias"": {
+client
- ""collection_name"": ""test_collection1"",
+ .createCollectionAsync(
- ""alias_name"": ""production_collection""
+ CreateCollection.newBuilder()
- }
+ .setCollectionName(""{collection_name}"")
- }
+ .setVectorsConfig(
- ]
+ VectorsConfig.newBuilder()
-}
+ .setParams(
-```
+ VectorParams.newBuilder()
+ .setSize(1536)
+ .setDistance(Distance.Cosine)
-```bash
+ .build())
-curl -X POST http://localhost:6333/collections/aliases \
+ .build())
- -H 'Content-Type: application/json' \
+ .setQuantizationConfig(
- --data-raw '{
+ QuantizationConfig.newBuilder()
- ""actions"": [
+ .setBinary(BinaryQuantization.newBuilder().setAlwaysRam(true).build())
- {
+ .build())
- ""create_alias"": {
+ .build())
- ""collection_name"": ""test_collection1"",
+ .get();
- ""alias_name"": ""production_collection""
+```
- }
- }
- ]
+```csharp
-}'
+using Qdrant.Client;
-```
+using Qdrant.Client.Grpc;
-```python
+var client = new QdrantClient(""localhost"", 6334);
-client.update_collection_aliases(
- change_aliases_operations=[
- models.CreateAliasOperation(
+await client.CreateCollectionAsync(
- create_alias=models.CreateAlias(
+ collectionName: ""{collection_name}"",
- collection_name=""example_collection"", alias_name=""production_collection""
+ vectorsConfig: new VectorParams { Size = 1536, Distance = Distance.Cosine },
- )
+ quantizationConfig: new QuantizationConfig
- )
+ {
- ]
+ Binary = new BinaryQuantization { AlwaysRam = true }
-)
+ }
+);
```
-```typescript
+```go
-client.updateCollectionAliases({
+import (
- actions: [
+ ""context""
- {
- create_alias: {
- collection_name: ""example_collection"",
+ ""github.com/qdrant/go-client/qdrant""
- alias_name: ""production_collection"",
+)
- },
- },
- ],
+client, err := qdrant.NewClient(&qdrant.Config{
-});
+ Host: ""localhost"",
-```
+ Port: 6334,
+})
-```rust
-client.create_alias(""example_collection"", ""production_collection"").await?;
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
-```
+ CollectionName: ""{collection_name}"",
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
+ Size: 1536,
-```java
-
-client.createAliasAsync(""production_collection"", ""example_collection"").get();
-
-```
+ Distance: qdrant.Distance_Cosine,
+ }),
+ QuantizationConfig: qdrant.NewQuantizationBinary(
-```csharp
+ &qdrant.BinaryQuantization{
-await client.CreateAliasAsync(aliasName: ""production_collection"", collectionName: ""example_collection"");
+ AlwaysRam: qdrant.PtrOf(true),
-```
+ },
+ ),
+})
-### Remove alias
+```
-```bash
+`always_ram` - whether to keep quantized vectors always cached in RAM or not. By default, quantized vectors are loaded in the same way as the original vectors.
-curl -X POST http://localhost:6333/collections/aliases \
+However, in some setups you might want to keep quantized vectors in RAM to speed up the search process.
- -H 'Content-Type: application/json' \
- --data-raw '{
- ""actions"": [
+In this case, you can set `always_ram` to `true` to store quantized vectors in RAM.
- {
- ""delete_alias"": {
- ""collection_name"": ""test_collection1"",
+### Setting up Product Quantization
- ""alias_name"": ""production_collection""
- }
- }
+To enable product quantization, you need to specify the quantization parameters in the `quantization_config` section of the collection configuration.
- ]
-}'
-```
+```http
+PUT /collections/{collection_name}
+{
-```http
+ ""vectors"": {
-POST /collections/aliases
+ ""size"": 768,
-{
+ ""distance"": ""Cosine""
- ""actions"": [
+ },
- {
+ ""quantization_config"": {
- ""delete_alias"": {
+ ""product"": {
- ""alias_name"": ""production_collection""
+ ""compression"": ""x16"",
- }
+ ""always_ram"": true
}
- ]
+ }
}
@@ -25759,423 +25334,457 @@ POST /collections/aliases
```python
-client.update_collection_aliases(
+from qdrant_client import QdrantClient, models
- change_aliases_operations=[
- models.DeleteAliasOperation(
- delete_alias=models.DeleteAlias(alias_name=""production_collection"")
+client = QdrantClient(url=""http://localhost:6333"")
- ),
- ]
-)
+client.create_collection(
-```
+ collection_name=""{collection_name}"",
+ vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+ quantization_config=models.ProductQuantization(
-```typescript
+ product=models.ProductQuantizationConfig(
-client.updateCollectionAliases({
+ compression=models.CompressionRatio.X16,
- actions: [
+ always_ram=True,
- {
+ ),
- delete_alias: {
+ ),
- alias_name: ""production_collection"",
+)
- },
+```
- },
- ],
-});
+```typescript
-```
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-```rust
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-client.delete_alias(""production_collection"").await?;
-```
+client.createCollection(""{collection_name}"", {
+ vectors: {
+ size: 768,
-```java
+ distance: ""Cosine"",
-client.deleteAliasAsync(""production_collection"").get();
+ },
-```
+ quantization_config: {
+ product: {
+ compression: ""x16"",
-```csharp
+ always_ram: true,
-await client.DeleteAliasAsync(""production_collection"");
+ },
+ },
+});
```
-### Switch collection
+```rust
+use qdrant_client::qdrant::{
+ CompressionRatio, CreateCollectionBuilder, Distance, ProductQuantizationBuilder,
-Multiple alias actions are performed atomically.
+ VectorParamsBuilder,
-For example, you can switch underlying collection with the following command:
+};
+use qdrant_client::Qdrant;
-```http
-POST /collections/aliases
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-{
- ""actions"": [
- {
+client
- ""delete_alias"": {
+ .create_collection(
- ""alias_name"": ""production_collection""
+ CreateCollectionBuilder::new(""{collection_name}"")
- }
+ .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
- },
+ .quantization_config(
- {
+ ProductQuantizationBuilder::new(CompressionRatio::X16.into()).always_ram(true),
- ""create_alias"": {
+ ),
- ""collection_name"": ""test_collection2"",
+ )
- ""alias_name"": ""production_collection""
+ .await?;
- }
+```
- }
- ]
-}
+```java
-```
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Collections.CompressionRatio;
-```bash
+import io.qdrant.client.grpc.Collections.CreateCollection;
-curl -X POST http://localhost:6333/collections/aliases \
+import io.qdrant.client.grpc.Collections.Distance;
- -H 'Content-Type: application/json' \
+import io.qdrant.client.grpc.Collections.ProductQuantization;
- --data-raw '{
+import io.qdrant.client.grpc.Collections.QuantizationConfig;
- ""actions"": [
+import io.qdrant.client.grpc.Collections.VectorParams;
- {
+import io.qdrant.client.grpc.Collections.VectorsConfig;
- ""delete_alias"": {
- ""alias_name"": ""production_collection""
- }
+QdrantClient client =
- },
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- {
- ""create_alias"": {
- ""collection_name"": ""test_collection2"",
+client
- ""alias_name"": ""production_collection""
+ .createCollectionAsync(
- }
+ CreateCollection.newBuilder()
- }
+ .setCollectionName(""{collection_name}"")
- ]
+ .setVectorsConfig(
-}'
+ VectorsConfig.newBuilder()
-```
+ .setParams(
+ VectorParams.newBuilder()
+ .setSize(768)
-```python
+ .setDistance(Distance.Cosine)
-client.update_collection_aliases(
+ .build())
- change_aliases_operations=[
+ .build())
- models.DeleteAliasOperation(
+ .setQuantizationConfig(
- delete_alias=models.DeleteAlias(alias_name=""production_collection"")
+ QuantizationConfig.newBuilder()
- ),
+ .setProduct(
- models.CreateAliasOperation(
+ ProductQuantization.newBuilder()
- create_alias=models.CreateAlias(
+ .setCompression(CompressionRatio.x16)
- collection_name=""example_collection"", alias_name=""production_collection""
+ .setAlwaysRam(true)
- )
+ .build())
- ),
+ .build())
- ]
+ .build())
-)
+ .get();
```
-```typescript
+```csharp
-client.updateCollectionAliases({
+using Qdrant.Client;
- actions: [
+using Qdrant.Client.Grpc;
- {
- delete_alias: {
- alias_name: ""production_collection"",
+var client = new QdrantClient(""localhost"", 6334);
- },
- },
- {
+await client.CreateCollectionAsync(
- create_alias: {
+ collectionName: ""{collection_name}"",
- collection_name: ""example_collection"",
+ vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
- alias_name: ""production_collection"",
+ quantizationConfig: new QuantizationConfig
- },
+ {
- },
+ Product = new ProductQuantization { Compression = CompressionRatio.X16, AlwaysRam = true }
- ],
+ }
-});
+);
```
-```rust
+```go
-client.delete_alias(""production_collection"").await?;
+import (
-client.create_alias(""example_collection"", ""production_collection"").await?;
+ ""context""
-```
+ ""github.com/qdrant/go-client/qdrant""
-```java
+)
-client.deleteAliasAsync(""production_collection"").get();
-client.createAliasAsync(""production_collection"", ""example_collection"").get();
-```
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
+ Port: 6334,
-```csharp
+})
-await client.DeleteAliasAsync(""production_collection"");
-await client.CreateAliasAsync(aliasName: ""production_collection"", collectionName: ""example_collection"");
-```
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
-### List collection aliases
+ CollectionName: ""{collection_name}"",
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
+ Size: 768,
-```http
+ Distance: qdrant.Distance_Cosine,
-GET /collections/test_collection2/aliases
+ }),
-```
+ QuantizationConfig: qdrant.NewQuantizationProduct(
+ &qdrant.ProductQuantization{
+ Compression: qdrant.CompressionRatio_x16,
-```bash
+ AlwaysRam: qdrant.PtrOf(true),
-curl -X GET http://localhost:6333/collections/test_collection2/aliases
+ },
+	),
+})
```
-```python
+There are two parameters that you can specify in the `quantization_config` section:
-from qdrant_client import QdrantClient
+`compression` - compression ratio.
-client = QdrantClient(""localhost"", port=6333)
+Compression ratio represents the size of the original vector in bytes divided by the size of the quantized vector in bytes.
+With `x16` compression, the quantized vector will be 16 times smaller than the original: for example, a 768-dimensional `float32` vector takes 3072 bytes, while its quantized representation takes roughly 192 bytes.
-client.get_collection_aliases(collection_name=""{collection_name}"")
-```
+`always_ram` - whether to keep quantized vectors always cached in RAM or not. By default, quantized vectors are loaded in the same way as the original vectors.
+However, in some setups you might want to keep quantized vectors in RAM to speed up the search process. Then set `always_ram` to `true`.
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+### Searching with Quantization
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+Once you have configured quantization for a collection, you don't need to do anything extra to search with quantization.
+Qdrant will automatically use quantized vectors if they are available.
-client.getCollectionAliases(""{collection_name}"");
-```
+However, there are a few options that you can use to control the search process:
-```rust
+```http
-use qdrant_client::client::QdrantClient;
+POST /collections/{collection_name}/points/query
+{
+ ""query"": [0.2, 0.1, 0.9, 0.7],
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ ""params"": {
+ ""quantization"": {
+ ""ignore"": false,
-client.list_collection_aliases(""{collection_name}"").await?;
+ ""rescore"": true,
-```
+ ""oversampling"": 2.0
+ }
+ },
-```java
+ ""limit"": 10
-import io.qdrant.client.QdrantClient;
+}
-import io.qdrant.client.QdrantGrpcClient;
+```
-QdrantClient client =
+```python
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+from qdrant_client import QdrantClient, models
-client.listCollectionAliasesAsync(""{collection_name}"").get();
+client = QdrantClient(url=""http://localhost:6333"")
-```
+client.query_points(
-```csharp
+ collection_name=""{collection_name}"",
-using Qdrant.Client;
+ query=[0.2, 0.1, 0.9, 0.7],
+ search_params=models.SearchParams(
+ quantization=models.QuantizationSearchParams(
+ ignore=False,
-var client = new QdrantClient(""localhost"", 6334);
+ rescore=True,
+ oversampling=2.0,
+ )
-await client.ListCollectionAliasesAsync(""{collection_name}"");
+ ),
+)
```
-### List all aliases
+```typescript
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-```http
-GET /aliases
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-```
+client.query(""{collection_name}"", {
-```bash
+ query: [0.2, 0.1, 0.9, 0.7],
-curl -X GET http://localhost:6333/aliases
+ params: {
-```
+ quantization: {
+ ignore: false,
+ rescore: true,
+ oversampling: 2.0,
+ },
-```python
+ },
-from qdrant_client import QdrantClient
+ limit: 10,
+});
+```
-client = QdrantClient(""localhost"", port=6333)
+```rust
-client.get_aliases()
+use qdrant_client::qdrant::{
-```
+ QuantizationSearchParamsBuilder, QueryPointsBuilder, SearchParamsBuilder,
+};
+use qdrant_client::Qdrant;
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+client
+ .query(
+ QueryPointsBuilder::new(""{collection_name}"")
-client.getAliases();
+ .query(vec![0.2, 0.1, 0.9, 0.7])
-```
+ .limit(10)
+ .params(
+ SearchParamsBuilder::default().quantization(
-```rust
+ QuantizationSearchParamsBuilder::default()
-use qdrant_client::client::QdrantClient;
+ .ignore(false)
+ .rescore(true)
+ .oversampling(2.0),
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ ),
+ ),
+ )
-client.list_aliases().await?;
+ .await?;
```
@@ -26187,203 +25796,217 @@ import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Points.QuantizationSearchParams;
+import io.qdrant.client.grpc.Points.QueryPoints;
-QdrantClient client =
-
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-
-
+import io.qdrant.client.grpc.Points.SearchParams;
-client.listAliasesAsync().get();
-```
+import static io.qdrant.client.QueryFactory.nearest;
-```csharp
-using Qdrant.Client;
+QdrantClient client =
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-var client = new QdrantClient(""localhost"", 6334);
+client.queryAsync(
+ QueryPoints.newBuilder()
-await client.ListAliasesAsync();
+ .setCollectionName(""{collection_name}"")
-```
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
+ .setParams(
+ SearchParams.newBuilder()
-### List all collections
+ .setQuantization(
+ QuantizationSearchParams.newBuilder()
+ .setIgnore(false)
-```http
+ .setRescore(true)
-GET /collections
+ .setOversampling(2.0)
-```
+ .build())
+ .build())
+ .setLimit(10)
-```bash
+ .build())
-curl -X GET http://localhost:6333/collections
+ .get();
```
+```csharp
+using Qdrant.Client;
-```python
-
-from qdrant_client import QdrantClient
-
-
-
-client = QdrantClient(""localhost"", port=6333)
-
+using Qdrant.Client.Grpc;
-client.get_collections()
-```
+var client = new QdrantClient(""localhost"", 6334);
-```typescript
+await client.QueryAsync(
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+ collectionName: ""{collection_name}"",
+ query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
+ searchParams: new SearchParams
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ {
+ Quantization = new QuantizationSearchParams
+ {
-client.getCollections();
+ Ignore = false,
-```
+ Rescore = true,
+ Oversampling = 2.0
+ }
-```rust
+ },
-use qdrant_client::client::QdrantClient;
+ limit: 10
+);
+```
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+```go
-client.list_collections().await?;
+import (
-```
+ ""context""
-```java
+ ""github.com/qdrant/go-client/qdrant""
-import io.qdrant.client.QdrantClient;
+)
-import io.qdrant.client.QdrantGrpcClient;
+client, err := qdrant.NewClient(&qdrant.Config{
-QdrantClient client =
+ Host: ""localhost"",
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ Port: 6334,
+})
-client.listCollectionsAsync().get();
-```
+client.Query(context.Background(), &qdrant.QueryPoints{
+ CollectionName: ""{collection_name}"",
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
-```csharp
+ Params: &qdrant.SearchParams{
-using Qdrant.Client;
+ Quantization: &qdrant.QuantizationSearchParams{
+ Ignore: qdrant.PtrOf(false),
+ Rescore: qdrant.PtrOf(true),
-var client = new QdrantClient(""localhost"", 6334);
+ Oversampling: qdrant.PtrOf(2.0),
+ },
+ },
-await client.ListCollectionsAsync();
+})
```
-",documentation/concepts/collections.md
-"---
-title: Indexing
-weight: 90
-aliases:
+`ignore` - Toggle whether to ignore quantized vectors during the search process. By default, Qdrant will use quantized vectors if they are available.
- - ../indexing
----
+`rescore` - Having the original vectors available, Qdrant can re-evaluate top-k search results using the original vectors.
+This can improve the search quality, but may slightly decrease the search speed, compared to the search without rescore.
-# Indexing
+It is recommended to disable rescore only if the original vectors are stored on a slow storage (e.g. HDD or network storage).
+By default, rescore is enabled.
-A key feature of Qdrant is the effective combination of vector and traditional indexes. It is essential to have this because for vector search to work effectively with filters, having vector index only is not enough. In simpler terms, a vector index speeds up vector search, and payload indexes speed up filtering.
+**Available as of v1.3.0**
-The indexes in the segments exist independently, but the parameters of the indexes themselves are configured for the whole collection.
+`oversampling` - Defines how many extra vectors should be pre-selected using quantized index, and then re-scored using original vectors.
+For example, if oversampling is 2.4 and limit is 100, then 240 vectors will be pre-selected using quantized index, and then top-100 will be returned after re-scoring.
-Not all segments automatically have indexes.
+Oversampling is useful if you want to tune the tradeoff between search speed and search quality at query time.
-Their necessity is determined by the [optimizer](../optimizer) settings and depends, as a rule, on the number of stored points.
+## Quantization tips
-## Payload Index
+#### Accuracy tuning
-Payload index in Qdrant is similar to the index in conventional document-oriented databases.
-This index is built for a specific field and type, and is used for quick point requests by the corresponding filtering condition.
+In this section, we will discuss how to tune the search precision.
+The fastest way to understand the impact of quantization on the search quality is to compare the search results with and without quantization.
-The index is also used to accurately estimate the filter cardinality, which helps the [query planning](../search#query-planning) choose a search strategy.
+In order to disable quantization, you can set `ignore` to `true` in the search request:
-Creating an index requires additional computational resources and memory, so choosing fields to be indexed is essential. Qdrant does not make this choice but grants it to the user.
+```http
-To mark a field as indexable, you can use the following:
+POST /collections/{collection_name}/points/query
+{
+ ""query"": [0.2, 0.1, 0.9, 0.7],
-```http
+ ""params"": {
-PUT /collections/{collection_name}/index
+ ""quantization"": {
-{
+ ""ignore"": true
- ""field_name"": ""name_of_the_field_to_index"",
+ }
- ""field_schema"": ""keyword""
+ },
+ ""limit"": 10
}
@@ -26393,21 +26016,29 @@ PUT /collections/{collection_name}/index
```python
-from qdrant_client import QdrantClient
+from qdrant_client import QdrantClient, models
-client = QdrantClient(host=""localhost"", port=6333)
+client = QdrantClient(url=""http://localhost:6333"")
-client.create_payload_index(
+client.query_points(
collection_name=""{collection_name}"",
- field_name=""name_of_the_field_to_index"",
+ query=[0.2, 0.1, 0.9, 0.7],
- field_schema=""keyword"",
+    search_params=models.SearchParams(
+        quantization=models.QuantizationSearchParams(
+            ignore=True,
+        )
+    ),
)
@@ -26425,11 +26056,19 @@ const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-client.createPayloadIndex(""{collection_name}"", {
+client.query(""{collection_name}"", {
- field_name: ""name_of_the_field_to_index"",
+ query: [0.2, 0.1, 0.9, 0.7],
- field_schema: ""keyword"",
+  params: {
+    quantization: {
+      ignore: true,
+    },
+  },
});
@@ -26439,27 +26078,37 @@ client.createPayloadIndex(""{collection_name}"", {
```rust
-use qdrant_client::{client::QdrantClient, qdrant::FieldType};
+use qdrant_client::qdrant::{
+    QuantizationSearchParamsBuilder, QueryPointsBuilder, SearchParamsBuilder,
+};
+use qdrant_client::Qdrant;
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
client
- .create_field_index(
+ .query(
- ""{collection_name}"",
+ QueryPointsBuilder::new(""{collection_name}"")
- ""name_of_the_field_to_index"",
+ .query(vec![0.2, 0.1, 0.9, 0.7])
- FieldType::Keyword,
+ .limit(3)
- None,
+ .params(
- None,
+                SearchParamsBuilder::default()
+                    .quantization(QuantizationSearchParamsBuilder::default().ignore(true)),
+            ),
)
@@ -26475,755 +26124,797 @@ import io.qdrant.client.QdrantClient;
import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Collections.PayloadSchemaType;
+import io.qdrant.client.grpc.Points.QuantizationSearchParams;
+import io.qdrant.client.grpc.Points.QueryPoints;
+import io.qdrant.client.grpc.Points.SearchParams;
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+import static io.qdrant.client.QueryFactory.nearest;
-client
- .createPayloadIndexAsync(
+QdrantClient client =
- ""{collection_name}"",
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- ""name_of_the_field_to_index"",
- PayloadSchemaType.Keyword,
- null,
+client.queryAsync(
- null,
+ QueryPoints.newBuilder()
- null,
+ .setCollectionName(""{collection_name}"")
- null)
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
- .get();
+ .setParams(
-```
+ SearchParams.newBuilder()
+ .setQuantization(
+ QuantizationSearchParams.newBuilder().setIgnore(true).build())
-```csharp
+ .build())
-using Qdrant.Client;
+ .setLimit(10)
+ .build())
+ .get();
-var client = new QdrantClient(""localhost"", 6334);
+```
-await client.CreatePayloadIndexAsync(collectionName: ""{collection_name}"", fieldName: ""name_of_the_field_to_index"");
+```csharp
-```
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-Available field types are:
+var client = new QdrantClient(""localhost"", 6334);
-* `keyword` - for [keyword](../payload/#keyword) payload, affects [Match](../filtering/#match) filtering conditions.
-* `integer` - for [integer](../payload/#integer) payload, affects [Match](../filtering/#match) and [Range](../filtering/#range) filtering conditions.
+await client.QueryAsync(
-* `float` - for [float](../payload/#float) payload, affects [Range](../filtering/#range) filtering conditions.
+ collectionName: ""{collection_name}"",
-* `bool` - for [bool](../payload/#bool) payload, affects [Match](../filtering/#match) filtering conditions (available as of 1.4.0).
+ query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
-* `geo` - for [geo](../payload/#geo) payload, affects [Geo Bounding Box](../filtering/#geo-bounding-box) and [Geo Radius](../filtering/#geo-radius) filtering conditions.
+ searchParams: new SearchParams
-* `text` - a special kind of index, available for [keyword](../payload/#keyword) / string payloads, affects [Full Text search](../filtering/#full-text-match) filtering conditions.
+ {
+ Quantization = new QuantizationSearchParams { Ignore = true }
+ },
-Payload index may occupy some additional memory, so it is recommended to only use index for those fields that are used in filtering conditions.
+ limit: 10
-If you need to filter by many fields and the memory limits does not allow to index all of them, it is recommended to choose the field that limits the search result the most.
+);
-As a rule, the more different values a payload value has, the more efficiently the index will be used.
+```
-### Full-text index
+```go
+import (
+ ""context""
-*Available as of v0.10.0*
+ ""github.com/qdrant/go-client/qdrant""
-Qdrant supports full-text search for string payload.
+)
-Full-text index allows you to filter points by the presence of a word or a phrase in the payload field.
+client, err := qdrant.NewClient(&qdrant.Config{
-Full-text index configuration is a bit more complex than other indexes, as you can specify the tokenization parameters.
+ Host: ""localhost"",
-Tokenization is the process of splitting a string into tokens, which are then indexed in the inverted index.
+ Port: 6334,
+})
-To create a full-text index, you can use the following:
+client.Query(context.Background(), &qdrant.QueryPoints{
+ CollectionName: ""{collection_name}"",
-```http
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
-PUT /collections/{collection_name}/index
+ Params: &qdrant.SearchParams{
-{
+ Quantization: &qdrant.QuantizationSearchParams{
- ""field_name"": ""name_of_the_field_to_index"",
+ Ignore: qdrant.PtrOf(false),
- ""field_schema"": {
+ },
- ""type"": ""text"",
+ },
- ""tokenizer"": ""word"",
+})
- ""min_token_len"": 2,
+```
- ""max_token_len"": 20,
- ""lowercase"": true
- }
+- **Adjust the quantile parameter**: The quantile parameter in scalar quantization determines the quantization bounds.
-}
+By setting it to a value lower than 1.0, you can exclude extreme values (outliers) from the quantization bounds.
-```
+For example, if you set the quantile to 0.99, 1% of the extreme values will be excluded.
+By adjusting the quantile, you can find an optimal value that provides the best search quality for your collection.
-```python
-from qdrant_client import QdrantClient
+- **Enable rescore**: Having the original vectors available, Qdrant can re-evaluate top-k search results using the original vectors. On large collections, this can improve the search quality, with just minor performance impact.
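+
+To quantify the impact, you can compare the overlap between results returned with and without quantization. The snippet below is a rough sketch (the query vector and collection name are placeholders) that estimates precision@10 this way with the Python client:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+query = [0.2, 0.1, 0.9, 0.7]  # example query vector
+limit = 10
+
+# Full-precision results: quantized vectors are ignored for this request.
+exact = client.query_points(
+    collection_name=""{collection_name}"",
+    query=query,
+    limit=limit,
+    search_params=models.SearchParams(
+        quantization=models.QuantizationSearchParams(ignore=True)
+    ),
+).points
+
+# Default results: quantized vectors are used if available.
+quantized = client.query_points(
+    collection_name=""{collection_name}"",
+    query=query,
+    limit=limit,
+).points
+
+overlap = len({p.id for p in exact} & {p.id for p in quantized})
+print(f""precision@{limit}: {overlap / limit:.2f}"")
+```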
-from qdrant_client.http import models
+#### Memory and speed tuning
-client = QdrantClient(host=""localhost"", port=6333)
+In this section, we will discuss how to tune the memory and speed of the search process with quantization.
-client.create_payload_index(
- collection_name=""{collection_name}"",
- field_name=""name_of_the_field_to_index"",
+There are three possible modes for storing vectors within a Qdrant collection:
- field_schema=models.TextIndexParams(
- type=""text"",
- tokenizer=models.TokenizerType.WORD,
+- **All in RAM** - all vectors, original and quantized, are loaded and kept in RAM. This is the fastest mode, but requires a lot of RAM. Enabled by default.
- min_token_len=2,
- max_token_len=15,
- lowercase=True,
+- **Original on Disk, quantized in RAM** - a hybrid mode that offers a good balance between speed and memory usage. This is the recommended scenario if you are aiming to shrink the memory footprint while keeping search speed high.
- ),
-)
-```
+This mode is enabled by setting `always_ram` to `true` in the quantization config while using memmap storage:
-```typescript
+```http
-import { QdrantClient, Schemas } from ""@qdrant/js-client-rest"";
+PUT /collections/{collection_name}
+{
+ ""vectors"": {
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ ""size"": 768,
+ ""distance"": ""Cosine"",
+ ""on_disk"": true
-client.createPayloadIndex(""{collection_name}"", {
+ },
- field_name: ""name_of_the_field_to_index"",
+ ""quantization_config"": {
- field_schema: {
+ ""scalar"": {
- type: ""text"",
+ ""type"": ""int8"",
- tokenizer: ""word"",
+ ""always_ram"": true
- min_token_len: 2,
+ }
- max_token_len: 15,
+ }
- lowercase: true,
+}
- },
+```
-});
-```
+```python
+from qdrant_client import QdrantClient, models
-```rust
-use qdrant_client::{
- client::QdrantClient,
+client = QdrantClient(url=""http://localhost:6333"")
- qdrant::{
- payload_index_params::IndexParams, FieldType, PayloadIndexParams, TextIndexParams,
- TokenizerType,
+client.create_collection(
- },
+ collection_name=""{collection_name}"",
-};
+ vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE, on_disk=True),
+ quantization_config=models.ScalarQuantization(
+ scalar=models.ScalarQuantizationConfig(
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ type=models.ScalarType.INT8,
+ always_ram=True,
+ ),
-client
+ ),
- .create_field_index(
+)
- ""{collection_name}"",
+```
- ""name_of_the_field_to_index"",
- FieldType::Text,
- Some(&PayloadIndexParams {
+```typescript
- index_params: Some(IndexParams::TextIndexParams(TextIndexParams {
+import { QdrantClient } from ""@qdrant/js-client-rest"";
- tokenizer: TokenizerType::Word as i32,
- min_token_len: Some(2),
- max_token_len: Some(10),
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
- lowercase: Some(true),
- })),
- }),
+client.createCollection(""{collection_name}"", {
- None,
+ vectors: {
- )
+ size: 768,
- .await?;
+ distance: ""Cosine"",
-```
+ on_disk: true,
+ },
+ quantization_config: {
-```java
+ scalar: {
-import io.qdrant.client.QdrantClient;
+ type: ""int8"",
-import io.qdrant.client.QdrantGrpcClient;
+ always_ram: true,
-import io.qdrant.client.grpc.Collections.PayloadIndexParams;
+ },
-import io.qdrant.client.grpc.Collections.PayloadSchemaType;
+ },
-import io.qdrant.client.grpc.Collections.TextIndexParams;
+});
-import io.qdrant.client.grpc.Collections.TokenizerType;
+```
-QdrantClient client =
+```rust
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+use qdrant_client::qdrant::{
+ CreateCollectionBuilder, Distance, QuantizationType, ScalarQuantizationBuilder,
+ VectorParamsBuilder,
-client
+};
- .createPayloadIndexAsync(
+use qdrant_client::Qdrant;
- ""{collection_name}"",
- ""name_of_the_field_to_index"",
- PayloadSchemaType.Text,
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- PayloadIndexParams.newBuilder()
- .setTextIndexParams(
- TextIndexParams.newBuilder()
+client
- .setTokenizer(TokenizerType.Word)
+ .create_collection(
- .setMinTokenLen(2)
+ CreateCollectionBuilder::new(""{collection_name}"")
- .setMaxTokenLen(10)
+ .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine).on_disk(true))
- .setLowercase(true)
+ .quantization_config(
- .build())
+ ScalarQuantizationBuilder::default()
- .build(),
+ .r#type(QuantizationType::Int8.into())
- null,
+ .always_ram(true),
- null,
+ ),
- null)
+ )
- .get();
+ .await?;
```
-```csharp
+```java
-using Qdrant.Client;
+import io.qdrant.client.QdrantClient;
-using Qdrant.Client.Grpc;
+import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Collections.CreateCollection;
+import io.qdrant.client.grpc.Collections.Distance;
-var client = new QdrantClient(""localhost"", 6334);
+import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
+import io.qdrant.client.grpc.Collections.QuantizationConfig;
+import io.qdrant.client.grpc.Collections.QuantizationType;
-await client.CreatePayloadIndexAsync(
+import io.qdrant.client.grpc.Collections.ScalarQuantization;
- collectionName: ""{collection_name}"",
+import io.qdrant.client.grpc.Collections.VectorParams;
- fieldName: ""name_of_the_field_to_index"",
+import io.qdrant.client.grpc.Collections.VectorsConfig;
- schemaType: PayloadSchemaType.Text,
- indexParams: new PayloadIndexParams
- {
+QdrantClient client =
- TextIndexParams = new TextIndexParams
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- {
- Tokenizer = TokenizerType.Word,
- MinTokenLen = 2,
+client
- MaxTokenLen = 10,
+ .createCollectionAsync(
- Lowercase = true
+ CreateCollection.newBuilder()
- }
+ .setCollectionName(""{collection_name}"")
- }
+ .setVectorsConfig(
-);
+ VectorsConfig.newBuilder()
-```
+ .setParams(
+ VectorParams.newBuilder()
+ .setSize(768)
-Available tokenizers are:
+ .setDistance(Distance.Cosine)
+ .setOnDisk(true)
+ .build())
-* `word` - splits the string into words, separated by spaces, punctuation marks, and special characters.
+ .build())
-* `whitespace` - splits the string into words, separated by spaces.
+ .setQuantizationConfig(
-* `prefix` - splits the string into words, separated by spaces, punctuation marks, and special characters, and then creates a prefix index for each word. For example: `hello` will be indexed as `h`, `he`, `hel`, `hell`, `hello`.
+ QuantizationConfig.newBuilder()
-* `multilingual` - special type of tokenizer based on [charabia](https://github.com/meilisearch/charabia) package. It allows proper tokenization and lemmatization for multiple languages, including those with non-latin alphabets and non-space delimiters. See [charabia documentation](https://github.com/meilisearch/charabia) for full list of supported languages supported normalization options. In the default build configuration, qdrant does not include support for all languages, due to the increasing size of the resulting binary. Chinese, Japanese and Korean languages are not enabled by default, but can be enabled by building qdrant from source with `--features multiling-chinese,multiling-japanese,multiling-korean` flags.
+ .setScalar(
+ ScalarQuantization.newBuilder()
+ .setType(QuantizationType.Int8)
-See [Full Text match](../filtering/#full-text-match) for examples of querying with full-text index.
+ .setAlwaysRam(true)
+ .build())
+ .build())
-## Vector Index
+ .build())
+ .get();
+```
-A vector index is a data structure built on vectors through a specific mathematical model.
-Through the vector index, we can efficiently query several vectors similar to the target vector.
+```csharp
+using Qdrant.Client;
-Qdrant currently only uses HNSW as a dense vector index.
+using Qdrant.Client.Grpc;
-[HNSW](https://arxiv.org/abs/1603.09320) (Hierarchical Navigable Small World Graph) is a graph-based indexing algorithm. It builds a multi-layer navigation structure for an image according to certain rules. In this structure, the upper layers are more sparse and the distances between nodes are farther. The lower layers are denser and the distances between nodes are closer. The search starts from the uppermost layer, finds the node closest to the target in this layer, and then enters the next layer to begin another search. After multiple iterations, it can quickly approach the target position.
+var client = new QdrantClient(""localhost"", 6334);
-In order to improve performance, HNSW limits the maximum degree of nodes on each layer of the graph to `m`. In addition, you can use `ef_construct` (when building index) or `ef` (when searching targets) to specify a search range.
+await client.CreateCollectionAsync(
+ collectionName: ""{collection_name}"",
+ vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine, OnDisk = true},
-The corresponding parameters could be configured in the configuration file:
+ quantizationConfig: new QuantizationConfig
+ {
+ Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true }
-```yaml
+ }
-storage:
+);
- # Default parameters of HNSW Index. Could be overridden for each collection or named vector individually
+```
- hnsw_index:
- # Number of edges per node in the index graph.
- # Larger the value - more accurate the search, more space required.
+```go
- m: 16
+import (
- # Number of neighbours to consider during the index building.
+ ""context""
- # Larger the value - more accurate the search, more time required to build index.
- ef_construct: 100
- # Minimal size (in KiloBytes) of vectors for additional payload-based indexing.
+ ""github.com/qdrant/go-client/qdrant""
- # If payload chunk is smaller than `full_scan_threshold_kb` additional indexing won't be used -
+)
- # in this case full-scan search should be preferred by query planner and additional indexing is not required.
- # Note: 1Kb = 1 vector of size 256
- full_scan_threshold: 10000
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
+ Port: 6334,
-```
+})
-And so in the process of creating a [collection](../collections). The `ef` parameter is configured during [the search](../search) and by default is equal to `ef_construct`.
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
+ CollectionName: ""{collection_name}"",
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
-HNSW is chosen for several reasons.
+ Size: 768,
-First, HNSW is well-compatible with the modification that allows Qdrant to use filters during a search.
+ Distance: qdrant.Distance_Cosine,
-Second, it is one of the most accurate and fastest algorithms, according to [public benchmarks](https://github.com/erikbern/ann-benchmarks).
+ OnDisk: qdrant.PtrOf(true),
+ }),
+ QuantizationConfig: qdrant.NewQuantizationScalar(
-*Available as of v1.1.1*
+ &qdrant.ScalarQuantization{
+ Type: qdrant.QuantizationType_Int8,
+ AlwaysRam: qdrant.PtrOf(true),
-The HNSW parameters can also be configured on a collection and named vector
+ },
-level by setting [`hnsw_config`](../indexing/#vector-index) to fine-tune search
+ ),
-performance.
+})
+```
-## Sparse Vector Index
+In this scenario, the number of disk reads may play a significant role in the search speed.
+In a system with high disk latency, the re-scoring step may become a bottleneck.
-*Available as of v1.7.0*
+Consider disabling `rescore` to improve the search speed:
-### Key Features of Sparse Vector Index
-- **Support for Sparse Vectors:** Qdrant supports sparse vectors, characterized by a high proportion of zeroes.
-- **Efficient Indexing:** Utilizes an inverted index structure to store vectors for each non-zero dimension, optimizing memory and search speed.
+```http
+POST /collections/{collection_name}/points/query
+{
-### Search Mechanism
+ ""query"": [0.2, 0.1, 0.9, 0.7],
-- **Index Usage:** The index identifies vectors with non-zero values in query dimensions during a search.
+ ""params"": {
-- **Scoring Method:** Vectors are scored using the dot product.
+ ""quantization"": {
+ ""rescore"": false
+ }
-### Optimizations
+ },
-- **Reducing Vectors to Score:** Implementations are in place to minimize the number of vectors scored, especially for dimensions with numerous vectors.
+ ""limit"": 10
+}
+```
-### Filtering and Configuration
-- **Filtering Support:** Similar to dense vectors, supports filtering by payload fields.
-- **`full_scan_threshold` Configuration:** Allows control over when to switch search from the payload index to minimize scoring vectors.
+```python
-- **Threshold for Sparse Vectors:** Specifies the threshold in terms of the number of matching vectors found by the query planner.
+from qdrant_client import QdrantClient, models
-### Index Storage and Management
+client = QdrantClient(url=""http://localhost:6333"")
-- **Memory-Based Index:** The index resides in memory for appendable segments, ensuring fast search and update operations.
-- **Handling Immutable Segments:** For immutable segments, the sparse index can either stay in memory or be mapped to disk with the `on_disk` flag.
+client.query_points(
+ collection_name=""{collection_name}"",
-**Example Configuration:** To enable on-disk storage for immutable segments and full scan for queries inspecting less than 5000 vectors:
+ query=[0.2, 0.1, 0.9, 0.7],
+ search_params=models.SearchParams(
+ quantization=models.QuantizationSearchParams(rescore=False)
-```http
+ ),
-PUT /collections/{collection_name}
+)
-{
+```
- ""sparse_vectors"": {
- ""text"": {
- ""index"": {
+```typescript
- ""on_disk"": true,
+import { QdrantClient } from ""@qdrant/js-client-rest"";
- ""full_scan_threshold"": 5000
- }
- },
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
- }
-}
-```
+client.query(""{collection_name}"", {
+ query: [0.2, 0.1, 0.9, 0.7],
+ params: {
+ quantization: {
+ rescore: false,
-## Filtrable Index
+ },
+ },
+});
-Separately, payload index and vector index cannot solve the problem of search using the filter completely.
+```
-In the case of weak filters, you can use the HNSW index as it is. In the case of stringent filters, you can use the payload index and complete rescore.
+```rust
-However, for cases in the middle, this approach does not work well.
+use qdrant_client::qdrant::{
+ QuantizationSearchParamsBuilder, QueryPointsBuilder, SearchParamsBuilder,
+};
-On the one hand, we cannot apply a full scan on too many vectors. On the other hand, the HNSW graph starts to fall apart when using too strict filters.
+use qdrant_client::Qdrant;
-![HNSW fail](/docs/precision_by_m.png)
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-![hnsw graph](/docs/graph.gif)
+client
+ .query(
+ QueryPointsBuilder::new(""{collection_name}"")
-You can find more information on why this happens in our [blog post](https://blog.vasnetsov.com/posts/categorical-hnsw/).
+ .query(vec![0.2, 0.1, 0.9, 0.7])
-Qdrant solves this problem by extending the HNSW graph with additional edges based on the stored payload values.
+ .limit(3)
+ .params(
+ SearchParamsBuilder::default()
-Extra edges allow you to efficiently search for nearby vectors using the HNSW index and apply filters as you search in the graph.
+ .quantization(QuantizationSearchParamsBuilder::default().rescore(false)),
+ ),
+ )
-This approach minimizes the overhead on condition checks since you only need to calculate the conditions for a small fraction of the points involved in the search.
-",documentation/concepts/indexing.md
-"---
+ .await?;
-title: Points
+```
-weight: 40
-aliases:
- - ../points
+```java
----
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Points.QuantizationSearchParams;
-# Points
+import io.qdrant.client.grpc.Points.QueryPoints;
+import io.qdrant.client.grpc.Points.SearchParams;
-The points are the central entity that Qdrant operates with.
-A point is a record consisting of a vector and an optional [payload](../payload).
+import static io.qdrant.client.QueryFactory.nearest;
-You can search among the points grouped in one [collection](../collections) based on vector similarity.
+QdrantClient client =
-This procedure is described in more detail in the [search](../search) and [filtering](../filtering) sections.
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-This section explains how to create and manage vectors.
+client.queryAsync(
+ QueryPoints.newBuilder()
+ .setCollectionName(""{collection_name}"")
-Any point modification operation is asynchronous and takes place in 2 steps.
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
-At the first stage, the operation is written to the Write-ahead-log.
+ .setParams(
+ SearchParams.newBuilder()
+ .setQuantization(
-After this moment, the service will not lose the data, even if the machine loses power supply.
+ QuantizationSearchParams.newBuilder().setRescore(false).build())
+ .build())
+ .setLimit(3)
+ .build())
-## Awaiting result
+ .get();
+```
-If the API is called with the `&wait=false` parameter, or if it is not explicitly specified, the client will receive an acknowledgment of receiving data:
+```csharp
+using Qdrant.Client;
-```json
+using Qdrant.Client.Grpc;
-{
- ""result"": {
- ""operation_id"": 123,
+var client = new QdrantClient(""localhost"", 6334);
- ""status"": ""acknowledged""
- },
- ""status"": ""ok"",
+await client.QueryAsync(
- ""time"": 0.000206061
+ collectionName: ""{collection_name}"",
-}
+ query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
-```
+ searchParams: new SearchParams
+ {
+ Quantization = new QuantizationSearchParams { Rescore = false }
-This response does not mean that the data is available for retrieval yet. This
+ },
-uses a form of eventual consistency. It may take a short amount of time before it
+ limit: 3
-is actually processed as updating the collection happens in the background. In
+);
-fact, it is possible that such request eventually fails.
+```
-If inserting a lot of vectors, we also recommend using asynchronous requests to take advantage of pipelining.
+```go
-If the logic of your application requires a guarantee that the vector will be available for searching immediately after the API responds, then use the flag `?wait=true`.
+import (
-In this case, the API will return the result only after the operation is finished:
+ ""context""
-```json
+ ""github.com/qdrant/go-client/qdrant""
-{
+)
- ""result"": {
- ""operation_id"": 0,
- ""status"": ""completed""
+client, err := qdrant.NewClient(&qdrant.Config{
- },
+ Host: ""localhost"",
- ""status"": ""ok"",
+ Port: 6334,
- ""time"": 0.000206061
+})
-}
-```
+client.Query(context.Background(), &qdrant.QueryPoints{
+ CollectionName: ""{collection_name}"",
-## Point IDs
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
+ Params: &qdrant.SearchParams{
+ Quantization: &qdrant.QuantizationSearchParams{
-Qdrant supports using both `64-bit unsigned integers` and `UUID` as identifiers for points.
+ Rescore: qdrant.PtrOf(false),
+ },
+ },
-Examples of UUID string representations:
+})
+```
-* simple: `936DA01F9ABD4d9d80C702AF85C822A8`
-* hyphenated: `550e8400-e29b-41d4-a716-446655440000`
+- **All on Disk** - all vectors, original and quantized, are stored on disk. This mode achieves the smallest memory footprint, but at the cost of search speed.
-* urn: `urn:uuid:F9168C5E-CEB2-4faa-B6BF-329BF39FA1E4`
+It is recommended to use this mode if you have a large collection and fast storage (e.g. SSD or NVMe).
-That means that in every request UUID string could be used instead of numerical id.
-Example:
+
+This mode is enabled by setting `always_ram` to `false` in the quantization config while using memmap storage:
```http
-PUT /collections/{collection_name}/points
+PUT /collections/{collection_name}
{
- ""points"": [
+ ""vectors"": {
- {
+ ""size"": 768,
- ""id"": ""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"",
+ ""distance"": ""Cosine"",
- ""payload"": {""color"": ""red""},
+ ""on_disk"": true
- ""vector"": [0.9, 0.1, 0.1]
+ },
+  ""quantization_config"": {
+    ""scalar"": {
+      ""type"": ""int8"",
+      ""always_ram"": false
}
- ]
+ }
}
@@ -27233,37 +26924,31 @@ PUT /collections/{collection_name}/points
```python
-from qdrant_client import QdrantClient
-
-from qdrant_client.http import models
+from qdrant_client import QdrantClient, models
-client = QdrantClient(""localhost"", port=6333)
+client = QdrantClient(url=""http://localhost:6333"")
-client.upsert(
+client.create_collection(
collection_name=""{collection_name}"",
- points=[
-
- models.PointStruct(
-
- id=""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"",
+ vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE, on_disk=True),
- payload={
+ quantization_config=models.ScalarQuantization(
- ""color"": ""red"",
+ scalar=models.ScalarQuantizationConfig(
- },
+ type=models.ScalarType.INT8,
- vector=[0.9, 0.1, 0.1],
+ always_ram=False,
),
- ],
+ ),
)
@@ -27281,25 +26966,29 @@ const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-client.upsert(""{collection_name}"", {
+client.createCollection(""{collection_name}"", {
- points: [
+ vectors: {
- {
+ size: 768,
- id: ""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"",
+ distance: ""Cosine"",
- payload: {
+ on_disk: true,
- color: ""red"",
+ },
- },
+ quantization_config: {
- vector: [0.9, 0.1, 0.1],
+    scalar: {
+      type: ""int8"",
+      always_ram: false,
},
- ],
+ },
});
@@ -27309,43 +26998,39 @@ client.upsert(""{collection_name}"", {
```rust
-use qdrant_client::{client::QdrantClient, qdrant::PointStruct};
-
-use serde_json::json;
-
+use qdrant_client::qdrant::{
+ CreateCollectionBuilder, Distance, QuantizationType, ScalarQuantizationBuilder,
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ VectorParamsBuilder,
+};
+use qdrant_client::Qdrant;
-client
- .upsert_points_blocking(
- ""{collection_name}"".to_string(),
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- None,
- vec![PointStruct::new(
- ""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"".to_string(),
+client
- vec![0.05, 0.61, 0.76, 0.74],
+ .create_collection(
- json!(
+ CreateCollectionBuilder::new(""{collection_name}"")
- {""color"": ""Red""}
+ .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine).on_disk(true))
- )
+ .quantization_config(
- .try_into()
+ ScalarQuantizationBuilder::default()
- .unwrap(),
+ .r#type(QuantizationType::Int8.into())
- )],
+ .always_ram(false),
- None,
+ ),
)
@@ -27357,27 +27042,25 @@ client
```java
-import java.util.List;
+import io.qdrant.client.QdrantClient;
-import java.util.Map;
+import io.qdrant.client.QdrantGrpcClient;
-import java.util.UUID;
+import io.qdrant.client.grpc.Collections.CreateCollection;
+import io.qdrant.client.grpc.Collections.Distance;
+import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
-import static io.qdrant.client.PointIdFactory.id;
-
-import static io.qdrant.client.ValueFactory.value;
-
-import static io.qdrant.client.VectorsFactory.vectors;
-
+import io.qdrant.client.grpc.Collections.QuantizationConfig;
+import io.qdrant.client.grpc.Collections.QuantizationType;
-import io.qdrant.client.QdrantClient;
+import io.qdrant.client.grpc.Collections.ScalarQuantization;
-import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Collections.VectorParams;
-import io.qdrant.client.grpc.Points.PointStruct;
+import io.qdrant.client.grpc.Collections.VectorsConfig;
@@ -27389,21 +27072,47 @@ QdrantClient client =
client
- .upsertAsync(
+ .createCollectionAsync(
- ""{collection_name}"",
+ CreateCollection.newBuilder()
- List.of(
+ .setCollectionName(""{collection_name}"")
- PointStruct.newBuilder()
+ .setVectorsConfig(
- .setId(id(UUID.fromString(""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"")))
+ VectorsConfig.newBuilder()
- .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f))
+ .setParams(
- .putAllPayload(Map.of(""color"", value(""Red"")))
+ VectorParams.newBuilder()
- .build()))
+ .setSize(768)
+
+ .setDistance(Distance.Cosine)
+
+ .setOnDisk(true)
+
+ .build())
+
+ .build())
+
+ .setQuantizationConfig(
+
+ QuantizationConfig.newBuilder()
+
+ .setScalar(
+
+ ScalarQuantization.newBuilder()
+
+ .setType(QuantizationType.Int8)
+
+ .setAlwaysRam(false)
+
+ .build())
+
+ .build())
+
+ .build())
.get();
@@ -27423,27 +27132,19 @@ var client = new QdrantClient(""localhost"", 6334);
-await client.UpsertAsync(
-
- collectionName: ""{collection_name}"",
-
- points: new List
-
- {
-
- new()
+await client.CreateCollectionAsync(
- {
+ collectionName: ""{collection_name}"",
- Id = Guid.Parse(""5c56c793-69f3-4fbf-87e6-c4bf54c28c26""),
+ vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine, OnDisk = true},
- Vectors = new[] { 0.05f, 0.61f, 0.76f, 0.74f },
+ quantizationConfig: new QuantizationConfig
- Payload = { [""city""] = ""red"" }
+ {
- }
+ Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = false }
- }
+ }
);
@@ -27451,3287 +27152,3068 @@ await client.UpsertAsync(
-and
-
+```go
+import (
-```http
+ ""context""
-PUT /collections/{collection_name}/points
-{
- ""points"": [
+ ""github.com/qdrant/go-client/qdrant""
- {
+)
- ""id"": 1,
- ""payload"": {""color"": ""red""},
- ""vector"": [0.9, 0.1, 0.1]
+client, err := qdrant.NewClient(&qdrant.Config{
- }
+ Host: ""localhost"",
- ]
+ Port: 6334,
-}
+})
-```
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
-```python
+ CollectionName: ""{collection_name}"",
-client.upsert(
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
- collection_name=""{collection_name}"",
+ Size: 768,
- points=[
+ Distance: qdrant.Distance_Cosine,
- models.PointStruct(
+ OnDisk: qdrant.PtrOf(true),
- id=1,
+ }),
- payload={
+ QuantizationConfig: qdrant.NewQuantizationScalar(
- ""color"": ""red"",
+ &qdrant.ScalarQuantization{
- },
+ Type: qdrant.QuantizationType_Int8,
- vector=[0.9, 0.1, 0.1],
+ AlwaysRam: qdrant.PtrOf(false),
- ),
+ },
- ],
+ ),
-)
+})
```
+",documentation/guides/quantization.md
+"---
+title: Monitoring
+weight: 155
-```typescript
+aliases:
-client.upsert(""{collection_name}"", {
+ - ../monitoring
- points: [
+---
- {
- id: 1,
- payload: {
+# Monitoring
- color: ""red"",
- },
- vector: [0.9, 0.1, 0.1],
+Qdrant exposes its metrics in [Prometheus](https://prometheus.io/docs/instrumenting/exposition_formats/#text-based-format)/[OpenMetrics](https://github.com/OpenObservability/OpenMetrics) format, so you can integrate them easily
- },
+with compatible tools and monitor Qdrant with your own monitoring system. You can
- ],
+use the `/metrics` endpoint and configure it as a scrape target.
-});
-```
+Metrics endpoint:
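+
+For example, assuming a local instance on the default REST port, the endpoint can be scraped directly:
+
+```bash
+curl http://localhost:6333/metrics
+```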
-```rust
-use qdrant_client::qdrant::PointStruct;
+The integration with Qdrant is easy to
-use serde_json::json;
+[configure](https://prometheus.io/docs/prometheus/latest/getting_started/#configure-prometheus-to-monitor-the-sample-targets)
+with Prometheus and Grafana.
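+
+As a minimal sketch, a Prometheus scrape configuration for a single local node could look like this (the job name and target are placeholders for your own setup):
+
+```yaml
+scrape_configs:
+  - job_name: qdrant
+    # Qdrant serves metrics on its REST port
+    metrics_path: /metrics
+    static_configs:
+      # For multi-node clusters, list each node here individually
+      - targets: [""localhost:6333""]
+```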
-client
- .upsert_points_blocking(
+## Monitoring multi-node clusters
- 1,
- None,
- vec![PointStruct::new(
+When scraping metrics from multi-node Qdrant clusters, it is important to scrape from
- ""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"".to_string(),
+each node individually instead of using a load-balanced URL. Otherwise, your metrics will appear inconsistent after each scrape.
- vec![0.05, 0.61, 0.76, 0.74],
- json!(
- {""color"": ""Red""}
+## Monitoring in Qdrant Cloud
- )
- .try_into()
- .unwrap(),
+To scrape metrics from a Qdrant cluster running in Qdrant Cloud, note that an [API key](/documentation/cloud/authentication/) is required to access `/metrics`. Qdrant Cloud also supports supplying the API key as a [Bearer token](https://www.rfc-editor.org/rfc/rfc6750.html), which may be required by some providers.
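+
+For example, either header style works when scraping a cloud cluster manually (the cluster URL and key below are placeholders):
+
+```bash
+curl -H 'api-key: your_secret_api_key_here' \
+  https://xyz-example.eu-central.aws.cloud.qdrant.io:6333/metrics
+
+curl -H 'Authorization: Bearer your_secret_api_key_here' \
+  https://xyz-example.eu-central.aws.cloud.qdrant.io:6333/metrics
+```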
- )],
- None,
- )
+## Exposed metrics
- .await?;
-```
+Each Qdrant server will expose the following metrics.
-```java
-import java.util.List;
+| Name | Type | Meaning |
-import java.util.Map;
+|-------------------------------------|---------|---------------------------------------------------|
+| app_info | gauge | Information about Qdrant server |
+| app_status_recovery_mode | gauge | If Qdrant is currently started in recovery mode |
-import static io.qdrant.client.PointIdFactory.id;
+| collections_total | gauge | Number of collections |
-import static io.qdrant.client.ValueFactory.value;
+| collections_vector_total | gauge | Total number of vectors in all collections |
-import static io.qdrant.client.VectorsFactory.vectors;
+| collections_full_total | gauge | Number of full collections |
+| collections_aggregated_total | gauge | Number of aggregated collections |
+| rest_responses_total | counter | Total number of responses through REST API |
-import io.qdrant.client.QdrantClient;
+| rest_responses_fail_total | counter | Total number of failed responses through REST API |
-import io.qdrant.client.QdrantGrpcClient;
+| rest_responses_avg_duration_seconds | gauge | Average response duration in REST API |
-import io.qdrant.client.grpc.Points.PointStruct;
+| rest_responses_min_duration_seconds | gauge | Minimum response duration in REST API |
+| rest_responses_max_duration_seconds | gauge | Maximum response duration in REST API |
+| grpc_responses_total | counter | Total number of responses through gRPC API |
-QdrantClient client =
+| grpc_responses_fail_total | counter | Total number of failed responses through gRPC API |
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+| grpc_responses_avg_duration_seconds | gauge | Average response duration in gRPC API |
+| grpc_responses_min_duration_seconds | gauge | Minimum response duration in gRPC API |
+| grpc_responses_max_duration_seconds | gauge | Maximum response duration in gRPC API |
-client
+| cluster_enabled | gauge | Whether the cluster support is enabled. 1 - YES |
- .upsertAsync(
- ""{collection_name}"",
- List.of(
+### Cluster-related metrics
- PointStruct.newBuilder()
- .setId(id(1))
- .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f))
+There are also some metrics which are exposed in distributed mode only.
- .putAllPayload(Map.of(""color"", value(""Red"")))
- .build()))
- .get();
+| Name | Type | Meaning |
-```
+| -------------------------------- | ------- | ---------------------------------------------------------------------- |
+| cluster_peers_total | gauge | Total number of cluster peers |
+| cluster_term | counter | Current cluster term |
-```csharp
+| cluster_commit | counter | Index of last committed (finalized) operation cluster peer is aware of |
-using Qdrant.Client;
+| cluster_pending_operations_total | gauge | Total number of pending operations for cluster peer |
-using Qdrant.Client.Grpc;
+| cluster_voter | gauge | Whether the cluster peer is a voter or learner. 1 - VOTER |
-var client = new QdrantClient(""localhost"", 6334);
+## Kubernetes health endpoints
-await client.UpsertAsync(
+*Available as of v1.5.0*
- collectionName: ""{collection_name}"",
- points: new List
- {
+Qdrant exposes three endpoints, namely
- new()
+[`/healthz`](http://localhost:6333/healthz),
- {
+[`/livez`](http://localhost:6333/livez) and
- Id = 1,
+[`/readyz`](http://localhost:6333/readyz), to indicate the current status of the
- Vectors = new[] { 0.05f, 0.61f, 0.76f, 0.74f },
+Qdrant server.
- Payload = { [""city""] = ""red"" }
- }
- }
+These currently provide the most basic status response, returning HTTP 200 if
-);
+Qdrant is started and ready to be used.
-```
+Regardless of whether an [API key](../security/#authentication) is configured,
+the endpoints are always accessible.
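+
+For example, assuming a local instance on the default REST port, a quick manual check could be:
+
+```bash
+curl -i http://localhost:6333/healthz
+curl -i http://localhost:6333/livez
+curl -i http://localhost:6333/readyz
+```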
-are both possible.
+You can read more about Kubernetes health endpoints
+[here](https://kubernetes.io/docs/reference/using-api/health-checks/).
+",documentation/guides/monitoring.md
+"---
-## Upload points
+title: Guides
+weight: 12
+# If the index.md file is empty, the link to the section will be hidden from the sidebar
-To optimize performance, Qdrant supports batch loading of points. I.e., you can load several points into the service in one API call.
+is_empty: true
-Batching allows you to minimize the overhead of creating a network connection.
+---",documentation/guides/_index.md
+"---
+title: Security
+weight: 165
-The Qdrant API supports two ways of creating batches - record-oriented and column-oriented.
+aliases:
-Internally, these options do not differ and are made only for the convenience of interaction.
+ - ../security
+---
-Create points with batch:
+# Security
-```http
-PUT /collections/{collection_name}/points
+Please read this page carefully. Although there are various ways to secure your Qdrant instances, **they are unsecured by default**.
-{
+You need to enable security measures before production use. Otherwise, your instances are completely open to anyone who can reach them over the network.
- ""batch"": {
- ""ids"": [1, 2, 3],
- ""payloads"": [
+## Authentication
- {""color"": ""red""},
- {""color"": ""green""},
- {""color"": ""blue""}
+*Available as of v1.2.0*
- ],
- ""vectors"": [
- [0.9, 0.1, 0.1],
+Qdrant supports a simple form of client authentication using a static API key.
- [0.1, 0.9, 0.1],
+This can be used to secure your instance.
- [0.1, 0.1, 0.9]
- ]
- }
+To enable API key based authentication in your own Qdrant instance you must
-}
+specify a key in the configuration:
-```
+```yaml
-```python
+service:
-client.upsert(
+ # Set an api-key.
- collection_name=""{collection_name}"",
+ # If set, all requests must include a header with the api-key.
- points=models.Batch(
+ # example header: `api-key: `
- ids=[1, 2, 3],
+ #
- payloads=[
+ # If you enable this you should also enable TLS.
- {""color"": ""red""},
+ # (Either above or via an external service like nginx.)
- {""color"": ""green""},
+ # Sending an api-key over an unencrypted channel is insecure.
- {""color"": ""blue""},
+ api_key: your_secret_api_key_here
- ],
+```
- vectors=[
- [0.9, 0.1, 0.1],
- [0.1, 0.9, 0.1],
+Or alternatively, you can use the environment variable:
- [0.1, 0.1, 0.9],
- ],
- ),
+```bash
-)
+export QDRANT__SERVICE__API_KEY=your_secret_api_key_here
```
-```typescript
-
-client.upsert(""{collection_name}"", {
+
- batch: {
- ids: [1, 2, 3],
- payloads: [{ color: ""red"" }, { color: ""green"" }, { color: ""blue"" }],
+For using API key based authentication in Qdrant Cloud see the cloud
- vectors: [
+[Authentication](/documentation/cloud/authentication/)
- [0.9, 0.1, 0.1],
+section.
- [0.1, 0.9, 0.1],
- [0.1, 0.1, 0.9],
- ],
+The API key then needs to be present in all REST or gRPC requests to your instance.
- },
+All official Qdrant clients for Python, Go, Rust, .NET and Java support the API key parameter.
-});
-```
+
-or record-oriented equivalent:
+```bash
+curl \
-```http
+ -X GET https://localhost:6333 \
-PUT /collections/{collection_name}/points
+ --header 'api-key: your_secret_api_key_here'
-{
+```
- ""points"": [
- {
- ""id"": 1,
+```python
- ""payload"": {""color"": ""red""},
+from qdrant_client import QdrantClient
- ""vector"": [0.9, 0.1, 0.1]
- },
- {
+client = QdrantClient(
- ""id"": 2,
+ url=""https://localhost:6333"",
- ""payload"": {""color"": ""green""},
+ api_key=""your_secret_api_key_here"",
- ""vector"": [0.1, 0.9, 0.1]
+)
- },
+```
- {
- ""id"": 3,
- ""payload"": {""color"": ""blue""},
+```typescript
- ""vector"": [0.1, 0.1, 0.9]
+import { QdrantClient } from ""@qdrant/js-client-rest"";
- }
- ]
-}
+const client = new QdrantClient({
-```
+ url: ""http://localhost"",
+ port: 6333,
+ apiKey: ""your_secret_api_key_here"",
-```python
+});
-client.upsert(
+```
- collection_name=""{collection_name}"",
- points=[
- models.PointStruct(
+```rust
- id=1,
+use qdrant_client::Qdrant;
- payload={
- ""color"": ""red"",
- },
+let client = Qdrant::from_url(""https://xyz-example.eu-central.aws.cloud.qdrant.io:6334"")
- vector=[0.9, 0.1, 0.1],
+ .api_key("""")
- ),
+ .build()?;
- models.PointStruct(
+```
- id=2,
- payload={
- ""color"": ""green"",
+```java
- },
+import io.qdrant.client.QdrantClient;
- vector=[0.1, 0.9, 0.1],
+import io.qdrant.client.QdrantGrpcClient;
- ),
- models.PointStruct(
- id=3,
+QdrantClient client =
- payload={
+ new QdrantClient(
- ""color"": ""blue"",
+ QdrantGrpcClient.newBuilder(
- },
+ ""xyz-example.eu-central.aws.cloud.qdrant.io"",
- vector=[0.1, 0.1, 0.9],
+ 6334,
- ),
+ true)
- ],
+ .withApiKey("""")
-)
+ .build());
```
-```typescript
+```csharp
-client.upsert(""{collection_name}"", {
+using Qdrant.Client;
- points: [
- {
- id: 1,
+var client = new QdrantClient(
- payload: { color: ""red"" },
+ host: ""xyz-example.eu-central.aws.cloud.qdrant.io"",
- vector: [0.9, 0.1, 0.1],
+ https: true,
- },
+ apiKey: """"
- {
+);
- id: 2,
+```
- payload: { color: ""green"" },
- vector: [0.1, 0.9, 0.1],
- },
+```go
- {
+import ""github.com/qdrant/go-client/qdrant""
- id: 3,
- payload: { color: ""blue"" },
- vector: [0.1, 0.1, 0.9],
+client, err := qdrant.NewClient(&qdrant.Config{
- },
+ Host: ""xyz-example.eu-central.aws.cloud.qdrant.io"",
- ],
+ Port: 6334,
-});
+ APIKey: """",
+
+ UseTLS: true,
+
+})
```
-```rust
+
-use qdrant_client::qdrant::PointStruct;
-use serde_json::json;
+### Read-only API key
-client
- .upsert_points_batch_blocking(
+*Available as of v1.7.0*
- ""{collection_name}"".to_string(),
- None,
- vec![
+In addition to the regular API key, Qdrant also supports a read-only API key.
- PointStruct::new(
+This key can be used to access read-only operations on the instance.
- 1,
- vec![0.9, 0.1, 0.1],
- json!(
+```yaml
- {""color"": ""red""}
+service:
- )
+ read_only_api_key: your_secret_read_only_api_key_here
- .try_into()
+```
- .unwrap(),
- ),
- PointStruct::new(
+Or with the environment variable:
- 2,
- vec![0.1, 0.9, 0.1],
- json!(
+```bash
- {""color"": ""green""}
+export QDRANT__SERVICE__READ_ONLY_API_KEY=your_secret_read_only_api_key_here
- )
+```
- .try_into()
- .unwrap(),
- ),
+Both API keys can be used simultaneously.
- PointStruct::new(
- 3,
- vec![0.1, 0.1, 0.9],
+### Granular access control with JWT
- json!(
- {""color"": ""blue""}
- )
+*Available as of v1.9.0*
- .try_into()
- .unwrap(),
- ),
+For more complex cases, Qdrant supports granular access control with [JSON Web Tokens (JWT)](https://jwt.io/).
- ],
+This allows you to create tokens that restrict access to specific parts of the stored data and to build [Role-based access control (RBAC)](https://en.wikipedia.org/wiki/Role-based_access_control) on top of that.
- None,
+In this way, you can define permissions for users and restrict access to sensitive endpoints.
- 100,
- )
- .await?;
+To enable JWT-based authentication in your own Qdrant instance you need to specify the `api-key` and enable the `jwt_rbac` feature in the configuration:
-```
+```yaml
-```java
+service:
-import java.util.List;
+ api_key: your_secret_api_key_here
-import java.util.Map;
+ jwt_rbac: true
+```
-import static io.qdrant.client.PointIdFactory.id;
-import static io.qdrant.client.ValueFactory.value;
+Or with the environment variables:
-import static io.qdrant.client.VectorsFactory.vectors;
+```bash
-import io.qdrant.client.QdrantClient;
+export QDRANT__SERVICE__API_KEY=your_secret_api_key_here
-import io.qdrant.client.QdrantGrpcClient;
+export QDRANT__SERVICE__JWT_RBAC=true
-import io.qdrant.client.grpc.Points.PointStruct;
+```
-QdrantClient client =
+The `api_key` you set in the configuration will be used to encode and decode the JWTs, so, needless to say, keep it secure. If your `api_key` changes, all existing tokens will be invalid.
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+To use JWT-based authentication, you need to provide it as a bearer token in the `Authorization` header, or as a key in the `Api-Key` header of your requests.
-client
- .upsertAsync(
- ""{collection_name}"",
+```http
- List.of(
+Authorization: Bearer
- PointStruct.newBuilder()
- .setId(id(1))
- .setVectors(vectors(0.9f, 0.1f, 0.1f))
+// or
- .putAllPayload(Map.of(""color"", value(""red"")))
- .build(),
- PointStruct.newBuilder()
+Api-Key:
- .setId(id(2))
+```
- .setVectors(vectors(0.1f, 0.9f, 0.1f))
- .putAllPayload(Map.of(""color"", value(""green"")))
- .build(),
+```python
- PointStruct.newBuilder()
+from qdrant_client import QdrantClient
- .setId(id(3))
- .setVectors(vectors(0.1f, 0.1f, 0.9f))
- .putAllPayload(Map.of(""color"", value(""blue"")))
+qdrant_client = QdrantClient(
- .build()))
+ ""xyz-example.eu-central.aws.cloud.qdrant.io"",
- .get();
+ api_key="""",
+
+)
```
-```csharp
+```typescript
-using Qdrant.Client;
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-using Qdrant.Client.Grpc;
+const client = new QdrantClient({
-var client = new QdrantClient(""localhost"", 6334);
+ host: ""xyz-example.eu-central.aws.cloud.qdrant.io"",
+ apiKey: """",
+});
-await client.UpsertAsync(
+```
- collectionName: ""{collection_name}"",
- points: new List
- {
+```rust
- new()
+use qdrant_client::Qdrant;
- {
- Id = 1,
- Vectors = new[] { 0.9f, 0.1f, 0.1f },
+let client = Qdrant::from_url(""https://xyz-example.eu-central.aws.cloud.qdrant.io:6334"")
- Payload = { [""city""] = ""red"" }
+ .api_key("""")
- },
+ .build()?;
- new()
+```
- {
- Id = 2,
- Vectors = new[] { 0.1f, 0.9f, 0.1f },
+```java
- Payload = { [""city""] = ""green"" }
+import io.qdrant.client.QdrantClient;
- },
+import io.qdrant.client.QdrantGrpcClient;
- new()
- {
- Id = 3,
+QdrantClient client =
- Vectors = new[] { 0.1f, 0.1f, 0.9f },
+ new QdrantClient(
- Payload = { [""city""] = ""blue"" }
+ QdrantGrpcClient.newBuilder(
- }
+ ""xyz-example.eu-central.aws.cloud.qdrant.io"",
- }
+ 6334,
-);
+ true)
+
+ .withApiKey("""")
+
+ .build());
```
-The Python client has additional features for loading points, which include:
+```csharp
+using Qdrant.Client;
-- Parallelization
-- A retry mechanism
+var client = new QdrantClient(
-- Lazy batching support
+ host: ""xyz-example.eu-central.aws.cloud.qdrant.io"",
+ https: true,
+ apiKey: """"
-For example, you can read your data directly from hard drives, to avoid storing all data in RAM. You can use these
+);
-features with the `upload_collection` and `upload_points` methods.
+```
-Similar to the basic upsert API, these methods support both record-oriented and column-oriented formats.
+```go
-
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""xyz-example.eu-central.aws.cloud.qdrant.io"",
-Column-oriented format:
+ Port: 6334,
+ APIKey: """",
+ UseTLS: true,
-```python
+})
-client.upload_collection(
+```
- collection_name=""{collection_name}"",
+#### Generating JSON Web Tokens
- ids=[1, 2],
- payload=[
- {""color"": ""red""},
+Due to the nature of JWT, anyone who knows the `api_key` can generate tokens using any of the existing libraries and tools; they do not need access to the Qdrant instance to do so.
- {""color"": ""green""},
- ],
- vectors=[
+For convenience, we have added a JWT generation tool to the Qdrant Web UI under the 🔑 tab. If you're using the default URL, it will be at `http://localhost:6333/dashboard#/jwt`.
- [0.9, 0.1, 0.1],
- [0.1, 0.9, 0.1],
- ],
+- **JWT Header** - Qdrant uses the `HS256` algorithm to decode the tokens.
- parallel=4,
- max_retries=3,
-)
+ ```json
-```
+ {
+ ""alg"": ""HS256"",
+ ""typ"": ""JWT""
+ }
+ ```
-
+- **JWT Payload** - You can include any combination of the [parameters available](#jwt-configuration) in the payload. Keep reading for more info on each one.
-Record-oriented format:
+ ```json
-```python
+ {
-client.upload_points(
+ ""exp"": 1640995200, // Expiration time
- collection_name=""{collection_name}"",
+ ""value_exists"": ..., // Validate this token by looking for a point with a payload value
- points=[
+ ""access"": ""r"", // Define the access level.
- models.PointStruct(
+ }
- id=1,
+ ```
- payload={
- ""color"": ""red"",
- },
+**Signing the token** - To confirm that the generated token is valid, it needs to be signed with the `api_key` you have set in the configuration.
- vector=[0.9, 0.1, 0.1],
+In other words, anyone who knows the `api_key` can authorize a new token to be used with the Qdrant instance.
- ),
+Qdrant can validate the signature, because it knows the `api_key` and can decode the token.
- models.PointStruct(
- id=2,
- payload={
+The process of token generation can be done on the client side offline, and doesn't require any communication with the Qdrant instance.
- ""color"": ""green"",
- },
- vector=[0.1, 0.9, 0.1],
+Here are some libraries that can be used to generate JWT tokens; a minimal Python sketch follows the list:
- ),
- ],
- parallel=4,
+- Python: [PyJWT](https://pyjwt.readthedocs.io/en/stable/)
- max_retries=3,
+- JavaScript: [jsonwebtoken](https://www.npmjs.com/package/jsonwebtoken)
-)
+- Rust: [jsonwebtoken](https://crates.io/crates/jsonwebtoken)
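+
+As a minimal sketch, assuming PyJWT is installed (`pip install pyjwt`), a token with an expiration time and read-only access could be generated and signed with your `api_key` like this (the claims and key below are placeholders):
+
+```python
+import jwt  # PyJWT
+
+api_key = ""your_secret_api_key_here""
+
+# Claims are described in the JWT Configuration section below
+token = jwt.encode(
+    {""exp"": 1640995200, ""access"": ""r""},
+    api_key,
+    algorithm=""HS256"",  # Qdrant decodes tokens with HS256
+)
+print(token)
+```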
-```
+#### JWT Configuration
-All APIs in Qdrant, including point loading, are idempotent.
-It means that executing the same method several times in a row is equivalent to a single execution.
+These are the available options, or **claims** in the JWT lingo. You can use them in the JWT payload to define its functionality.
-In this case, it means that points with the same id will be overwritten when re-uploaded.
+- **`exp`** - The expiration time of the token. This is a Unix timestamp in seconds. The token will be invalid after this time. The check for this claim includes a 30-second leeway to account for clock skew.
-Idempotence property is useful if you use, for example, a message queue that doesn't provide an exactly-ones guarantee.
-Even with such a system, Qdrant ensures data consistency.
+ ```json
+ {
+ ""exp"": 1640995200, // Expiration time
-[*Available as of v0.10.0*](#create-vector-name)
+ }
+ ```
-If the collection was created with multiple vectors, each vector data can be provided using the vector's name:
+- **`value_exists`** - This is a claim that can be used to validate the token against the data stored in a collection. The structure of this claim is as follows:
-```http
-PUT /collections/{collection_name}/points
+ ```json
-{
+ {
- ""points"": [
+ ""value_exists"": {
- {
+ ""collection"": ""my_validation_collection"",
- ""id"": 1,
+ ""matches"": [
- ""vector"": {
+ { ""key"": ""my_key"", ""value"": ""value_that_must_exist"" }
- ""image"": [0.9, 0.1, 0.1, 0.2],
+ ],
- ""text"": [0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2]
+ },
- }
+ }
- },
+ ```
- {
- ""id"": 2,
- ""vector"": {
+ If this claim is present, Qdrant will check whether there is a point in the collection with the specified key-value pairs. If there is, the token is valid.
- ""image"": [0.2, 0.1, 0.3, 0.9],
- ""text"": [0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9]
- }
+ This claim is especially useful if you want to be able to revoke tokens without changing the `api_key`.
- }
+ Consider a case where you have a collection of users, and you want to revoke access to a specific user.
- ]
-}
-```
+ ```json
+ {
+ ""value_exists"": {
-```python
+ ""collection"": ""users"",
-client.upsert(
+ ""matches"": [
- collection_name=""{collection_name}"",
+ { ""key"": ""user_id"", ""value"": ""andrey"" },
- points=[
+ { ""key"": ""role"", ""value"": ""manager"" }
- models.PointStruct(
+ ],
- id=1,
+ },
- vector={
+ }
- ""image"": [0.9, 0.1, 0.1, 0.2],
+ ```
- ""text"": [0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2],
- },
- ),
+ You can create a token with this claim, and when you want to revoke access, change the user's `role` to something else; the token will then become invalid.
- models.PointStruct(
- id=2,
- vector={
+- **`access`** - This claim defines the [access level](#table-of-access) of the token. If this claim is present, Qdrant will check if the token has the required access level to perform the operation. If this claim is **not** present, **manage** access is assumed.
- ""image"": [0.2, 0.1, 0.3, 0.9],
- ""text"": [0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9],
- },
+ It can provide global access with `r` for read-only, or `m` for manage. For example:
- ),
- ],
-)
+ ```json
-```
+ {
+ ""access"": ""r""
+ }
-```typescript
+ ```
-client.upsert(""{collection_name}"", {
- points: [
- {
+ It can also be specific to one or more collections. The `access` level for each collection is `r` for read-only, or `rw` for read-write, like this:
- id: 1,
- vector: {
- image: [0.9, 0.1, 0.1, 0.2],
+ ```json
- text: [0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2],
+ {
- },
+ ""access"": [
- },
+ {
- {
+ ""collection"": ""my_collection"",
- id: 2,
+ ""access"": ""rw""
- vector: {
+ }
- image: [0.2, 0.1, 0.3, 0.9],
+ ]
- text: [0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9],
+ }
- },
+ ```
- },
- ],
-});
+ You can also restrict access to a subset of a collection by specifying a `payload` restriction that the points must match.
-```
+ ```json
-```rust
+ {
-use qdrant_client::qdrant::PointStruct;
+ ""access"": [
-use std::collections::HashMap;
+ {
+ ""collection"": ""my_collection"",
+ ""access"": ""r"",
-client
+ ""payload"": {
- .upsert_points_blocking(
+ ""user_id"": ""user_123456""
- ""{collection_name}"".to_string(),
+ }
- None,
+ }
- vec![
+ ]
- PointStruct::new(
+ }
- 1,
+ ```
- HashMap::from([
- (""image"".to_string(), vec![0.9, 0.1, 0.1, 0.2]),
- (
+ This `payload` claim will be used to implicitly filter the points in the collection. It will be equivalent to appending this filter to each request:
- ""text"".to_string(),
- vec![0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2],
- ),
+ ```json
- ]),
+ { ""filter"": { ""must"": [{ ""key"": ""user_id"", ""match"": { ""value"": ""user_123456"" } }] } }
- HashMap::new().into(),
+ ```
- ),
- PointStruct::new(
- 2,
+### Table of access
- HashMap::from([
- (""image"".to_string(), vec![0.2, 0.1, 0.3, 0.9]),
- (
+Check out this table to see which actions are allowed or denied based on the access level.
- ""text"".to_string(),
- vec![0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9],
- ),
+This also applies when using API keys instead of tokens. In that case, `api_key` maps to **manage**, while `read_only_api_key` maps to **read-only**.
- ]),
- HashMap::new().into(),
- ),
+
- ],
- None,
- )
+| Action | manage | read-only | collection read-write | collection read-only | collection with payload claim (r / rw) |
- .await?;
+|--------|--------|-----------|----------------------|-----------------------|------------------------------------|
-```
+| list collections | ✅ | ✅ | 🟡 | 🟡 | 🟡 |
+| get collection info | ✅ | ✅ | ✅ | ✅ | ❌ |
+| create collection | ✅ | ❌ | ❌ | ❌ | ❌ |
-```java
+| delete collection | ✅ | ❌ | ❌ | ❌ | ❌ |
-import java.util.List;
+| update collection params | ✅ | ❌ | ❌ | ❌ | ❌ |
-import java.util.Map;
+| get collection cluster info | ✅ | ✅ | ✅ | ✅ | ❌ |
+| collection exists | ✅ | ✅ | ✅ | ✅ | ✅ |
+| update collection cluster setup | ✅ | ❌ | ❌ | ❌ | ❌ |
-import static io.qdrant.client.PointIdFactory.id;
+| update aliases | ✅ | ❌ | ❌ | ❌ | ❌ |
-import static io.qdrant.client.VectorFactory.vector;
+| list collection aliases | ✅ | ✅ | 🟡 | 🟡 | 🟡 |
-import static io.qdrant.client.VectorsFactory.namedVectors;
+| list aliases | ✅ | ✅ | 🟡 | 🟡 | 🟡 |
+| create shard key | ✅ | ❌ | ❌ | ❌ | ❌ |
+| delete shard key | ✅ | ❌ | ❌ | ❌ | ❌ |
-import io.qdrant.client.grpc.Points.PointStruct;
+| create payload index | ✅ | ❌ | ✅ | ❌ | ❌ |
+| delete payload index | ✅ | ❌ | ✅ | ❌ | ❌ |
+| list collection snapshots | ✅ | ✅ | ✅ | ✅ | ❌ |
-client
+| create collection snapshot | ✅ | ❌ | ✅ | ❌ | ❌ |
- .upsertAsync(
+| delete collection snapshot | ✅ | ❌ | ✅ | ❌ | ❌ |
- ""{collection_name}"",
+| download collection snapshot | ✅ | ✅ | ✅ | ✅ | ❌ |
- List.of(
+| upload collection snapshot | ✅ | ❌ | ❌ | ❌ | ❌ |
- PointStruct.newBuilder()
+| recover collection snapshot | ✅ | ❌ | ❌ | ❌ | ❌ |
- .setId(id(1))
+| list shard snapshots | ✅ | ✅ | ✅ | ✅ | ❌ |
- .setVectors(
+| create shard snapshot | ✅ | ❌ | ✅ | ❌ | ❌ |
- namedVectors(
+| delete shard snapshot | ✅ | ❌ | ✅ | ❌ | ❌ |
- Map.of(
+| download shard snapshot | ✅ | ✅ | ✅ | ✅ | ❌ |
- ""image"",
+| upload shard snapshot | ✅ | ❌ | ❌ | ❌ | ❌ |
- vector(List.of(0.9f, 0.1f, 0.1f, 0.2f)),
+| recover shard snapshot | ✅ | ❌ | ❌ | ❌ | ❌ |
- ""text"",
+| list full snapshots | ✅ | ✅ | ❌ | ❌ | ❌ |
- vector(List.of(0.4f, 0.7f, 0.1f, 0.8f, 0.1f, 0.1f, 0.9f, 0.2f)))))
+| create full snapshot | ✅ | ❌ | ❌ | ❌ | ❌ |
- .build(),
+| delete full snapshot | ✅ | ❌ | ❌ | ❌ | ❌ |
- PointStruct.newBuilder()
+| download full snapshot | ✅ | ✅ | ❌ | ❌ | ❌ |
- .setId(id(2))
+| get cluster info | ✅ | ✅ | ❌ | ❌ | ❌ |
- .setVectors(
+| recover raft state | ✅ | ❌ | ❌ | ❌ | ❌ |
- namedVectors(
+| delete peer | ✅ | ❌ | ❌ | ❌ | ❌ |
- Map.of(
+| get point | ✅ | ✅ | ✅ | ✅ | ❌ |
- ""image"",
+| get points | ✅ | ✅ | ✅ | ✅ | ❌ |
- List.of(0.2f, 0.1f, 0.3f, 0.9f),
+| upsert points | ✅ | ❌ | ✅ | ❌ | ❌ |
- ""text"",
+| update points batch | ✅ | ❌ | ✅ | ❌ | ❌ |
- List.of(0.5f, 0.2f, 0.7f, 0.4f, 0.7f, 0.2f, 0.3f, 0.9f))))
+| delete points | ✅ | ❌ | ✅ | ❌ | ❌ / 🟡 |
- .build()))
+| update vectors | ✅ | ❌ | ✅ | ❌ | ❌ |
- .get();
+| delete vectors | ✅ | ❌ | ✅ | ❌ | ❌ / 🟡 |
-```
+| set payload | ✅ | ❌ | ✅ | ❌ | ❌ |
+| overwrite payload | ✅ | ❌ | ✅ | ❌ | ❌ |
+| delete payload | ✅ | ❌ | ✅ | ❌ | ❌ |
-```csharp
+| clear payload | ✅ | ❌ | ✅ | ❌ | ❌ |
-using Qdrant.Client;
+| scroll points | ✅ | ✅ | ✅ | ✅ | 🟡 |
-using Qdrant.Client.Grpc;
+| query points | ✅ | ✅ | ✅ | ✅ | 🟡 |
+| search points | ✅ | ✅ | ✅ | ✅ | 🟡 |
+| search groups | ✅ | ✅ | ✅ | ✅ | 🟡 |
-var client = new QdrantClient(""localhost"", 6334);
+| recommend points | ✅ | ✅ | ✅ | ✅ | ❌ |
+| recommend groups | ✅ | ✅ | ✅ | ✅ | ❌ |
+| discover points | ✅ | ✅ | ✅ | ✅ | ❌ |
-await client.UpsertAsync(
+| count points | ✅ | ✅ | ✅ | ✅ | 🟡 |
- collectionName: ""{collection_name}"",
+| version | ✅ | ✅ | ✅ | ✅ | ✅ |
- points: new List
+| readyz, healthz, livez | ✅ | ✅ | ✅ | ✅ | ✅ |
- {
+| telemetry | ✅ | ✅ | ❌ | ❌ | ❌ |
- new()
+| metrics | ✅ | ✅ | ❌ | ❌ | ❌ |
- {
+| update locks | ✅ | ❌ | ❌ | ❌ | ❌ |
- Id = 1,
+| get locks | ✅ | ✅ | ❌ | ❌ | ❌ |
- Vectors = new Dictionary
- {
- [""image""] = [0.9f, 0.1f, 0.1f, 0.2f],
+## TLS
- [""text""] = [0.4f, 0.7f, 0.1f, 0.8f, 0.1f, 0.1f, 0.9f, 0.2f]
- }
- },
+*Available as of v1.2.0*
- new()
- {
- Id = 2,
+TLS can be enabled on your Qdrant instance to encrypt and secure
- Vectors = new Dictionary
+connections.
- {
- [""image""] = [0.2f, 0.1f, 0.3f, 0.9f],
- [""text""] = [0.5f, 0.2f, 0.7f, 0.4f, 0.7f, 0.2f, 0.3f, 0.9f]
+
- }
- }
- }
+First make sure you have a certificate and private key for TLS, usually in
-);
+`.pem` format. On your local machine you may use
-```
+[mkcert](https://github.com/FiloSottile/mkcert#readme) to generate a self-signed
+certificate.
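+
+As a minimal sketch, assuming mkcert is installed, a locally trusted certificate and key for development could be generated like this (output file names depend on the arguments you pass):
+
+```bash
+# Install the local CA once
+mkcert -install
+# Generate a certificate and key for localhost
+mkcert localhost 127.0.0.1
+```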
-*Available as of v1.2.0*
+To enable TLS, set the following properties in the Qdrant configuration with the
+correct paths and restart:
-Named vectors are optional. When uploading points, some vectors may be omitted.
-For example, you can upload one point with only the `image` vector and a second
-one with only the `text` vector.
+```yaml
+service:
+ # Enable HTTPS for the REST and gRPC API
-When uploading a point with an existing ID, the existing point is deleted first,
+ enable_tls: true
-then it is inserted with just the specified vectors. In other words, the entire
-point is replaced, and any unspecified vectors are set to null. To keep existing
-vectors unchanged and only update specified vectors, see [update vectors](#update-vectors).
+# TLS configuration.
+# Required if either service.enable_tls or cluster.p2p.enable_tls is true.
+tls:
-*Available as of v1.7.0*
+ # Server certificate chain file
+ cert: ./tls/cert.pem
-Points can contain dense and sparse vectors.
+ # Server private key file
+ key: ./tls/key.pem
-A sparse vector is an array in which most of the elements have a value of zero.
+```
-It is possible to take advantage of this property to have an optimized representation, for this reason they have a different shape than dense vectors.
+For internal communication when running in cluster mode, TLS can be enabled with:
-They are represented as a list of `(index, value)` pairs, where `index` is an integer and `value` is a floating point number. The `index` is the position of the non-zero value in the vector. The `values` is the value of the non-zero element.
+```yaml
+cluster:
+ # Configuration of the inter-cluster communication
-For example, the following vector:
+ p2p:
+ # Use TLS for communication between peers
+ enable_tls: true
```
-[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 0.0, 0.0]
-```
+With TLS enabled, you must start using HTTPS connections. For example:
-can be represented as a sparse vector:
+```bash
+curl -X GET https://localhost:6333
```
-[(6, 1.0), (7, 2.0)]
-```
+```python
+from qdrant_client import QdrantClient
-Qdrant uses the following JSON representation throughout its APIs.
+client = QdrantClient(
-```json
+ url=""https://localhost:6333"",
-{
+)
- ""indices"": [6, 7],
+```
- ""values"": [1.0, 2.0]
-}
-```
+```typescript
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-The `indices` and `values` arrays must have the same length.
-And the `indices` must be unique.
+const client = new QdrantClient({ url: ""https://localhost"", port: 6333 });
+```
-If the `indices` are not sorted, Qdrant will sort them internally so you may not rely on the order of the elements.
+```rust
+use qdrant_client::Qdrant;
-Sparse vectors must be named and can be uploaded in the same way as dense vectors.
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-```http
+```
-PUT /collections/{collection_name}/points
-{
- ""points"": [
+Certificate rotation is enabled with a default refresh time of one hour. This
- {
+reloads certificate files every hour while Qdrant is running. This way changed
- ""id"": 1,
+certificates are picked up when they get updated externally. The refresh time
- ""vector"": {
+can be tuned by changing the `tls.cert_ttl` setting. You can leave this on, even
- ""text"": {
+if you don't plan to update your certificates. Currently this is only supported
- ""indices"": [6, 7],
+for the REST API.
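+
+As a hedged sketch, the refresh interval can be set explicitly in the TLS section of the configuration (the value is assumed to be in seconds; one hour shown):
+
+```yaml
+tls:
+  cert: ./tls/cert.pem
+  key: ./tls/key.pem
+  # Refresh interval for reloading the certificate files
+  cert_ttl: 3600
+```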
- ""values"": [1.0, 2.0]
- }
- }
+Optionally, you can enable client certificate validation on the server against a
- },
+local certificate authority. Set the following properties and restart:
- {
- ""id"": 2,
- ""vector"": {
+```yaml
- ""text"": {
+service:
- ""indices"": [1, 1, 2, 3, 4, 5],
+ # Check user HTTPS client certificate against CA file specified in tls config
- ""values"": [0.1, 0.2, 0.3, 0.4, 0.5]
+ verify_https_client_certificate: true
- }
- }
- }
+# TLS configuration.
- ]
+# Required if either service.enable_tls or cluster.p2p.enable_tls is true.
-}
+tls:
-```
+ # Certificate authority certificate file.
+ # This certificate will be used to validate the certificates
+ # presented by other nodes during inter-cluster communication.
-```python
+ #
-client.upsert(
+ # If verify_https_client_certificate is true, it will verify
- collection_name=""{collection_name}"",
+ # HTTPS client certificate
- points=[
+ #
- models.PointStruct(
+ # Required if cluster.p2p.enable_tls is true.
- id=1,
+ ca_cert: ./tls/cacert.pem
- vector={
+```
- ""text"": models.SparseVector(
- indices=[6, 7],
- values=[1.0, 2.0],
+## Hardening
- )
- },
- ),
+We recommend reducing the permissions granted to Qdrant containers to lower the risk of exploitation. Here are some ways to reduce the permissions of a Qdrant container:
- models.PointStruct(
- id=2,
- vector={
+* Run Qdrant as a non-root user. This can help mitigate the risk of future container breakout vulnerabilities. Qdrant does not need the privileges of the root user for any purpose.
- ""text"": models.SparseVector(
+ - You can use the image `qdrant/qdrant:-unprivileged` instead of the default Qdrant image.
- indices=[1, 2, 3, 4, 5],
+ - You can use the flag `--user=1000:2000` when running [`docker run`](https://docs.docker.com/reference/cli/docker/container/run/).
- values= [0.1, 0.2, 0.3, 0.4, 0.5],
+ - You can set [`user: 1000`](https://docs.docker.com/compose/compose-file/05-services/#user) when using Docker Compose.
- )
+ - You can set [`runAsUser: 1000`](https://kubernetes.io/docs/tasks/configure-pod-container/security-context) when running in Kubernetes (our [Helm chart](https://github.com/qdrant/qdrant-helm) does this by default).
- },
- ),
- ],
+* Run Qdrant with a read-only root filesystem. This can help mitigate vulnerabilities that require the ability to modify system files, which is a permission Qdrant does not need. As long as the container uses mounted volumes for storage (`/qdrant/storage` and `/qdrant/snapshots` by default), Qdrant can continue to operate while being prevented from writing data outside of those volumes.
-)
+ - You can use the flag `--read-only` when running [`docker run`](https://docs.docker.com/reference/cli/docker/container/run/).
-```
+ - You can set [`read_only: true`](https://docs.docker.com/compose/compose-file/05-services/#read_only) when using Docker Compose.
+ - You can set [`readOnlyRootFilesystem: true`](https://kubernetes.io/docs/tasks/configure-pod-container/security-context) when running in Kubernetes (our [Helm chart](https://github.com/qdrant/qdrant-helm) does this by default).
-```typescript
-client.upsert(""{collection_name}"", {
+* Block Qdrant's external network access. This can help mitigate [server side request forgery attacks](https://owasp.org/www-community/attacks/Server_Side_Request_Forgery), like via the [snapshot recovery API](https://api.qdrant.tech/api-reference/snapshots/recover-from-snapshot). Single-node Qdrant clusters do not require any outbound network access. Multi-node Qdrant clusters only need the ability to connect to other Qdrant nodes via TCP ports 6333, 6334, and 6335.
- points: [
+ - You can use [`docker network create --internal `](https://docs.docker.com/reference/cli/docker/network/create/#internal) and use that network when running [`docker run --network `](https://docs.docker.com/reference/cli/docker/container/run/#network).
- {
+ - You can create an [internal network](https://docs.docker.com/compose/compose-file/06-networks/#internal) when using Docker Compose.
- id: 1,
+ - You can create a [NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) when using Kubernetes. Note that multi-node Qdrant clusters [will also need access to cluster DNS in Kubernetes](https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/11-deny-egress-traffic-from-an-application.md#allowing-dns-traffic).
- vector: {
- text: {
- indices: [6, 7],
+There are other techniques for reducing permissions, such as dropping [Linux capabilities](https://www.man7.org/linux/man-pages/man7/capabilities.7.html) depending on your deployment method, but the methods mentioned above are the most important.
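+
+As a minimal sketch, assuming Docker with local bind mounts for storage and snapshots, the container-level recommendations above can be combined in a single `docker run` invocation (volume paths and the user/group IDs are placeholders):
+
+```bash
+docker run -d \
+  --user=1000:2000 \
+  --read-only \
+  -v $(pwd)/qdrant_storage:/qdrant/storage \
+  -v $(pwd)/qdrant_snapshots:/qdrant/snapshots \
+  qdrant/qdrant
+```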
+",documentation/guides/security.md
+"---
- values: [1.0, 2.0]
+title: Private RAG Information Extraction Engine
- },
+weight: 32
- },
+social_preview_image: /blog/hybrid-cloud-vultr/hybrid-cloud-vultr-tutorial.png
- },
+aliases:
- {
+ - /documentation/tutorials/rag-chatbot-vultr-dspy-ollama/
- id: 2,
+---
- vector: {
- text: {
- indices=[1, 2, 3, 4, 5],
+# Private RAG Information Extraction Engine
- values= [0.1, 0.2, 0.3, 0.4, 0.5],
- },
- },
+| Time: 90 min | Level: Advanced | | |
- },
+|--------------|-----------------|--|----|
- ],
-});
-```
+Handling private documents is a common task in many industries. Various businesses possess a large amount of
+unstructured data stored as huge files that must be processed and analyzed. Industry reports, financial analysis, legal
+documents, and many other documents are stored in PDF, Word, and other formats. Conversational chatbots built on top of
-```rust
+RAG pipelines are one of the viable solutions for finding the relevant answers in such documents. However, if we want to
-use qdrant_client::qdrant::{PointStruct, Vector};
+extract structured information from these documents and pass it to downstream systems, we need to use a different
-use std::collections::HashMap;
+approach.
-client
+Information extraction is the process of turning unstructured data into a format that can be easily processed by
- .upsert_points_blocking(
+machines. In this tutorial, we will show you how to use [DSPy](https://dspy-docs.vercel.app/) to perform that process on
- ""{collection_name}"".to_string(),
+a set of documents. Assuming we cannot send our data to an external service, we will use [Ollama](https://ollama.com/)
- vec![
+to run our own LLM model on our premises, using [Vultr](https://www.vultr.com/) as a cloud provider. Qdrant, acting in
- PointStruct::new(
+this setup as a knowledge base providing the relevant pieces of documents for a given query, will also be hosted in the
- 1,
+Hybrid Cloud mode on Vultr. The last missing piece, the DSPy application will be also running in the same environment.
- HashMap::from([
+If you work in a regulated industry, or just need to keep your data private, this tutorial is for you.
- (
- ""text"".to_string(),
- Vector::from(
+![Architecture diagram](/documentation/examples/information-extraction-ollama-vultr/architecture-diagram.png)
- (vec![6, 7], vec![1.0, 2.0])
- ),
- ),
+## Deploying Qdrant Hybrid Cloud on Vultr
- ]),
- HashMap::new().into(),
- ),
+All the services we are going to use in this tutorial will be running on [Vultr Kubernetes
- PointStruct::new(
+Engine](https://www.vultr.com/kubernetes/). That gives us a lot of flexibility in terms of scaling and managing the resources. Vultr manages the control plane and worker nodes and provides integration with other managed services such as Load Balancers, Block Storage, and DNS.
- 2,
- HashMap::from([
- (
+1. To start using managed Kubernetes on Vultr, follow the [platform-specific documentation](/documentation/hybrid-cloud/platform-deployment-options/#vultr).
- ""text"".to_string(),
+2. Once your Kubernetes clusters are up, [you can begin deploying Qdrant Hybrid Cloud](/documentation/hybrid-cloud/).
- Vector::from(
- (vec![1, 2, 3, 4, 5], vec![0.1, 0.2, 0.3, 0.4, 0.5])
- ),
+### Installing the necessary packages
- ),
- ]),
- HashMap::new().into(),
+We are going to need a couple of Python packages to run our application. They can be installed together with the
- ),
+`dspy-ai` package and `qdrant` extra:
- ],
- None,
- )
+```shell
- .await?;
+pip install dspy-ai[qdrant]
```
-```java
+### Qdrant Hybrid Cloud
-import java.util.List;
-import java.util.Map;
+Our [documentation](/documentation/hybrid-cloud/) contains a comprehensive guide on how to set up Qdrant in the Hybrid Cloud mode on Vultr. Please follow it carefully to get your Qdrant instance up and running. Once it's done, we need to store the Qdrant URL and the API key in the environment variables. You can do it by running the following commands:
-import static io.qdrant.client.PointIdFactory.id;
-import static io.qdrant.client.VectorFactory.vector;
+```shell
+export QDRANT_URL=""https://qdrant.example.com""
+export QDRANT_API_KEY=""your-api-key""
-import io.qdrant.client.grpc.Points.NamedVectors;
+```
-import io.qdrant.client.grpc.Points.PointStruct;
-import io.qdrant.client.grpc.Points.Vectors;
+```python
+import os
-client
- .upsertAsync(
- ""{collection_name}"",
+os.environ[""QDRANT_URL""] = ""https://qdrant.example.com""
- List.of(
+os.environ[""QDRANT_API_KEY""] = ""your-api-key""
- PointStruct.newBuilder()
+```
- .setId(id(1))
- .setVectors(
- Vectors.newBuilder()
+DSPy is the framework we are going to use. It is already integrated with Qdrant, but it assumes you use
- .setVectors(
+[FastEmbed](https://qdrant.github.io/fastembed/) to create the embeddings. DSPy does not provide a way to index the
- NamedVectors.newBuilder()
+data, but leaves this task to the user. We are going to create a collection on our own, and fill it with the embeddings
- .putAllVectors(
+of our document chunks.
- Map.of(
- ""text"", vector(List.of(1.0f, 2.0f), List.of(6, 7))))
- .build())
+#### Data indexing
- .build())
- .build(),
- PointStruct.newBuilder()
+FastEmbed uses `BAAI/bge-small-en` as its default embedding model. We are going to use it as well. Our collection
- .setId(id(2))
+will be created automatically if we call the `.add` method on an existing `QdrantClient` instance. In this tutorial we
- .setVectors(
+are not going to focus much on the document parsing, as there are plenty of tools that can help with that. The
- Vectors.newBuilder()
+[`unstructured`](https://github.com/Unstructured-IO/unstructured) library is one of the options you can launch on your
- .setVectors(
+infrastructure. In our simplified example, we are going to use a list of strings as our documents. These are the
- NamedVectors.newBuilder()
+descriptions of made-up technical events. Each of them contains the name of the event along with the location
- .putAllVectors(
+and start and end dates.
- Map.of(
- ""text"",
- vector(
+```python
- List.of(0.1f, 0.2f, 0.3f, 0.4f, 0.5f),
+documents = [
- List.of(1, 2, 3, 4, 5))))
+ ""Taking place in San Francisco, USA, from the 10th to the 12th of June, 2024, the Global Developers Conference is the annual gathering spot for developers worldwide, offering insights into software engineering, web development, and mobile applications."",
- .build())
+ ""The AI Innovations Summit, scheduled for 15-17 September 2024 in London, UK, aims at professionals and researchers advancing artificial intelligence and machine learning."",
- .build())
+ ""Berlin, Germany will host the CyberSecurity World Conference between November 5th and 7th, 2024, serving as a key forum for cybersecurity professionals to exchange strategies and research on threat detection and mitigation."",
- .build()))
+ ""Data Science Connect in New York City, USA, occurring from August 22nd to 24th, 2024, connects data scientists, analysts, and engineers to discuss data science's innovative methodologies, tools, and applications."",
- .get();
+ ""Set for July 14-16, 2024, in Tokyo, Japan, the Frontend Developers Fest invites developers to delve into the future of UI/UX design, web performance, and modern JavaScript frameworks."",
-```
+ ""The Blockchain Expo Global, happening May 20-22, 2024, in Dubai, UAE, focuses on blockchain technology's applications, opportunities, and challenges for entrepreneurs, developers, and investors."",
+ ""Singapore's Cloud Computing Summit, scheduled for October 3-5, 2024, is where IT professionals and cloud experts will convene to discuss strategies, architectures, and cloud solutions."",
+ ""The IoT World Forum, taking place in Barcelona, Spain from December 1st to 3rd, 2024, is the premier conference for those focused on the Internet of Things, from smart cities to IoT security."",
-```csharp
+ ""Los Angeles, USA, will become the hub for game developers, designers, and enthusiasts at the Game Developers Arcade, running from April 18th to 20th, 2024, to showcase new games and discuss development tools."",
-using Qdrant.Client;
+ ""The TechWomen Summit in Sydney, Australia, from March 8-10, 2024, aims to empower women in tech with workshops, keynotes, and networking opportunities."",
-using Qdrant.Client.Grpc;
+ ""Seoul, South Korea's Mobile Tech Conference, happening from September 29th to October 1st, 2024, will explore the future of mobile technology, including 5G networks and app development trends."",
+ ""The Open Source Summit, to be held in Helsinki, Finland from August 11th to 13th, 2024, celebrates open source technologies and communities, offering insights into the latest software and collaboration techniques."",
+ ""Vancouver, Canada will play host to the VR/AR Innovation Conference from June 20th to 22nd, 2024, focusing on the latest in virtual and augmented reality technologies."",
-var client = new QdrantClient(""localhost"", 6334);
+ ""Scheduled for May 5-7, 2024, in London, UK, the Fintech Leaders Forum brings together experts to discuss the future of finance, including innovations in blockchain, digital currencies, and payment technologies."",
+ ""The Digital Marketing Summit, set for April 25-27, 2024, in New York City, USA, is designed for marketing professionals and strategists to discuss digital marketing and social media trends."",
+ ""EcoTech Symposium in Paris, France, unfolds over 2024-10-09 to 2024-10-11, spotlighting sustainable technologies and green innovations for environmental scientists, tech entrepreneurs, and policy makers."",
-await client.UpsertAsync(
+ ""Set in Tokyo, Japan, from 16th to 18th May '24, the Robotic Innovations Conference showcases automation, robotics, and AI-driven solutions, appealing to enthusiasts and engineers."",
- collectionName: ""{collection_name}"",
+ ""The Software Architecture World Forum in Dublin, Ireland, occurring 22-24 Sept 2024, gathers software architects and IT managers to discuss modern architecture patterns."",
- points: new List
+ ""Quantum Computing Summit, convening in Silicon Valley, USA from 2024/11/12 to 2024/11/14, is a rendezvous for exploring quantum computing advancements with physicists and technologists."",
- {
+ ""From March 3 to 5, 2024, the Global EdTech Conference in London, UK, discusses the intersection of education and technology, featuring e-learning and digital classrooms."",
- new()
+ ""Bangalore, India's NextGen DevOps Days, from 28 to 30 August 2024, is a hotspot for IT professionals keen on the latest DevOps tools and innovations."",
- {
+ ""The UX/UI Design Conference, slated for April 21-23, 2024, in New York City, USA, invites discussions on the latest in user experience and interface design among designers and developers."",
- Id = 1,
+ ""Big Data Analytics Summit, taking place 2024 July 10-12 in Amsterdam, Netherlands, brings together data professionals to delve into big data analysis and insights."",
- Vectors = new Dictionary { [""text""] = ([1.0f, 2.0f], [6, 7]) }
+ ""Toronto, Canada, will see the HealthTech Innovation Forum from June 8 to 10, '24, focusing on technology's impact on healthcare with professionals and innovators."",
- },
+ ""Blockchain for Business Summit, happening in Singapore from 2024-05-02 to 2024-05-04, focuses on blockchain's business applications, from finance to supply chain."",
- new()
+ ""Las Vegas, USA hosts the Global Gaming Expo from October 18th to 20th, 2024, a premiere event for game developers, publishers, and enthusiasts."",
- {
+ ""The Renewable Energy Tech Conference in Copenhagen, Denmark, from 2024/09/05 to 2024/09/07, discusses renewable energy innovations and policies."",
- Id = 2,
+ ""Set for 2024 Apr 9-11 in Boston, USA, the Artificial Intelligence in Healthcare Summit gathers healthcare professionals to discuss AI's healthcare applications."",
- Vectors = new Dictionary
+ ""Nordic Software Engineers Conference, happening in Stockholm, Sweden from June 15 to 17, 2024, focuses on software development in the Nordic region."",
- {
+ ""The International Space Exploration Symposium, scheduled in Houston, USA from 2024-08-05 to 2024-08-07, invites discussions on space exploration technologies and missions.""
- [""text""] = ([0.1f, 0.2f, 0.3f, 0.4f, 0.5f], [1, 2, 3, 4, 5])
+]
- }
+```
- }
- }
-);
+We'll be able to ask general questions, for example, about topics we are interested in or events happening in a specific
-```
+location, but expect the results to be returned in a structured format.
-## Modify points
+![An example of extracted information](/documentation/examples/information-extraction-ollama-vultr/extracted-information.png)
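+
+Before indexing, we need a `QdrantClient` instance pointing at our Hybrid Cloud cluster. A minimal sketch, reusing the environment variables defined earlier:
+
+```python
+import os
+
+from qdrant_client import QdrantClient
+
+client = QdrantClient(
+    url=os.environ.get(""QDRANT_URL""),
+    api_key=os.environ.get(""QDRANT_API_KEY""),
+)
+```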
-To change a point, you can modify its vectors or its payload. There are several
+Indexing in Qdrant is a single call if we have the documents defined:
-ways to do this.
+```python
-### Update vectors
+client.add(
+ collection_name=""document-parts"",
+ documents=documents,
-*Available as of v1.2.0*
+ metadata=[{""document"": document} for document in documents],
+)
+```
-This method updates the specified vectors on the given points. Unspecified
-vectors are kept unchanged. All given points must exist.
+Our collection is ready to be queried. We can now move to the next step, which is setting up the Ollama model.
-REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/update_vectors)):
+### Ollama on Vultr
-```http
-PUT /collections/{collection_name}/points/vectors
+Ollama is a great tool for running LLM models on your own infrastructure. It's designed to be lightweight and easy
-{
+to use, and [an official Docker image](https://hub.docker.com/r/ollama/ollama) is available. We can use it to run Ollama
- ""points"": [
+on our Vultr Kubernetes cluster. In the case of LLMs, we may have some special requirements, like a GPU, and Vultr provides
- {
+the [Vultr Kubernetes Engine for Cloud GPU](https://www.vultr.com/products/cloud-gpu/) so the model can be run on a
- ""id"": 1,
+specialized machine. Please refer to the official documentation to get Ollama up and running within your environment.
- ""vector"": {
+Once it's done, we need to store the Ollama URL in an environment variable:
- ""image"": [0.1, 0.2, 0.3, 0.4]
- }
- },
+```shell
- {
+export OLLAMA_URL=""https://ollama.example.com""
- ""id"": 2,
+```
- ""vector"": {
- ""text"": [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
- }
+```python
- }
+os.environ[""OLLAMA_URL""] = ""https://ollama.example.com""
- ]
+```
-}
-```
+We will refer to this URL later on when configuring the Ollama model in our application.
-```python
-client.update_vectors(
+#### Setting up the Large Language Model
- collection_name=""{collection_name}"",
- points=[
- models.PointVectors(
+We are going to use one of the lightweight LLMs available in Ollama, the `gemma:2b` model. It was developed by the Google
- id=1,
+DeepMind team and has 3B parameters. The [Ollama version](https://ollama.com/library/gemma:2b) uses 4-bit quantization.
- vector={
+Installing the model is as simple as running the following command on the machine where Ollama is running:
- ""image"": [0.1, 0.2, 0.3, 0.4],
- },
- ),
+```shell
- models.PointVectors(
+ollama run gemma:2b
- id=2,
+```
- vector={
- ""text"": [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2],
- },
+Ollama models are also integrated with DSPy, so we can use them directly in our application.
- ),
- ],
-)
+## Implementing the information extraction pipeline
-```
+DSPy is a bit different from other LLM frameworks. It's designed to optimize the prompts and weights of LMs in a
-```typescript
+pipeline. It's a bit like a compiler for LMs: you write a pipeline in a high-level language, and DSPy generates the
-client.updateVectors(""{collection_name}"", {
+prompts and weights for you. This means you can build complex systems without having to worry about the details of how
- points: [
+to prompt your LMs, as DSPy will do that for you. It is somewhat similar to PyTorch, but for LLMs.
- {
- id: 1,
- vector: {
+First of all, we will define the Language Model we are going to use:
- image: [0.1, 0.2, 0.3, 0.4],
- },
- },
+```python
- {
+import dspy
- id: 2,
- vector: {
- text: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2],
+gemma_model = dspy.OllamaLocal(
- },
+ model=""gemma:2b"",
- },
+ base_url=os.environ.get(""OLLAMA_URL""),
- ],
+ max_tokens=500,
-});
+)
```
-```rust
+Similarly, we have to define the connection to our Qdrant Hybrid Cloud cluster:
-use qdrant_client::qdrant::PointVectors;
-use std::collections::HashMap;
+```python
+from dspy.retrieve.qdrant_rm import QdrantRM
-client
+from qdrant_client import QdrantClient, models
- .update_vectors_blocking(
- ""{collection_name}"",
- None,
+client = QdrantClient(
- &[
+ os.environ.get(""QDRANT_URL""),
- PointVectors {
+ api_key=os.environ.get(""QDRANT_API_KEY""),
- id: Some(1.into()),
+)
- vectors: Some(
+qdrant_retriever = QdrantRM(
- HashMap::from([(""image"".to_string(), vec![0.1, 0.2, 0.3, 0.4])]).into(),
+ qdrant_collection_name=""document-parts"",
- ),
+ qdrant_client=client,
- },
+)
- PointVectors {
+```
- id: Some(2.into()),
- vectors: Some(
- HashMap::from([(
+Finally, both components have to be configured in DSPy with a single call to `dspy.configure`:
- ""text"".to_string(),
- vec![0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2],
- )])
+```python
- .into(),
+dspy.configure(lm=gemma_model, rm=qdrant_retriever)
- ),
+```
- },
- ],
- None,
+### Application logic
- )
- .await?;
-```
+DSPy has a concept of signatures, which define the input and output formats of a pipeline. We are going to define a simple
+signature for the event:
-```java
-import java.util.List;
+```python
-import java.util.Map;
+class Event(dspy.Signature):
+ description = dspy.InputField(
+ desc=""Textual description of the event, including name, location and dates""
-import static io.qdrant.client.PointIdFactory.id;
+ )
-import static io.qdrant.client.VectorFactory.vector;
+ event_name = dspy.OutputField(desc=""Name of the event"")
-import static io.qdrant.client.VectorsFactory.namedVectors;
+ location = dspy.OutputField(desc=""Location of the event"")
+ start_date = dspy.OutputField(desc=""Start date of the event, YYYY-MM-DD"")
+ end_date = dspy.OutputField(desc=""End date of the event, YYYY-MM-DD"")
-client
+```
- .updateVectorsAsync(
- ""{collection_name}"",
- List.of(
+It is designed to derive the structured information from the textual description of the event. Now, we can build our
- PointVectors.newBuilder()
+module that will use it, along with Qdrant and the Ollama model. Let's call it `EventExtractor`:
- .setId(id(1))
- .setVectors(namedVectors(Map.of(""image"", vector(List.of(0.1f, 0.2f, 0.3f, 0.4f)))))
- .build(),
+```python
- PointVectors.newBuilder()
+class EventExtractor(dspy.Module):
- .setId(id(2))
- .setVectors(
- namedVectors(
+ def __init__(self):
- Map.of(
+ super().__init__()
- ""text"", vector(List.of(0.9f, 0.8f, 0.7f, 0.6f, 0.5f, 0.4f, 0.3f, 0.2f)))))
+ # Retrieve module to get relevant documents
- .build()))
+ self.retriever = dspy.Retrieve(k=3)
- .get();
+ # Predict module for the created signature
-```
+ self.predict = dspy.Predict(Event)
-```csharp
+ def forward(self, query: str):
-using Qdrant.Client;
+ # Retrieve the most relevant documents
-using Qdrant.Client.Grpc;
+ results = self.retriever.forward(query)
-var client = new QdrantClient(""localhost"", 6334);
+ # Try to extract events from the retrieved documents
+ events = []
+ for document in results.passages:
-await client.UpdateVectorsAsync(
+ event = self.predict(description=document)
- collectionName: ""{collection_name}"",
+ events.append(event)
- points: new List
- {
- new() { Id = 1, Vectors = (""image"", new float[] { 0.1f, 0.2f, 0.3f, 0.4f }) },
+ return events
- new()
+```
- {
- Id = 2,
- Vectors = (""text"", new float[] { 0.9f, 0.8f, 0.7f, 0.6f, 0.5f, 0.4f, 0.3f, 0.2f })
+The logic is simple: we retrieve the most relevant documents from Qdrant, and then try to extract the structured
- }
+information from them using the `Event` signature. We can simply call it and see the results:
- }
-);
+
+```python
+
+extractor = EventExtractor()
+
+extractor.forward(""Blockchain events close to Europe"")
```
-To update points and replace all of its vectors, see [uploading
+Output:
+
-points](#upload-points).
+```python
+[
-### Delete vectors
+ Prediction(
+ event_name='Event Name: Blockchain Expo Global',
+ location='Dubai, UAE',
-*Available as of v1.2.0*
+ start_date='2024-05-20',
+ end_date='2024-05-22'
+ ),
-This method deletes just the specified vectors from the given points. Other
+ Prediction(
-vectors are kept unchanged. Points are never deleted.
+ event_name='Event Name: Blockchain for Business Summit',
+ location='Singapore',
+ start_date='2024-05-02',
-REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/deleted_vectors)):
+ end_date='2024-05-04'
+ ),
+ Prediction(
-```http
+ event_name='Event Name: Open Source Summit',
-POST /collections/{collection_name}/points/vectors/delete
+ location='Helsinki, Finland',
-{
+ start_date='2024-08-11',
- ""points"": [0, 3, 100],
+ end_date='2024-08-13'
- ""vectors"": [""text"", ""image""]
+ )
-}
+]
```
-```python
-
-client.delete_vectors(
+The task was solved successfully, even without any optimization. However, each of the events has the ""Event Name: ""
- collection_name=""{collection_name}"",
+prefix that we might want to remove. DSPy allows optimizing the module, so we can improve the results. Optimization
- points_selector=models.PointIdsList(
+might be done in different ways, and it's [well covered in the DSPy
- points=[0, 3, 100],
+documentation](https://dspy-docs.vercel.app/docs/building-blocks/optimizers).
- ),
- vectors=[""text"", ""image""],
-)
+We are not going to go through the optimization process in this tutorial. However, we encourage you to experiment with
-```
+it, as it might significantly improve the performance of your pipeline.
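+
+If you want a starting point for such an experiment, below is a rough sketch of what a call to one of DSPy's built-in
+optimizers might look like. The tiny training set and the metric are purely illustrative assumptions, not part of the
+original pipeline:
+
+```python
+from dspy.teleprompt import BootstrapFewShot
+
+# A handful of example queries; in practice you would label more of them
+trainset = [
+    dspy.Example(query=""Blockchain events close to Europe"").with_inputs(""query""),
+    dspy.Example(query=""Open source conferences in Europe"").with_inputs(""query""),
+]
+
+def clean_event_names(example, predictions, trace=None):
+    # Accept a run only if none of the extracted names repeat the prefix
+    return all(not p.event_name.startswith(""Event Name:"") for p in predictions)
+
+optimizer = BootstrapFewShot(metric=clean_event_names)
+optimized_extractor = optimizer.compile(EventExtractor(), trainset=trainset)
+```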
-```typescript
+The created module can easily be saved to a specific path and loaded later on:
-client.deleteVectors(""{collection_name}"", {
- points: [0, 3, 10],
- vectors: [""text"", ""image""],
+```python
-});
+extractor.save(""event_extractor"")
```
-```rust
+To load, just create an instance of the module and call the `load` method:
-use qdrant_client::qdrant::{
- points_selector::PointsSelectorOneOf, PointsIdsList, PointsSelector, VectorsSelector,
-};
+```python
+second_extractor = EventExtractor()
+second_extractor.load(""event_extractor"")
-client
+```
- .delete_vectors_blocking(
- ""{collection_name}"",
- None,
+This is especially useful when you optimize the module, as the optimized version might be stored and loaded later on
- &PointsSelector {
+without redoing the optimization process each time you run the application.
- points_selector_one_of: Some(PointsSelectorOneOf::Points(PointsIdsList {
- ids: vec![0.into(), 3.into(), 10.into()],
- })),
+### Deploying the extraction pipeline
- },
- &VectorsSelector {
- names: vec![""text"".into(), ""image"".into()],
+Vultr gives us a lot of flexibility in terms of deploying applications. Ideally, we would use the Kubernetes
- },
+cluster we set up earlier to run it. The deployment is as simple as running any other Python application. This time we
- None,
+don't need a GPU, as Ollama is already running on a separate machine, and DSPy just interacts with it.
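+
+As a sketch of what that might look like, the extractor could be exposed as a small web service and containerized like
+any other Python application. The endpoint name and response shape below are our own assumptions rather than part of
+the original setup:
+
+```python
+from fastapi import FastAPI
+
+app = FastAPI()
+extractor = EventExtractor()
+
+@app.get(""/events"")
+def extract_events(query: str):
+    # Reuse the DSPy module configured earlier in this tutorial
+    events = extractor.forward(query)
+    return [
+        {
+            ""event_name"": event.event_name,
+            ""location"": event.location,
+            ""start_date"": event.start_date,
+            ""end_date"": event.end_date,
+        }
+        for event in events
+    ]
+```
+
+Running it with `uvicorn` inside a container is then enough to ship it to the cluster.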
- )
- .await?;
-```
+## Wrapping up
-```java
+In this tutorial, we showed you how to set up a private environment for information extraction using DSPy, Ollama, and
-import java.util.List;
+Qdrant. All the components might be securely hosted on the Vultr cloud, giving you full control over your data. ",documentation/examples/rag-chatbot-vultr-dspy-ollama.md
+"---
+title: ""Inference with Mighty""
+short_description: ""Mighty offers a speedy scalable embedding, a perfect fit for the speedy scalable Qdrant search. Let's combine them!""
-import static io.qdrant.client.PointIdFactory.id;
+description: ""We combine Mighty and Qdrant to create a semantic search service in Rust with just a few lines of code.""
+weight: 17
+author: Andre Bogus
-client
+author_link: https://llogiq.github.io
- .deleteVectorsAsync(
+date: 2023-06-01T11:24:20+01:00
- ""{collection_name}"", List.of(""text"", ""image""), List.of(id(0), id(3), id(10)))
+draft: true
- .get();
+keywords:
-```
+ - vector search
+ - embeddings
+ - mighty
-To delete entire points, see [deleting points](#delete-points).
+ - rust
+ - semantic search
+---
-### Update payload
+# Semantic Search with Mighty and Qdrant
-Learn how to modify the payload of a point in the [Payload](../payload/#update-payload) section.
+Much like Qdrant, the [Mighty](https://max.io/) inference server is written in Rust and promises to offer low latency and high scalability. This brief demo combines Mighty and Qdrant into a simple semantic search service that is efficient, affordable and easy to set up. We will use [Rust](https://rust-lang.org) and our [qdrant\_client crate](https://docs.rs/qdrant_client) for this integration.
-## Delete points
+## Initial setup
-REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/delete_points)):
+For Mighty, start up a [docker container](https://hub.docker.com/layers/maxdotio/mighty-sentence-transformers/0.9.9/images/sha256-0d92a89fbdc2c211d927f193c2d0d34470ecd963e8179798d8d391a4053f6caf?context=explore) with an open port 5050. Opening that port in a browser window shows the following:
-```http
-POST /collections/{collection_name}/points/delete
-{
+```json
- ""points"": [0, 3, 100]
+{
-}
+ ""name"": ""sentence-transformers/all-MiniLM-L6-v2"",
-```
+ ""architectures"": [
+ ""BertModel""
+ ],
-```python
+ ""model_type"": ""bert"",
-client.delete(
+ ""max_position_embeddings"": 512,
- collection_name=""{collection_name}"",
+ ""labels"": null,
- points_selector=models.PointIdsList(
+ ""named_entities"": null,
- points=[0, 3, 100],
+ ""image_size"": null,
- ),
+ ""source"": ""https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2""
-)
+}
```
-```typescript
+Note that this uses the `MiniLM-L6-v2` model from Hugging Face. As per their website, the model ""maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search"". The distance measure to use is cosine similarity.
-client.delete(""{collection_name}"", {
- points: [0, 3, 100],
-});
+Verify that mighty works by calling `curl https://<address>:5050/sentence-transformer?q=hello+mighty`. This will give you a result like (formatted via `jq`):
-```
+```json
-```rust
+{
-use qdrant_client::qdrant::{
+ ""outputs"": [
- points_selector::PointsSelectorOneOf, PointsIdsList, PointsSelector,
+ [
-};
+ -0.05019686743617058,
+ 0.051746174693107605,
+ 0.048117730766534805,
-client
+ ... (381 values skipped)
- .delete_points_blocking(
+ ]
- ""{collection_name}"",
+ ],
- None,
+ ""shape"": [
- &PointsSelector {
+ 1,
- points_selector_one_of: Some(PointsSelectorOneOf::Points(PointsIdsList {
+ 384
- ids: vec![0.into(), 3.into(), 100.into()],
+ ],
- })),
+ ""texts"": [
- },
+ ""Hello mighty""
- None,
+ ],
- )
+ ""took"": 77
- .await?;
+}
```
-```java
+For Qdrant, follow our [cloud documentation](../../cloud/cloud-quick-start/) to spin up a [free tier](https://cloud.qdrant.io/). Make sure to retrieve an API key.
-import java.util.List;
+## Implement model API
-import static io.qdrant.client.PointIdFactory.id;
+For mighty, you will need a way to emit HTTP(S) requests. This version uses the [reqwest](https://docs.rs/reqwest) crate, so add the following to your `Cargo.toml`'s dependencies section:
-client.deleteAsync(""{collection_name}"", List.of(id(0), id(3), id(100)));
-```
+```toml
+[dependencies]
-```csharp
+reqwest = { version = ""0.11.18"", default-features = false, features = [""json"", ""rustls-tls""] }
-using Qdrant.Client;
+```
-var client = new QdrantClient(""localhost"", 6334);
+Mighty offers a variety of model APIs which will download and cache the model on first use. For semantic search, use the `sentence-transformer` API (as in the above `curl` command). The Rust code to make the call is:
-await client.DeleteAsync(collectionName: ""{collection_name}"", ids: [0, 3, 100]);
+```rust
-```
+use anyhow::anyhow;
+use reqwest::Client;
+use serde::Deserialize;
-Alternative way to specify which points to remove is to use filter.
+use serde_json::Value as JsonValue;
-```http
+#[derive(Deserialize)]
-POST /collections/{collection_name}/points/delete
+struct EmbeddingsResponse {
-{
+ pub outputs: Vec<Vec<f32>>,
- ""filter"": {
+}
- ""must"": [
- {
- ""key"": ""color"",
+pub async fn get_mighty_embedding(
- ""match"": {
+ client: &Client,
- ""value"": ""red""
+ url: &str,
- }
+ text: &str
- }
+) -> anyhow::Result<Vec<f32>> {
- ]
+ let response = client.get(url).query(&[(""text"", text)]).send().await?;
- }
-}
-```
+ if !response.status().is_success() {
+ return Err(anyhow!(
+ ""Mighty API returned status code {}"",
-```python
+ response.status()
-client.delete(
+ ));
- collection_name=""{collection_name}"",
+ }
- points_selector=models.FilterSelector(
- filter=models.Filter(
- must=[
+ let embeddings: EmbeddingsResponse = response.json().await?;
- models.FieldCondition(
+ // ignore multiple embeddings at the moment
- key=""color"",
+ embeddings.outputs.into_iter().next().ok_or_else(|| anyhow!(""mighty returned empty embedding""))
- match=models.MatchValue(value=""red""),
+}
- ),
+```
- ],
- )
- ),
+Note that mighty can return multiple embeddings (if the input is too long to fit the model, it is automatically split).
-)
-```
+## Create embeddings and run a query
-```typescript
-client.delete(""{collection_name}"", {
+Use this code to create embeddings both for insertion and search. On the Qdrant side, take the embedding and run a query:
- filter: {
- must: [
- {
+```rust
- key: ""color"",
+use anyhow::anyhow;
- match: {
+use qdrant_client::prelude::*;
+use qdrant_client::qdrant::ScoredPoint;
- value: ""red"",
- },
- },
+pub const SEARCH_LIMIT: u64 = 5;
- ],
+const COLLECTION_NAME: &str = ""mighty"";
- },
-});
-```
+pub async fn qdrant_search_embeddings(
+ qdrant_client: &QdrantClient,
+ vector: Vec<f32>,
-```rust
+) -> anyhow::Result<Vec<ScoredPoint>> {
-use qdrant_client::qdrant::{
+ qdrant_client
- points_selector::PointsSelectorOneOf, Condition, Filter, PointsSelector,
+ .search_points(&SearchPoints {
-};
+ collection_name: COLLECTION_NAME.to_string(),
+ vector,
+ limit: SEARCH_LIMIT,
-client
+ with_payload: Some(true.into()),
- .delete_points_blocking(
+ ..Default::default()
- ""{collection_name}"",
+ })
- None,
+ .await
- &PointsSelector {
+ .map(|response| response.result)
+ .map_err(|err| anyhow!(""Failed to search Qdrant: {}"", err))
- points_selector_one_of: Some(PointsSelectorOneOf::Filter(Filter::must([
+}
- Condition::matches(""color"", ""red"".to_string()),
+```
- ]))),
- },
- None,
+You can convert the [`ScoredPoint`](https://docs.rs/qdrant-client/latest/qdrant_client/qdrant/struct.ScoredPoint.html)s to fit your desired output format.",documentation/examples/mighty.md
+"---
- )
+title: Question-Answering System for AI Customer Support
- .await?;
+weight: 26
-```
+social_preview_image: /blog/hybrid-cloud-airbyte/hybrid-cloud-airbyte-tutorial.png
+aliases:
+ - /documentation/tutorials/rag-customer-support-cohere-airbyte-aws/
-```java
+---
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+# Question-Answering System for AI Customer Support
-import io.qdrant.client.grpc.Points.Filter;
+| Time: 120 min | Level: Advanced | |
-client
+| --- | ----------- | ----------- |----------- |
- .deleteAsync(
- ""{collection_name}"",
- Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build())
+Maintaining top-notch customer service is vital to business success. As your operation expands, so does the influx of customer queries. Many of these queries are repetitive, making automation a time-saving solution.
- .get();
+Your support team's expertise is typically kept private, but you can still use AI to automate responses securely.
-```
+In this tutorial we will set up a private AI service that answers customer support queries with high accuracy and effectiveness. By leveraging Cohere's powerful models (deployed to [AWS](https://cohere.com/deployment-options/aws)) with Qdrant Hybrid Cloud, you can create a fully private customer support system. Data synchronization, facilitated by [Airbyte](https://airbyte.com/), will complete the setup.
-```csharp
-using Qdrant.Client;
-using static Qdrant.Client.Grpc.Conditions;
+![Architecture diagram](/documentation/examples/customer-support-cohere-airbyte/architecture-diagram.png)
-var client = new QdrantClient(""localhost"", 6334);
+## System design
-await client.DeleteAsync(collectionName: ""{collection_name}"", filter: MatchKeyword(""color"", ""red""));
+The history of past interactions with your customers is not a static dataset. It is constantly evolving, as new
-```
+questions are coming in. You probably have a ticketing system that stores all the interactions, or use a different way
+to communicate with your customers. No matter what the communication channel is, you need to bring the correct answers
+to the selected Large Language Model, and have an established way to do it in a continuous manner. Thus, we will build
-This example removes all points with `{ ""color"": ""red"" }` from the collection.
+an ingestion pipeline and then a Retrieval Augmented Generation application that will use the data.
-## Retrieve points
+- **Dataset:** a [set of Frequently Asked Questions from Qdrant
+ users](/documentation/faq/qdrant-fundamentals/) as an incrementally updated Excel sheet
+- **Embedding model:** Cohere `embed-multilingual-v3.0`, to support different languages with the same pipeline
-There is a method for retrieving points by their ids.
+- **Knowledge base:** Qdrant, running in Hybrid Cloud mode
+- **Ingestion pipeline:** [Airbyte](https://airbyte.com/), loading the data into Qdrant
+- **Large Language Model:** Cohere [Command-R](https://docs.cohere.com/docs/command-r)
-REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/get_points)):
+- **RAG:** Cohere [RAG](https://docs.cohere.com/docs/retrieval-augmented-generation-rag) using our knowledge base
+ through a custom connector
-```http
-POST /collections/{collection_name}/points
+All the selected components are compatible with the [AWS](https://aws.amazon.com/) infrastructure. Thanks to Cohere models' availability, you can build a fully private customer support system that completely isolates data within your infrastructure. Also, if you have AWS credits, you can now use them without spending additional money on the models or
-{
+semantic search layer.
- ""ids"": [0, 3, 100]
-}
-```
+### Data ingestion
-```python
+Building a RAG starts with a well-curated dataset. In your specific case you may prefer loading the data directly from
-client.retrieve(
+a ticketing system, such as [Zendesk Support](https://airbyte.com/connectors/zendesk-support),
- collection_name=""{collection_name}"",
+[Freshdesk](https://airbyte.com/connectors/freshdesk), or maybe integrate it with a shared inbox. However, in the case of
- ids=[0, 3, 100],
+customer questions, quality over quantity is key. There should be a conscious decision on what data to include in the
-)
+knowledge base, so we do not confuse the model with possibly irrelevant information. We'll assume there is an [Excel
-```
+sheet](https://docs.airbyte.com/integrations/sources/file) available over HTTP/FTP that Airbyte can access and load into
+Qdrant in an incremental manner.
-```typescript
-client.retrieve(""{collection_name}"", {
+### Cohere <> Qdrant Connector for RAG
- ids: [0, 3, 100],
-});
-```
+Cohere RAG relies on [connectors](https://docs.cohere.com/docs/connectors) which bring additional context to the model.
+The connector is a web service that implements a specific interface, and exposes its data through HTTP API. With that
+setup, the Large Language Model becomes responsible for communicating with the connectors, so building a prompt with the
-```rust
+context is not needed anymore.
-client
- .get_points(
- ""{collection_name}"",
+### Answering bot
- None,
- &[0.into(), 30.into(), 100.into()],
- Some(false),
+Finally, we want to automate the responses and send them automatically when we are sure that the model is confident
- Some(false),
+enough. Again, the way such an application should be created strongly depends on the system you are using within the
- None,
+customer support team. If it exposes a way to set up a webhook whenever a new question is coming in, you can create a
- )
+web service and use it to automate the responses. In general, our bot should be created specifically for the platform
- .await?;
+you use, so we'll just cover the general idea here and build a simple CLI tool.
-```
+## Prerequisites
-```java
-import java.util.List;
+### Cohere models on AWS
-import static io.qdrant.client.PointIdFactory.id;
+One of the possible ways to deploy Cohere models on AWS is to use AWS SageMaker. Cohere's website has [a detailed
+guide on how to deploy the models in that way](https://docs.cohere.com/docs/amazon-sagemaker-setup-guide), so you can
-client
+follow the steps described there to set up your own instance.
- .retrieveAsync(""{collection_name}"", List.of(id(0), id(30), id(100)), false, false, null)
- .get();
-```
+### Qdrant Hybrid Cloud on AWS
-```csharp
+Our documentation covers the deployment of Qdrant on AWS as a Hybrid Cloud Environment, so you can follow the steps described
-using Qdrant.Client;
+there to set up your own instance. The deployment process is quite straightforward, and you can have your Qdrant cluster
+up and running in a few minutes.
-var client = new QdrantClient(""localhost"", 6334);
+[//]: # (TODO: refer to the documentation on how to deploy Qdrant on AWS)
-await client.RetrieveAsync(
- collectionName: ""{collection_name}"",
+Once you perform all the steps, your Qdrant cluster should be running on a specific URL. You will need this URL and the
- ids: [0, 30, 100],
+API key to interact with Qdrant, so let's store them both in the environment variables:
- withPayload: false,
- withVectors: false
-);
+```shell
+
+export QDRANT_URL=""https://qdrant.example.com""
+
+export QDRANT_API_KEY=""your-api-key""
```
-This method has additional parameters `with_vectors` and `with_payload`.
+```python
-Using these parameters, you can select parts of the point you want as a result.
+import os
-Excluding helps you not to waste traffic transmitting useless data.
+os.environ[""QDRANT_URL""] = ""https://qdrant.example.com""
-The single point can also be retrieved via the API:
+os.environ[""QDRANT_API_KEY""] = ""your-api-key""
+```
-REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/get_point)):
+### Airbyte Open Source
-```http
-GET /collections/{collection_name}/points/{point_id}
+Airbyte is an open-source data integration platform that helps you replicate your data in your warehouses, lakes, and
-```
+databases. You can install it on your infrastructure and use it to load the data into Qdrant. The installation process
+for AWS EC2 is described in the [official documentation](https://docs.airbyte.com/deploying-airbyte/on-aws-ec2).
+Please follow the instructions to set up your own instance.
-
+Once your Airbyte instance is up and running, you can configure a connection that will load your data from the source
+into Qdrant. The configuration will require setting up the source and destination connectors. In this tutorial we will
+use the following connectors:
-## Scroll points
+- **Source:** [File](https://docs.airbyte.com/integrations/sources/file) to load the data from an Excel sheet
+- **Destination:** [Qdrant](https://docs.airbyte.com/integrations/destinations/qdrant) to load the data into Qdrant
-Sometimes it might be necessary to get all stored points without knowing ids, or iterate over points that correspond to a filter.
+The Airbyte UI will guide you through the process of setting up the source and destination and connecting them. Here is how
-REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#operation/scroll_points)):
+the configuration of the source might look:
-```http
+![Airbyte source configuration](/documentation/examples/customer-support-cohere-airbyte/airbyte-excel-source.png)
-POST /collections/{collection_name}/points/scroll
-{
- ""filter"": {
+Qdrant is our target destination, so we need to set up the connection to it. We need to specify which fields should be
- ""must"": [
+included to generate the embeddings. In our case it makes complete sense to embed just the questions, as we are going
- {
+to look for similar questions asked in the past and provide the answers.
- ""key"": ""color"",
- ""match"": {
- ""value"": ""red""
+![Airbyte destination configuration](/documentation/examples/customer-support-cohere-airbyte/airbyte-qdrant-destination.png)
- }
- }
- ]
+Once we have the destination set up, we can finally configure a connection. The connection will define the schedule
- },
+of the data synchronization.
- ""limit"": 1,
- ""with_payload"": true,
- ""with_vector"": false
+![Airbyte connection configuration](/documentation/examples/customer-support-cohere-airbyte/airbyte-connection.png)
-}
-```
+Airbyte should now be ready to accept any data updates from the source and load them into Qdrant. You can monitor the
+progress of the synchronization in the UI.
-```python
-client.scroll(
- collection_name=""{collection_name}"",
+## RAG connector
- scroll_filter=models.Filter(
- must=[
- models.FieldCondition(key=""color"", match=models.MatchValue(value=""red"")),
+One of our previous tutorials guides you step-by-step through [implementing a custom connector for Cohere
- ]
+RAG](../cohere-rag-connector/) with Cohere Embed v3 and Qdrant. You can just point it to use your Hybrid Cloud
- ),
+Qdrant instance running on AWS. The created connector can be deployed to Amazon Web Services in various ways, even in a
- limit=1,
+[Serverless](https://aws.amazon.com/serverless/) manner using [AWS
- with_payload=True,
+Lambda](https://aws.amazon.com/lambda/?c=ser&sec=srv).
- with_vectors=False,
-)
-```
+In general, a RAG connector has to expose a single endpoint that accepts POST requests with a `query` parameter and
+returns the matching documents as a JSON document with a specific structure. Our FastAPI implementation created [in the
+related tutorial](../cohere-rag-connector/) is a perfect fit for this task. The only difference is that you
-```typescript
+should point it to the Cohere models and Qdrant running on AWS infrastructure.
-client.scroll(""{collection_name}"", {
- filter: {
- must: [
+> Our connector is a lightweight web service that exposes a single endpoint and glues the Cohere embedding model with
- {
+> our Qdrant Hybrid Cloud instance. Thus, it perfectly fits the serverless architecture, requiring no additional
- key: ""color"",
+> infrastructure to run.
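+
+For illustration, here is a minimal sketch of such a connector. It assumes the Cohere-style request and response shape
+(a `query` field in, a `results` list out); the collection name, embedding model, and environment variables are
+assumptions carried over from this tutorial rather than a definitive implementation:
+
+```python
+import os
+
+import cohere
+from fastapi import FastAPI
+from pydantic import BaseModel
+from qdrant_client import QdrantClient
+
+app = FastAPI()
+cohere_client = cohere.Client(os.environ[""COHERE_API_KEY""])
+qdrant_client = QdrantClient(
+    os.environ[""QDRANT_URL""], api_key=os.environ[""QDRANT_API_KEY""]
+)
+
+class SearchQuery(BaseModel):
+    query: str
+
+@app.post(""/search"")
+def search(search_query: SearchQuery):
+    # Embed the query with the same model used during the Airbyte ingestion
+    embeddings = cohere_client.embed(
+        texts=[search_query.query],
+        model=""embed-multilingual-v3.0"",
+        input_type=""search_query"",
+    ).embeddings
+    points = qdrant_client.query_points(
+        ""customer-support"",
+        query=embeddings[0],
+        limit=5,
+        with_payload=True,
+    ).points
+    # Cohere RAG expects a flat list of documents with textual fields
+    return {""results"": [point.payload for point in points]}
+```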
- match: {
- value: ""red"",
- },
+You can also run the connector as another service within your [Kubernetes cluster running on AWS
- },
+(EKS)](https://aws.amazon.com/eks/), or by launching an [EC2](https://aws.amazon.com/ec2/) compute instance. This step
- ],
+is dependent on the way you deploy your other services, so we'll leave it to you to decide how to run the connector.
- },
- limit: 1,
- with_payload: true,
+Eventually, the web service should be available under a specific URL, and it's a good practice to store it in an
- with_vector: false,
+environment variable, so that other services can easily access it.
-});
+
+
+```shell
+
+export RAG_CONNECTOR_URL=""https://rag-connector.example.com/search""
```
-```rust
+```python
-use qdrant_client::qdrant::{Condition, Filter, ScrollPoints};
+os.environ[""RAG_CONNECTOR_URL""] = ""https://rag-connector.example.com/search""
+```
-client
- .scroll(&ScrollPoints {
+## Customer interface
- collection_name: ""{collection_name}"".to_string(),
- filter: Some(Filter::must([Condition::matches(
- ""color"",
+At this point we have all the data loaded into Qdrant, and the RAG connector is ready to serve the relevant context. The
- ""red"".to_string(),
+last missing piece is the customer interface, which will call the Command model to create the answer. Such a system
- )])),
+should be built specifically for the platform you use and integrated into its workflow, but we will build a strong
- limit: Some(1),
+foundation for it and show how to use it in a simple CLI tool.
- with_payload: Some(true.into()),
- with_vectors: Some(false.into()),
- ..Default::default()
+> Our application does not have to connect to Qdrant anymore, as the model will connect to the RAG connector directly.
- })
- .await?;
-```
+First of all, we have to create a connection to Cohere services through the Cohere SDK.
-```java
+```python
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+import cohere
-import static io.qdrant.client.WithPayloadSelectorFactory.enable;
+# Create a Cohere client pointing to the AWS instance
-import io.qdrant.client.grpc.Points.Filter;
+cohere_client = cohere.Client(...)
-import io.qdrant.client.grpc.Points.ScrollPoints;
+```
-client
+Next, our connector should be registered. **Please make sure to do it once, and store the id of the connector in an
- .scrollAsync(
+environment variable or in any other way that will be accessible to the application.**
- ScrollPoints.newBuilder()
- .setCollectionName(""{collection_name}"")
- .setFilter(Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build())
+```python
- .setLimit(1)
+import os
- .setWithPayload(enable(true))
- .build())
- .get();
+connector_response = cohere_client.connectors.create(
-```
+ name=""customer-support"",
+ url=os.environ[""RAG_CONNECTOR_URL""],
+)
-```csharp
-using Qdrant.Client;
-using static Qdrant.Client.Grpc.Conditions;
+# The id returned by the API should be stored for future use
+connector_id = connector_response.connector.id
+```
-var client = new QdrantClient(""localhost"", 6334);
+Finally, we can create a prompt and get the answer from the model. Additionally, we define which of the connectors
-await client.ScrollAsync(
+should be used to provide the context, as we may have multiple connectors and want to use specific ones, depending on
- collectionName: ""{collection_name}"",
+some conditions. Let's start with asking a question.
- filter: MatchKeyword(""color"", ""red""),
- limit: 1,
- payloadSelector: true
+```python
-);
+query = ""Why Qdrant does not return my vectors?""
```
-Returns all point with `color` = `red`.
-
+Now we can send the query to the model, get the response, and possibly send it back to the customer.
-```json
-{
+```python
- ""result"": {
+response = cohere_client.chat(
- ""next_page_offset"": 1,
+ message=query,
- ""points"": [
+ connectors=[
- {
+ cohere.ChatConnector(id=connector_id),
- ""id"": 0,
+ ],
- ""payload"": {
+ model=""command-r"",
- ""color"": ""red""
+)
- }
- }
- ]
+print(response.text)
- },
+```
- ""status"": ""ok"",
- ""time"": 0.0001
-}
+The output should be the answer to the question, generated by the model, for example:
-```
+> Qdrant is set up by default to minimize network traffic and therefore doesn't return vectors in search results. However, you can make Qdrant return your vectors by setting the 'with_vector' parameter of the Search/Scroll function to true.
-The Scroll API will return all points that match the filter in a page-by-page manner.
+Customer support should not be fully automated, as some completely new issues might require human intervention. We
-All resulting points are sorted by ID. To query the next page it is necessary to specify the largest seen ID in the `offset` field.
+should play with prompt engineering and expect the model to provide the answer with a certain confidence level. If the
-For convenience, this ID is also returned in the field `next_page_offset`.
+confidence is too low, we should not send the answer automatically but present it to the support team for review.
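+
+As a rough sketch of that idea, the presence of citations in the response can serve as a crude confidence signal; the
+`send_to_customer` and `escalate_to_support` helpers below are hypothetical placeholders for your own integration:
+
+```python
+def send_to_customer(query: str, answer: str):
+    # Placeholder: push the reply back through your ticketing system
+    print(f""[auto-reply] {answer}"")
+
+def escalate_to_support(query: str, answer: str):
+    # Placeholder: notify a human agent instead of replying automatically
+    print(f""[needs review] {query}"")
+
+def handle_query(query: str):
+    response = cohere_client.chat(
+        message=query,
+        connectors=[cohere.ChatConnector(id=connector_id)],
+        model=""command-r"",
+    )
+    # Citations indicate the model grounded its answer in our knowledge base
+    if response.citations:
+        send_to_customer(query, response.text)
+    else:
+        escalate_to_support(query, response.text)
+```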
-If the value of the `next_page_offset` field is `null` - the last page is reached.
+## Wrapping up
+
+In this tutorial, we have built a private customer support system with Cohere models, Qdrant Hybrid Cloud, and Airbyte,
+all running on AWS infrastructure.",documentation/examples/rag-customer-support-cohere-airbyte-aws.md
+"---
+title: Movie Recommendation System
-
+weight: 34
+social_preview_image: /blog/hybrid-cloud-ovhcloud/hybrid-cloud-ovhcloud-tutorial.png
+aliases:
-## Counting points
+ - /documentation/tutorials/recommendation-system-ovhcloud/
+---
-*Available as of v0.8.4*
+# Movie Recommendation System
-Sometimes it can be useful to know how many points fit the filter conditions without doing a real search.
+| Time: 120 min | Level: Advanced | Output: [GitHub](https://github.com/infoslack/qdrant-example/blob/main/HC-demo/HC-OVH.ipynb) |
+| --- | ----------- | ----------- |----------- |
-Among others, for example, we can highlight the following scenarios:
+In this tutorial, you will build a mechanism that recommends movies based on defined preferences. Vector databases like Qdrant are good for storing high-dimensional data, such as user and item embeddings. They can enable personalized recommendations by quickly retrieving similar entries based on advanced indexing techniques. In this specific case, we will use [sparse vectors](/articles/sparse-vectors/) to create an efficient and accurate recommendation system.
-* Evaluation of results size for faceted search
-* Determining the number of pages for pagination
-* Debugging the query execution speed
+**Privacy and Sovereignty:** Since preference data is proprietary, it should be stored in a secure and controlled environment. Our vector database can easily be hosted on [OVHcloud](https://ovhcloud.com/), our trusted [Qdrant Hybrid Cloud](/documentation/hybrid-cloud/) partner. This means that Qdrant can be run from your OVHcloud region, but the database itself can still be managed from within Qdrant Cloud's interface. Both products have been tested for compatibility and scalability, and we recommend their [managed Kubernetes](https://www.ovhcloud.com/en/public-cloud/kubernetes/) service.
-REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#tag/points/operation/count_points)):
+> To see the entire output, use our [notebook with complete instructions](https://github.com/infoslack/qdrant-example/blob/main/HC-demo/HC-OVH.ipynb).
-```http
+## Components
-POST /collections/{collection_name}/points/count
-{
- ""filter"": {
+- **Dataset:** The [MovieLens dataset](https://grouplens.org/datasets/movielens/) contains a list of movies and ratings given by users.
- ""must"": [
+- **Cloud:** [OVHcloud](https://ovhcloud.com/), with managed Kubernetes.
- {
+- **Vector DB:** [Qdrant Hybrid Cloud](https://hybrid-cloud.qdrant.tech) running on [OVHcloud](https://ovhcloud.com/).
- ""key"": ""color"",
- ""match"": {
- ""value"": ""red""
+**Methodology:** We're adopting a collaborative filtering approach to construct a recommendation system from the dataset provided. Collaborative filtering works on the premise that if two users share similar tastes, they're likely to enjoy similar movies. Leveraging this concept, we'll identify users whose ratings align closely with ours, and explore the movies they liked but we haven't seen yet. To do this, we'll represent each user's ratings as a vector in a high-dimensional, sparse space. Using Qdrant, we'll index these vectors and search for users whose ratings vectors closely match ours. Ultimately, we will see which movies were enjoyed by users similar to us.
- }
- }
- ]
+![](/documentation/examples/recommendation-system-ovhcloud/architecture-diagram.png)
- },
- ""exact"": true
-}
+## Deploying Qdrant Hybrid Cloud on OVHcloud
-```
+[Service Managed Kubernetes](https://www.ovhcloud.com/en-in/public-cloud/kubernetes/) is powered by OVH Public Cloud Instances from OVHcloud, a leading European cloud provider, and comes with OVHcloud Load Balancers and disks built in. OVHcloud Managed Kubernetes provides high availability, compliance, and CNCF conformance, allowing you to focus on your containerized software layers with total reversibility.
-```python
-client.count(
- collection_name=""{collection_name}"",
+1. To start using managed Kubernetes on OVHcloud, follow the [platform-specific documentation](/documentation/hybrid-cloud/platform-deployment-options/#ovhcloud).
- count_filter=models.Filter(
+2. Once your Kubernetes clusters are up, [you can begin deploying Qdrant Hybrid Cloud](/documentation/hybrid-cloud/).
- must=[
- models.FieldCondition(key=""color"", match=models.MatchValue(value=""red"")),
- ]
+## Prerequisites
- ),
- exact=True,
-)
+Download and unzip the MovieLens dataset:
-```
+```shell
-```typescript
+mkdir -p data
-client.count(""{collection_name}"", {
+wget https://files.grouplens.org/datasets/movielens/ml-1m.zip
- filter: {
+unzip ml-1m.zip -d data
- must: [
+```
- {
- key: ""color"",
- match: {
+The necessary Python libraries are installed using `pip`, including `pandas` for data manipulation, `qdrant-client` for interfacing with Qdrant, and `python-dotenv` for managing environment variables.
- value: ""red"",
- },
- },
+```python
- ],
+!pip install -U \
- },
+ pandas \
- exact: true,
+ qdrant-client \
-});
+ python-dotenv
```
-```rust
+The `.env` file is used to store sensitive information like the Qdrant host URL and API key securely.
-use qdrant_client::qdrant::{Condition, CountPoints, Filter};
+```shell
-client
+QDRANT_HOST
- .count(&CountPoints {
+QDRANT_API_KEY
- collection_name: ""{collection_name}"".to_string(),
+```
- filter: Some(Filter::must([Condition::matches(
+Load all environment variables into the setup:
- ""color"",
- ""red"".to_string(),
- )])),
+```python
- exact: Some(true),
+import os
- })
+from dotenv import load_dotenv
- .await?;
+load_dotenv('./.env')
```
-```java
+## Implementation
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+Load the data from the MovieLens dataset into pandas DataFrames to facilitate data manipulation and analysis.
-import io.qdrant.client.grpc.Points.Filter;
+```python
-client
+from qdrant_client import QdrantClient, models
- .countAsync(
+import pandas as pd
- ""{collection_name}"",
+```
- Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build(),
+Load user data:
- true)
+```python
- .get();
+users = pd.read_csv(
-```
+ 'data/ml-1m/users.dat',
+ sep='::',
+ names=['user_id', 'gender', 'age', 'occupation', 'zip'],
-```csharp
+ engine='python'
-using Qdrant.Client;
+)
-using static Qdrant.Client.Grpc.Conditions;
+users.head()
+```
+Add movies:
-var client = new QdrantClient(""localhost"", 6334);
+```python
+movies = pd.read_csv(
+ 'data/ml-1m/movies.dat',
-await client.CountAsync(
+ sep='::',
- collectionName: ""{collection_name}"",
+ names=['movie_id', 'title', 'genres'],
- filter: MatchKeyword(""color"", ""red""),
+ engine='python',
- exact: true
+ encoding='latin-1'
-);
+)
+
+movies.head()
```
+Finally, add the ratings:
+```python
-Returns number of counts matching given filtering conditions:
+ratings = pd.read_csv(
+ 'data/ml-1m/ratings.dat',
+ sep='::',
-```json
+ names=['user_id', 'movie_id', 'rating', 'timestamp'],
-{
+ engine='python'
- ""count"": 3811
+)
-}
+ratings.head()
```
-## Batch update
+### Normalize the ratings
-*Available as of v1.5.0*
+Sparse vectors can take advantage of negative values, so we can normalize ratings to have a mean of 0 and a standard deviation of 1. This normalization ensures that ratings are consistent and centered around zero, enabling accurate similarity calculations. In this scenario we can take into account movies that we don't like.
-You can batch multiple point update operations. This includes inserting,
+```python
-updating and deleting points, vectors and payload.
+ratings.rating = (ratings.rating - ratings.rating.mean()) / ratings.rating.std()
+```
+To get the results:
-A batch update request consists of a list of operations. These are executed in
-order. These operations can be batched:
+```python
+ratings.head()
-- [Upsert points](#upload-points): `upsert` or `UpsertOperation`
+```
-- [Delete points](#delete-points): `delete_points` or `DeleteOperation`
-- [Update vectors](#update-vectors): `update_vectors` or `UpdateVectorsOperation`
-- [Delete vectors](#delete-vectors): `delete_vectors` or `DeleteVectorsOperation`
+### Data preparation
-- [Set payload](#set-payload): `set_payload` or `SetPayloadOperation`
-- [Overwrite payload](#overwrite-payload): `overwrite_payload` or `OverwritePayload`
-- [Delete payload](#delete-payload-keys): `delete_payload` or `DeletePayloadOperation`
+Now you will transform user ratings into sparse vectors, where each vector represents ratings for different movies. This step prepares the data for indexing in Qdrant.
-- [Clear payload](#clear-payload): `clear_payload` or `ClearPayloadOperation`
+First, create a collection with configured sparse vectors. For sparse vectors, you don't need to specify the dimension, because it's extracted from the data automatically.
-The following example snippet makes use of all operations.
+```python
-REST API ([Schema](https://qdrant.github.io/qdrant/redoc/index.html#tag/points/operation/batch_update)):
+from collections import defaultdict
-```http
+user_sparse_vectors = defaultdict(lambda: {""values"": [], ""indices"": []})
-POST /collections/{collection_name}/points/batch
-{
- ""operations"": [
+for row in ratings.itertuples():
- {
+ user_sparse_vectors[row.user_id][""values""].append(row.rating)
- ""upsert"": {
+ user_sparse_vectors[row.user_id][""indices""].append(row.movie_id)
- ""points"": [
+```
- {
+Connect to Qdrant and create a collection called **movielens**:
- ""id"": 1,
- ""vector"": [1.0, 2.0, 3.0, 4.0],
- ""payload"": {}
+```python
- }
+client = QdrantClient(
- ]
+ url = os.getenv(""QDRANT_HOST""),
- }
+ api_key = os.getenv(""QDRANT_API_KEY"")
- },
+)
- {
- ""update_vectors"": {
- ""points"": [
+client.create_collection(
- {
+ ""movielens"",
- ""id"": 1,
+ vectors_config={},
- ""vector"": [1.0, 2.0, 3.0, 4.0]
+ sparse_vectors_config={
- }
+ ""ratings"": models.SparseVectorParams()
- ]
+ }
- }
+)
- },
+```
- {
- ""delete_vectors"": {
- ""points"": [1],
+Upload user ratings to the **movielens** collection in Qdrant as sparse vectors, along with user metadata. This step populates the database with the necessary data for recommendation generation.
- ""vector"": [""""]
- }
- },
+```python
- {
+def data_generator():
- ""overwrite_payload"": {
+ for user in users.itertuples():
- ""payload"": {
+ yield models.PointStruct(
- ""test_payload"": ""1""
+ id=user.user_id,
- },
+ vector={
- ""points"": [1]
+ ""ratings"": user_sparse_vectors[user.user_id]
- }
+ },
- },
+ payload=user._asdict()
- {
+ )
- ""set_payload"": {
- ""payload"": {
- ""test_payload_2"": ""2"",
+client.upload_points(
- ""test_payload_3"": ""3""
+ ""movielens"",
- },
+ data_generator()
- ""points"": [1]
+)
- }
+```
- },
- {
- ""delete_payload"": {
+## Recommendations
- ""keys"": [""test_payload_2""],
- ""points"": [1]
- }
+Personal movie ratings are specified, where positive ratings indicate likes and negative ratings indicate dislikes. These ratings serve as the basis for finding similar users with comparable tastes.
- },
- {
- ""clear_payload"": {
+Personal ratings are converted into a sparse vector representation suitable for querying Qdrant. This vector represents the user's preferences across different movies.
- ""points"": [1]
- }
- },
+Let's try to recommend something for ourselves:
- {""delete"": {""points"": [1]}}
- ]
-}
+```
+
+1 = Like
+
+-1 = dislike
```
@@ -30739,873 +30221,771 @@ POST /collections/{collection_name}/points/batch
```python
-client.batch_update_points(
+# Search with movies[movies.title.str.contains(""Matrix"", case=False)].
- collection_name=collection_name,
- update_operations=[
- models.UpsertOperation(
+my_ratings = {
- upsert=models.PointsList(
+ 2571: 1, # Matrix
- points=[
+ 329: 1, # Star Trek
- models.PointStruct(
+ 260: 1, # Star Wars
- id=1,
+ 2288: -1, # The Thing
- vector=[1.0, 2.0, 3.0, 4.0],
+ 1: 1, # Toy Story
- payload={},
+ 1721: -1, # Titanic
- ),
+ 296: -1, # Pulp Fiction
- ]
+ 356: 1, # Forrest Gump
- )
+ 2116: 1, # Lord of the Rings
- ),
+ 1291: -1, # Indiana Jones
- models.UpdateVectorsOperation(
+ 1036: -1 # Die Hard
- update_vectors=models.UpdateVectors(
+}
- points=[
- models.PointVectors(
- id=1,
+inverse_ratings = {k: -v for k, v in my_ratings.items()}
- vector=[1.0, 2.0, 3.0, 4.0],
- )
- ]
+def to_vector(ratings):
- )
+ vector = models.SparseVector(
- ),
+ values=[],
- models.DeleteVectorsOperation(
+ indices=[]
- delete_vectors=models.DeleteVectors(points=[1], vector=[""""])
+ )
- ),
+ for movie_id, rating in ratings.items():
- models.OverwritePayloadOperation(
+ vector.values.append(rating)
- overwrite_payload=models.SetPayload(
+ vector.indices.append(movie_id)
- payload={""test_payload"": 1},
+ return vector
- points=[1],
+```
- )
- ),
- models.SetPayloadOperation(
+Query Qdrant to find users with similar tastes based on the provided personal ratings. The search returns a list of similar users along with their ratings, facilitating collaborative filtering.
- set_payload=models.SetPayload(
- payload={
- ""test_payload_2"": 2,
+```python
- ""test_payload_3"": 3,
+results = client.query_points(
- },
+ ""movielens"",
- points=[1],
+ query=to_vector(my_ratings),
- )
+ using=""ratings"",
- ),
+ with_vectors=True, # We will use those to find new movies
- models.DeletePayloadOperation(
+ limit=20
- delete_payload=models.DeletePayload(keys=[""test_payload_2""], points=[1])
+).points
- ),
+```
- models.ClearPayloadOperation(clear_payload=models.PointIdsList(points=[1])),
- models.DeleteOperation(delete=models.PointIdsList(points=[1])),
- ],
+Movie scores are computed based on how frequently each movie appears in the ratings of similar users, weighted by their ratings. This step identifies popular movies among users with similar tastes. Calculate how frequently each movie is found in similar users' ratings:
-)
-```
+```python
+def results_to_scores(results):
-```typescript
+ movie_scores = defaultdict(lambda: 0)
-client.batchUpdate(""{collection_name}"", {
- operations: [
- {
+ for user in results:
- upsert: {
+ user_scores = user.vector['ratings']
- points: [
+ for idx, rating in zip(user_scores.indices, user_scores.values):
- {
+ if idx in my_ratings:
- id: 1,
+ continue
- vector: [1.0, 2.0, 3.0, 4.0],
+ movie_scores[idx] += rating
- payload: {},
- },
- ],
+ return movie_scores
- },
+```
- },
- {
- update_vectors: {
+The top-rated movies are sorted based on their scores and printed as recommendations for the user. These recommendations are tailored to the user's preferences and aligned with their tastes. Sort the movies by score and print the top five:
- points: [
- {
- id: 1,
+```python
- vector: [1.0, 2.0, 3.0, 4.0],
+movie_scores = results_to_scores(results)
- },
+top_movies = sorted(movie_scores.items(), key=lambda x: x[1], reverse=True)
- ],
- },
- },
+for movie_id, score in top_movies[:5]:
- {
+ print(movies[movies.movie_id == movie_id].title.values[0], score)
- delete_vectors: {
+```
- points: [1],
- vector: [""""],
- },
+Result:
- },
- {
- overwrite_payload: {
+```text
- payload: {
+Star Wars: Episode V - The Empire Strikes Back (1980) 20.02387858
- test_payload: 1,
+Star Wars: Episode VI - Return of the Jedi (1983) 16.443184379999998
- },
+Princess Bride, The (1987) 15.840068229999996
- points: [1],
+Raiders of the Lost Ark (1981) 14.94489462
- },
+Sixth Sense, The (1999) 14.570322149999999
- },
+```",documentation/examples/recommendation-system-ovhcloud.md
+"---
- {
+title: Chat With Product PDF Manuals Using Hybrid Search
- set_payload: {
+weight: 27
- payload: {
+social_preview_image: /blog/hybrid-cloud-llamaindex/hybrid-cloud-llamaindex-tutorial.png
- test_payload_2: 2,
+aliases:
- test_payload_3: 3,
+ - /documentation/tutorials/hybrid-search-llamaindex-jinaai/
- },
+---
- points: [1],
- },
- },
+# Chat With Product PDF Manuals Using Hybrid Search
- {
- delete_payload: {
- keys: [""test_payload_2""],
+| Time: 120 min | Level: Advanced | Output: [GitHub](https://github.com/infoslack/qdrant-example/blob/main/HC-demo/HC-DO-LlamaIndex-Jina-v2.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/infoslack/qdrant-example/blob/main/HC-demo/HC-DO-LlamaIndex-Jina-v2.ipynb) |
- points: [1],
+| --- | ----------- | ----------- |----------- |
- },
- },
- {
+With the proliferation of digital manuals and the increasing demand for quick and accurate customer support, having a chatbot capable of efficiently parsing through complex PDF documents and delivering precise information can be a game-changer for any business.
- clear_payload: {
- points: [1],
- },
+In this tutorial, we'll walk you through the process of building a RAG-based chatbot, designed specifically to assist users with understanding the operation of various household appliances.
- },
+We'll cover the essential steps required to build your system, including data ingestion, natural language understanding, and response generation for customer support use cases.
- {
- delete: {
- points: [1],
+## Components
- },
- },
- ],
+- **Embeddings:** Jina Embeddings, served via the [Jina Embeddings API](https://jina.ai/embeddings/#apiform)
-});
+- **Database:** [Qdrant Hybrid Cloud](/documentation/hybrid-cloud/), deployed in a managed Kubernetes cluster on [DigitalOcean
-```
+ (DOKS)](https://www.digitalocean.com/products/kubernetes)
+- **LLM:** [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) language model on HuggingFace
+- **Framework:** [LlamaIndex](https://www.llamaindex.ai/) for extended RAG functionality and [Hybrid Search support](https://docs.llamaindex.ai/en/stable/examples/vector_stores/qdrant_hybrid/).
-```rust
+- **Parser:** [LlamaParse](https://github.com/run-llama/llama_parse) as a way to parse complex documents with embedded objects such as tables and figures.
-use qdrant_client::qdrant::{
- points_selector::PointsSelectorOneOf,
- points_update_operation::{
+![Architecture diagram](/documentation/examples/hybrid-search-llamaindex-jinaai/architecture-diagram.png)
- DeletePayload, DeleteVectors, Operation, PointStructList, SetPayload, UpdateVectors,
- },
- PointStruct, PointVectors, PointsIdsList, PointsSelector, PointsUpdateOperation,
+### Procedure
- VectorsSelector,
-};
-use serde_json::json;
+Retrieval Augmented Generation (RAG) combines search with language generation. An external information retrieval system is used to identify documents likely to provide information relevant to the user's query. These documents, along with the user's request, are then passed on to a text-generating language model, producing a natural response.
-use std::collections::HashMap;
+This method enables a language model to respond to questions and access information from a much larger set of documents than it could see otherwise. The language model only looks at a few relevant sections of the documents when generating responses, which also helps to reduce inexplicable errors.
-client
- .batch_updates_blocking(
- ""{collection_name}"",
- &[
- PointsUpdateOperation {
- operation: Some(Operation::Upsert(PointStructList {
- points: vec![PointStruct::new(
- 1,
- vec![1.0, 2.0, 3.0, 4.0],
- json!({}).try_into().unwrap(),
- )],
- })),
- },
+## Prerequisites
- PointsUpdateOperation {
- operation: Some(Operation::UpdateVectors(UpdateVectors {
- points: vec![PointVectors {
+### Deploying Qdrant Hybrid Cloud on DigitalOcean
- id: Some(1.into()),
- vectors: Some(vec![1.0, 2.0, 3.0, 4.0].into()),
- }],
+[DigitalOcean Kubernetes (DOKS)](https://www.digitalocean.com/products/kubernetes) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure. Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and volumes.
- })),
- },
- PointsUpdateOperation {
+1. To start using managed Kubernetes on DigitalOcean, follow the [platform-specific documentation](/documentation/hybrid-cloud/platform-deployment-options/#digital-ocean).
- operation: Some(Operation::DeleteVectors(DeleteVectors {
+2. Once your Kubernetes clusters are up, [you can begin deploying Qdrant Hybrid Cloud](/documentation/hybrid-cloud/).
- points_selector: Some(PointsSelector {
+3. Once it's deployed, you should have a running Qdrant cluster with an API key.
- points_selector_one_of: Some(PointsSelectorOneOf::Points(
- PointsIdsList {
- ids: vec![1.into()],
+### Development environment
- },
- )),
- }),
+Then, install all dependencies:
- vectors: Some(VectorsSelector {
- names: vec!["""".into()],
- }),
+```python
- })),
+!pip install -U \
- },
+ llama-index \
- PointsUpdateOperation {
+ llama-parse \
- operation: Some(Operation::OverwritePayload(SetPayload {
+ python-dotenv \
- points_selector: Some(PointsSelector {
+ llama-index-embeddings-jinaai \
- points_selector_one_of: Some(PointsSelectorOneOf::Points(
+ llama-index-llms-huggingface \
- PointsIdsList {
+ llama-index-vector-stores-qdrant \
- ids: vec![1.into()],
+ ""huggingface_hub[inference]"" \
- },
+ datasets
- )),
+```
- }),
- payload: HashMap::from([(""test_payload"".to_string(), 1.into())]),
- })),
+Set up secret key values in the `.env` file:
- },
- PointsUpdateOperation {
- operation: Some(Operation::SetPayload(SetPayload {
+```bash
- points_selector: Some(PointsSelector {
+JINAAI_API_KEY
- points_selector_one_of: Some(PointsSelectorOneOf::Points(
+HF_INFERENCE_API_KEY
- PointsIdsList {
+LLAMA_CLOUD_API_KEY
- ids: vec![1.into()],
+QDRANT_HOST
- },
+QDRANT_API_KEY
- )),
+```
- }),
- payload: HashMap::from([
- (""test_payload_2"".to_string(), 2.into()),
+Load all environment variables:
- (""test_payload_3"".to_string(), 3.into()),
- ]),
- })),
+```python
- },
+import os
- PointsUpdateOperation {
+from dotenv import load_dotenv
- operation: Some(Operation::DeletePayload(DeletePayload {
+load_dotenv('./.env')
- points_selector: Some(PointsSelector {
+```
- points_selector_one_of: Some(PointsSelectorOneOf::Points(
+## Implementation
- PointsIdsList {
- ids: vec![1.into()],
- },
+### Connect Jina Embeddings and Mixtral LLM
- )),
- }),
- keys: vec![""test_payload_2"".to_string()],
+LlamaIndex provides built-in support for the [Jina Embeddings API](https://jina.ai/embeddings/#apiform). To use it, you need to initialize the `JinaEmbedding` object with your API Key and model name.
- })),
- },
- PointsUpdateOperation {
+For the LLM, you need to wrap it in a subclass of `llama_index.llms.CustomLLM` to make it compatible with LlamaIndex.
- operation: Some(Operation::ClearPayload(PointsSelector {
- points_selector_one_of: Some(PointsSelectorOneOf::Points(PointsIdsList {
- ids: vec![1.into()],
+```python
- })),
+# connect embeddings
- })),
+from llama_index.embeddings.jinaai import JinaEmbedding
- },
- PointsUpdateOperation {
- operation: Some(Operation::Delete(PointsSelector {
+jina_embedding_model = JinaEmbedding(
- points_selector_one_of: Some(PointsSelectorOneOf::Points(PointsIdsList {
+ model=""jina-embeddings-v2-base-en"",
- ids: vec![1.into()],
+ api_key=os.getenv(""JINAAI_API_KEY""),
- })),
+)
- })),
- },
- ],
+# connect LLM
- None,
+from llama_index.llms.huggingface import HuggingFaceInferenceAPI
- )
- .await?;
-```
+mixtral_llm = HuggingFaceInferenceAPI(
+ model_name = ""mistralai/Mixtral-8x7B-Instruct-v0.1"",
+ token=os.getenv(""HF_INFERENCE_API_KEY""),
-```java
+)
-import java.util.List;
+```
-import java.util.Map;
+### Prepare data for RAG
-import static io.qdrant.client.PointIdFactory.id;
-import static io.qdrant.client.ValueFactory.value;
-import static io.qdrant.client.VectorsFactory.vectors;
+This example will use household appliance manuals, which are generally available as PDF documents.
+LlamaParse will help us parse these manuals, including the tables and figures they embed.
+In the `data` folder, we have three documents, and we will use LlamaParse to extract the textual content from the PDFs and use it as a knowledge base in a simple RAG.
-import io.qdrant.client.grpc.Points.PointStruct;
-import io.qdrant.client.grpc.Points.PointVectors;
-import io.qdrant.client.grpc.Points.PointsIdsList;
+The free LlamaIndex Cloud plan is sufficient for our example:
-import io.qdrant.client.grpc.Points.PointsSelector;
-import io.qdrant.client.grpc.Points.PointsUpdateOperation;
-import io.qdrant.client.grpc.Points.PointsUpdateOperation.ClearPayload;
+```python
-import io.qdrant.client.grpc.Points.PointsUpdateOperation.DeletePayload;
+import nest_asyncio
-import io.qdrant.client.grpc.Points.PointsUpdateOperation.DeletePoints;
+nest_asyncio.apply()
-import io.qdrant.client.grpc.Points.PointsUpdateOperation.DeleteVectors;
+from llama_parse import LlamaParse
-import io.qdrant.client.grpc.Points.PointsUpdateOperation.PointStructList;
-import io.qdrant.client.grpc.Points.PointsUpdateOperation.SetPayload;
-import io.qdrant.client.grpc.Points.PointsUpdateOperation.UpdateVectors;
+llamaparse_api_key = os.getenv(""LLAMA_CLOUD_API_KEY"")
-import io.qdrant.client.grpc.Points.VectorsSelector;
+llama_parse_documents = LlamaParse(api_key=llamaparse_api_key, result_type=""markdown"").load_data([
-client
+ ""data/DJ68-00682F_0.0.pdf"",
- .batchUpdateAsync(
+ ""data/F500E_WF80F5E_03445F_EN.pdf"",
- ""{collection_name}"",
+ ""data/O_ME4000R_ME19R7041FS_AA_EN.pdf""
- List.of(
+])
- PointsUpdateOperation.newBuilder()
+```
- .setUpsert(
- PointStructList.newBuilder()
- .addPoints(
+### Store data into Qdrant
- PointStruct.newBuilder()
+The code below does the following:
- .setId(id(1))
- .setVectors(vectors(1.0f, 2.0f, 3.0f, 4.0f))
- .build())
+- creates a vector store with the Qdrant client;
- .build())
+- gets an embedding for each chunk using the Jina Embeddings API;
- .build(),
+- combines `sparse` and `dense` vectors for hybrid search;
- PointsUpdateOperation.newBuilder()
+- stores all data into Qdrant;
- .setUpdateVectors(
- UpdateVectors.newBuilder()
- .addPoints(
+Hybrid search with Qdrant must be enabled from the beginning - we can simply set `enable_hybrid=True`.
- PointVectors.newBuilder()
- .setId(id(1))
- .setVectors(vectors(1.0f, 2.0f, 3.0f, 4.0f))
+```python
- .build())
+# By default llamaindex uses OpenAI models
- .build())
+# setting embed_model to Jina and llm model to Mixtral
- .build(),
+from llama_index.core import Settings
- PointsUpdateOperation.newBuilder()
+Settings.embed_model = jina_embedding_model
- .setDeleteVectors(
+Settings.llm = mixtral_llm
- DeleteVectors.newBuilder()
- .setPointsSelector(
- PointsSelector.newBuilder()
+from llama_index.core import VectorStoreIndex, StorageContext
- .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
+from llama_index.vector_stores.qdrant import QdrantVectorStore
- .build())
+import qdrant_client
- .setVectors(VectorsSelector.newBuilder().addNames("""").build())
- .build())
- .build(),
+client = qdrant_client.QdrantClient(
- PointsUpdateOperation.newBuilder()
+ url=os.getenv(""QDRANT_HOST""),
- .setOverwritePayload(
+ api_key=os.getenv(""QDRANT_API_KEY"")
- SetPayload.newBuilder()
+)
- .setPointsSelector(
- PointsSelector.newBuilder()
- .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
+vector_store = QdrantVectorStore(
- .build())
+ client=client, collection_name=""demo"", enable_hybrid=True, batch_size=20
- .putAllPayload(Map.of(""test_payload"", value(1)))
+)
- .build())
+Settings.chunk_size = 512
- .build(),
- PointsUpdateOperation.newBuilder()
- .setSetPayload(
+storage_context = StorageContext.from_defaults(vector_store=vector_store)
- SetPayload.newBuilder()
+index = VectorStoreIndex.from_documents(
- .setPointsSelector(
+ documents=llama_parse_documents,
- PointsSelector.newBuilder()
+ storage_context=storage_context
- .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
+)
- .build())
+```
- .putAllPayload(
- Map.of(""test_payload_2"", value(2), ""test_payload_3"", value(3)))
- .build())
+### Prepare a prompt
- .build(),
+Here we will create a custom prompt template. This prompt asks the LLM to use only the context information retrieved from Qdrant. When querying with hybrid mode, we can set `similarity_top_k` and `sparse_top_k` separately:
- PointsUpdateOperation.newBuilder()
- .setDeletePayload(
- DeletePayload.newBuilder()
+- `sparse_top_k` represents how many nodes will be retrieved from each dense and sparse query.
- .setPointsSelector(
+- `similarity_top_k` controls the final number of nodes returned after the dense and sparse results are fused.
- PointsSelector.newBuilder()
- .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
- .build())
+Then, we assemble the query engine using the prompt.
- .addKeys(""test_payload_2"")
- .build())
- .build(),
+```python
- PointsUpdateOperation.newBuilder()
+from llama_index.core import PromptTemplate
- .setClearPayload(
- ClearPayload.newBuilder()
- .setPoints(
+qa_prompt_tmpl = (
- PointsSelector.newBuilder()
+ ""Context information is below.\n""
- .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
+ ""-------------------------------""
- .build())
+ ""{context_str}\n""
- .build())
+ ""-------------------------------""
- .build(),
+ ""Given the context information and not prior knowledge,""
- PointsUpdateOperation.newBuilder()
+ ""answer the query. Please be concise, and complete.\n""
- .setDeletePoints(
+ ""If the context does not contain an answer to the query,""
- DeletePoints.newBuilder()
+ ""respond with \""I don't know!\"".""
- .setPoints(
+ ""Query: {query_str}\n""
- PointsSelector.newBuilder()
+ ""Answer: ""
- .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
+)
- .build())
+qa_prompt = PromptTemplate(qa_prompt_tmpl)
- .build())
- .build()))
- .get();
+from llama_index.core.retrievers import VectorIndexRetriever
-```
+from llama_index.core.query_engine import RetrieverQueryEngine
+from llama_index.core import get_response_synthesizer
+from llama_index.core import Settings
-To batch many points with a single operation type, please use batching
+Settings.embed_model = jina_embedding_model
-functionality in that operation directly.
-",documentation/concepts/points.md
-"---
+Settings.llm = mixtral_llm
-title: Snapshots
-weight: 110
-aliases:
+# retriever
- - ../snapshots
+retriever = VectorIndexRetriever(
----
+ index=index,
+ similarity_top_k=2,
+ sparse_top_k=12,
-# Snapshots
+ vector_store_query_mode=""hybrid""
+)
-*Available as of v0.8.4*
+# response synthesizer
+response_synthesizer = get_response_synthesizer(
-Snapshots are `tar` archive files that contain data and configuration of a specific collection on a specific node at a specific time. In a distributed setup, when you have multiple nodes in your cluster, you must create snapshots for each node separately when dealing with a single collection.
+ llm=mixtral_llm,
+ text_qa_template=qa_prompt,
+ response_mode=""compact"",
-This feature can be used to archive data or easily replicate an existing deployment. For disaster recovery, Qdrant Cloud users may prefer to use [Backups](/documentation/cloud/backups/) instead, which are physical disk-level copies of your data.
+)
-For a step-by-step guide on how to use snapshots, see our [tutorial](/documentation/tutorials/create-snapshot/).
+# query engine
+query_engine = RetrieverQueryEngine(
+ retriever=retriever,
-## Store snapshots
+ response_synthesizer=response_synthesizer,
+)
+```
-The target directory used to store generated snapshots is controlled through the [configuration](../../guides/configuration) or using the ENV variable: `QDRANT__STORAGE__SNAPSHOTS_PATH=./snapshots`.
+## Run a test query
-You can set the snapshots storage directory from the [config.yaml](https://github.com/qdrant/qdrant/blob/master/config/config.yaml) file. If no value is given, default is `./snapshots`.
+Now you can ask questions and receive answers based on the data:
-```yaml
+**Question**
-storage:
- # Specify where you want to store snapshots.
- snapshots_path: ./snapshots
+```python
-```
+result = query_engine.query(""What temperature should I use for my laundry?"")
+print(result.response)
+```
-*Available as of v1.3.0*
+**Answer**
-While a snapshot is being created, temporary files are by default placed in the configured storage directory.
-This location may have limited capacity or be on a slow network-attached disk. You may specify a separate location for temporary files:
+```text
+The water temperature is set to 70 ˚C during the Eco Drum Clean cycle. You cannot change the water temperature. However, the temperature for other cycles is not specified in the context.
-```yaml
+```
-storage:
- # Where to store temporary files
- temp_path: /tmp
+And that's it! Feel free to scale this up to as many documents and complex PDFs as you like. ",documentation/examples/hybrid-search-llamaindex-jinaai.md
+"---
-```
+title: Region-Specific Contract Management System
+weight: 28
+social_preview_image: /blog/hybrid-cloud-aleph-alpha/hybrid-cloud-aleph-alpha-tutorial.png
-## Create snapshot
+aliases:
+ - /documentation/tutorials/rag-contract-management-stackit-aleph-alpha/
+---
-
+# Region-Specific Contract Management System
-To create a new snapshot for an existing collection:
+| Time: 90 min | Level: Advanced | | |
-```http
+| --- | ----------- | ----------- |----------- |
-POST /collections/{collection_name}/snapshots
-```
+Contract management benefits greatly from Retrieval Augmented Generation (RAG), streamlining the handling of lengthy business contract texts. With AI assistance, complex questions can be asked and well-informed answers generated, facilitating efficient document management. This proves invaluable for businesses with extensive relationships, like shipping companies, construction firms, and consulting practices. Access to such contracts is often restricted to authorized team members due to security and regulatory requirements, such as GDPR in Europe, necessitating secure storage practices.
-```python
-from qdrant_client import QdrantClient
+Companies want their data to be kept and processed within specific geographical boundaries. For that reason, this RAG-centric tutorial focuses on dealing with a region-specific cloud provider. You will set up a contract management system using [Aleph Alpha's](https://aleph-alpha.com/) embeddings and LLM. You will host everything on [STACKIT](https://www.stackit.de/), a German business cloud provider. On this platform, you will run Qdrant Hybrid Cloud as well as the rest of your RAG application. This setup will ensure that your data is stored and processed in Germany.
-client = QdrantClient(""localhost"", port=6333)
+![Architecture diagram](/documentation/examples/contract-management-stackit-aleph-alpha/architecture-diagram.png)
-client.create_snapshot(collection_name=""{collection_name}"")
+## Components
-```
+A contract management platform is not a simple CLI tool, but an application that should be available to all team
-```typescript
+members. It needs an interface to upload, search, and manage the documents. Ideally, the system should be
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+integrated with the organization's existing stack, with permissions and access controls inherited from LDAP or Active
+Directory.
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+> **Note:** In this tutorial, we are going to build a solid foundation for such a system. However, it is up to your organization's setup to implement the entire solution.
-client.createSnapshot(""{collection_name}"");
-```
+- **Dataset** - a collection of documents in different formats, such as PDF or DOCX, scraped from the internet
+- **Asymmetric semantic embeddings** - [Aleph Alpha embedding](https://docs.aleph-alpha.com/api/semantic-embed/) to
+ convert the queries and the documents into vectors
-```rust
+- **Large Language Model** - the [Luminous-extended-control
-use qdrant_client::client::QdrantClient;
+ model](https://docs.aleph-alpha.com/docs/introduction/model-card/), but you can play with a different one from the
+ Luminous family
+- **Qdrant Hybrid Cloud** - a knowledge base to store the vectors and search over the documents
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+- **STACKIT** - a [German business cloud](https://www.stackit.de) to run the Qdrant Hybrid Cloud and the application
+ processes
-client.create_snapshot(""{collection_name}"").await?;
-```
+We will implement the process of uploading the documents, converting them into vectors, and storing them in Qdrant.
+Then, we will build a search interface to query the documents and get the answers. All of this assumes the user
+interacts with the system under a set of permissions and can only access the documents they are allowed to see.
-```java
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
+## Prerequisites
-QdrantClient client =
+### Aleph Alpha account
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+Since you will be using Aleph Alpha's models, [sign up](https://app.aleph-alpha.com/signup) with their managed service and generate an API token in the [User Profile](https://app.aleph-alpha.com/profile). Once you have it ready, store it as an environment variable:
-client.createSnapshotAsync(""{collection_name}"").get();
-```
+```shell
+export ALEPH_ALPHA_API_KEY=""""
-```csharp
+```
-using Qdrant.Client;
+```python
-var client = new QdrantClient(""localhost"", 6334);
+import os
-await client.CreateSnapshotAsync(""{collection_name}"");
+os.environ[""ALEPH_ALPHA_API_KEY""] = """"
```
-This is a synchronous operation for which a `tar` archive file will be generated into the `snapshot_path`.
+### Qdrant Hybrid Cloud on STACKIT
-### Delete snapshot
+Please refer to our documentation to see [how to deploy Qdrant Hybrid Cloud on
+STACKIT](/documentation/hybrid-cloud/platform-deployment-options/#stackit). Once you finish the deployment, you will
+have the API endpoint to interact with the Qdrant server. Let's store it in the environment variable as well:
-*Available as of v1.0.0*
+```shell
-```http
+export QDRANT_URL=""https://qdrant.example.com""
-DELETE /collections/{collection_name}/snapshots/{snapshot_name}
+export QDRANT_API_KEY=""your-api-key""
```
@@ -31613,295 +30993,325 @@ DELETE /collections/{collection_name}/snapshots/{snapshot_name}
```python
-from qdrant_client import QdrantClient
+os.environ[""QDRANT_URL""] = ""https://qdrant.example.com""
+
+os.environ[""QDRANT_API_KEY""] = ""your-api-key""
+```
-client = QdrantClient(""localhost"", port=6333)
+With that in place, Qdrant is running on a specific URL and access is restricted by the API key, both of which are now stored as environment variables.
-client.delete_snapshot(
- collection_name=""{collection_name}"", snapshot_name=""{snapshot_name}""
+*Optional:* Whenever you use LangChain, you can also [configure LangSmith](https://docs.smith.langchain.com/), which will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/).
-)
-```
+```shell
+export LANGCHAIN_TRACING_V2=true
-```typescript
+export LANGCHAIN_API_KEY=""your-api-key""
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+export LANGCHAIN_PROJECT=""your-project"" # if not specified, defaults to ""default""
+```
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+## Implementation
-client.deleteSnapshot(""{collection_name}"", ""{snapshot_name}"");
-```
+To build the application, we can use the official SDKs of Aleph Alpha and Qdrant. However, to streamline the process
+let's use [LangChain](https://python.langchain.com/docs/get_started/introduction). This framework is already integrated with both services, so we can focus our efforts on
+developing business logic.
-```rust
-use qdrant_client::client::QdrantClient;
+### Qdrant collection
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+Aleph Alpha embeddings are high-dimensional vectors by default, with a dimensionality of `5120`. However, a fairly
+unique feature of the model is that they can be compressed to a size of `128`, with only a small drop in accuracy
-client.delete_snapshot(""{collection_name}"", ""{snapshot_name}"").await?;
+(4-6%, according to the docs). Qdrant can easily store even the original vectors, and it is a
-```
+good idea to enable [Binary Quantization](/documentation/guides/quantization/#binary-quantization) to save space and
+make retrieval faster. Let's create a collection with these settings:
-```java
-import io.qdrant.client.QdrantClient;
+```python
-import io.qdrant.client.QdrantGrpcClient;
+from qdrant_client import QdrantClient, models
-QdrantClient client =
+client = QdrantClient(
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ location=os.environ[""QDRANT_URL""],
+ api_key=os.environ[""QDRANT_API_KEY""],
+)
-client.deleteSnapshotAsync(""{collection_name}"", ""{snapshot_name}"").get();
+client.create_collection(
-```
+ collection_name=""contracts"",
+ vectors_config=models.VectorParams(
+ size=5120,
-```csharp
+ distance=models.Distance.COSINE,
-using Qdrant.Client;
+ quantization_config=models.BinaryQuantization(
+ binary=models.BinaryQuantizationConfig(
+ always_ram=True,
-var client = new QdrantClient(""localhost"", 6334);
+ )
+ )
+ ),
-await client.DeleteSnapshotAsync(collectionName: ""{collection_name}"", snapshotName: ""{snapshot_name}"");
+)
```
-## List snapshot
+We are going to use the `contracts` collection to store the vectors of the documents. The `always_ram` flag is set to
+`True` to keep the quantized vectors in RAM, which speeds up the search process. We also want to restrict access
+to the individual documents, so only users with the proper permissions can see them. In Qdrant, this can be solved by
-List of snapshots for a collection:
+adding a payload field that defines who can access the document. We'll call this field `roles` and set it to an array
+of strings with the roles that can access the document.
-```http
-GET /collections/{collection_name}/snapshots
+```python
-```
+client.create_payload_index(
+ collection_name=""contracts"",
+ field_name=""metadata.roles"",
-```python
+ field_schema=models.PayloadSchemaType.KEYWORD,
-from qdrant_client import QdrantClient
+)
+```
-client = QdrantClient(""localhost"", port=6333)
+Since we use LangChain, the `roles` field is nested under `metadata`, so we have to define it as
+`metadata.roles`. The schema says that the field is a keyword, which means it is a string or an array of strings. We are
-client.list_snapshots(collection_name=""{collection_name}"")
+going to use the customer names as the roles, so access control will be based on the customer name.
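+To make this concrete, below is a hypothetical example of the payload stored with each chunk. LangChain keeps the chunk
+text under `page_content` and the document metadata (including the `roles` used for access control) under `metadata`,
+which is why the index above targets `metadata.roles`:
+
+```python
+# A hypothetical example of a point payload after ingestion with LangChain.
+# The exact ""source"" path and text depend on the documents you ingest.
+example_payload = {
+    ""page_content"": ""The Customer is entitled to perform one audit per calendar year..."",
+    ""metadata"": {
+        ""source"": ""data/Data-Processing-Agreement_STACKIT_Cloud_version-1.2.pdf"",
+        ""roles"": [""stackit""],
+    },
+}
+```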
-```
+### Ingestion pipeline
-```typescript
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+Semantic search systems rely on high-quality data as their foundation. With the [unstructured integration of Langchain](https://python.langchain.com/docs/integrations/providers/unstructured), ingestion of various document formats like PDFs, Microsoft Word files, and PowerPoint presentations becomes effortless. However, it's crucial to split the text intelligently to avoid converting entire documents into vectors; instead, they should be divided into meaningful chunks. Subsequently, the extracted documents are converted into vectors using Aleph Alpha embeddings and stored in the Qdrant collection.
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+Let's start by defining the components and connecting them together:
-client.listSnapshots(""{collection_name}"");
-```
+```python
+from langchain_community.embeddings import AlephAlphaAsymmetricSemanticEmbedding
+from langchain_community.vectorstores import Qdrant
+
+embeddings = AlephAlphaAsymmetricSemanticEmbedding(
+ model=""luminous-base"",
-```rust
+ aleph_alpha_api_key=os.environ[""ALEPH_ALPHA_API_KEY""],
+
+ normalize=True,
-use qdrant_client::client::QdrantClient;
+)
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+qdrant = Qdrant(
+ client=client,
+ collection_name=""contracts"",
-client.list_snapshots(""{collection_name}"").await?;
+ embeddings=embeddings,
+
+)
```
-```java
+Now it's high time to index our documents. Each of the documents is a separate file, and we also have to know the
-import io.qdrant.client.QdrantClient;
+customer name to set the access control properly. There might be several roles for a single document, so let's keep them
-import io.qdrant.client.QdrantGrpcClient;
+in a list.
-QdrantClient client =
+```python
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+documents = {
+ ""data/Data-Processing-Agreement_STACKIT_Cloud_version-1.2.pdf"": [""stackit""],
+ ""data/langchain-terms-of-service.pdf"": [""langchain""],
-client.listSnapshotAsync(""{collection_name}"").get();
+}
```
-```csharp
+This is what the documents might look like:
-using Qdrant.Client;
+![Example of the indexed document](/documentation/examples/contract-management-stackit-aleph-alpha/indexed-document.png)
-var client = new QdrantClient(""localhost"", 6334);
+Each document has to be split into chunks first; there is no silver bullet for chunking. Our algorithm will be simple and based on
-await client.ListSnapshotsAsync(""{collection_name}"");
+recursive splitting, with a maximum chunk size of 500 characters and an overlap of 100 characters.
-```
+```python
-## Retrieve snapshot
+from langchain_text_splitters import RecursiveCharacterTextSplitter
-
+text_splitter = RecursiveCharacterTextSplitter(
+ chunk_size=500,
+ chunk_overlap=100,
-To download a specified snapshot from a collection as a file:
+)
+```
-```http
-GET /collections/{collection_name}/snapshots/{snapshot_name}
+Now we can iterate over the documents, split them into chunks, convert them into vectors with the Aleph Alpha embedding
-```
+model, and store them in Qdrant.
-```shell
+```python
-curl 'http://{qdrant-url}:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot' \
+from langchain_community.document_loaders.unstructured import UnstructuredFileLoader
- -H 'api-key: ********' \
- --output 'filename.snapshot'
-```
+for document_path, roles in documents.items():
+ document_loader = UnstructuredFileLoader(file_path=document_path)
-## Restore snapshot
+ # Unstructured loads each file into a single Document object
+ loaded_documents = document_loader.load()
-
+ for doc in loaded_documents:
+ doc.metadata[""roles""] = roles
-Snapshots can be restored in three possible ways:
+ # Chunks will have the same metadata as the original document
+ document_chunks = text_splitter.split_documents(loaded_documents)
-1. [Recovering from a URL or local file](#recover-from-a-url-or-local-file) (useful for restoring a snapshot file that is on a remote server or already stored on the node)
-3. [Recovering from an uploaded file](#recover-from-an-uploaded-file) (useful for migrating data to a new cluster)
-3. [Recovering during start-up](#recover-during-start-up) (useful when running a self-hosted single-node Qdrant instance)
+ # Add the documents to the Qdrant collection
+ qdrant.add_documents(document_chunks, batch_size=20)
+```
-Regardless of the method used, Qdrant will extract the shard data from the snapshot and properly register shards in the cluster.
-If there are other active replicas of the recovered shards in the cluster, Qdrant will replicate them to the newly recovered node by default to maintain data consistency.
+Our collection is filled with data, and we can start searching over it. In a real-world scenario, the ingestion process
+should be automated and triggered by new documents uploaded to the system. Since we already use Qdrant Hybrid Cloud
-### Recover from a URL or local file
+running on Kubernetes, we can easily deploy the ingestion pipeline as a job to the same environment. On STACKIT, you
+probably use the [STACKIT Kubernetes Engine (SKE)](https://www.stackit.de/en/product/kubernetes/) and launch it in a
+container. The [Compute Engine](https://www.stackit.de/en/product/stackit-compute-engine/) is also an option, but
-*Available as of v0.11.3*
+everything depends on the specifics of your organization.
-This method of recovery requires the snapshot file to be downloadable from a URL or exist as a local file on the node (like if you [created the snapshot](#create-snapshot) on this node previously). If instead you need to upload a snapshot file, see the next section.
+### Search application
-To recover from a URL or local file use the [snapshot recovery endpoint](https://qdrant.github.io/qdrant/redoc/index.html#tag/collections/operation/recover_from_snapshot). This endpoint accepts either a URL like `https://example.com` or a [file URI](https://en.wikipedia.org/wiki/File_URI_scheme) like `file:///tmp/snapshot-2022-10-10.snapshot`. If the target collection does not exist, it will be created.
+Specialized Document Management Systems have a lot of features, but semantic search is not yet a standard. We are going
+to build a simple search mechanism which could be integrated with an existing system. The search process is
+quite simple: we convert the query into a vector using the same Aleph Alpha model, and then search for the most similar
-```http
+documents in the Qdrant collection. The access control is also applied, so the user can only see the documents they are
-PUT /collections/{collection_name}/snapshots/recover
+allowed to.
-{
- ""location"": ""http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.shapshot""
-}
+We start by creating an instance of the LLM of our choice and set the maximum number of tokens to 200, as the default
-```
+value is 64, which might be too low for our purposes.
```python
-from qdrant_client import QdrantClient
-
+from langchain.llms.aleph_alpha import AlephAlpha
-client = QdrantClient(""qdrant-node-2"", port=6333)
+llm = AlephAlpha(
+ model=""luminous-extended-control"",
-client.recover_snapshot(
-
- ""{collection_name}"",
+ aleph_alpha_api_key=os.environ[""ALEPH_ALPHA_API_KEY""],
- ""http://qdrant-node-1:6333/collections/collection_name/snapshots/snapshot-2022-10-10.shapshot"",
+ maximum_tokens=200,
)
@@ -31909,947 +31319,1005 @@ client.recover_snapshot(
-```typescript
-
-import { QdrantClient } from ""@qdrant/js-client-rest"";
-
-
-
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+Then, we can glue the components together and build the search process. `RetrievalQA` is a class that implements
+the Retrieval Question Answering process, with a specified retriever and Large Language Model. The instance of `Qdrant` can be
+converted into a retriever, with an additional filter that will be passed to the `similarity_search` method. The filter
-client.recoverSnapshot(""{collection_name}"", {
+is created as [in a regular Qdrant query](../../../documentation/concepts/filtering/), with the `roles` field set to the
- location: ""http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.shapshot"",
+user's roles.
-});
-```
+```python
+user_roles = [""stackit"", ""aleph-alpha""]
-
+qdrant_retriever = qdrant.as_retriever(
-### Recover from an uploaded file
+ search_kwargs={
+ ""filter"": models.Filter(
+ must=[
-The snapshot file can also be uploaded as a file and restored using the [recover from uploaded snapshot](https://qdrant.github.io/qdrant/redoc/index.html#tag/collections/operation/recover_from_uploaded_snapshot). This endpoint accepts the raw snapshot data in the request body. If the target collection does not exist, it will be created.
+ models.FieldCondition(
+ key=""metadata.roles"",
+ match=models.MatchAny(any=user_roles)
-```bash
+ )
-curl -X POST 'http://{qdrant-url}:6333/collections/{collection_name}/snapshots/upload?priority=snapshot' \
+ ]
- -H 'api-key: ********' \
+ )
- -H 'Content-Type:multipart/form-data' \
+ }
- -F 'snapshot=@/path/to/snapshot-2022-10-10.shapshot'
+)
```
-This method is typically used to migrate data from one cluster to another, so we recommend setting the [priority](#snapshot-priority) to ""snapshot"" for that use-case.
-
+We set the user roles to `stackit` and `aleph-alpha`, so the user can see the documents that are accessible to these
+customers, but not to the others. The final step is to create the `RetrievalQA` instance and use it to search over the
-### Recover during start-up
+documents, with the custom prompt.
-
+```python
+from langchain.prompts import PromptTemplate
+from langchain.chains.retrieval_qa.base import RetrievalQA
-If you have a single-node deployment, you can recover any collection at start-up and it will be immediately available.
-Restoring snapshots is done through the Qdrant CLI at start-up time via the `--snapshot` argument which accepts a list of pairs such as `:`
+prompt_template = """"""
+Question: {question}
-For example:
+Answer the question using the Source. If there's no answer, say ""NO ANSWER IN TEXT"".
-```bash
+Source: {context}
-./qdrant --snapshot /snapshots/test-collection-archive.snapshot:test-collection --snapshot /snapshots/test-collection-archive.snapshot:test-copy-collection
-```
+### Response:
+""""""
-The target collection **must** be absent otherwise the program will exit with an error.
+prompt = PromptTemplate(
+ template=prompt_template, input_variables=[""context"", ""question""]
+)
-If you wish instead to overwrite an existing collection, use the `--force_snapshot` flag with caution.
+retrieval_qa = RetrievalQA.from_chain_type(
-### Snapshot priority
+ llm=llm,
+ chain_type=""stuff"",
+ retriever=qdrant_retriever,
-When recovering a snapshot to a non-empty node, there may be conflicts between the snapshot data and the existing data. The ""priority"" setting controls how Qdrant handles these conflicts. The priority setting is important because different priorities can give very
+ return_source_documents=True,
-different end results. The default priority may not be best for all situations.
+ chain_type_kwargs={""prompt"": prompt},
+)
-The available snapshot recovery priorities are:
+response = retrieval_qa.invoke({""query"": ""What are the rules of performing the audit?""})
+print(response[""result""])
-- `replica`: _(default)_ prefer existing data over the snapshot.
+```
-- `snapshot`: prefer snapshot data over existing data.
-- `no_sync`: restore snapshot without any additional synchronization.
+Output:
-To recover a new collection from a snapshot, you need to set
-the priority to `snapshot`. With `snapshot` priority, all data from the snapshot
+```text
-will be recovered onto the cluster. With `replica` priority _(default)_, you'd
+The rules for performing the audit are as follows:
-end up with an empty collection because the collection on the cluster did not
-contain any points and that source was preferred.
+1. The Customer must inform the Contractor in good time (usually at least two weeks in advance) about any and all circumstances related to the performance of the audit.
+2. The Customer is entitled to perform one audit per calendar year. Any additional audits may be performed if agreed with the Contractor and are subject to reimbursement of expenses.
-`no_sync` is for specialized use cases and is not commonly used. It allows
+3. If the Customer engages a third party to perform the audit, the Customer must obtain the Contractor's consent and ensure that the confidentiality agreements with the third party are observed.
-managing shards and transferring shards between clusters manually without any
+4. The Contractor may object to any third party deemed unsuitable.
-additional synchronization. Using it incorrectly will leave your cluster in a
+```
-broken state.
+There are some other parameters that might be tuned to optimize the search process. The `k` parameter defines how many
-To recover from a URL, you specify an additional parameter in the request body:
+documents should be returned, but LangChain also allows us to control the retrieval process by choosing the type of the
+search operation. The default is `similarity`, which is just vector search, but we can also use `mmr` which stands for
+Maximal Marginal Relevance. It is a technique to diversify the search results, so the user gets the most relevant
-```http
+documents, but also the most diverse ones. The `mmr` search is slower, but might be more user-friendly.
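+As a rough sketch, these options could be passed when creating the retriever; the values below are illustrative and
+reuse the same role-based filter as before:
+
+```python
+# Illustrative only: a retriever that returns 5 diversified results via MMR,
+# keeping the role-based filter defined earlier.
+mmr_retriever = qdrant.as_retriever(
+    search_type=""mmr"",
+    search_kwargs={
+        ""k"": 5,
+        ""filter"": models.Filter(
+            must=[
+                models.FieldCondition(
+                    key=""metadata.roles"",
+                    match=models.MatchAny(any=user_roles),
+                )
+            ]
+        ),
+    },
+)
+```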
-PUT /collections/{collection_name}/snapshots/recover
-{
- ""location"": ""http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.shapshot"",
+Our search application is ready, and we can deploy it to the same environment as the ingestion pipeline on STACKIT. The
- ""priority"": ""snapshot""
+same rules apply here, so you can use the SKE or the Compute Engine, depending on the specifics of your organization.
-}
-```
+## Next steps
-```python
-from qdrant_client import QdrantClient, models
+We built a solid foundation for the contract management system, but there is still a lot to do. If you want to make the
+system production-ready, you should consider implementing the mechanism into your existing stack. If you have any
+questions, feel free to ask on our [Discord community](https://qdrant.to/discord).",documentation/examples/rag-contract-management-stackit-aleph-alpha.md
+"---
-client = QdrantClient(""qdrant-node-2"", port=6333)
+title: Implement Cohere RAG connector
+weight: 24
+aliases:
-client.recover_snapshot(
+ - /documentation/tutorials/cohere-rag-connector/
- ""{collection_name}"",
+---
- ""http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.shapshot"",
- priority=models.SnapshotPriority.SNAPSHOT,
-)
+# Implement custom connector for Cohere RAG
-```
+| Time: 45 min | Level: Intermediate | | |
-```typescript
+|--------------|---------------------|-|----|
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+The usual approach to implementing Retrieval Augmented Generation requires users to build their prompts with the
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+relevant context the LLM may rely on, and manually send them to the model. Cohere is quite unique here, as their
+models can now speak to external tools and extract meaningful data on their own. You can connect virtually any data
+source and let the Cohere LLM know how to access it. Obviously, vector search goes well with LLMs, and enabling semantic
-client.recoverSnapshot(""{collection_name}"", {
+search over your data is a typical case.
- location: ""http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.shapshot"",
- priority: ""snapshot""
-});
+Cohere RAG has lots of interesting features, such as inline citations, which help you to refer to the specific parts of
-```
+the documents used to generate the response.
-```bash
+![Cohere RAG citations](/documentation/tutorials/cohere-rag-connector/cohere-rag-citations.png)
-curl -X POST 'http://qdrant-node-1:6333/collections/{collection_name}/snapshots/upload?priority=snapshot' \
- -H 'api-key: ********' \
- -H 'Content-Type:multipart/form-data' \
+*Source: https://docs.cohere.com/docs/retrieval-augmented-generation-rag*
- -F 'snapshot=@/path/to/snapshot-2022-10-10.shapshot'
-```
+The connectors have to implement a specific interface and expose the data source as an HTTP REST API. The Cohere documentation
+[describes a general process of creating a connector](https://docs.cohere.com/docs/creating-and-deploying-a-connector).
-## Snapshots for the whole storage
+This tutorial guides you step by step on building such a service around Qdrant.
-*Available as of v0.8.5*
+## Qdrant connector
-Sometimes it might be handy to create snapshot not just for a single collection, but for the whole storage, including collection aliases.
+You probably already have some collections you would like to bring to the LLM. Maybe your pipeline was set up using some
-Qdrant provides a dedicated API for that as well. It is similar to collection-level snapshots, but does not require `collection_name`.
+of the popular libraries such as Langchain, Llama Index, or Haystack. Cohere connectors may implement even more complex
+logic, e.g. hybrid search. In our case, we are going to start with a fresh Qdrant collection, index data using Cohere
+Embed v3, build the connector, and finally connect it with the [Command-R model](https://txt.cohere.com/command-r/).
-
+### Building the collection
-### Create full storage snapshot
+First things first, let's build a collection and configure it for the Cohere `embed-multilingual-v3.0` model. It
-```http
+produces 1024-dimensional embeddings, and we can choose any of the distance metrics available in Qdrant. Our connector
-POST /snapshots
+will act as a personal assistant for a software engineer, and it will expose our notes to suggest priorities or
-```
+actions to perform.
```python
-from qdrant_client import QdrantClient
+from qdrant_client import QdrantClient, models
-client = QdrantClient(""localhost"", port=6333)
+client = QdrantClient(
+ ""https://my-cluster.cloud.qdrant.io:6333"",
+ api_key=""my-api-key"",
-client.create_full_snapshot()
+)
-```
+client.create_collection(
+ collection_name=""personal-notes"",
+ vectors_config=models.VectorParams(
-```typescript
+ size=1024,
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+ distance=models.Distance.DOT,
+ ),
+)
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+```
-client.createFullSnapshot();
+Our notes will be represented as simple JSON objects with a `title` and `text` of the specific note. The embeddings will
-```
+be created from the `text` field only.
-```rust
+```python
-use qdrant_client::client::QdrantClient;
+notes = [
+ {
+ ""title"": ""Project Alpha Review"",
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ ""text"": ""Review the current progress of Project Alpha, focusing on the integration of the new API. Check for any compatibility issues with the existing system and document the steps needed to resolve them. Schedule a meeting with the development team to discuss the timeline and any potential roadblocks.""
+ },
+ {
-client.create_full_snapshot().await?;
+ ""title"": ""Learning Path Update"",
-```
+ ""text"": ""Update the learning path document with the latest courses on React and Node.js from Pluralsight. Schedule at least 2 hours weekly to dedicate to these courses. Aim to complete the React course by the end of the month and the Node.js course by mid-next month.""
+ },
+ {
-```java
+ ""title"": ""Weekly Team Meeting Agenda"",
-import io.qdrant.client.QdrantClient;
+ ""text"": ""Prepare the agenda for the weekly team meeting. Include the following topics: project updates, review of the sprint backlog, discussion on the new feature requests, and a brainstorming session for improving remote work practices. Send out the agenda and the Zoom link by Thursday afternoon.""
-import io.qdrant.client.QdrantGrpcClient;
+ },
+ {
+ ""title"": ""Code Review Process Improvement"",
-QdrantClient client =
+ ""text"": ""Analyze the current code review process to identify inefficiencies. Consider adopting a new tool that integrates with our version control system. Explore options such as GitHub Actions for automating parts of the process. Draft a proposal with recommendations and share it with the team for feedback.""
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+ },
+ {
+ ""title"": ""Cloud Migration Strategy"",
-client.createFullSnapshotAsync().get();
+ ""text"": ""Draft a plan for migrating our current on-premise infrastructure to the cloud. The plan should cover the selection of a cloud provider, cost analysis, and a phased migration approach. Identify critical applications for the first phase and any potential risks or challenges. Schedule a meeting with the IT department to discuss the plan.""
-```
+ },
+ {
+ ""title"": ""Quarterly Goals Review"",
-```csharp
+ ""text"": ""Review the progress towards the quarterly goals. Update the documentation to reflect any completed objectives and outline steps for any remaining goals. Schedule individual meetings with team members to discuss their contributions and any support they might need to achieve their targets.""
-using Qdrant.Client;
+ },
+ {
+ ""title"": ""Personal Development Plan"",
-var client = new QdrantClient(""localhost"", 6334);
+ ""text"": ""Reflect on the past quarter's achievements and areas for improvement. Update the personal development plan to include new technical skills to learn, certifications to pursue, and networking events to attend. Set realistic timelines and check-in points to monitor progress.""
+ },
+ {
-await client.CreateFullSnapshotAsync();
+ ""title"": ""End-of-Year Performance Reviews"",
-```
+ ""text"": ""Start preparing for the end-of-year performance reviews. Collect feedback from peers and managers, review project contributions, and document achievements. Consider areas for improvement and set goals for the next year. Schedule preliminary discussions with each team member to gather their self-assessments.""
+ },
+ {
-### Delete full storage snapshot
+ ""title"": ""Technology Stack Evaluation"",
+ ""text"": ""Conduct an evaluation of our current technology stack to identify any outdated technologies or tools that could be replaced for better performance and productivity. Research emerging technologies that might benefit our projects. Prepare a report with findings and recommendations to present to the management team.""
+ },
-*Available as of v1.0.0*
+ {
+ ""title"": ""Team Building Event Planning"",
+ ""text"": ""Plan a team-building event for the next quarter. Consider activities that can be done remotely, such as virtual escape rooms or online game nights. Survey the team for their preferences and availability. Draft a budget proposal for the event and submit it for approval.""
-```http
+ }
-DELETE /snapshots/{snapshot_name}
+]
```
-```python
+Storing the embeddings along with the metadata is fairly simple.
-from qdrant_client import QdrantClient
+```python
-client = QdrantClient(""localhost"", port=6333)
+import cohere
+import uuid
-client.delete_full_snapshot(snapshot_name=""{snapshot_name}"")
-```
+cohere_client = cohere.Client(api_key=""my-cohere-api-key"")
-```typescript
+response = cohere_client.embed(
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+ texts=[
+ note.get(""text"")
+ for note in notes
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+ ],
+ model=""embed-multilingual-v3.0"",
+ input_type=""search_document"",
-client.deleteFullSnapshot(""{snapshot_name}"");
+)
-```
+client.upload_points(
-```rust
+ collection_name=""personal-notes"",
-use qdrant_client::client::QdrantClient;
+ points=[
+ models.PointStruct(
+ id=uuid.uuid4().hex,
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+ vector=embedding,
+ payload=note,
+ )
-client.delete_full_snapshot(""{snapshot_name}"").await?;
+ for note, embedding in zip(notes, response.embeddings)
-```
+ ]
+)
+```
-```java
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
+Our collection is now ready to be searched over. In the real world, the set of notes would be changing over time, so the
+ingestion process won't be as straightforward. This data is not yet exposed to the LLM, but we will build the connector
+in the next step.
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+### Connector web service
-client.deleteFullSnapshotAsync(""{snapshot_name}"").get();
-```
+[FastAPI](https://fastapi.tiangolo.com/) is a modern web framework and a perfect choice for a simple HTTP API. We are
+going to use it for the purposes of our connector. There will be just one endpoint, as required by the model. It will
+accept POST requests at the `/search` path. There is a single `query` parameter required. Let's define a corresponding
-```csharp
+model.
-using Qdrant.Client;
+```python
-var client = new QdrantClient(""localhost"", 6334);
+from pydantic import BaseModel
-await client.DeleteFullSnapshotAsync(""{snapshot_name}"");
+class SearchQuery(BaseModel):
+
+ query: str
```
-### List full storage snapshots
+A RAG connector does not have to return the documents in any specific format. There are [some good practices to follow](https://docs.cohere.com/docs/creating-and-deploying-a-connector#configure-the-connection-between-the-connector-and-the-chat-api),
+but Cohere models are quite flexible here. Results just have to be returned as JSON, with a list of objects in a
+`results` property of the output. We will use the same document structure as we did for the Qdrant payloads, so there
-```http
+is no conversion required. That requires two additional models to be created.
-GET /snapshots
-```
+```python
+from typing import List
-```python
-from qdrant_client import QdrantClient
+class Document(BaseModel):
+ title: str
-client = QdrantClient(""localhost"", port=6333)
+ text: str
-client.list_full_snapshots()
+class SearchResults(BaseModel):
+
+ results: List[Document]
```
-```typescript
+Once our model classes are ready, we can implement the logic that will get the query and provide the notes that are
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+relevant to it. Please note the LLM is not going to define the number of documents to be returned. It is completely
+up to you how many of them you want to bring into the context.
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+There are two services we need to interact with - the Qdrant server and the Cohere API. FastAPI has a concept of [dependency
+injection](https://fastapi.tiangolo.com/tutorial/dependencies/#dependencies), and we will use it to provide both
-client.listFullSnapshots();
+clients into the implementation.
-```
+In case of queries, we need to set the `input_type` to `search_query` in the calls to Cohere API.
-```rust
-use qdrant_client::client::QdrantClient;
+```python
+from fastapi import FastAPI, Depends
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
+from typing import Annotated
-client.list_full_snapshots().await?;
+app = FastAPI()
-```
+def client() -> QdrantClient:
-```java
+ return QdrantClient(config.QDRANT_URL, api_key=config.QDRANT_API_KEY)
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
+def cohere_client() -> cohere.Client:
+ return cohere.Client(api_key=config.COHERE_API_KEY)
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+@app.post(""/search"")
+def search(
-client.listFullSnapshotAsync().get();
+ query: SearchQuery,
-```
+ client: Annotated[QdrantClient, Depends(client)],
+ cohere_client: Annotated[cohere.Client, Depends(cohere_client)],
+) -> SearchResults:
-```csharp
+ response = cohere_client.embed(
-using Qdrant.Client;
+ texts=[query.query],
+ model=""embed-multilingual-v3.0"",
+ input_type=""search_query"",
-var client = new QdrantClient(""localhost"", 6334);
+ )
+ results = client.query_points(
+ collection_name=""personal-notes"",
-await client.ListFullSnapshotsAsync();
+ query=response.embeddings[0],
-```
+ limit=2,
+ ).points
+ return SearchResults(
-### Download full storage snapshot
+ results=[
+ Document(**point.payload)
+ for point in results
-
+ ]
+ )
+```
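+The snippet above assumes a small `config` module that holds the connection settings. A minimal, hypothetical version
+might look like this:
+
+```python
+# config.py - a hypothetical settings module assumed by the connector above
+import os
+
+QDRANT_URL = os.getenv(""QDRANT_URL"", ""https://my-cluster.cloud.qdrant.io:6333"")
+QDRANT_API_KEY = os.getenv(""QDRANT_API_KEY"", """")
+COHERE_API_KEY = os.getenv(""COHERE_API_KEY"", """")
+```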
-```http
-GET /snapshots/{snapshot_name}
-```
+Our app might be launched locally for development purposes, given we have the `uvicorn` server installed:
-## Restore full storage snapshot
+```shell
+uvicorn main:app
+```
-Restoring snapshots can only be done through the Qdrant CLI at startup time.
+FastAPI exposes interactive documentation at `http://localhost:8000/docs`, where we can test our endpoint. The
-For example:
+`/search` endpoint is available there.
-```bash
+![FastAPI documentation](/documentation/tutorials/cohere-rag-connector/fastapi-openapi.png)
-./qdrant --storage-snapshot /snapshots/full-snapshot-2022-07-18-11-20-51.snapshot
-```
-",documentation/concepts/snapshots.md
-"---
-title: Filtering
+We can interact with it and check the documents that will be returned for a specific query. For example, we want to
-weight: 60
+recall what we are supposed to do regarding the infrastructure for our projects.
-aliases:
- - ../filtering
----
+```shell
+curl -X ""POST"" \
+ -H ""Content-type: application/json"" \
-# Filtering
+ -d '{""query"": ""Is there anything I have to do regarding the project infrastructure?""}' \
+ ""http://localhost:8000/search""
+```
-With Qdrant, you can set conditions when searching or retrieving points.
-For example, you can impose conditions on both the [payload](../payload) and the `id` of the point.
+The output should look like the following:
-Setting additional conditions is important when it is impossible to express all the features of the object in the embedding.
-Examples include a variety of business requirements: stock availability, user location, or desired price range.
+```json
+{
+ ""results"": [
-## Filtering clauses
+ {
+ ""title"": ""Cloud Migration Strategy"",
+ ""text"": ""Draft a plan for migrating our current on-premise infrastructure to the cloud. The plan should cover the selection of a cloud provider, cost analysis, and a phased migration approach. Identify critical applications for the first phase and any potential risks or challenges. Schedule a meeting with the IT department to discuss the plan.""
-Qdrant allows you to combine conditions in clauses.
+ },
-Clauses are different logical operations, such as `OR`, `AND`, and `NOT`.
+ {
-Clauses can be recursively nested into each other so that you can reproduce an arbitrary boolean expression.
+ ""title"": ""Project Alpha Review"",
+ ""text"": ""Review the current progress of Project Alpha, focusing on the integration of the new API. Check for any compatibility issues with the existing system and document the steps needed to resolve them. Schedule a meeting with the development team to discuss the timeline and any potential roadblocks.""
+ }
-Let's take a look at the clauses implemented in Qdrant.
+ ]
+}
+```
-Suppose we have a set of points with the following payload:
+### Connecting to Command-R
-```json
-[
- { ""id"": 1, ""city"": ""London"", ""color"": ""green"" },
+Our web service is implemented, but it is still running only on our local machine. It has to be exposed to the public before
- { ""id"": 2, ""city"": ""London"", ""color"": ""red"" },
+Command-R can interact with it. For a quick experiment, it might be enough to set up tunneling using services such as
- { ""id"": 3, ""city"": ""London"", ""color"": ""blue"" },
+[ngrok](https://ngrok.com/). We won't cover all the details in the tutorial, but their
- { ""id"": 4, ""city"": ""Berlin"", ""color"": ""red"" },
+[Quickstart](https://ngrok.com/docs/guides/getting-started/) is a great resource describing the process step-by-step.
- { ""id"": 5, ""city"": ""Moscow"", ""color"": ""green"" },
+Alternatively, you can also deploy the service with a public URL.
- { ""id"": 6, ""city"": ""Moscow"", ""color"": ""blue"" }
-]
-```
+Once it's done, we can create the connector first, and then tell the model to use it, while interacting through the chat
+API. Creating a connector is a single call to the Cohere client:
-### Must
+```python
+connector_response = cohere_client.connectors.create(
-Example:
+ name=""personal-notes"",
+ url=""https:/this-is-my-domain.app/search"",
+)
-```http
+```
-POST /collections/{collection_name}/points/scroll
-{
- ""filter"": {
+The `connector_response.connector` will be a descriptor, with `id` being one of the attributes. We'll use this
- ""must"": [
+identifier for our interactions like this:
- { ""key"": ""city"", ""match"": { ""value"": ""London"" } },
- { ""key"": ""color"", ""match"": { ""value"": ""red"" } }
- ]
+```python
- }
+response = cohere_client.chat(
- ...
+ message=(
-}
+ ""Is there anything I have to do regarding the project infrastructure? ""
-```
+ ""Please mention the tasks briefly.""
+ ),
+ connectors=[
-```python
+ cohere.ChatConnector(id=connector_response.connector.id)
-from qdrant_client import QdrantClient
+ ],
-from qdrant_client.http import models
+ model=""command-r"",
+)
+```
-client = QdrantClient(host=""localhost"", port=6333)
+We changed the `model` to `command-r`, as this is currently the best Cohere model available to the public. The
-client.scroll(
+`response.text` is the output of the model:
- collection_name=""{collection_name}"",
- scroll_filter=models.Filter(
- must=[
+```text
- models.FieldCondition(
+Here are some of the tasks related to project infrastructure that you might have to perform:
- key=""city"",
+- You need to draft a plan for migrating your on-premise infrastructure to the cloud and come up with a plan for the selection of a cloud provider, cost analysis, and a gradual migration approach.
- match=models.MatchValue(value=""London""),
+- It's important to evaluate your current technology stack to identify any outdated technologies. You should also research emerging technologies and the benefits they could bring to your projects.
- ),
+```
- models.FieldCondition(
- key=""color"",
- match=models.MatchValue(value=""red""),
+You only need to create a specific connector once! Please do not call `cohere_client.connectors.create` for every single
- ),
+message you send to the `chat` method.
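+A simple way to follow that advice is to create the connector once, keep its identifier, and reuse it for every chat
+call, for example:
+
+```python
+# Reuse the connector id created earlier for all subsequent chat requests.
+connector_id = connector_response.connector.id
+
+response = cohere_client.chat(
+    message=""What should I prepare for the weekly team meeting?"",
+    connectors=[cohere.ChatConnector(id=connector_id)],
+    model=""command-r"",
+)
+```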
- ]
- ),
-)
+## Wrapping up
-```
+We have built a Cohere RAG connector that integrates with your existing knowledge base stored in Qdrant. We covered just
-```typescript
+the basic flow, but in real-world scenarios, you should also consider, for example, [building an authentication
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+system](https://docs.cohere.com/docs/connector-authentication) to prevent unauthorized access.",documentation/examples/cohere-rag-connector.md
+"---
+title: Aleph Alpha Search
+weight: 16
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+draft: true
+---
-client.scroll(""{collection_name}"", {
- filter: {
+# Multimodal Semantic Search with Aleph Alpha
- must: [
- {
- key: ""city"",
+| Time: 30 min | Level: Beginner | | |
- match: { value: ""London"" },
+| --- | ----------- | ----------- |----------- |
- },
- {
- key: ""color"",
+This tutorial shows you how to run a proper multimodal semantic search system with a few lines of code, without the need to annotate the data or train your networks.
- match: { value: ""red"" },
- },
- ],
+In most cases, semantic search is limited to homogeneous data types for both documents and queries (text-text, image-image, audio-audio, etc.). With the recent growth of multimodal architectures, it is now possible to encode different data types into the same latent space. That opens up some great possibilities, as you can finally explore non-textual data, for example images, with text queries.
- },
-});
-```
+In the past, this would require labelling every image with a description of what it presents. Right now, you can rely on vector embeddings, which can represent all
+the inputs in the same space.
-```rust
-use qdrant_client::{
+*Figure 1: Two examples of text-image pairs presenting a similar object, encoded by a multimodal network into the same
- client::QdrantClient,
+2D latent space. Both texts are examples of English [pangrams](https://en.wikipedia.org/wiki/Pangram).
- qdrant::{Condition, Filter, ScrollPoints},
+https://deepai.org generated the images with pangrams used as input prompts.*
-};
+![](/docs/integrations/aleph-alpha/2d_text_image_embeddings.png)
-let client = QdrantClient::from_url(""http://localhost:6334"").build()?;
-client
- .scroll(&ScrollPoints {
+## Sample dataset
- collection_name: ""{collection_name}"".to_string(),
- filter: Some(Filter::must([
- Condition::matches(""city"", ""london"".to_string()),
+You will be using [COCO](https://cocodataset.org/), a large-scale object detection, segmentation, and captioning dataset. It provides
- Condition::matches(""color"", ""red"".to_string()),
+various splits, 330,000 images in total. For demonstration purposes, this tutorial uses the
- ])),
+[2017 validation split](http://images.cocodataset.org/zips/train2017.zip) that contains 5000 images from different
- ..Default::default()
+categories with a total size of about 19GB.
- })
+```terminal
- .await?;
+wget http://images.cocodataset.org/zips/train2017.zip
```
-```java
-
-import java.util.List;
-
+## Prerequisites
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+There is no need to curate your datasets or train the models. [Aleph Alpha](https://www.aleph-alpha.com/) has multimodality and multilinguality already built in. There is an [official Python client](https://github.com/Aleph-Alpha/aleph-alpha-client) that simplifies the integration.
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
+In order to enable the search capabilities, you need to build the search index to query on. For this example,
-import io.qdrant.client.grpc.Points.Filter;
+you are going to vectorize the images and store their embeddings along with the filenames. You can then return the most
-import io.qdrant.client.grpc.Points.ScrollPoints;
+similar files for a given query.
-QdrantClient client =
+There are a few things you need to set up before you start:
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+1. You need to have a Qdrant instance running. If you want to launch it locally,
-client
+ [Docker is the fastest way to do that](/documentation/quick_start/#installation).
- .scrollAsync(
+2. You need to have a registered [Aleph Alpha account](https://app.aleph-alpha.com/).
- ScrollPoints.newBuilder()
+3. Upon registration, create an API key (see: [API Tokens](https://app.aleph-alpha.com/profile)).
- .setCollectionName(""{collection_name}"")
- .setFilter(
- Filter.newBuilder()
+Now you can store the Aleph Alpha API key in a variable and choose the model you are going to use.
- .addAllMust(
- List.of(matchKeyword(""city"", ""London""), matchKeyword(""color"", ""red"")))
- .build())
+```python
- .build())
+aa_token = ""<< your_token >>""
- .get();
+model = ""luminous-base""
```
-```csharp
+## Vectorize the dataset
-using Qdrant.Client;
-using static Qdrant.Client.Grpc.Conditions;
+In this example, images have been extracted and are stored in the `val2017` directory:
-var client = new QdrantClient(""localhost"", 6334);
+```python
+from aleph_alpha_client import (
-// & operator combines two conditions in an AND conjunction(must)
+ Prompt,
-await client.ScrollAsync(
+ AsyncClient,
- collectionName: ""{collection_name}"",
+ SemanticEmbeddingRequest,
- filter: MatchKeyword(""city"", ""London"") & MatchKeyword(""color"", ""red"")
+ SemanticRepresentation,
-);
+ Image,
-```
+)
-Filtered points would be:
+from glob import glob
-```json
+ids, vectors, payloads = [], [], []
-[{ ""id"": 2, ""city"": ""London"", ""color"": ""red"" }]
+async with AsyncClient(token=aa_token) as aa_client:
-```
+ for i, image_path in enumerate(glob(""./val2017/*.jpg"")):
+ # Convert the JPEG file into the embedding by calling
+ # Aleph Alpha API
-When using `must`, the clause becomes `true` only if every condition listed inside `must` is satisfied.
+ prompt = Image.from_file(image_path)
-In this sense, `must` is equivalent to the operator `AND`.
+ prompt = Prompt.from_image(prompt)
+ query_params = {
+ ""prompt"": prompt,
-### Should
+ ""representation"": SemanticRepresentation.Symmetric,
+ ""compress_to_size"": 128,
+ }
-Example:
+ query_request = SemanticEmbeddingRequest(**query_params)
+ query_response = await aa_client.semantic_embed(request=query_request, model=model)
-```http
-POST /collections/{collection_name}/points/scroll
+ # Finally store the id, vector and the payload
-{
+ ids.append(i)
- ""filter"": {
+ vectors.append(query_response.embedding)
- ""should"": [
+ payloads.append({""filename"": image_path})
- { ""key"": ""city"", ""match"": { ""value"": ""London"" } },
+```
- { ""key"": ""color"", ""match"": { ""value"": ""red"" } }
- ]
- }
+## Load embeddings into Qdrant
-}
-```
+
+Add all the created embeddings, along with their ids and payloads, into the `COCO` collection.
```python
-client.scroll(
+import qdrant_client
- collection_name=""{collection_name}"",
+from qdrant_client.models import Batch, VectorParams, Distance
- scroll_filter=models.Filter(
- should=[
- models.FieldCondition(
+client = qdrant_client.QdrantClient()
- key=""city"",
+client.create_collection(
- match=models.MatchValue(value=""London""),
+ collection_name=""COCO"",
- ),
+ vectors_config=VectorParams(
- models.FieldCondition(
+ size=len(vectors[0]),
- key=""color"",
+ distance=Distance.COSINE,
- match=models.MatchValue(value=""red""),
+ ),
- ),
+)
- ]
+client.upsert(
+    collection_name=""COCO"",
+    points=Batch(
+        ids=ids,
+        vectors=vectors,
+        payloads=payloads,
),
@@ -32859,1089 +32327,1025 @@ client.scroll(
-```typescript
+## Query the database
-client.scroll(""{collection_name}"", {
- filter: {
- should: [
+The `luminous-base` model can provide you with vectors for both texts and images, which means you can run both
- {
+text queries and reverse image search. Assume you want to find images similar to the one below:
- key: ""city"",
- match: { value: ""London"" },
- },
+![An image used to query the database](/docs/integrations/aleph-alpha/visual_search_query.png)
- {
- key: ""color"",
- match: { value: ""red"" },
+The following code snippet creates its vector embedding and then performs the lookup in Qdrant:
- },
- ],
- },
+```python
-});
+async with AsyncClient(token=aa_token) as aa_client:
-```
+    prompt = Image.from_file(""query.jpg"")
+
+ prompt = Prompt.from_image(prompt)
-```rust
+ query_params = {
-use qdrant_client::qdrant::{Condition, Filter, ScrollPoints};
+ ""prompt"": prompt,
+ ""representation"": SemanticRepresentation.Symmetric,
+ ""compress_to_size"": 128,
-client
+ }
- .scroll(&ScrollPoints {
+ query_request = SemanticEmbeddingRequest(**query_params)
- collection_name: ""{collection_name}"".to_string(),
+ query_response = await aa_client.semantic_embed(request=query_request, model=model)
- filter: Some(Filter::should([
- Condition::matches(""city"", ""london"".to_string()),
- Condition::matches(""color"", ""red"".to_string()),
+ results = client.query_points(
- ])),
+ collection_name=""COCO"",
- ..Default::default()
+ query=query_response.embedding,
- })
+ limit=3,
- .await?;
+ ).points
+
+ print(results)
```
-```java
+Here are the results:
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+![Visual search results](/docs/integrations/aleph-alpha/visual_search_results.png)
-import io.qdrant.client.grpc.Points.Filter;
-import io.qdrant.client.grpc.Points.ScrollPoints;
-import java.util.List;
+**Note:** Aleph Alpha models can provide embeddings for English, French, German, Italian
+and Spanish. Your search is not only multimodal, but also multilingual, without any need for translations.
-client
- .scrollAsync(
+```python
- ScrollPoints.newBuilder()
+text = ""Surfing""
- .setCollectionName(""{collection_name}"")
- .setFilter(
- Filter.newBuilder()
+async with AsyncClient(token=aa_token) as aa_client:
- .addAllShould(
+ query_params = {
- List.of(matchKeyword(""city"", ""London""), matchKeyword(""color"", ""red"")))
+ ""prompt"": Prompt.from_text(text),
- .build())
+ ""representation"": SemanticRepresentation.Symmetric,
- .build())
+        ""compress_to_size"": 128,
- .get();
+ }
-```
+ query_request = SemanticEmbeddingRequest(**query_params)
+ query_response = await aa_client.semantic_embed(request=query_request, model=model)
-```csharp
-using Qdrant.Client;
+ results = client.query_points(
-using static Qdrant.Client.Grpc.Conditions;
+ collection_name=""COCO"",
+ query=query_response.embedding,
+ limit=3,
-var client = new QdrantClient(""localhost"", 6334);
+ ).points
+ print(results)
+```
-// | operator combines two conditions in an OR disjunction(should)
-await client.ScrollAsync(
- collectionName: ""{collection_name}"",
+Here are the top 3 results for “Surfing”:
- filter: MatchKeyword(""city"", ""London"") | MatchKeyword(""color"", ""red"")
-);
-```
+![Text search results](/docs/integrations/aleph-alpha/text_search_results.png)
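+
+If you plan to run many queries, it may be convenient to wrap the embedding call in a small helper so that image and text searches share the same code path. This is only a sketch based on the snippets above; the helper name `embed_prompt` is ours:
+
+```python
+async def embed_prompt(aa_client, prompt):
+    # Build the request exactly as in the snippets above
+    request = SemanticEmbeddingRequest(
+        prompt=prompt,
+        representation=SemanticRepresentation.Symmetric,
+        compress_to_size=128,
+    )
+    response = await aa_client.semantic_embed(request=request, model=model)
+    return response.embedding
+
+async with AsyncClient(token=aa_token) as aa_client:
+    # The same helper works for both modalities
+    image_vector = await embed_prompt(aa_client, Prompt.from_image(Image.from_file(""query.jpg"")))
+    text_vector = await embed_prompt(aa_client, Prompt.from_text(""Surfing""))
+```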
+",documentation/examples/aleph-alpha-search.md
+"---
+title: Private Chatbot for Interactive Learning
+weight: 23
-Filtered points would be:
+social_preview_image: /blog/hybrid-cloud-red-hat-openshift/hybrid-cloud-red-hat-openshift-tutorial.png
+aliases:
+ - /documentation/tutorials/rag-chatbot-red-hat-openshift-haystack/
-```json
+---
-[
- { ""id"": 1, ""city"": ""London"", ""color"": ""green"" },
- { ""id"": 2, ""city"": ""London"", ""color"": ""red"" },
+# Private Chatbot for Interactive Learning
- { ""id"": 3, ""city"": ""London"", ""color"": ""blue"" },
- { ""id"": 4, ""city"": ""Berlin"", ""color"": ""red"" }
-]
+| Time: 120 min | Level: Advanced | |
-```
+| --- | ----------- | ----------- |
-When using `should`, the clause becomes `true` if at least one condition listed inside `should` is satisfied.
+With chatbots, companies can scale their training programs to accommodate a large workforce, delivering consistent and standardized learning experiences across departments, locations, and time zones. Furthermore, having already completed their online training, corporate employees might want to refer back to old course materials. Most of this information is proprietary to the company, and manually searching through an entire library of materials takes time. However, a chatbot built on this knowledge can respond in the blink of an eye.
-In this sense, `should` is equivalent to the operator `OR`.
+With a simple RAG pipeline, you can build a private chatbot. In this tutorial, you will combine open source tools inside of a closed infrastructure and tie them together with a reliable framework. This custom solution lets you run a chatbot without public internet access. You will be able to keep sensitive data secure without compromising privacy.
-### Must Not
+![OpenShift](/documentation/examples/student-rag-haystack-red-hat-openshift-hc/openshift-diagram.png)
-Example:
+**Figure 1:** The LLM and Qdrant Hybrid Cloud are containerized as separate services. Haystack combines them into a RAG pipeline and exposes the API via Hayhooks.
-```http
+## Components
-POST /collections/{collection_name}/points/scroll
+To maintain complete data isolation, we need to limit ourselves to open-source tools and use them in a private environment, such as [Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift). The pipeline will run internally and will be inaccessible from the internet.
-{
- ""filter"": {
- ""must_not"": [
+- **Dataset:** [Red Hat Interactive Learning Portal](https://developers.redhat.com/learn), an online library of Red Hat course materials.
- { ""key"": ""city"", ""match"": { ""value"": ""London"" } },
+- **LLM:** `mistralai/Mistral-7B-Instruct-v0.1`, deployed as a standalone service on OpenShift.
- { ""key"": ""color"", ""match"": { ""value"": ""red"" } }
+- **Embedding Model:** `BAAI/bge-base-en-v1.5`, lightweight embedding model deployed from within the Haystack pipeline
- ]
+ with [FastEmbed](https://github.com/qdrant/fastembed)
- }
+- **Vector DB:** [Qdrant Hybrid Cloud](https://hybrid-cloud.qdrant.tech) running on OpenShift.
-}
+- **Framework:** [Haystack 2.x](https://haystack.deepset.ai/) to connect all the components and [Hayhooks](https://docs.haystack.deepset.ai/docs/hayhooks) to serve the app through HTTP endpoints.
-```
+### Procedure
-```python
+The [Haystack](https://haystack.deepset.ai/) framework leverages two pipelines, which combine our components sequentially to process data.
-client.scroll(
- collection_name=""{collection_name}"",
- scroll_filter=models.Filter(
+1. The **Indexing Pipeline** will run offline in batches, when new data is added or updated.
- must_not=[
+2. The **Search Pipeline** will retrieve information from Qdrant and use an LLM to produce an answer.
- models.FieldCondition(key=""city"", match=models.MatchValue(value=""London"")),
- models.FieldCondition(key=""color"", match=models.MatchValue(value=""red"")),
- ]
+> **Note:** We will define the pipelines in Python and then export them to YAML format, so that [Hayhooks](https://docs.haystack.deepset.ai/docs/hayhooks) can run them as a web service.
- ),
-)
-```
+## Prerequisites
-```typescript
+### Deploy the LLM to OpenShift
-client.scroll(""{collection_name}"", {
- filter: {
- must_not: [
+Follow the steps in [Chapter 6. Serving large language models](https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_self-managed/2.5/html/working_on_data_science_projects/serving-large-language-models_serving-large-language-models#doc-wrapper). This will download the LLM from [HuggingFace](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) and deploy it to OpenShift using a *single model serving platform*.
- {
- key: ""city"",
- match: { value: ""London"" },
+Your LLM service will have a URL, which you need to store as an environment variable.
- },
- {
- key: ""color"",
+```shell
- match: { value: ""red"" },
+export INFERENCE_ENDPOINT_URL=""http://mistral-service.default.svc.cluster.local""
- },
+```
- ],
- },
-});
+```python
-```
+import os
-```rust
+os.environ[""INFERENCE_ENDPOINT_URL""] = ""http://mistral-service.default.svc.cluster.local""
-use qdrant_client::qdrant::{Condition, Filter, ScrollPoints};
+```
-client
+### Launch Qdrant Hybrid Cloud
- .scroll(&ScrollPoints {
- collection_name: ""{collection_name}"".to_string(),
- filter: Some(Filter::must_not([
+Complete **How to Set Up Qdrant on Red Hat OpenShift**. When in Hybrid Cloud, your Qdrant instance is private and its nodes run on the same OpenShift infrastructure as your other components.
- Condition::matches(""city"", ""london"".to_string()),
- Condition::matches(""color"", ""red"".to_string()),
- ])),
+Retrieve your Qdrant URL and API key and store them as environment variables:
- ..Default::default()
- })
- .await?;
+```shell
+
+export QDRANT_URL=""https://qdrant.example.com""
+
+export QDRANT_API_KEY=""your-api-key""
```
-```java
+```python
-import java.util.List;
+os.environ[""QDRANT_URL""] = ""https://qdrant.example.com""
+os.environ[""QDRANT_API_KEY""] = ""your-api-key""
+```
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+## Implementation
-import io.qdrant.client.grpc.Points.Filter;
+We will first create an indexing pipeline to add documents to the system.
-import io.qdrant.client.grpc.Points.ScrollPoints;
+Then, the search pipeline will retrieve relevant data from our documents.
+After the pipelines are tested, we will export them to YAML files.
-client
- .scrollAsync(
+### Indexing pipeline
- ScrollPoints.newBuilder()
- .setCollectionName(""{collection_name}"")
- .setFilter(
+[Haystack 2.x](https://haystack.deepset.ai/) comes packed with a lot of useful components, from data fetching, through
- Filter.newBuilder()
+HTML parsing, up to the vector storage. Before we start, there are a few Python packages that we need to install:
- .addAllMustNot(
- List.of(matchKeyword(""city"", ""London""), matchKeyword(""color"", ""red"")))
- .build())
+```shell
- .build())
+pip install haystack-ai \
- .get();
+   qdrant-client \
+   qdrant-haystack \
+   fastembed-haystack
```
-```csharp
+
-var client = new QdrantClient(""localhost"", 6334);
+Our environment is now ready, so we can jump right into the code. Let's define an empty pipeline and gradually add
+components to it:
-// The ! operator negates the condition(must not)
-await client.ScrollAsync(
+```python
- collectionName: ""{collection_name}"",
+from haystack import Pipeline
- filter: !(MatchKeyword(""city"", ""London"") & MatchKeyword(""color"", ""red""))
-);
+
+indexing_pipeline = Pipeline()
```
-Filtered points would be:
+#### Data fetching and conversion
-```json
+In this step, we will use Haystack's `LinkContentFetcher` to download course content from a list of URLs and store it in Qdrant for retrieval.
-[
+As we don't want to store raw HTML, the `HTMLToDocument` converter will extract the text content from each webpage. Later on, a splitter will divide the documents into digestible chunks, since they might be pretty long.
- { ""id"": 5, ""city"": ""Moscow"", ""color"": ""green"" },
- { ""id"": 6, ""city"": ""Moscow"", ""color"": ""blue"" }
-]
+Let's start with data fetching and text conversion:
-```
+```python
-When using `must_not`, the clause becomes `true` if none if the conditions listed inside `should` is satisfied.
+from haystack.components.fetchers import LinkContentFetcher
-In this sense, `must_not` is equivalent to the expression `(NOT A) AND (NOT B) AND (NOT C)`.
+from haystack.components.converters import HTMLToDocument
-### Clauses combination
+fetcher = LinkContentFetcher()
+converter = HTMLToDocument()
-It is also possible to use several clauses simultaneously:
+indexing_pipeline.add_component(""fetcher"", fetcher)
+indexing_pipeline.add_component(""converter"", converter)
-```http
+```
-POST /collections/{collection_name}/points/scroll
-{
- ""filter"": {
+Our pipeline knows there are two components, but they are not connected yet. We need to define the flow between them:
- ""must"": [
- { ""key"": ""city"", ""match"": { ""value"": ""London"" } }
- ],
+```python
- ""must_not"": [
+indexing_pipeline.connect(""fetcher.streams"", ""converter.sources"")
- { ""key"": ""color"", ""match"": { ""value"": ""red"" } }
+```
- ]
- }
-}
+Each component has a set of inputs and outputs which might be combined in a directed graph. The definitions of the
-```
+inputs and outputs are usually provided in the documentation of the component. The `LinkContentFetcher` has the
+following parameters:
-```python
-client.scroll(
+![Parameters of the `LinkContentFetcher`](/documentation/examples/student-rag-haystack-red-hat-openshift-hc/haystack-link-content-fetcher.png)
- collection_name=""{collection_name}"",
- scroll_filter=models.Filter(
- must=[
+*Source: https://docs.haystack.deepset.ai/docs/linkcontentfetcher*
- models.FieldCondition(key=""city"", match=models.MatchValue(value=""London"")),
- ],
- must_not=[
+#### Chunking and creating the embeddings
- models.FieldCondition(key=""color"", match=models.MatchValue(value=""red"")),
- ],
- ),
+We used `HTMLToDocument` to convert the HTML sources into `Document` instances of Haystack, which is a
-)
+base class containing some data to be queried. However, a single document might be too long to be processed by the
-```
+embedding model, and it also carries way too much information to make the search relevant.
-```typescript
+Therefore, we need to split the document into smaller parts and convert them into embeddings. For this, we will use the
-client.scroll(""{collection_name}"", {
+`DocumentSplitter` and `FastembedDocumentEmbedder` pointed to our `BAAI/bge-base-en-v1.5` model:
- filter: {
- must: [
- {
-
- key: ""city"",
+```python
- match: { value: ""London"" },
+from haystack.components.preprocessors import DocumentSplitter
- },
+from haystack_integrations.components.embedders.fastembed import FastembedDocumentEmbedder
- ],
- must_not: [
- {
+splitter = DocumentSplitter(split_by=""sentence"", split_length=5, split_overlap=2)
- key: ""color"",
+embedder = FastembedDocumentEmbedder(model=""BAAI/bge-base-en-v1.5"")
- match: { value: ""red"" },
+embedder.warm_up()
- },
- ],
- },
+indexing_pipeline.add_component(""splitter"", splitter)
-});
+indexing_pipeline.add_component(""embedder"", embedder)
-```
+indexing_pipeline.connect(""converter.documents"", ""splitter.documents"")
-```rust
+indexing_pipeline.connect(""splitter.documents"", ""embedder.documents"")
-use qdrant_client::qdrant::{Condition, Filter, ScrollPoints};
+```
-client
+#### Writing data to Qdrant
- .scroll(&ScrollPoints {
- collection_name: ""{collection_name}"".to_string(),
- filter: Some(Filter {
+The splitter will be producing chunks with a maximum length of 5 sentences, with an overlap of 2 sentences. Then, these
- must: vec![Condition::matches(""city"", ""London"".to_string())],
+smaller portions will be converted into embeddings.
- must_not: vec![Condition::matches(""color"", ""red"".to_string())],
- ..Default::default()
- }),
+Finally, we need to store our embeddings in Qdrant.
- ..Default::default()
- })
- .await?;
+```python
-```
+from haystack.utils import Secret
+from haystack_integrations.document_stores.qdrant import QdrantDocumentStore
+from haystack.components.writers import DocumentWriter
-```java
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+document_store = QdrantDocumentStore(
+ os.environ[""QDRANT_URL""],
-import io.qdrant.client.grpc.Points.Filter;
+ api_key=Secret.from_env_var(""QDRANT_API_KEY""),
-import io.qdrant.client.grpc.Points.ScrollPoints;
+ index=""red-hat-learning"",
+ return_embedding=True,
+ embedding_dim=768,
-client
+)
- .scrollAsync(
+writer = DocumentWriter(document_store=document_store)
- ScrollPoints.newBuilder()
- .setCollectionName(""{collection_name}"")
- .setFilter(
+indexing_pipeline.add_component(""writer"", writer)
- Filter.newBuilder()
- .addMust(matchKeyword(""city"", ""London""))
- .addMustNot(matchKeyword(""color"", ""red""))
+indexing_pipeline.connect(""embedder.documents"", ""writer.documents"")
- .build())
+```
- .build())
- .get();
-```
+Our pipeline is now complete. Haystack comes with a handy visualization of the pipeline, so you can see and verify the
+connections between the components. It is displayed in the Jupyter notebook, but you can also export it to a file:
-```csharp
-using Qdrant.Client;
+```python
-using static Qdrant.Client.Grpc.Conditions;
+indexing_pipeline.draw(""indexing_pipeline.png"")
+```
-var client = new QdrantClient(""localhost"", 6334);
+![Structure of the indexing pipeline](/documentation/examples/student-rag-haystack-red-hat-openshift-hc/indexing_pipeline.png)
-await client.ScrollAsync(
- collectionName: ""{collection_name}"",
+#### Test the entire pipeline
- filter: MatchKeyword(""city"", ""London"") & !MatchKeyword(""color"", ""red"")
-);
-```
+We can finally run it on a list of URLs to index the content in Qdrant. We have a bunch of URLs to all the Red Hat
+OpenShift Foundations course lessons, so let's use them:
-Filtered points would be:
+```python
+course_urls = [
-```json
+ ""https://developers.redhat.com/learn/openshift/foundations-openshift"",
-[
+ ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:openshift-and-developer-sandbox"",
- { ""id"": 1, ""city"": ""London"", ""color"": ""green"" },
+ ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:overview-web-console"",
- { ""id"": 3, ""city"": ""London"", ""color"": ""blue"" }
+ ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:use-terminal-window-within-red-hat-openshift-web-console"",
-]
+ ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-source-code-github-repository-using-openshift-web-console"",
-```
+ ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-linux-container-image-repository-using-openshift-web-console"",
+ ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-linux-container-image-using-oc-cli-tool"",
+ ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-source-code-using-oc-cli-tool"",
-In this case, the conditions are combined by `AND`.
+ ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:scale-applications-using-openshift-web-console"",
+ ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:scale-applications-using-oc-cli-tool"",
+ ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:work-databases-openshift-using-oc-cli-tool"",
-Also, the conditions could be recursively nested. Example:
+ ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:work-databases-openshift-web-console"",
+ ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:view-performance-information-using-openshift-web-console"",
+]
-```http
-POST /collections/{collection_name}/points/scroll
-{
+indexing_pipeline.run(data={
- ""filter"": {
+ ""fetcher"": {
- ""must_not"": [
+ ""urls"": course_urls,
- {
+ }
- ""must"": [
+})
- { ""key"": ""city"", ""match"": { ""value"": ""London"" } },
+```
- { ""key"": ""color"", ""match"": { ""value"": ""red"" } }
- ]
- }
+The execution might take a while, as the model needs to process all the documents. After the process is finished, we
- ]
+should have all the documents stored in Qdrant, ready for search. You should see a short summary of processed documents:
- }
-}
-```
+```shell
+{'writer': {'documents_written': 381}}
+```
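+
+If you want to double-check that the embeddings actually landed in Qdrant, a quick sanity check with the Python client might look like this (just a sketch; it assumes the same `QDRANT_URL` and `QDRANT_API_KEY` environment variables set earlier):
+
+```python
+from qdrant_client import QdrantClient
+
+qdrant = QdrantClient(
+    url=os.environ[""QDRANT_URL""],
+    api_key=os.environ[""QDRANT_API_KEY""],
+)
+# The number of points should roughly match the summary reported by the writer
+print(qdrant.count(collection_name=""red-hat-learning"", exact=True))
+```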
-```python
-client.scroll(
- collection_name=""{collection_name}"",
+### Search pipeline
- scroll_filter=models.Filter(
- must_not=[
- models.Filter(
+Our documents are now indexed and ready for search. The next pipeline is a bit simpler, but we still need to define a
- must=[
+few components. Let's start again with an empty pipeline:
- models.FieldCondition(
- key=""city"", match=models.MatchValue(value=""London"")
- ),
+```python
- models.FieldCondition(
+search_pipeline = Pipeline()
- key=""color"", match=models.MatchValue(value=""red"")
+```
- ),
- ],
- ),
+Our second process takes user input, converts it into embeddings and then searches for the most relevant documents
- ],
+using the query embedding. This might look familiar, but we aren't working with `Document` instances
- ),
+anymore, since the query only accepts raw text. Thus, some of the components will be different, especially the embedder,
-)
+as it has to accept a single string as an input and produce a single embedding as an output:
-```
+```python
-```typescript
+from haystack_integrations.components.embedders.fastembed import FastembedTextEmbedder
-client.scroll(""{collection_name}"", {
+from haystack_integrations.components.retrievers.qdrant import QdrantEmbeddingRetriever
- filter: {
- must_not: [
- {
+query_embedder = FastembedTextEmbedder(model=""BAAI/bge-base-en-v1.5"")
- must: [
+query_embedder.warm_up()
- {
- key: ""city"",
- match: { value: ""London"" },
+retriever = QdrantEmbeddingRetriever(
- },
+ document_store=document_store, # The same document store as the one used for indexing
- {
+ top_k=3, # Number of documents to return
- key: ""color"",
+)
- match: { value: ""red"" },
- },
- ],
+search_pipeline.add_component(""query_embedder"", query_embedder)
- },
+search_pipeline.add_component(""retriever"", retriever)
- ],
- },
-});
+search_pipeline.connect(""query_embedder.embedding"", ""retriever.query_embedding"")
```
-```rust
+#### Run a test query
-use qdrant_client::qdrant::{Condition, Filter, ScrollPoints};
+If our goal was to just retrieve the relevant documents, we could stop here. Let's try the current pipeline on a simple
-client
+query:
- .scroll(&ScrollPoints {
- collection_name: ""{collection_name}"".to_string(),
- filter: Some(Filter::must_not([Filter::must([
+```python
- Condition::matches(""city"", ""London"".to_string()),
+query = ""How to install an application using the OpenShift web console?""
- Condition::matches(""color"", ""red"".to_string()),
- ])
- .into()])),
+search_pipeline.run(data={
- ..Default::default()
+ ""query_embedder"": {
- })
+ ""text"": query
- .await?;
+ }
-```
+})
+```
-```java
-import java.util.List;
+We set the `top_k` parameter to 3, so the retriever should return the three most relevant documents. Your output should look like this:
-import static io.qdrant.client.ConditionFactory.filter;
+```text
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+{
+ 'retriever': {
+ 'documents': [
-import io.qdrant.client.grpc.Points.Filter;
+ Document(id=867b4aa4c37a91e72dc7ff452c47972c1a46a279a7531cd6af14169bcef1441b, content: 'Install a Node.js application from GitHub using the web console The following describes the steps r...', meta: {'content_type': 'text/html', 'source_id': 'f56e8f827dda86abe67c0ba3b4b11331d896e2d4f7b2b43c74d3ce973d07be0c', 'url': 'https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:work-databases-openshift-web-console'}, score: 0.9209432),
-import io.qdrant.client.grpc.Points.ScrollPoints;
+ Document(id=0c74381c178597dd91335ebfde790d13bf5989b682d73bf5573c7734e6765af7, content: 'How to remove an application from OpenShift using the web console. In addition to providing the cap...', meta: {'content_type': 'text/html', 'source_id': '2a0759f3ce4a37d9f5c2af9c0ffcc80879077c102fb8e41e576e04833c9d24ce', 'url': 'https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-linux-container-image-repository-using-openshift-web-console'}, score: 0.9132109500000001),
+ Document(id=3e5f8923a34ab05611ef20783211e5543e880c709fd6534d9c1f63576edc4061, content: 'Path resource: Install an application from source code in a GitHub repository using the OpenShift w...', meta: {'content_type': 'text/html', 'source_id': 'a4c4cd62d07c0d9d240e3289d2a1cc0a3d1127ae70704529967f715601559089', 'url': 'https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-source-code-github-repository-using-openshift-web-console'}, score: 0.912748935)
+ ]
-client
+ }
- .scrollAsync(
+}
- ScrollPoints.newBuilder()
+```
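+
+If you only care about which lessons were matched, you can pull the URLs and scores out of the result. A small sketch, assuming you assigned the output of `search_pipeline.run(...)` to a variable called `results`:
+
+```python
+for document in results[""retriever""][""documents""]:
+    # Each retrieved document carries its source URL and similarity score
+    print(document.meta[""url""], document.score)
+```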
- .setCollectionName(""{collection_name}"")
- .setFilter(
- Filter.newBuilder()
+#### Generating the answer
- .addMustNot(
- filter(
- Filter.newBuilder()
+Our chatbot should do more than just return the matching documents. Therefore, we need an LLM to generate exact answers to our question.
- .addAllMust(
+This is the final component of our second pipeline.
- List.of(
- matchKeyword(""city"", ""London""),
- matchKeyword(""color"", ""red"")))
+Haystack will create a prompt which adds your documents to the model's context.
- .build()))
- .build())
- .build())
+```python
- .get();
+from haystack.components.builders.prompt_builder import PromptBuilder
-```
+from haystack.components.generators import HuggingFaceTGIGenerator
-```csharp
+prompt_builder = PromptBuilder(""""""
-using Qdrant.Client;
+Given the following information, answer the question.
-using Qdrant.Client.Grpc;
-using static Qdrant.Client.Grpc.Conditions;
+Context:
+{% for document in documents %}
-var client = new QdrantClient(""localhost"", 6334);
+ {{ document.content }}
+{% endfor %}
-await client.ScrollAsync(
- collectionName: ""{collection_name}"",
+Question: {{ query }}
- filter: new Filter { MustNot = { MatchKeyword(""city"", ""London"") & MatchKeyword(""color"", ""red"") } }
+"""""")
-);
+llm = HuggingFaceTGIGenerator(
-```
+ model=""mistralai/Mistral-7B-Instruct-v0.1"",
+ url=os.environ[""INFERENCE_ENDPOINT_URL""],
+ generation_kwargs={
-Filtered points would be:
+ ""max_new_tokens"": 1000, # Allow longer responses
+ },
+)
-```json
-[
- { ""id"": 1, ""city"": ""London"", ""color"": ""green"" },
+search_pipeline.add_component(""prompt_builder"", prompt_builder)
- { ""id"": 3, ""city"": ""London"", ""color"": ""blue"" },
+search_pipeline.add_component(""llm"", llm)
- { ""id"": 4, ""city"": ""Berlin"", ""color"": ""red"" },
- { ""id"": 5, ""city"": ""Moscow"", ""color"": ""green"" },
- { ""id"": 6, ""city"": ""Moscow"", ""color"": ""blue"" }
+search_pipeline.connect(""retriever.documents"", ""prompt_builder.documents"")
-]
+search_pipeline.connect(""prompt_builder.prompt"", ""llm.prompt"")
```
-## Filtering conditions
-
+The `PromptBuilder` is a Jinja2 template that will be filled with the documents and the query. The
+`HuggingFaceTGIGenerator` connects to the LLM service and generates the answer. Let's run the pipeline again:
-Different types of values in payload correspond to different kinds of queries that we can apply to them.
-Let's look at the existing condition variants and what types of data they apply to.
+```python
+query = ""How to install an application using the OpenShift web console?""
-### Match
+response = search_pipeline.run(data={
-```json
+ ""query_embedder"": {
-{
+ ""text"": query
- ""key"": ""color"",
+ },
- ""match"": {
+ ""prompt_builder"": {
- ""value"": ""red""
+ ""query"": query
- }
+ },
-}
+})
```
-```python
+The LLM may provide multiple replies if asked to do so, so let's iterate over them and print them out:
-models.FieldCondition(
- key=""color"",
- match=models.MatchValue(value=""red""),
+```python
-)
+for reply in response[""llm""][""replies""]:
+
+ print(reply.strip())
```
-```typescript
+In our case there is a single response, which should be the answer to the question:
-{
- key: 'color',
- match: {value: 'red'}
+```text
-}
+Answer: To install an application using the OpenShift web console, follow these steps:
-```
+1. Select +Add on the left side of the web console.
-```rust
+2. Identify the container image to install.
-Condition::matches(""color"", ""red"".to_string())
+3. Using your web browser, navigate to the Developer Sandbox for Red Hat OpenShift and select Start your Sandbox for free.
+
+4. Install an application from source code stored in a GitHub repository using the OpenShift web console.
```
-```java
+Our final search pipeline might also be visualized, so we can see how the components are glued together:
-matchKeyword(""color"", ""red"");
+
+
+```python
+
+search_pipeline.draw(""search_pipeline.png"")
```
-```csharp
+![Structure of the search pipeline](/documentation/examples/student-rag-haystack-red-hat-openshift-hc/search_pipeline.png)
-using static Qdrant.Client.Grpc.Conditions;
+## Deployment
-MatchKeyword(""color"", ""red"");
-```
+The pipelines are now ready, and we can export them to YAML. Hayhooks will use these files to run the
+pipelines as HTTP endpoints. To do this, specify both file paths and your environment variables.
-For the other types, the match condition will look exactly the same, except for the type used:
+> Note: The indexing pipeline might be run inside your ETL tool, but search should be definitely exposed as an HTTP endpoint.
-```json
-{
- ""key"": ""count"",
+Let's run it on the local machine:
- ""match"": {
- ""value"": 0
- }
+```shell
-}
+pip install hayhooks
```
-```python
+First of all, we need to save the pipelines to YAML files:
-models.FieldCondition(
- key=""count"",
- match=models.MatchValue(value=0),
+```python
-)
+with open(""search-pipeline.yaml"", ""w"") as fp:
+    search_pipeline.dump(fp)
+```
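+
+The indexing pipeline can be serialized in exactly the same way, in case you want Hayhooks or your ETL tool to run it as well. A sketch, assuming you keep it in a file called `indexing-pipeline.yaml`:
+
+```python
+with open(""indexing-pipeline.yaml"", ""w"") as fp:
+    indexing_pipeline.dump(fp)
+```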
```
-```typescript
+And now we are able to run the Hayhooks service:
-{
- key: 'count',
- match: {value: 0}
+```shell
-}
+hayhooks run
```
-```rust
-
-Condition::matches(""count"", 0)
+The command should start the service on the default port, so you can access it at `http://localhost:1416`. The pipeline
-```
+is not deployed yet, but we can do it with just another command:
-```java
+```shell
-import static io.qdrant.client.ConditionFactory.match;
+hayhooks deploy search-pipeline.yaml
+```
-match(""count"", 0);
-```
+Once it's finished, you should be able to see the OpenAPI documentation at
+[http://localhost:1416/docs](http://localhost:1416/docs), and test the newly created endpoint.
-```csharp
-using static Qdrant.Client.Grpc.Conditions;
+![Search pipeline in the OpenAPI documentation](/documentation/examples/student-rag-haystack-red-hat-openshift-hc/hayhooks-openapi.png)
-Match(""count"", 0);
+Our search is now accessible through the HTTP endpoint, so we can integrate it with any other service. We can even
-```
+control the other parameters, like the number of documents to return:
-The simplest kind of condition is one that checks if the stored value equals the given one.
+```shell
-If several values are stored, at least one of them should match the condition.
+curl -X 'POST' \
-You can apply it to [keyword](../payload/#keyword), [integer](../payload/#integer) and [bool](../payload/#bool) payloads.
+ 'http://localhost:1416/search-pipeline' \
+ -H 'Accept: application/json' \
+ -H 'Content-Type: application/json' \
-### Match Any
+ -d '{
+ ""llm"": {
+ },
-*Available as of v1.1.0*
+ ""prompt_builder"": {
+ ""query"": ""How can I remove an application?""
+ },
-In case you want to check if the stored value is one of multiple values, you can use the Match Any condition.
+ ""query_embedder"": {
-Match Any works as a logical OR for the given values. It can also be described as a `IN` operator.
+ ""text"": ""How can I remove an application?""
+ },
+ ""retriever"": {
-You can apply it to [keyword](../payload/#keyword) and [integer](../payload/#integer) payloads.
+ ""top_k"": 5
+ }
+}'
-Example:
+```
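+
+The same request can also be sent from Python, for example with the `requests` library (an assumption on our side; any HTTP client will do):
+
+```python
+import requests
+
+response = requests.post(
+    ""http://localhost:1416/search-pipeline"",
+    json={
+        ""llm"": {},
+        ""prompt_builder"": {""query"": ""How can I remove an application?""},
+        ""query_embedder"": {""text"": ""How can I remove an application?""},
+        ""retriever"": {""top_k"": 5},
+    },
+)
+print(response.json()[""llm""][""replies""][0])
+```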
-```json
+The response should be similar to the one we got earlier when running the pipeline in Python:
-{
- ""key"": ""color"",
- ""match"": {
+```json
- ""any"": [""black"", ""yellow""]
+{
- }
+ ""llm"": {
-}
+ ""replies"": [
-```
+ ""\n\nAnswer: You can remove an application running in OpenShift by right-clicking on the circular graphic representing the application in Topology view and selecting the Delete Application text from the dialog that appears when you click the graphic’s outer ring. Alternatively, you can use the oc CLI tool to delete an installed application using the oc delete all command.""
+ ],
+ ""meta"": [
-```python
+ {
-FieldCondition(
+ ""model"": ""mistralai/Mistral-7B-Instruct-v0.1"",
- key=""color"",
+ ""index"": 0,
- match=models.MatchAny(any=[""black"", ""yellow""]),
+ ""finish_reason"": ""eos_token"",
-)
+ ""usage"": {
-```
+ ""completion_tokens"": 75,
+ ""prompt_tokens"": 642,
+ ""total_tokens"": 717
-```typescript
+ }
-{
+ }
- key: 'color',
+ ]
- match: {any: ['black', 'yellow']}
+ }
}
@@ -33949,1247 +33353,1138 @@ FieldCondition(
-```rust
-
-Condition::matches(""color"", vec![""black"".to_string(), ""yellow"".to_string()])
-
-```
+## Next steps
-```java
+- In this example, [Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) is the infrastructure of choice for proprietary chatbots. [Read more](https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_self-managed/2.8) about how to host AI projects in their [extensive documentation](https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_self-managed/2.8).
-import static io.qdrant.client.ConditionFactory.matchKeywords;
+- [Haystack's documentation](https://docs.haystack.deepset.ai/docs/kubernetes) describes [how to deploy the Hayhooks service in a Kubernetes
-matchKeywords(""color"", List.of(""black"", ""yellow""));
+environment](https://docs.haystack.deepset.ai/docs/kubernetes), so you can easily move it to your own OpenShift infrastructure.
-```
+- If you are just getting started and need more guidance on Qdrant, read the [quickstart](/documentation/quick-start/) or try out our [beginner tutorial](/documentation/tutorials/neural-search/).",documentation/examples/rag-chatbot-red-hat-openshift-haystack.md
+"---
-```csharp
+title: Blog-Reading Chatbot with GPT-4o
-using static Qdrant.Client.Grpc.Conditions;
+weight: 35
+social_preview_image: /blog/hybrid-cloud-scaleway/hybrid-cloud-scaleway-tutorial.png
+aliases:
-Match(""color"", [""black"", ""yellow""]);
+ - /documentation/tutorials/rag-chatbot-scaleway/
-```
+---
-In this example, the condition will be satisfied if the stored value is either `black` or `yellow`.
+# Blog-Reading Chatbot with GPT-4o
-If the stored value is an array, it should have at least one value matching any of the given values. E.g. if the stored value is `[""black"", ""green""]`, the condition will be satisfied, because `""black""` is in `[""black"", ""yellow""]`.
+| Time: 90 min | Level: Advanced |[GitHub](https://github.com/qdrant/examples/blob/master/langchain-lcel-rag/Langchain-LCEL-RAG-Demo.ipynb)| |
+|--------------|-----------------|--|----|
+In this tutorial, you will build a RAG system that combines blog content ingestion with the capabilities of semantic search. **OpenAI's GPT-4o LLM** is powerful, but scaling its use requires us to supply context systematically.
-### Match Except
+RAG enhances the LLM's generation of answers by retrieving relevant documents to aid the question-answering process. This setup showcases the integration of advanced search and AI language processing to improve information retrieval and generation tasks.
-*Available as of v1.2.0*
+A notebook for this tutorial is available on [GitHub](https://github.com/qdrant/examples/blob/master/langchain-lcel-rag/Langchain-LCEL-RAG-Demo.ipynb).
-In case you want to check if the stored value is not one of multiple values, you can use the Match Except condition.
-Match Except works as a logical NOR for the given values.
-It can also be described as a `NOT IN` operator.
+**Data Privacy and Sovereignty:** RAG applications often rely on sensitive or proprietary internal data. Running the entire stack within your own environment becomes crucial for maintaining control over this data. Qdrant Hybrid Cloud deployed on [Scaleway](https://www.scaleway.com/) addresses this need perfectly, offering a secure, scalable platform that still leverages the full potential of RAG. Scaleway offers serverless [Functions](https://www.scaleway.com/en/serverless-functions/) and serverless [Jobs](https://www.scaleway.com/en/serverless-jobs/), both of which are ideal for embedding creation in large-scale RAG cases.
-You can apply it to [keyword](../payload/#keyword) and [integer](../payload/#integer) payloads.
+## Components
-Example:
+- **Cloud Host:** [Scaleway on managed Kubernetes](https://www.scaleway.com/en/kubernetes-kapsule/) for compatibility with Qdrant Hybrid Cloud.
+- **Vector Database:** Qdrant Hybrid Cloud as the vector search engine for retrieval.
+- **LLM:** GPT-4o, developed by OpenAI, is used as the generator for producing answers.
-```json
+- **Framework:** [LangChain](https://www.langchain.com/) for extensive RAG capabilities.
-{
- ""key"": ""color"",
- ""match"": {
+![Architecture diagram](/documentation/examples/rag-chatbot-scaleway/architecture-diagram.png)
- ""except"": [""black"", ""yellow""]
- }
-}
+> Langchain [supports a wide range of LLMs](https://python.langchain.com/docs/integrations/chat/), and GPT-4o is used as the main generator in this tutorial. You can easily swap it out for your preferred model that might be launched on your premises to complete the fully private setup. For the sake of simplicity, we used the OpenAI APIs, but LangChain makes the transition seamless.
-```
+## Deploying Qdrant Hybrid Cloud on Scaleway
-```python
-FieldCondition(
- key=""color"",
+[Scaleway Kapsule](https://www.scaleway.com/en/kubernetes-kapsule/) and [Kosmos](https://www.scaleway.com/en/kubernetes-kosmos/) are managed Kubernetes services from [Scaleway](https://www.scaleway.com/en/). They abstract away the complexities of managing and operating a Kubernetes cluster. The primary difference is that Kapsule clusters are composed solely of Scaleway Instances, whereas a Kosmos cluster is a managed multi-cloud Kubernetes engine that allows you to connect instances from any cloud provider to a single managed Control Plane.
- match=models.MatchExcept(**{""except"": [""black"", ""yellow""]}),
-)
-```
+1. To start using managed Kubernetes on Scaleway, follow the [platform-specific documentation](/documentation/hybrid-cloud/platform-deployment-options/#scaleway).
+2. Once your Kubernetes clusters are up, [you can begin deploying Qdrant Hybrid Cloud](/documentation/hybrid-cloud/).
-```typescript
-{
+## Prerequisites
- key: 'color',
- match: {except: ['black', 'yellow']}
-}
+To prepare the environment for working with Qdrant and related libraries, it's necessary to install all required Python packages. This can be done using Poetry, a tool for dependency management and packaging in Python. The code snippet imports various libraries essential for the tasks ahead, including `bs4` for parsing HTML and XML documents, `langchain` and its community extensions for working with language models and document loaders, and `Qdrant` for vector storage and retrieval. These imports lay the groundwork for utilizing Qdrant alongside other tools for natural language processing and machine learning tasks.
-```
+Qdrant will be running on a specific URL and access will be restricted by the API key. Make sure to store them both as environment variables as well:
-```rust
-Condition::matches(
- ""color"",
+```shell
- !MatchValue::from(vec![""black"".to_string(), ""yellow"".to_string()]),
+export QDRANT_URL=""https://qdrant.example.com""
-)
+export QDRANT_API_KEY=""your-api-key""
```
-```java
-
-import static io.qdrant.client.ConditionFactory.matchExceptKeywords;
+*Optional:* Whenever you use LangChain, you can also [configure LangSmith](https://docs.smith.langchain.com/), which will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/).
-matchExceptKeywords(""color"", List.of(""black"", ""yellow""));
+```shell
-```
+export LANGCHAIN_TRACING_V2=true
+export LANGCHAIN_API_KEY=""your-api-key""
+export LANGCHAIN_PROJECT=""your-project"" # if not specified, defaults to ""default""
-```csharp
+```
-using static Qdrant.Client.Grpc.Conditions;
+Now you can get started:
-Match(""color"", [""black"", ""yellow""]);
-```
+```python
+import getpass
-In this example, the condition will be satisfied if the stored value is neither `black` nor `yellow`.
+import os
-If the stored value is an array, it should have at least one value not matching any of the given values. E.g. if the stored value is `[""black"", ""green""]`, the condition will be satisfied, because `""green""` does not match `""black""` nor `""yellow""`.
+import bs4
+from langchain import hub
+from langchain_community.document_loaders import WebBaseLoader
-### Nested key
+from langchain_qdrant import Qdrant
+from langchain_core.output_parsers import StrOutputParser
+from langchain_core.runnables import RunnablePassthrough
-*Available as of v1.1.0*
+from langchain_openai import ChatOpenAI, OpenAIEmbeddings
+from langchain_text_splitters import RecursiveCharacterTextSplitter
+```
-Payloads being arbitrary JSON object, it is likely that you will need to filter on a nested field.
+Set up the OpenAI API key:
-For convenience, we use a syntax similar to what can be found in the [Jq](https://stedolan.github.io/jq/manual/#Basicfilters) project.
+```python
-Suppose we have a set of points with the following payload:
+os.environ[""OPENAI_API_KEY""] = getpass.getpass()
+```
-```json
-[
+Initialize the language model:
- {
- ""id"": 1,
- ""country"": {
+```python
- ""name"": ""Germany"",
+llm = ChatOpenAI(model=""gpt-4o"")
- ""cities"": [
+```
- {
- ""name"": ""Berlin"",
- ""population"": 3.7,
+It is here that we configure both the Embeddings and LLM. You can replace this with your own models using Ollama or other services. Scaleway has some great [L4 GPU Instances](https://www.scaleway.com/en/l4-gpu-instance/) you can use for compute here.
- ""sightseeing"": [""Brandenburg Gate"", ""Reichstag""]
- },
- {
+## Download and parse data
- ""name"": ""Munich"",
- ""population"": 1.5,
- ""sightseeing"": [""Marienplatz"", ""Olympiapark""]
+To begin working with blog post contents, the process involves loading and parsing the HTML content. This is achieved using LangChain's `WebBaseLoader` together with `BeautifulSoup`, which are tools designed for such tasks. After the content is loaded and parsed, it is indexed using Qdrant, a powerful tool for managing and querying vector data. The code snippet demonstrates how to load, chunk, and index the contents of a blog post by specifying the URL of the blog and the specific HTML elements to parse. This step is crucial for preparing the data for further processing and analysis with Qdrant.
- }
- ]
- }
+```python
- },
+# Load, chunk and index the contents of the blog.
- {
+loader = WebBaseLoader(
- ""id"": 2,
+ web_paths=(""https://lilianweng.github.io/posts/2023-06-23-agent/"",),
- ""country"": {
+ bs_kwargs=dict(
- ""name"": ""Japan"",
+ parse_only=bs4.SoupStrainer(
- ""cities"": [
+ class_=(""post-content"", ""post-title"", ""post-header"")
- {
+ )
- ""name"": ""Tokyo"",
+ ),
- ""population"": 9.3,
+)
- ""sightseeing"": [""Tokyo Tower"", ""Tokyo Skytree""]
+docs = loader.load()
- },
- {
- ""name"": ""Osaka"",
+```
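+
+As a quick check, you can print how much text was loaded; the chunking section below mentions that this post exceeds 42,000 characters:
+
+```python
+# docs is a list of LangChain Document objects; the whole post ends up in the first one
+print(len(docs[0].page_content))
+```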
- ""population"": 2.7,
- ""sightseeing"": [""Osaka Castle"", ""Universal Studios Japan""]
- }
+### Chunking data
- ]
- }
- }
+When dealing with large documents, such as a blog post exceeding 42,000 characters, it's crucial to manage the data efficiently for processing. Many models have a limited context window and struggle with long inputs, making it difficult to extract or find relevant information. To overcome this, the document is divided into smaller chunks. This approach enhances the model's ability to process and retrieve the most pertinent sections of the document effectively.
-]
-```
+In this scenario, the document is split into chunks using the `RecursiveCharacterTextSplitter` with a specified chunk size and overlap. This method ensures that no critical information is lost between chunks. Following the splitting, these chunks are indexed into Qdrant, a vector database for efficient similarity search and storage of embeddings. The `Qdrant.from_documents` function is used for indexing, with the split chunks as documents and embeddings generated through `OpenAIEmbeddings`. The vectors are stored in the Qdrant Hybrid Cloud instance configured earlier, in a collection named ""lilianweng"" for reference.
-You can search on a nested field using a dot notation.
+This chunking and indexing strategy significantly improves the management and retrieval of information from large documents, making it a practical solution for handling extensive texts in data processing workflows.
-```http
-POST /collections/{collection_name}/points/scroll
+```python
-{
+text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
- ""filter"": {
- ""should"": [
+splits = text_splitter.split_documents(docs)
- {
- ""key"": ""country.name"",
- ""match"": {
+vectorstore = Qdrant.from_documents(
- ""value"": ""Germany""
+ documents=splits,
- }
+ embedding=OpenAIEmbeddings(),
- }
+ collection_name=""lilianweng"",
- ]
+ url=os.environ[""QDRANT_URL""],
- }
+ api_key=os.environ[""QDRANT_API_KEY""],
-}
+)
```
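+
+Before wiring up the full RAG chain, you can sanity-check the vector store with a direct similarity search. This is only a sketch and the query string is arbitrary:
+
+```python
+# Return the two most similar chunks for an ad-hoc query
+hits = vectorstore.similarity_search(""What is task decomposition?"", k=2)
+for hit in hits:
+    print(hit.page_content[:100])
+```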
-```python
+## Retrieve and generate content
-client.scroll(
- collection_name=""{collection_name}"",
- scroll_filter=models.Filter(
+The `vectorstore` is used as a retriever to fetch relevant documents based on vector similarity. The `hub.pull(""rlm/rag-prompt"")` function is used to pull a specific prompt from a repository, which is designed to work with retrieved documents and a question to generate a response.
- should=[
- models.FieldCondition(
- key=""country.name"", match=models.MatchValue(value=""Germany"")
+The `format_docs` function formats the retrieved documents into a single string, preparing them for further processing. This formatted string, along with a question, is passed through a chain of operations. Firstly, the context (formatted documents) and the question are processed by the retriever and the prompt. Then, the result is fed into a large language model (`llm`) for content generation. Finally, the output is parsed into a string format using `StrOutputParser()`.
- ),
- ],
- ),
+This chain of operations demonstrates a sophisticated approach to information retrieval and content generation, leveraging both the semantic understanding capabilities of vector search and the generative prowess of large language models.
-)
-```
+Now, retrieve and generate data using relevant snippets from the blog:
-```typescript
-client.scroll(""{collection_name}"", {
+```python
- filter: {
+retriever = vectorstore.as_retriever()
- should: [
+prompt = hub.pull(""rlm/rag-prompt"")
- {
- key: ""country.name"",
- match: { value: ""Germany"" },
- },
- ],
+def format_docs(docs):
- },
+ return ""\n\n"".join(doc.page_content for doc in docs)
-});
-```
-```rust
+rag_chain = (
-use qdrant_client::qdrant::{Condition, Filter, ScrollPoints};
+ {""context"": retriever | format_docs, ""question"": RunnablePassthrough()}
+ | prompt
+ | llm
-client
+ | StrOutputParser()
- .scroll(&ScrollPoints {
+)
- collection_name: ""{collection_name}"".to_string(),
+```
- filter: Some(Filter::should([Condition::matches(
- ""country.name"",
- ""Germany"".to_string(),
+### Invoking the RAG Chain
- )])),
- ..Default::default()
- })
+```python
- .await?;
+rag_chain.invoke(""What is Task Decomposition?"")
```
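+
+Because the chain is a LangChain runnable, it also exposes a streaming interface. A minimal sketch that prints the answer token by token:
+
+```python
+for chunk in rag_chain.stream(""What is Task Decomposition?""):
+    print(chunk, end="""", flush=True)
+```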
-```java
+## Next steps
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+We built a solid foundation for a simple chatbot, but there is still a lot to do. If you want to make the
+system production-ready, you should consider integrating the mechanism into your existing stack.
-import io.qdrant.client.grpc.Points.Filter;
-import io.qdrant.client.grpc.Points.ScrollPoints;
+Our vector database can easily be hosted on [Scaleway](https://www.scaleway.com/), our trusted [Qdrant Hybrid Cloud](/documentation/hybrid-cloud/) partner. This means that Qdrant can be run from your Scaleway region, but the database itself can still be managed from within Qdrant Cloud's interface. Both products have been tested for compatibility and scalability, and we recommend their [managed Kubernetes](https://www.scaleway.com/en/kubernetes-kapsule/) service.
+Their deployment regions in France are excellent for network latency and data sovereignty. For hosted GPUs, try [rendering with L4 GPU instances](https://www.scaleway.com/en/l4-gpu-instance/).
-client
- .scrollAsync(
+If you have any questions, feel free to ask on our [Discord community](https://qdrant.to/discord).
- ScrollPoints.newBuilder()
- .setCollectionName(""{collection_name}"")
- .setFilter(
- Filter.newBuilder()
- .addShould(matchKeyword(""country.name"", ""Germany""))
- .build())
- .build())
- .get();
+",documentation/examples/rag-chatbot-scaleway.md
+"---
-```
+title: Multitenancy with LlamaIndex
+weight: 18
+aliases:
-```csharp
+ - /documentation/tutorials/llama-index-multitenancy/
-using Qdrant.Client;
+---
-using Qdrant.Client.Grpc;
-using static Qdrant.Client.Grpc.Conditions;
+# Multitenancy with LlamaIndex
-var client = new QdrantClient(""localhost"", 6334);
-
+If you are building a service that serves vectors for many independent users, and you want to isolate their
-await client.ScrollAsync(collectionName: ""{collection_name}"", filter: MatchKeyword(""country.name"", ""Germany""));
-
-```
+data, the best practice is to use a single collection with payload-based partitioning. This approach is
+called **multitenancy**. Our guide on the [Separate Partitions](/documentation/guides/multiple-partitions/) describes
+how to set it up in general, but if you use [LlamaIndex](/documentation/integrations/llama-index/) as a
-You can also search through arrays by projecting inner values using the `[]` syntax.
+backend, you may prefer reading more specific instructions. So here they are!
-```http
+## Prerequisites
-POST /collections/{collection_name}/points/scroll
-{
- ""filter"": {
+This tutorial assumes that you have already installed Qdrant and LlamaIndex. If you haven't, please run the
- ""should"": [
+following commands:
- {
- ""key"": ""country.cities[].population"",
- ""range"": {
+```bash
- ""gte"": 9.0,
+pip install llama-index llama-index-vector-stores-qdrant
- }
+```
- }
- ]
- }
+We are going to use a local Docker-based instance of Qdrant. If you want to use a remote instance, please
-}
+adjust the code accordingly. Here is how we can start a local instance:
-```
+```bash
-```python
+docker run -d --name qdrant -p 6333:6333 -p 6334:6334 qdrant/qdrant:latest
-client.scroll(
+```
- collection_name=""{collection_name}"",
- scroll_filter=models.Filter(
- should=[
+## Setting up LlamaIndex pipeline
- models.FieldCondition(
- key=""country.cities[].population"",
- range=models.Range(
+We are going to implement an end-to-end example of a multitenant application using LlamaIndex. We'll be
- gt=None,
+indexing the documentation of different Python libraries, and we definitely don't want any users to see the
- gte=9.0,
+results coming from a library they are not interested in. In real-world scenarios, this is even more dangerous,
- lt=None,
+as the documents may contain sensitive information.
- lte=None,
- ),
- ),
+### Creating vector store
- ],
- ),
-)
+[QdrantVectorStore](https://docs.llamaindex.ai/en/stable/examples/vector_stores/QdrantIndexDemo.html) is a
-```
+wrapper around Qdrant that provides all the necessary methods to work with your vector database in LlamaIndex.
+Let's create a vector store for our collection. It requires setting a collection name and passing an instance
+of `QdrantClient`.
-```typescript
-client.scroll(""{collection_name}"", {
- filter: {
+```python
- should: [
+from qdrant_client import QdrantClient
- {
+from llama_index.vector_stores.qdrant import QdrantVectorStore
- key: ""country.cities[].population"",
- range: {
- gt: null,
- gte: 9.0,
- lt: null,
+client = QdrantClient(""http://localhost:6333"")
- lte: null,
- },
- },
+vector_store = QdrantVectorStore(
- ],
+ collection_name=""my_collection"",
- },
+ client=client,
-});
+)
```
-```rust
+### Defining chunking strategy and embedding model
-use qdrant_client::qdrant::{Condition, Filter, Range, ScrollPoints};
+Any semantic search application requires a way to convert text queries into vectors - an embedding model.
-client
+`ServiceContext` is a bundle of commonly used resources used during the indexing and querying stage in any
- .scroll(&ScrollPoints {
+LlamaIndex application. We can also use it to set up an embedding model - in our case, a local
- collection_name: ""{collection_name}"".to_string(),
+[BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5).
- filter: Some(Filter::should([Condition::range(
- ""country.cities[].population"",
- Range {
- gte: Some(9.0),
+```python
- ..Default::default()
+from llama_index.core import ServiceContext
- },
- )])),
- ..Default::default()
+service_context = ServiceContext.from_defaults(
- })
+ embed_model=""local:BAAI/bge-small-en-v1.5"",
- .await?;
+)
```
+*Note:* in case you are using a Large Language Model different from OpenAI's ChatGPT, you should specify the
+`llm` parameter of `ServiceContext`.
-```java
-
-import static io.qdrant.client.ConditionFactory.range;
+We can also control how our documents are split into chunks, or *nodes* in LlamaIndex's terminology.
-import io.qdrant.client.grpc.Points.Filter;
+The `SimpleNodeParser` splits documents into fixed length chunks with an overlap. The defaults are
-import io.qdrant.client.grpc.Points.Range;
+reasonable, but we can also adjust them if we want to. Both values are defined in tokens.
-import io.qdrant.client.grpc.Points.ScrollPoints;
+```python
-client
+from llama_index.core.node_parser import SimpleNodeParser
- .scrollAsync(
- ScrollPoints.newBuilder()
- .setCollectionName(""{collection_name}"")
+node_parser = SimpleNodeParser.from_defaults(chunk_size=512, chunk_overlap=32)
- .setFilter(
+```
- Filter.newBuilder()
- .addShould(
- range(
+Now we also need to inform the `ServiceContext` about our choices:
- ""country.cities[].population"",
- Range.newBuilder().setGte(9.0).build()))
- .build())
+```python
- .build())
+service_context = ServiceContext.from_defaults(
- .get();
+    embed_model=""local:BAAI/bge-small-en-v1.5"",
-```
+ node_parser=node_parser,
+)
+```
-```csharp
-using Qdrant.Client;
-using static Qdrant.Client.Grpc.Conditions;
+Both embedding model and selected node parser will be implicitly used during the indexing and querying.
-var client = new QdrantClient(""localhost"", 6334);
+### Combining everything together
-await client.ScrollAsync(
+The last missing piece, before we can start indexing, is the `VectorStoreIndex`. It is a wrapper around
- collectionName: ""{collection_name}"",
+`VectorStore` that provides a convenient interface for indexing and querying. It also requires a
- filter: Range(""country.cities[].population"", new Qdrant.Client.Grpc.Range { Gte = 9.0 })
+`ServiceContext` to be initialized.
-);
-```
+```python
+from llama_index.core import VectorStoreIndex
-This query would only output the point with id 2 as only Japan has a city with population greater than 9.0.
+index = VectorStoreIndex.from_vector_store(
-And the leaf nested field can also be an array.
+ vector_store=vector_store, service_context=service_context
+)
+```
-```http
-POST /collections/{collection_name}/points/scroll
-{
+## Indexing documents
- ""filter"": {
- ""should"": [
- {
+No matter how our documents are generated, LlamaIndex will automatically split them into nodes, if
- ""key"": ""country.cities[].sightseeing"",
+required, encode them using the selected embedding model, and store them in the vector store. Let's define
- ""match"": {
+some documents manually and insert them into the Qdrant collection. Our documents are going to have
- ""value"": ""Osaka Castle""
+a single metadata attribute - a library name they belong to.
- }
- }
- ]
+```python
- }
+from llama_index.core.schema import Document
-}
-```
+documents = [
+ Document(
-```python
+ text=""LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models."",
-client.scroll(
+ metadata={
- collection_name=""{collection_name}"",
+ ""library"": ""llama-index"",
- scroll_filter=models.Filter(
+ },
- should=[
+ ),
- models.FieldCondition(
+ Document(
- key=""country.cities[].sightseeing"",
+ text=""Qdrant is a vector database & vector similarity search engine."",
- match=models.MatchValue(value=""Osaka Castle""),
+ metadata={
- ),
+ ""library"": ""qdrant"",
- ],
+ },
),
-)
+]
```
-```typescript
+Now we can index them using our `VectorStoreIndex`:
-client.scroll(""{collection_name}"", {
- filter: {
- should: [
+```python
- {
+for document in documents:
- key: ""country.cities[].sightseeing"",
+ index.insert(document)
- match: { value: ""Osaka Castle"" },
+```
- },
- ],
- },
+### Performance considerations
-});
-```
+Our documents have been split into nodes, encoded using the embedding model, and stored in the vector
+store. However, we don't want to allow our users to search for all the documents in the collection,
-```rust
+but only for the documents that belong to a library they are interested in. For that reason, we need
-use qdrant_client::qdrant::{Condition, Filter, ScrollPoints};
+to set up the Qdrant [payload index](/documentation/concepts/indexing/#payload-index), so the search
+is more efficient.
-client
- .scroll(&ScrollPoints {
+```python
- collection_name: ""{collection_name}"".to_string(),
+from qdrant_client import models
- filter: Some(Filter::should([Condition::matches(
- ""country.cities[].sightseeing"",
- ""Osaka Castle"".to_string(),
+client.create_payload_index(
- )])),
+ collection_name=""my_collection"",
- ..Default::default()
+ field_name=""metadata.library"",
- })
+ field_type=models.PayloadSchemaType.KEYWORD,
- .await?;
+)
```
-```java
+The payload index is not the only thing we want to change. Since none of the search
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+queries will be executed on the whole collection, we can also change its configuration, so the HNSW
+graph is not built globally. This is also done due to [performance reasons](/documentation/guides/multiple-partitions/#calibrate-performance).
+**You should not change these parameters if you know there will be some global search operations
-import io.qdrant.client.grpc.Points.Filter;
+done on the collection.**
-import io.qdrant.client.grpc.Points.ScrollPoints;
+```python
-client
+client.update_collection(
- .scrollAsync(
+ collection_name=""my_collection"",
- ScrollPoints.newBuilder()
+ hnsw_config=models.HnswConfigDiff(payload_m=16, m=0),
- .setCollectionName(""{collection_name}"")
+)
- .setFilter(
+```
- Filter.newBuilder()
- .addShould(matchKeyword(""country.cities[].sightseeing"", ""Germany""))
- .build())
+Once both operations are completed, we can start searching for our documents.
- .build())
- .get();
-```
+
-```csharp
+## Querying documents with constraints
-using Qdrant.Client;
-using static Qdrant.Client.Grpc.Conditions;
+Let's assume we are searching for some information about large language models, but are only allowed to
+use Qdrant documentation. LlamaIndex has a concept of retrievers, responsible for finding the most
-var client = new QdrantClient(""localhost"", 6334);
+relevant nodes for a given query. Our `VectorStoreIndex` can be used as a retriever, with some additional
+constraints - in our case the value of the `library` metadata attribute.
-await client.ScrollAsync(
- collectionName: ""{collection_name}"",
+```python
- filter: MatchKeyword(""country.cities[].sightseeing"", ""Germany"")
+from llama_index.core.vector_stores.types import MetadataFilters, ExactMatchFilter
-);
-```
+qdrant_retriever = index.as_retriever(
+ filters=MetadataFilters(
-This query would only output the point with id 2 as only Japan has a city with the ""Osaka castke"" as part of the sightseeing.
+ filters=[
+ ExactMatchFilter(
+ key=""library"",
-### Nested object filter
+ value=""qdrant"",
+ )
+ ]
-*Available as of v1.2.0*
+ )
+)
-By default, the conditions are taking into account the entire payload of a point.
+nodes_with_scores = qdrant_retriever.retrieve(""large language models"")
+for node in nodes_with_scores:
-For instance, given two points with the following payload:
+ print(node.text, node.score)
+# Output: Qdrant is a vector database & vector similarity search engine. 0.60551536
+```
-```json
-[
- {
+The description of Qdrant was the best match, even though it didn't mention large language models
- ""id"": 1,
+at all. However, it was the only document that belonged to the `qdrant` library, so there was no
- ""dinosaur"": ""t-rex"",
+other choice. Let's try to search for something that is not present in the collection.
- ""diet"": [
- { ""food"": ""leaves"", ""likes"": false},
- { ""food"": ""meat"", ""likes"": true}
+Let's define another retriever, this time for the `llama-index` library:
- ]
- },
- {
+```python
- ""id"": 2,
+llama_index_retriever = index.as_retriever(
- ""dinosaur"": ""diplodocus"",
+ filters=MetadataFilters(
- ""diet"": [
+ filters=[
- { ""food"": ""leaves"", ""likes"": true},
+ ExactMatchFilter(
- { ""food"": ""meat"", ""likes"": false}
+ key=""library"",
- ]
+ value=""llama-index"",
- }
+ )
-]
+ ]
-```
+ )
+)
-The following query would match both points:
+nodes_with_scores = llama_index_retriever.retrieve(""large language models"")
+for node in nodes_with_scores:
-```http
+ print(node.text, node.score)
-POST /collections/{collection_name}/points/scroll
+# Output: LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models. 0.63576734
-{
+```
- ""filter"": {
- ""must"": [
- {
+The results returned by the two retrievers differ due to the different constraints, so we have implemented
- ""key"": ""diet[].food"",
+a real multitenant search application!
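+
+If you want to double-check the tenant isolation directly at the Qdrant level, a small sketch (reusing the client
+from the beginning of this tutorial, and assuming the metadata ends up under the `metadata.` payload prefix, as the
+payload index created above suggests) could look like this:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(""http://localhost:6333"")
+points, _ = client.scroll(
+    collection_name=""my_collection"",
+    scroll_filter=models.Filter(
+        must=[
+            models.FieldCondition(
+                key=""metadata.library"",
+                match=models.MatchValue(value=""qdrant""),
+            )
+        ]
+    ),
+    with_payload=True,
+)
+print(len(points))  # only the points that belong to the ""qdrant"" tenant
+```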
+",documentation/examples/llama-index-multitenancy.md
+"---
- ""match"": {
+title: Build Prototypes
- ""value"": ""meat""
+weight: 19
- }
+---
- },
+# Examples
- {
- ""key"": ""diet[].likes"",
- ""match"": {
+| End-to-End Code Samples | Description | Stack |
- ""value"": true
+|---------------------------------------------------------------------------------|-------------------------------------------------------------------|---------------------------------------------|
- }
+| [Multitenancy with LlamaIndex](../examples/llama-index-multitenancy/) | Handle data coming from multiple users in LlamaIndex. | Qdrant, Python, LlamaIndex |
- }
+| [Implement custom connector for Cohere RAG](../examples/cohere-rag-connector/) | Bring data stored in Qdrant to Cohere RAG | Qdrant, Cohere, FastAPI |
- ]
+| [Chatbot for Interactive Learning](../examples/rag-chatbot-red-hat-openshift-haystack/) | Build a Private RAG Chatbot for Interactive Learning | Qdrant, Haystack, OpenShift |
- }
+| [Information Extraction Engine](../examples/rag-chatbot-vultr-dspy-ollama/) | Build a Private RAG Information Extraction Engine | Qdrant, Vultr, DSPy, Ollama |
-}
+| [System for Employee Onboarding](../examples/natural-language-search-oracle-cloud-infrastructure-cohere-langchain/) | Build a RAG System for Employee Onboarding | Qdrant, Cohere, LangChain |
-```
+| [System for Contract Management](../examples/rag-contract-management-stackit-aleph-alpha/) | Build a Region-Specific RAG System for Contract Management | Qdrant, Aleph Alpha, STACKIT |
+| [Question-Answering System for Customer Support](../examples/rag-customer-support-cohere-airbyte-aws/) | Build a RAG System for AI Customer Support | Qdrant, Cohere, Airbyte, AWS |
+| [Hybrid Search on PDF Documents](../examples/hybrid-search-llamaindex-jinaai/) | Develop a Hybrid Search System for Product PDF Manuals | Qdrant, LlamaIndex, Jina AI |
-```python
+| [Blog-Reading RAG Chatbot](../examples/rag-chatbot-scaleway) | Develop a RAG-based Chatbot on Scaleway with LangChain | Qdrant, LangChain, GPT-4o |
-client.scroll(
+| [Movie Recommendation System](../examples/recommendation-system-ovhcloud/) | Build a Movie Recommendation System with LlamaIndex and JinaAI | Qdrant |
- collection_name=""{collection_name}"",
- scroll_filter=models.Filter(
- must=[
- models.FieldCondition(
- key=""diet[].food"", match=models.MatchValue(value=""meat"")
+## Notebooks
- ),
- models.FieldCondition(
- key=""diet[].likes"", match=models.MatchValue(value=True)
+Our notebooks offer in-depth instructions supported by thorough explanations. Follow along by trying out the code to get the most out of each example.
- ),
- ],
- ),
+| Example | Description | Stack |
-)
+|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|----------------------------|
-```
+| [Intro to Semantic Search and Recommendations Systems](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_getting_started/getting_started.ipynb) | Learn how to get started building semantic search and recommendation systems. | Qdrant |
+| [Search and Recommend Newspaper Articles](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_text_data/qdrant_and_text_data.ipynb) | Work with text data to develop a semantic search and a recommendation engine for news articles. | Qdrant |
+| [Recommendation System for Songs](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_audio_data/03_qdrant_101_audio.ipynb) | Use Qdrant to develop a music recommendation engine based on audio embeddings. | Qdrant |
-```typescript
+| [Image Comparison System for Skin Conditions](https://colab.research.google.com/github/qdrant/examples/blob/master/qdrant_101_image_data/04_qdrant_101_cv.ipynb) | Use Qdrant to compare challenging images with labels representing different skin diseases. | Qdrant |
-client.scroll(""{collection_name}"", {
+| [Question and Answer System with LlamaIndex](https://githubtocolab.com/qdrant/examples/blob/master/llama_index_recency/Qdrant%20and%20LlamaIndex%20%E2%80%94%20A%20new%20way%20to%20keep%20your%20Q%26A%20systems%20up-to-date.ipynb) | Combine Qdrant and LlamaIndex to create a self-updating Q&A system. | Qdrant, LlamaIndex, Cohere |
- filter: {
+| [Extractive QA System](https://githubtocolab.com/qdrant/examples/blob/master/extractive_qa/extractive-question-answering.ipynb) | Extract answers directly from context to generate highly relevant answers. | Qdrant |
- must: [
+| [Ecommerce Reverse Image Search](https://githubtocolab.com/qdrant/examples/blob/master/ecommerce_reverse_image_search/ecommerce-reverse-image-search.ipynb) | Accept images as search queries to receive semantically appropriate answers. | Qdrant |
- {
+| [Basic RAG](https://githubtocolab.com/qdrant/examples/blob/master/rag-openai-qdrant/rag-openai-qdrant.ipynb) | Basic RAG pipeline with Qdrant and OpenAI SDKs. | OpenAI, Qdrant, FastEmbed |
+",documentation/examples/_index.md
+"---
- key: ""diet[].food"",
+title: RAG System for Employee Onboarding
- match: { value: ""meat"" },
+weight: 30
- },
+social_preview_image: /blog/hybrid-cloud-oracle-cloud-infrastructure/hybrid-cloud-oracle-cloud-infrastructure-tutorial.png
- {
+aliases:
- key: ""diet[].likes"",
+ - /documentation/tutorials/natural-language-search-oracle-cloud-infrastructure-cohere-langchain/
- match: { value: true },
+---
- },
- ],
- },
+# RAG System for Employee Onboarding
-});
-```
+Public websites are a great way to share information with a wide audience. However, finding the right information can be
+challenging, if you are not familiar with the website's structure or the terminology used. That's what the search bar is
-```rust
+for, but it is not always easy to formulate a query that will return the desired results, if you are not yet familiar
-use qdrant_client::qdrant::{Condition, Filter, ScrollPoints};
+with the content. This is even more important in a corporate environment, especially for new employees who are just
+starting to learn the ropes and don't yet know how to ask the right questions. You may have the best intranet
+pages, but onboarding is more than just reading the documentation; it is about understanding the processes. Semantic
-client
+search can make finding the right resources easier, but wouldn't it be even better to just chat with the website, like you
- .scroll(&ScrollPoints {
+would with a colleague?
- collection_name: ""{collection_name}"".to_string(),
- filter: Some(Filter::must([
- Condition::matches(""diet[].food"", ""meat"".to_string()),
+Technological advancements have made it possible to interact with websites using natural language. This tutorial will
- Condition::matches(""diet[].likes"", true),
+guide you through the process of integrating [Cohere](https://cohere.com/)'s language models with Qdrant to enable
- ])),
+natural language search on your documentation. We are going to use [LangChain](https://langchain.com/) as an
- ..Default::default()
+orchestrator. Everything will be hosted on [Oracle Cloud Infrastructure (OCI)](https://www.oracle.com/cloud/), so you
- })
+can scale your application as needed, and do not send your data to third parties. That is especially important when you
- .await?;
+are working with confidential or sensitive data.
-```
+## Building up the application
-```java
-import java.util.List;
+Our application will consist of two main processes: indexing and searching. Langchain will glue everything together,
+as we will use a few components, including Cohere and Qdrant, as well as some OCI services. Here is a high-level
-import static io.qdrant.client.ConditionFactory.match;
+overview of the architecture:
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+![Architecture diagram of the target system](/documentation/examples/faq-oci-cohere-langchain/architecture-diagram.png)
-import io.qdrant.client.QdrantClient;
-import io.qdrant.client.QdrantGrpcClient;
-import io.qdrant.client.grpc.Points.Filter;
+### Prerequisites
-import io.qdrant.client.grpc.Points.ScrollPoints;
+Before we dive into the implementation, make sure to set up all the necessary accounts and tools.
-QdrantClient client =
- new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+#### Libraries
-client
- .scrollAsync(
+We are going to use a few Python libraries. Of course, Langchain will be our main framework, but the Cohere models on
- ScrollPoints.newBuilder()
+OCI are accessible via the [OCI SDK](https://docs.oracle.com/en-us/iaas/tools/python/2.125.1/). Let's install all the
- .setCollectionName(""{collection_name}"")
+necessary libraries:
- .setFilter(
- Filter.newBuilder()
- .addAllMust(
+```shell
- List.of(matchKeyword(""diet[].food"", ""meat""), match(""diet[].likes"", true)))
+pip install langchain oci qdrant-client langchainhub
- .build())
+```
- .build())
- .get();
-```
+#### Oracle Cloud
-```csharp
+Our application will be fully running on Oracle Cloud Infrastructure (OCI). It's up to you to choose how you want to
-using Qdrant.Client;
+deploy your application. Qdrant Hybrid Cloud will be running in your [Kubernetes cluster running on Oracle Cloud
-using static Qdrant.Client.Grpc.Conditions;
+(OKE)](https://www.oracle.com/cloud/cloud-native/container-engine-kubernetes/), so all the other processes can also be
+deployed there. You can get started by signing up for an account on [Oracle Cloud](https://signup.cloud.oracle.com/).
-var client = new QdrantClient(""localhost"", 6334);
+Cohere models are available on OCI as a part of the [Generative AI
+Service](https://www.oracle.com/artificial-intelligence/generative-ai/generative-ai-service/). We need both the
-await client.ScrollAsync(
+[Generation models](https://docs.oracle.com/en-us/iaas/Content/generative-ai/use-playground-generate.htm) and the
- collectionName: ""{collection_name}"",
+[Embedding models](https://docs.oracle.com/en-us/iaas/Content/generative-ai/use-playground-embed.htm). Please follow the
- filter: MatchKeyword(""diet[].food"", ""meat"") & Match(""diet[].likes"", true)
+linked tutorials to grasp the basics of using Cohere models there.
-);
-```
+Accessing the models programmatically requires knowing the compartment OCID. Please refer to the [documentation that
+describes how to find it](https://docs.oracle.com/en-us/iaas/Content/GSG/Tasks/contactingsupport_topic-Locating_Oracle_Cloud_Infrastructure_IDs.htm#Finding_the_OCID_of_a_Compartment).
-This happens because both points are matching the two conditions:
+For further reference, we will assume that the compartment OCID is stored in an environment variable:
-- the ""t-rex"" matches food=meat on `diet[1].food` and likes=true on `diet[1].likes`
+```shell
-- the ""diplodocus"" matches food=meat on `diet[1].food` and likes=true on `diet[0].likes`
+export COMPARTMENT_OCID=""""
+```
-To retrieve only the points which are matching the conditions on an array element basis, that is the point with id 1 in this example, you would need to use a nested object filter.
+```python
+import os
-Nested object filters allow arrays of objects to be queried independently of each other.
+os.environ[""COMPARTMENT_OCID""] = """"
-It is achieved by using the `nested` condition type formed by a payload key to focus on and a filter to apply.
+```
-The key should point to an array of objects and can be used with or without the bracket notation (""data"" or ""data[]"").
+#### Qdrant Hybrid Cloud
-```http
+Qdrant Hybrid Cloud running on Oracle Cloud helps you build a solution without sending your data to external services. Our documentation provides a step-by-step guide on how to [deploy Qdrant Hybrid Cloud on Oracle
-POST /collections/{collection_name}/points/scroll
+Cloud](/documentation/hybrid-cloud/platform-deployment-options/#oracle-cloud-infrastructure).
-{
- ""filter"": {
- ""must"": [{
+Qdrant will be running on a specific URL and access will be restricted by the API key. Make sure to store them both as environment variables as well:
- ""nested"": {
- ""key"": ""diet"",
- ""filter"":{
+```shell
- ""must"": [
+export QDRANT_URL=""https://qdrant.example.com""
- {
+export QDRANT_API_KEY=""your-api-key""
- ""key"": ""food"",
+```
- ""match"": {
- ""value"": ""meat""
- }
+*Optional:* Whenever you use LangChain, you can also [configure LangSmith](https://docs.smith.langchain.com/), which will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/).
- },
- {
- ""key"": ""likes"",
+```shell
- ""match"": {
+export LANGCHAIN_TRACING_V2=true
- ""value"": true
+export LANGCHAIN_API_KEY=""your-api-key""
- }
+export LANGCHAIN_PROJECT=""your-project"" # if not specified, defaults to ""default""
- }
+```
- ]
- }
- }
+Now you can get started:
- }]
- }
-}
+```python
-```
+import os
-```python
+os.environ[""QDRANT_URL""] = ""https://qdrant.example.com""
-client.scroll(
+os.environ[""QDRANT_API_KEY""] = ""your-api-key""
- collection_name=""{collection_name}"",
+```
- scroll_filter=models.Filter(
- must=[
- models.NestedCondition(
+Let's create the collection that will store the indexed documents. We will use the `qdrant-client` library, and our
- nested=models.Nested(
+collection will be named `oracle-cloud-website`. Our embedding model, `cohere.embed-english-v3.0`, produces embeddings
- key=""diet"",
+of size 1024, and we have to specify that when creating the collection.
- filter=models.Filter(
- must=[
- models.FieldCondition(
+```python
- key=""food"", match=models.MatchValue(value=""meat"")
+from qdrant_client import QdrantClient, models
- ),
- models.FieldCondition(
- key=""likes"", match=models.MatchValue(value=True)
+client = QdrantClient(
- ),
+ location=os.environ.get(""QDRANT_URL""),
- ]
+ api_key=os.environ.get(""QDRANT_API_KEY""),
- ),
+)
- )
+client.create_collection(
- )
+ collection_name=""oracle-cloud-website"",
- ],
+ vectors_config=models.VectorParams(
+
+ size=1024,
+
+ distance=models.Distance.COSINE,
),
@@ -35199,593 +34494,563 @@ client.scroll(
-```typescript
+### Indexing process
-client.scroll(""{collection_name}"", {
- filter: {
- must: [
+We have all the necessary tools set up, so let's start with the indexing process. We will use the Cohere Embedding
- {
+models to convert the text into vectors, and then store them in Qdrant. Langchain is integrated with OCI Generative AI
- nested: {
+Service, so we can easily access the models.
- key: ""diet"",
- filter: {
- must: [
+Our dataset will be fairly simple, as it will consist of the questions and answers from the [Oracle Cloud Free Tier
- {
+FAQ page](https://www.oracle.com/cloud/free/faq/).
- key: ""food"",
- match: { value: ""meat"" },
- },
+![Some examples of the Oracle Cloud FAQ](/documentation/examples/faq-oci-cohere-langchain/oracle-faq.png)
- {
- key: ""likes"",
- match: { value: true },
+Questions and answers are presented in an HTML format, but we don't want to manually extract the text and adapt it for
- },
+each subpage. Instead, we will use the `WebBaseLoader` that just loads the HTML content from a given URL and converts it
- ],
+to text.
- },
- },
- },
+```python
- ],
+from langchain_community.document_loaders.web_base import WebBaseLoader
- },
-});
+
+loader = WebBaseLoader(""https://www.oracle.com/cloud/free/faq/"")
+
+documents = loader.load()
```
-```rust
+Our `documents` is a list with just a single element, which is the text of the whole page. We need to split it into
-use qdrant_client::qdrant::{Condition, Filter, NestedCondition, ScrollPoints};
+meaningful parts, so we will use the `RecursiveCharacterTextSplitter` component. It will try to keep all paragraphs (and
+then sentences, and then words) together as long as possible, as those would generically seem to be the strongest
+semantically related pieces of text. The chunk size and overlap are both parameters that can be adjusted to fit the
-client
+specific use case.
- .scroll(&ScrollPoints {
- collection_name: ""{collection_name}"".to_string(),
- filter: Some(Filter::must([NestedCondition {
+```python
- key: ""diet"".to_string(),
+from langchain_text_splitters import RecursiveCharacterTextSplitter
- filter: Some(Filter::must([
- Condition::matches(""food"", ""meat"".to_string()),
- Condition::matches(""likes"", true),
+splitter = RecursiveCharacterTextSplitter(chunk_size=300, chunk_overlap=100)
- ])),
+split_documents = splitter.split_documents(documents)
- }
+```
- .into()])),
- ..Default::default()
- })
+Our documents are now ready to be indexed, but first we need to convert them into vectors. Let's configure the embeddings so the
- .await?;
+`cohere.embed-english-v3.0` is used. Not all the regions support the Generative AI Service, so we need to specify the
-```
+region where the models are stored. We will use the `us-chicago-1`, but please check the
+[documentation](https://docs.oracle.com/en-us/iaas/Content/generative-ai/overview.htm#regions) for the most up-to-date
+list of supported regions.
-```java
-import java.util.List;
+```python
+from langchain_community.embeddings.oci_generative_ai import OCIGenAIEmbeddings
-import static io.qdrant.client.ConditionFactory.match;
-import static io.qdrant.client.ConditionFactory.matchKeyword;
-import static io.qdrant.client.ConditionFactory.nested;
+embeddings = OCIGenAIEmbeddings(
+ model_id=""cohere.embed-english-v3.0"",
+ service_endpoint=""https://inference.generativeai.us-chicago-1.oci.oraclecloud.com"",
-import io.qdrant.client.grpc.Points.Filter;
+ compartment_id=os.environ.get(""COMPARTMENT_OCID""),
-import io.qdrant.client.grpc.Points.ScrollPoints;
+)
+```
-client
- .scrollAsync(
+Now we can embed the documents and store them in Qdrant. We will create an instance of `Qdrant` and add the split
- ScrollPoints.newBuilder()
+documents to the collection.
- .setCollectionName(""{collection_name}"")
- .setFilter(
- Filter.newBuilder()
+```python
- .addMust(
+from langchain.vectorstores.qdrant import Qdrant
- nested(
- ""diet"",
- Filter.newBuilder()
+qdrant = Qdrant(
- .addAllMust(
+ client=client,
- List.of(
+ collection_name=""oracle-cloud-website"",
- matchKeyword(""food"", ""meat""), match(""likes"", true)))
+ embeddings=embeddings,
- .build()))
+)
- .build())
- .build())
- .get();
+qdrant.add_documents(split_documents, batch_size=20)
```
-```csharp
+Our documents should now be indexed and ready for searching. Let's move on to the next step.
-using Qdrant.Client;
-using static Qdrant.Client.Grpc.Conditions;
+### Speaking to the website
-var client = new QdrantClient(""localhost"", 6334);
+The intended method of interaction with the website is through a chatbot. A Large Language Model, in our case [Cohere
+Command](https://cohere.com/command), will be answering users' questions based on the relevant documents that Qdrant
-await client.ScrollAsync(
+will return using the question as a query. Our LLM is also hosted on OCI, so we can access it similarly to the embedding
- collectionName: ""{collection_name}"",
+model:
- filter: Nested(""diet"", MatchKeyword(""food"", ""meat"") & Match(""likes"", true))
-);
-```
+```python
+from langchain_community.llms.oci_generative_ai import OCIGenAI
-The matching logic is modified to be applied at the level of an array element within the payload.
+llm = OCIGenAI(
+ model_id=""cohere.command"",
-Nested filters work in the same way as if the nested filter was applied to a single element of the array at a time.
+ service_endpoint=""https://inference.generativeai.us-chicago-1.oci.oraclecloud.com"",
-Parent document is considered to match the condition if at least one element of the array matches the nested filter.
+ compartment_id=os.environ.get(""COMPARTMENT_OCID""),
+)
+```
-**Limitations**
+The connection to Qdrant can be established in the same way as during the indexing process. We can use it to create
-The `has_id` condition is not supported within the nested object filter. If you need it, place it in an adjacent `must` clause.
+a retrieval chain, which implements the question-answering process. The retrieval chain also requires an additional
+chain that will combine retrieved documents before sending them to an LLM.
-```http
-POST /collections/{collection_name}/points/scroll
+```python
-{
+from langchain.chains.combine_documents import create_stuff_documents_chain
- ""filter"": {
+from langchain.chains.retrieval import create_retrieval_chain
- ""must"": [
+from langchain import hub
- ""nested"": {
- {
- ""key"": ""diet"",
+retriever = qdrant.as_retriever()
- ""filter"":{
+combine_docs_chain = create_stuff_documents_chain(
- ""must"": [
+ llm=llm,
- {
+ # Default prompt is loaded from the hub, but we can also modify it
- ""key"": ""food"",
+ prompt=hub.pull(""langchain-ai/retrieval-qa-chat""),
- ""match"": {
+)
- ""value"": ""meat""
+retrieval_qa_chain = create_retrieval_chain(
- }
+ retriever=retriever,
- },
+ combine_docs_chain=combine_docs_chain,
- {
+)
- ""key"": ""likes"",
+response = retrieval_qa_chain.invoke({""input"": ""What is the Oracle Cloud Free Tier?""})
- ""match"": {
+```
- ""value"": true
- }
- }
+The output of the `.invoke` method is a dictionary-like structure with the query and answer, but we can also access the
- ]
+source documents used to generate the response. This might be useful for debugging or for further processing.
- }
- }
- },
+```python
- { ""has_id"": [1] }
+{
- ]
+ 'input': 'What is the Oracle Cloud Free Tier?',
- }
+ 'context': [
-}
+ Document(
-```
+ page_content='* Free Tier is generally available in regions where commercial Oracle Cloud Infrastructure service is available. See the data regions page for detailed service availability (the exact regions available for Free Tier may differ during the sign-up process). The US$300 cloud credit is available in',
+ metadata={
+ 'language': 'en-US',
-```python
+ 'source': 'https://www.oracle.com/cloud/free/faq/',
-client.scroll(
+ 'title': ""FAQ on Oracle's Cloud Free Tier"",
- collection_name=""{collection_name}"",
+ '_id': 'c8cf98e0-4b88-4750-be42-4157495fed2c',
- scroll_filter=models.Filter(
+ '_collection_name': 'oracle-cloud-website'
- must=[
+ }
- models.NestedCondition(
+ ),
- nested=models.Nested(
+ Document(
- key=""diet"",
+ page_content='Oracle Cloud Free Tier allows you to sign up for an Oracle Cloud account which provides a number of Always Free services and a Free Trial with US$300 of free credit to use on all eligible Oracle Cloud Infrastructure services for up to 30 days. The Always Free services are available for an unlimited',
- filter=models.Filter(
+ metadata={
- must=[
+ 'language': 'en-US',
- models.FieldCondition(
+ 'source': 'https://www.oracle.com/cloud/free/faq/',
- key=""food"", match=models.MatchValue(value=""meat"")
+ 'title': ""FAQ on Oracle's Cloud Free Tier"",
- ),
+ '_id': 'dc291430-ff7b-4181-944a-39f6e7a0de69',
- models.FieldCondition(
+ '_collection_name': 'oracle-cloud-website'
- key=""likes"", match=models.MatchValue(value=True)
+ }
- ),
+ ),
- ]
+ Document(
- ),
+ page_content='Oracle Cloud Free Tier does not include SLAs. Community support through our forums is available to all customers. Customers using only Always Free resources are not eligible for Oracle Support. Limited support is available for Oracle Cloud Free Tier with Free Trial credits. After you use all of',
- )
+ metadata={
- ),
+ 'language': 'en-US',
- models.HasIdCondition(has_id=[1]),
+ 'source': 'https://www.oracle.com/cloud/free/faq/',
- ],
+ 'title': ""FAQ on Oracle's Cloud Free Tier"",
- ),
+ '_id': '9e831039-7ccc-47f7-9301-20dbddd2fc07',
-)
+ '_collection_name': 'oracle-cloud-website'
-```
+ }
+ ),
+ Document(
-```typescript
+ page_content='looking to test things before moving to cloud, a student wanting to learn, or an academic developing curriculum in the cloud, Oracle Cloud Free Tier enables you to learn, explore, build and test for free.',
-client.scroll(""{collection_name}"", {
+ metadata={
- filter: {
+ 'language': 'en-US',
- must: [
+ 'source': 'https://www.oracle.com/cloud/free/faq/',
- {
+ 'title': ""FAQ on Oracle's Cloud Free Tier"",
- nested: {
+ '_id': 'e2dc43e1-50ee-4678-8284-6df60a835cf5',
- key: ""diet"",
+ '_collection_name': 'oracle-cloud-website'
- filter: {
+ }
- must: [
+ )
- {
+ ],
- key: ""food"",
+ 'answer': ' Oracle Cloud Free Tier is a subscription that gives you access to Always Free services and a Free Trial with $300 of credit that can be used on all eligible Oracle Cloud Infrastructure services for up to 30 days. \n\nThrough this Free Tier, you can learn, explore, build, and test for free. It is aimed at those who want to experiment with cloud services before making a commitment, as well. Their use cases range from testing prior to cloud migration to learning and academic curriculum development. '
- match: { value: ""meat"" },
+}
- },
+```
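+
+For example, a small sketch of consuming that structure, e.g. to display the answer together with its sources
+(the field names are as shown in the output above):
+
+```python
+print(response[""answer""])
+for document in response[""context""]:
+    print(""-"", document.metadata[""source""])
+```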
- {
- key: ""likes"",
- match: { value: true },
+#### Other experiments
- },
- ],
- },
+Asking basic questions is just the beginning. What you want to avoid is hallucination, where the model generates
- },
+an answer that is not based on the actual content. The default prompt of Langchain should already prevent this, but you
- },
+might still want to check it. Let's ask a question that is not directly answered on the FAQ page:
- {
- has_id: [1],
- },
+```python
- ],
+response = retrieval_qa_chain.invoke({
- },
+ ""input"": ""Is Oracle Generative AI Service included in the free tier?""
-});
+})
```
-```rust
-
-use qdrant_client::qdrant::{Condition, Filter, NestedCondition, ScrollPoints};
+Output:
-client
-
- .scroll(&ScrollPoints {
+> Oracle Generative AI Services are not specifically mentioned as being available in the free tier. As per the text, the
- collection_name: ""{collection_name}"".to_string(),
+> $300 free credit can be used on all eligible services for up to 30 days. To confirm if Oracle Generative AI Services
- filter: Some(Filter::must([
+> are included in the free credit offer, it is best to check the official Oracle Cloud website or contact their support.
- NestedCondition {
- key: ""diet"".to_string(),
- filter: Some(Filter::must([
+It seems that the Cohere Command model could not find the exact answer in the provided documents, but it tried to interpret
- Condition::matches(""food"", ""meat"".to_string()),
+the context and provide a reasonable answer without making up information. This is a good sign that the model is
- Condition::matches(""likes"", true),
+not hallucinating in that case.
- ])),
- }
- .into(),
+## Wrapping up
- Condition::has_id([1]),
- ])),
- ..Default::default()
+This tutorial has shown how to integrate Cohere's language models with Qdrant to enable natural language search on your
- })
+website. We have used Langchain as an orchestrator, and everything was hosted on Oracle Cloud Infrastructure (OCI).
- .await?;
+A real-world deployment would require integrating this mechanism into your organization's systems, but we have built a solid foundation
-```
+that can be further developed.
+",documentation/examples/natural-language-search-oracle-cloud-infrastructure-cohere-langchain.md
+"---
+title: Authentication
+weight: 30
-```java
+---
-import java.util.List;
+# Authenticating to Qdrant Cloud
-import static io.qdrant.client.ConditionFactory.hasId;
-import static io.qdrant.client.ConditionFactory.match;
-import static io.qdrant.client.ConditionFactory.matchKeyword;
+This page shows you how to use the Qdrant Cloud Console to create a custom API key for a cluster. You will learn how to connect to your cluster using the new API key.
-import static io.qdrant.client.ConditionFactory.nested;
-import static io.qdrant.client.PointIdFactory.id;
+## Create API keys
-import io.qdrant.client.grpc.Points.Filter;
-import io.qdrant.client.grpc.Points.ScrollPoints;
+The API key is only shown once after creation. If you lose it, you will need to create a new one.
+However, we recommend rotating the keys from time to time. To create additional API keys, do the following.
-client
- .scrollAsync(
+1. Go to the [Cloud Dashboard](https://qdrant.to/cloud).
- ScrollPoints.newBuilder()
+2. Select **Access Management** to display available API keys, or go to the **API Keys** section of the Cluster detail page.
- .setCollectionName(""{collection_name}"")
+3. Click **Create** and choose a cluster name from the dropdown menu.
- .setFilter(
+> **Note:** You can create a key that provides access to multiple clusters. Select desired clusters in the dropdown box.
- Filter.newBuilder()
+4. Click **OK** and retrieve your API key.
- .addMust(
- nested(
- ""diet"",
+## Test cluster access
- Filter.newBuilder()
- .addAllMust(
- List.of(
+After creation, you will receive a code snippet to access your cluster. Your generated request should look very similar to this one:
- matchKeyword(""food"", ""meat""), match(""likes"", true)))
- .build()))
- .addMust(hasId(id(1)))
+```bash
- .build())
+curl \
- .build())
+ -X GET 'https://xyz-example.eu-central.aws.cloud.qdrant.io:6333' \
- .get();
+ --header 'api-key: '
```
+Open Terminal and run the request. You should get a response that looks like this:
+```bash
-```csharp
+{""title"":""qdrant - vector search engine"",""version"":""1.8.1""}
-using Qdrant.Client;
+```
-using static Qdrant.Client.Grpc.Conditions;
+> **Note:** You need to include the API key in the request header for every
-var client = new QdrantClient(""localhost"", 6334);
+> request over REST or gRPC.
-await client.ScrollAsync(
+## Authenticate via SDK
- collectionName: ""{collection_name}"",
- filter: Nested(""diet"", MatchKeyword(""food"", ""meat"") & Match(""likes"", true)) & HasId(1)
-);
+Now that you have created your first cluster and key, you might want to access Qdrant Cloud from within your application.
-```
+Our official Qdrant clients for Python, TypeScript, Go, Rust, .NET and Java all support the API key parameter.
-### Full Text Match
+```bash
+curl \
+ -X GET https://xyz-example.eu-central.aws.cloud.qdrant.io:6333 \
-*Available as of v0.10.0*
+ --header 'api-key: '
-A special case of the `match` condition is the `text` match condition.
+# Alternatively, you can use the `Authorization` header with the `Bearer` prefix
-It allows you to search for a specific substring, token or phrase within the text field.
+curl \
+ -X GET https://xyz-example.eu-central.aws.cloud.qdrant.io:6333 \
+ --header 'Authorization: Bearer '
-Exact texts that will match the condition depend on full-text index configuration.
+```
-Configuration is defined during the index creation and describe at [full-text index](../indexing/#full-text-index).
+```python
-If there is no full-text index for the field, the condition will work as exact substring match.
+from qdrant_client import QdrantClient
-```json
+qdrant_client = QdrantClient(
-{
+ ""xyz-example.eu-central.aws.cloud.qdrant.io"",
- ""key"": ""description"",
+ api_key="""",
- ""match"": {
+)
- ""text"": ""good cheap""
+```
- }
-}
-```
+```typescript
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-```python
-models.FieldCondition(
+const client = new QdrantClient({
- key=""description"",
+ host: ""xyz-example.eu-central.aws.cloud.qdrant.io"",
- match=models.MatchText(text=""good cheap""),
+ apiKey: """",
-)
+});
```
-```typescript
+```rust
-{
+use qdrant_client::Qdrant;
- key: 'description',
- match: {text: 'good cheap'}
-}
+let client = Qdrant::from_url(""https://xyz-example.eu-central.aws.cloud.qdrant.io:6334"")
+
+ .api_key("""")
+
+ .build()?;
```
-```rust
+```java
+
+import io.qdrant.client.QdrantClient;
-// If the match string contains a white-space, full text match is performed.
+import io.qdrant.client.QdrantGrpcClient;
-// Otherwise a keyword match is performed.
-Condition::matches(""description"", ""good cheap"".to_string())
-```
+QdrantClient client =
+ new QdrantClient(
+ QdrantGrpcClient.newBuilder(
-```java
+ ""xyz-example.eu-central.aws.cloud.qdrant.io"",
-import static io.qdrant.client.ConditionFactory.matchText;
+ 6334,
+ true)
+ .withApiKey("""")
-matchText(""description"", ""good cheap"");
+ .build());
```
@@ -35793,1378 +35058,1188 @@ matchText(""description"", ""good cheap"");
```csharp
-using static Qdrant.Client.Grpc.Conditions;
+using Qdrant.Client;
-MatchText(""description"", ""good cheap"");
+var client = new QdrantClient(
-```
+ host: ""xyz-example.eu-central.aws.cloud.qdrant.io"",
+ https: true,
+ apiKey: """"
-If the query has several words, then the condition will be satisfied only if all of them are present in the text.
+);
+```
-### Range
+```go
+import ""github.com/qdrant/go-client/qdrant""
-```json
-{
- ""key"": ""price"",
+client, err := qdrant.NewClient(&qdrant.Config{
- ""range"": {
+ Host: ""xyz-example.eu-central.aws.cloud.qdrant.io"",
- ""gt"": null,
+ Port: 6334,
- ""gte"": 100.0,
+ APIKey: """",
- ""lt"": null,
+ UseTLS: true,
- ""lte"": 450.0
+})
- }
+```
+",documentation/cloud/authentication.md
+"---
-}
+title: Account Setup
-```
+weight: 10
+aliases:
+---
-```python
-models.FieldCondition(
- key=""price"",
+# Setting up a Qdrant Cloud Account
- range=models.Range(
- gt=None,
- gte=100.0,
+## Registration
- lt=None,
- lte=450.0,
- ),
+There are different ways to register for a Qdrant Cloud account:
-)
-```
+* With an email address and passwordless login via email
+* With a Google account
-```typescript
+* With a GitHub account
-{
+* By connecting an enterprise SSO solution
- key: 'price',
- range: {
- gt: null,
+Every account is tied to an email address. You can invite additional users to your account and manage their permissions.
- gte: 100.0,
- lt: null,
- lte: 450.0
+### Email registration
- }
-}
-```
+1. Register for a [Cloud account](https://cloud.qdrant.io/) with your email, Google or GitHub credentials.
-```rust
+## Inviting additional users to an account
-Condition::range(
- ""price"",
- Range {
+You can invite additional users to your account, and manage their permissions on the *Account Management* page in the Qdrant Cloud Console.
- gt: None,
- gte: Some(100.0),
- lt: None,
+![Invitations](/documentation/cloud/invitations.png)
- lte: Some(450.0),
- },
-)
+Invited users will receive an email with an invitation link to join Qdrant Cloud. Once they have signed up, they can accept the invitation from the Overview page.
-```
+![Accepting invitation](/documentation/cloud/accept-invitation.png)
-```java
-import static io.qdrant.client.ConditionFactory.range;
+## Switching between accounts
-import io.qdrant.client.grpc.Points.Range;
+If you have access to multiple accounts, you can switch between accounts with the account switcher on the top menu bar of the Qdrant Cloud Console.
-range(""price"", Range.newBuilder().setGte(100.0).setLte(450).build());
-```
+![Switching between accounts](/documentation/cloud/account-switcher.png)
-```csharp
+## Account settings
-using static Qdrant.Client.Grpc.Conditions;
+You can configure your account settings in the Qdrant Cloud Console, by clicking on your account picture in the top right corner, and selecting *Profile*.
-Range(""price"", new Qdrant.Client.Grpc.Range { Gte = 100.0, Lte = 450 });
-```
+The following functionality is available.
-The `range` condition sets the range of possible values for stored payload values.
-If several values are stored, at least one of them should match the condition.
+### Renaming an account
-Comparisons that can be used:
+If you use multiple accounts for different purposes, it is a good idea to give them descriptive names, for example *Development*, *Production*, *Testing*. You can also choose which account should be the default one when you log in.
-- `gt` - greater than
+![Account management](/documentation/cloud/account-management.png)
-- `gte` - greater than or equal
-- `lt` - less than
-- `lte` - less than or equal
+### Deleting an account
-Can be applied to [float](../payload/#float) and [integer](../payload/#integer) payloads.
+When you delete an account, all database clusters and associated data will be deleted.
+",documentation/cloud/qdrant-cloud-setup.md
+"---
+title: Create a Cluster
+weight: 20
-### Geo
+---
-#### Geo Bounding Box
+# Creating a Qdrant Cloud Cluster
-```json
+Qdrant Cloud offers two types of clusters: **Free** and **Standard**.
-{
- ""key"": ""location"",
- ""geo_bounding_box"": {
+## Free Clusters
- ""bottom_right"": {
- ""lon"": 13.455868,
- ""lat"": 52.495862
+Free tier clusters are perfect for prototyping and testing. You don't need a credit card to join.
- },
- ""top_left"": {
- ""lon"": 13.403683,
+A free tier cluster includes a single node with the following resources:
- ""lat"": 52.520711
- }
- }
+| Resource | Value |
-}
+|------------|-------|
-```
+| RAM | 1 GB |
+| vCPU | 0.5 |
+| Disk space | 4 GB |
-```python
+| Nodes | 1 |
-models.FieldCondition(
- key=""location"",
- geo_bounding_box=models.GeoBoundingBox(
+This configuration supports serving about 1 M vectors of 768 dimensions. To calculate your needs, refer to our documentation on [Capacity and sizing](/documentation/cloud/capacity-sizing/).
- bottom_right=models.GeoPoint(
- lon=13.455868,
- lat=52.495862,
+The choice of cloud providers and regions is limited.
- ),
- top_left=models.GeoPoint(
- lon=13.403683,
+The free tier includes:
- lat=52.520711,
- ),
- ),
+- Standard Support
-)
+- Basic monitoring
-```
+- Basic log access
+- Basic alerting
+- Version upgrades with downtime
-```typescript
+- Only manual snapshots and restores via API
-{
+- No dedicated resources
- key: 'location',
- geo_bounding_box: {
- bottom_right: {
+If unused, free tier clusters are automatically suspended after 1 week, and deleted after 4 weeks of inactivity if not reactivated.
- lon: 13.455868,
- lat: 52.495862
- },
+You can always upgrade to a standard cluster with more resources and features.
- top_left: {
- lon: 13.403683,
- lat: 52.520711
+## Standard Clusters
- }
- }
-}
+On top of the Free cluster features, Standard clusters offer:
-```
+- Response time and uptime SLAs
-```rust
+- Dedicated resources
-Condition::geo_bounding_box(
+- Backup and disaster recovery
- ""location"",
+- Multi-node clusters for high availability
- GeoBoundingBox {
+- Horizontal and vertical scaling
- bottom_right: Some(GeoPoint {
+- Monitoring and log management
- lon: 13.455868,
+- Zero-downtime upgrades for multi-node clusters with replication
- lat: 52.495862,
- }),
- top_left: Some(GeoPoint {
+You have a broad choice of regions on AWS, Azure and Google Cloud.
- lon: 13.403683,
- lat: 52.520711,
- }),
+For payment information see [**Pricing and Payments**](/documentation/cloud/pricing-payments/).
- },
-)
-```
+## Create a cluster
-```java
+This page shows you how to use the Qdrant Cloud Console to create a custom Qdrant Cloud cluster.
-import static io.qdrant.client.ConditionFactory.geoBoundingBox;
+> **Prerequisite:** Please make sure you have provided billing information before creating a custom cluster.
-geoBoundingBox(""location"", 52.520711, 13.403683, 52.495862, 13.455868);
-```
+1. Start in the **Clusters** section of the [Cloud Dashboard](https://cloud.qdrant.io/).
+1. Select **Clusters** and then click **+ Create**.
-```csharp
+1. In the **Create a cluster** screen, select **Free** or **Standard**.
-using static Qdrant.Client.Grpc.Conditions;
+ Most of the remaining configuration options are only available for standard clusters.
+1. Select a provider. Currently, you can deploy to:
-GeoBoundingBox(""location"", 52.520711, 13.403683, 52.495862, 13.455868);
-```
+ - Amazon Web Services (AWS)
+ - Google Cloud Platform (GCP)
+ - Microsoft Azure
-It matches with `location`s inside a rectangle with the coordinates of the upper left corner in `bottom_right` and the coordinates of the lower right corner in `top_left`.
+ - Your own [Hybrid Cloud](/documentation/hybrid-cloud/) Infrastructure
-#### Geo Radius
+1. Choose your data center region or Hybrid Cloud environment.
+1. Configure RAM for each node.
+ > For more information, see our [**Capacity and Sizing**](/documentation/cloud/capacity-sizing/) guidance.
-```json
+1. Choose the number of vCPUs per node. If you add more
-{
+ RAM, the menu provides different options for vCPUs.
- ""key"": ""location"",
+1. Select the number of nodes you want the cluster to be deployed on.
- ""geo_radius"": {
+ > Each node automatically comes with an attached disk that has enough space to store data with Qdrant's default collection configuration.
- ""center"": {
+1. Select additional disk space for your deployment.
- ""lon"": 13.403683,
+ > Depending on your collection configuration, you may need more disk space relative to RAM. For example, if you configure `on_disk: true` and only use RAM for caching.
- ""lat"": 52.520711
+1. Review your cluster configuration and pricing.
- },
+1. When you're ready, select **Create**. It takes some time to provision your cluster.
- ""radius"": 1000.0
- }
-}
+Once provisioned, you can access your cluster on ports 443 and 6333 (REST) and 6334 (gRPC).
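+
+For example, once you have created an API key (see [Authentication](/documentation/cloud/authentication/)), a quick way
+to verify connectivity could be a plain REST call (the URL below is a placeholder for your actual cluster endpoint):
+
+```bash
+curl \
+    -X GET 'https://xyz-example.eu-central.aws.cloud.qdrant.io:6333' \
+    --header 'api-key: <your-api-key>'
+```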
-```
+![Cluster configured in the UI](/docs/cloud/create-cluster-test.png)
-```python
-models.FieldCondition(
- key=""location"",
+You should now see the new cluster in the **Clusters** menu.
- geo_radius=models.GeoRadius(
- center=models.GeoPoint(
- lon=13.403683,
+## Next steps
- lat=52.520711,
- ),
- radius=1000.0,
+You will need to connect to your new Qdrant Cloud cluster. Follow [**Authentication**](/documentation/cloud/authentication/) to create one or more API keys.
- ),
-)
-```
+You can also scale your cluster both horizontally and vertically. Read more in [**Cluster Scaling**](/documentation/cloud/cluster-scaling/).
-```typescript
+If a new Qdrant version becomes available, you can upgrade your cluster. See [**Cluster Upgrades**](/documentation/cloud/cluster-upgrades/).
-{
- key: 'location',
- geo_radius: {
+For more information on creating and restoring backups of a cluster, see [**Backups**](/documentation/cloud/backups/).
+",documentation/cloud/create-cluster.md
+"---
- center: {
+title: Cloud Support
- lon: 13.403683,
+weight: 99
- lat: 52.520711
+aliases:
- },
+---
- radius: 1000.0
- }
-}
+# Qdrant Cloud Support and Troubleshooting
-```
+All Qdrant Cloud users are welcome to join our [Discord community](https://qdrant.to/discord/). Our Support Engineers are available to help you anytime.
-```rust
-Condition::geo_radius(
- ""location"",
+![Discord](/documentation/cloud/discord.png)
- GeoRadius {
- center: Some(GeoPoint {
- lon: 13.403683,
+Paid customers can also contact support directly. Links to the support portal are available in the Qdrant Cloud Console.
- lat: 52.520711,
- }),
- radius: 1000.0,
+![Support Portal](/documentation/cloud/support-portal.png)
+",documentation/cloud/support.md
+"---
- },
+title: Backup Clusters
-)
+weight: 61
-```
+---
-```java
+# Backing up Qdrant Cloud Clusters
-import static io.qdrant.client.ConditionFactory.geoRadius;
+Qdrant organizes cloud instances as clusters. On occasion, you may need to
-geoRadius(""location"", 52.520711, 13.403683, 1000.0f);
+restore your cluster because of application or system failure.
-```
+You may already have a source of truth for your data in a regular database. If you
-```csharp
+have a problem, you could reindex the data into your Qdrant vector search cluster.
-using static Qdrant.Client.Grpc.Conditions;
+However, this process can take time. For high availability critical projects we
+recommend replication. It guarantees the proper cluster functionality as long as
+at least one replica is running.
-GeoRadius(""location"", 52.520711, 13.403683, 1000.0f);
-```
+For other use-cases such as disaster recovery, you can set up automatic or
+self-service backups.
-It matches with `location`s inside a circle with the `center` at the center and a radius of `radius` meters.
+## Prerequisites
-If several values are stored, at least one of them should match the condition.
-These conditions can only be applied to payloads that match the [geo-data format](../payload/#geo).
+You can back up your Qdrant clusters through the Qdrant Cloud
+Dashboard at https://cloud.qdrant.io. This section assumes that you've already
-#### Geo Polygon
+set up your cluster, as described in the following sections:
-Geo Polygons search is useful for when you want to find points inside an irregularly shaped area, for example a country boundary or a forest boundary. A polygon always has an exterior ring and may optionally include interior rings. A lake with an island would be an example of an interior ring. If you wanted to find points in the water but not on the island, you would make an interior ring for the island.
+- [Create a cluster](/documentation/cloud/create-cluster/)
-When defining a ring, you must pick either a clockwise or counterclockwise ordering for your points. The first and last point of the polygon must be the same.
+- Set up [Authentication](/documentation/cloud/authentication/)
+- Configure one or more [Collections](/documentation/concepts/collections/)
-Currently, we only support unprojected global coordinates (decimal degrees longitude and latitude) and we are datum agnostic.
+## Automatic backups
-```json
+You can set up automatic backups of your clusters with our Cloud UI. With the
+procedures listed on this page, you can set up
-{
+snapshots on a daily/weekly/monthly basis. You can keep as many snapshots as you
- ""key"": ""location"",
+need. You can restore a cluster from the snapshot of your choice.
- ""geo_polygon"": {
- ""exterior"": {
- ""points"": [
+> Note: When you restore a snapshot, consider the following:
- { ""lon"": -70.0, ""lat"": -70.0 },
+> - The affected cluster is not available while a snapshot is being restored.
- { ""lon"": 60.0, ""lat"": -70.0 },
+> - If you changed the cluster setup after the copy was created, the cluster
- { ""lon"": 60.0, ""lat"": 60.0 },
+ resets to the previous configuration.
- { ""lon"": -70.0, ""lat"": 60.0 },
+> - The previous configuration includes:
- { ""lon"": -70.0, ""lat"": -70.0 }
+> - CPU
- ]
+> - Memory
- },
+> - Node count
- ""interiors"": [
+> - Qdrant version
- {
- ""points"": [
- { ""lon"": -65.0, ""lat"": -65.0 },
+### Configure a backup
- { ""lon"": 0.0, ""lat"": -65.0 },
- { ""lon"": 0.0, ""lat"": 0.0 },
- { ""lon"": -65.0, ""lat"": 0.0 },
+After you have taken the prerequisite steps, you can configure a backup with the
- { ""lon"": -65.0, ""lat"": -65.0 }
+[Qdrant Cloud Dashboard](https://cloud.qdrant.io). To do so, take these steps:
- ]
- }
- ]
+1. Sign in to the dashboard
- }
+1. Select Clusters.
-}
+1. Select the cluster that you want to back up.
-```
+ ![Select a cluster](/documentation/cloud/select-cluster.png)
+1. Find and select the **Backups** tab.
+1. Now you can set up a backup schedule.
-```python
+ The **Days of Retention** is the number of days after which a backup snapshot is
-models.FieldCondition(
+ deleted.
- key=""location"",
+1. Alternatively, you can select **Backup now** to take an immediate snapshot.
- geo_polygon=models.GeoPolygon(
- exterior=models.GeoLineString(
- points=[
+![Configure a cluster backup](/documentation/cloud/backup-schedule.png)
- models.GeoPoint(
- lon=-70.0,
- lat=-70.0,
+### Restore a backup
- ),
- models.GeoPoint(
- lon=60.0,
+If you have a backup, it appears in the list of **Available Backups**. You can
- lat=-70.0,
+choose to restore or delete the backups of your choice.
- ),
- models.GeoPoint(
- lon=60.0,
+![Restore or delete a cluster backup](/documentation/cloud/restore-delete.png)
- lat=60.0,
- ),
- models.GeoPoint(
+
- lon=-70.0,
- lat=60.0,
- ),
+## Backups with a snapshot
- models.GeoPoint(
- lon=-70.0,
- lat=-70.0,
+Qdrant also offers a snapshot API which allows you to create a snapshot
- ),
+of a specific collection or your entire cluster. For more information, see our
- ]
+[snapshot documentation](/documentation/concepts/snapshots/).
- ),
- interiors=[
- models.GeoLineString(
+Here is how you can take a snapshot and recover a collection:
- points=[
- models.GeoPoint(
- lon=-65.0,
+1. Take a snapshot:
- lat=-65.0,
+ - For a single node cluster, call the snapshot endpoint on the exposed URL.
- ),
+ - For a multi-node cluster, create a snapshot on each node of the collection.
- models.GeoPoint(
+ Specifically, prepend `node-{num}-` to your cluster URL.
- lon=0.0,
+ Then call the [snapshot endpoint](../../concepts/snapshots/#create-snapshot) on the individual hosts. Start with node 0.
- lat=-65.0,
+ - In the response, you'll see the name of the snapshot.
- ),
+2. Delete and recreate the collection.
- models.GeoPoint(
+3. Recover the snapshot:
- lon=0.0,
+ - Call the [recover endpoint](../../concepts/snapshots/#recover-in-cluster-deployment). Set a location which points to the snapshot file (`file:///qdrant/snapshots/{collection_name}/{snapshot_file_name}`) for each host.
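+
+As a rough sketch, the REST calls for these steps could look like this (the cluster URL, API key, and the placeholders
+in curly braces need to be replaced with your actual values):
+
+```bash
+# Step 1: create a snapshot of the collection (repeat for each node of a
+# multi-node cluster, using the node-{num}- prefixed URL)
+curl -X POST 'https://xyz-example.eu-central.aws.cloud.qdrant.io:6333/collections/{collection_name}/snapshots' \
+    --header 'api-key: <your-api-key>'
+
+# Step 3: recover the collection from the snapshot file stored on the node
+curl -X PUT 'https://xyz-example.eu-central.aws.cloud.qdrant.io:6333/collections/{collection_name}/snapshots/recover' \
+    --header 'api-key: <your-api-key>' \
+    --header 'Content-Type: application/json' \
+    --data '{""location"": ""file:///qdrant/snapshots/{collection_name}/{snapshot_file_name}""}'
+```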
- lat=0.0,
- ),
- models.GeoPoint(
+## Backup considerations
- lon=-65.0,
- lat=0.0,
- ),
+Backups are incremental. For example, if you have two backups, backup number 2
- models.GeoPoint(
+contains only the data that changed since backup number 1. This reduces the
- lon=-65.0,
+total cost of your backups.
- lat=-65.0,
- ),
- ]
+You can create multiple backup schedules.
- )
- ],
- ),
+When you restore a snapshot, any changes made after the date of the snapshot
-)
+are lost.
+",documentation/cloud/backups.md
+"---
-```
+title: Configure Size & Capacity
+weight: 40
+aliases:
-```typescript
+ - capacity
-{
+---
- key: 'location',
- geo_polygon: {
- exterior: {
+# Configuring Qdrant Cloud Cluster Capacity and Size
- points: [
- {
- lon: -70.0,
+We have been asked a lot about the optimal cluster configuration for serving a given number of vectors.
- lat: -70.0
+The only right answer is “It depends”.
- },
- {
- lon: 60.0,
+It depends on a number of factors and options you can choose for your collections.
- lat: -70.0
- },
- {
+## Basic configuration
- lon: 60.0,
- lat: 60.0
- },
+If you need to keep all vectors in memory for maximum performance, a very rough formula for estimating the needed memory size looks like this:
- {
- lon: -70.0,
- lat: 60.0
+```text
- },
+memory_size = number_of_vectors * vector_dimension * 4 bytes * 1.5
- {
+```
- lon: -70.0,
- lat: -70.0
- }
+An extra 50% is needed for metadata (indexes, point versions, etc.) as well as for temporary segments constructed during the optimization process.
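+
+For example, 1 million vectors of 768 dimensions, all kept in memory, would need roughly:
+
+```text
+memory_size = 1 000 000 * 768 * 4 bytes * 1.5 ≈ 4.6 GB
+```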
- ]
- },
- interiors: {
+If you need to store payloads along with the vectors, it is recommended to keep them on disk, and only keep [indexed fields](../../concepts/indexing/#payload-index) in RAM.
- points: [
+Read more about the payload storage in the [Storage](../../concepts/storage/#payload-storage) section.
- {
- lon: -65.0,
- lat: -65.0
- },
- {
+## Storage focused configuration
- lon: 0.0,
- lat: -65.0
- },
+If your priority is to serve a large number of vectors with average search latency, it is recommended to configure [mmap storage](../../concepts/storage/#configuring-memmap-storage).
- {
+In this case, vectors will be stored on disk in memory-mapped files, and only the most frequently used vectors will be kept in RAM.
- lon: 0.0,
- lat: 0.0
- },
+The amount of available RAM will significantly affect the performance of the search.
- {
+As a rule of thumb, if you can keep only half of your vectors in RAM, expect the search latency to roughly double.
- lon: -65.0,
- lat: 0.0
- },
+The speed of disks is also important. [Let us know](mailto:cloud@qdrant.io) if you have special requirements for a high-volume search.
- {
- lon: -65.0,
- lat: -65.0
+## Sub-groups oriented configuration
- }
- ]
- }
- }
-}
+If your use case assumes that the vectors are split into multiple collections or sub-groups based on payload values,
-```
+it is recommended to configure memory-map storage.
+For example, you might serve search for multiple users, where each of them has a subset of vectors which they use independently.
-```rust
-Condition::geo_polygon(
+In this scenario, only the active subset of vectors will be kept in RAM, which allows
- ""location"",
+fast search for the most active and recent users.
- GeoPolygon {
- exterior: Some(GeoLineString {
- points: vec![
+In this case, you can estimate the required memory size as follows:
- GeoPoint {
- lon: -70.0,
- lat: -70.0,
+```text
- },
+memory_size = number_of_active_vectors * vector_dimension * 4 bytes * 1.5
- GeoPoint {
+```
- lon: 60.0,
- lat: -70.0,
- },
+## Disk space
- GeoPoint {
- lon: 60.0,
- lat: 60.0,
+Clusters that support vector search require significant disk space. If you're
- },
+running low on disk space in your cluster, you can use the UI at
- GeoPoint {
+[cloud.qdrant.io](https://cloud.qdrant.io/) to **Scale Up** your cluster.
- lon: -70.0,
- lat: 60.0,
- },
+
- lon: -70.0,
- lat: -70.0,
- },
+If you're running low on disk space, consider the following advantages of scaling it up:
- ],
- }),
- interiors: vec![GeoLineString {
+- Larger Datasets: Supports larger datasets. With vector search,
- points: vec![
+larger datasets can improve the relevance and quality of search results.
- GeoPoint {
+- Improved Indexing: Supports the use of indexing strategies such as
- lon: -65.0,
+HNSW (Hierarchical Navigable Small World).
- lat: -65.0,
+- Caching: Improves speed when you cache frequently accessed data on disk.
- },
+- Backups and Redundancy: Allows more frequent backups. Perhaps the most important advantage.
+",documentation/cloud/capacity-sizing.md
+"---
- GeoPoint {
+title: Scale Clusters
- lon: 0.0,
+weight: 50
- lat: -65.0,
+---
- },
- GeoPoint { lon: 0.0, lat: 0.0 },
- GeoPoint {
+# Scaling Qdrant Cloud Clusters
- lon: -65.0,
- lat: 0.0,
- },
+The amount of data is always growing, and at some point you might need to upgrade or downgrade the capacity of your cluster.
- GeoPoint {
- lon: -65.0,
- lat: -65.0,
+![Cluster Scaling](/documentation/cloud/cluster-scaling.png)
- },
- ],
- }],
+There are different options for how it can be done.
- },
-)
-```
+## Vertical scaling
-```java
+Vertical scaling is the process of changing the capacity of a cluster by adding or removing CPU, storage, and memory resources on each database node.
-import static io.qdrant.client.ConditionFactory.geoPolygon;
+You can start with a minimal cluster configuration of 2GB of RAM and resize it up to 64GB of RAM (or even more if desired) step by step as the amount of data in your application grows. If your cluster consists of several nodes, each node will need to be scaled to the same size. Please note that vertical cluster scaling requires a short downtime period to restart your cluster. To avoid downtime, you can make use of data replication, which can be configured on the collection level. Vertical scaling can be initiated on the cluster detail page via the ""Scale"" button.
-import io.qdrant.client.grpc.Points.GeoLineString;
-import io.qdrant.client.grpc.Points.GeoPoint;
+If you want to scale your cluster down, the new, smaller memory size must still be sufficient to store all the data in the cluster. Otherwise, the database cluster could run out of memory and crash. Therefore, the new memory size must be at least as large as the current memory usage of the database cluster, including a bit of buffer. Qdrant Cloud will automatically prevent you from scaling the database cluster down to a memory size that is too small.
-geoPolygon(
- ""location"",
+Note that it is not possible to scale down the disk space of the cluster, due to technical limitations of the underlying cloud providers.
- GeoLineString.newBuilder()
- .addAllPoints(
- List.of(
+## Horizontal scaling
- GeoPoint.newBuilder().setLon(-70.0).setLat(-70.0).build(),
- GeoPoint.newBuilder().setLon(60.0).setLat(-70.0).build(),
- GeoPoint.newBuilder().setLon(60.0).setLat(60.0).build(),
+Vertical scaling can be an effective way to improve the performance of a cluster and extend the capacity, but it has some limitations. The main disadvantage of vertical scaling is that there are limits to how much a cluster can be expanded. At some point, adding more resources to a cluster can become impractical or cost-prohibitive.
- GeoPoint.newBuilder().setLon(-70.0).setLat(60.0).build(),
- GeoPoint.newBuilder().setLon(-70.0).setLat(-70.0).build()))
- .build(),
+In such cases, horizontal scaling may be a more effective solution.
- List.of(
- GeoLineString.newBuilder()
- .addAllPoints(
+Horizontal scaling, also known as horizontal expansion, is the process of increasing the capacity of a cluster by adding more nodes and distributing the load and data among them. Horizontal scaling in Qdrant starts at the collection level: when creating a collection, you choose the number of shards to distribute it across. Please refer to the [sharding documentation](../../guides/distributed_deployment/#sharding) section for details.
- List.of(
- GeoPoint.newBuilder().setLon(-65.0).setLat(-65.0).build(),
- GeoPoint.newBuilder().setLon(0.0).setLat(-65.0).build(),
+After that, you can configure or change the number of Qdrant database nodes within a cluster during cluster creation, or on the cluster detail page via the ""Scale"" button.
- GeoPoint.newBuilder().setLon(0.0).setLat(0.0).build(),
- GeoPoint.newBuilder().setLon(-65.0).setLat(0.0).build(),
- GeoPoint.newBuilder().setLon(-65.0).setLat(-65.0).build()))
+Important: The number of shards is the maximum number of nodes you can add to your cluster. In the beginning, all the shards can reside on one node. As the amount of data grows, you can add nodes to your cluster and move shards to the dedicated nodes using the [cluster setup API](../../guides/distributed_deployment/#cluster-scaling).
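+
+For illustration, a sketch of creating such a collection with the Python client; the cluster URL, API key, collection name, and vector parameters are placeholders, and the values chosen here are only examples:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""https://your-cluster-url:6333"", api_key=""<your-api-key>"")  # placeholders
+
+client.create_collection(
+    collection_name=""{collection_name}"",
+    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+    shard_number=2,        # maximum number of nodes this collection can be spread across
+    replication_factor=2,  # keep a copy of each shard on two nodes
+)
+```
+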
- .build()));
-```
+Note that it is currently not possible to horizontally scale down a cluster in the Qdrant Cloud UI. If you require a horizontal scale down, please open a support ticket.
-```csharp
-using Qdrant.Client.Grpc;
+We will be glad to consult you on an optimal strategy for scaling.
-using static Qdrant.Client.Grpc.Conditions;
+[Let us know](mailto:cloud@qdrant.io) your needs, and we will decide on a proper solution together.
+",documentation/cloud/cluster-scaling.md
+"---
-GeoPolygon(
+title: Monitor Clusters
- field: ""location"",
+weight: 55
- exterior: new GeoLineString
+---
- {
- Points =
- {
+# Monitoring Qdrant Cloud Clusters
- new GeoPoint { Lat = -70.0, Lon = -70.0 },
- new GeoPoint { Lat = 60.0, Lon = -70.0 },
- new GeoPoint { Lat = 60.0, Lon = 60.0 },
+## Telemetry
- new GeoPoint { Lat = -70.0, Lon = 60.0 },
- new GeoPoint { Lat = -70.0, Lon = -70.0 }
- }
+Qdrant Cloud provides you with a set of metrics to monitor the health of your database cluster. You can access these metrics in the Qdrant Cloud Console in the **Metrics** and **Request** sections of the cluster details page.
- },
- interiors: [
- new()
+## Logs
- {
- Points =
- {
+Logs of the database cluster are available in the Qdrant Cloud Console in the **Logs** section of the cluster details page.
- new GeoPoint { Lat = -65.0, Lon = -65.0 },
- new GeoPoint { Lat = 0.0, Lon = -65.0 },
- new GeoPoint { Lat = 0.0, Lon = 0.0 },
+## Alerts
- new GeoPoint { Lat = -65.0, Lon = 0.0 },
- new GeoPoint { Lat = -65.0, Lon = -65.0 }
- }
+You will receive automatic alerts via email before your cluster reaches the currently configured memory or storage limits, including recommendations for scaling your cluster.
+",documentation/cloud/cluster-monitoring.md
+"---
- }
+title: Billing & Payments
- ]
+weight: 65
-);
+aliases:
-```
+ - aws-marketplace
+ - gcp-marketplace
+ - azure-marketplace
-A match is considered any point location inside or on the boundaries of the given polygon's exterior but not inside any interiors.
+---
-If several location values are stored for a point, then any of them matching will include that point as a candidate in the resultset.
+# Qdrant Cloud Billing & Payments
-These conditions can only be applied to payloads that match the [geo-data format](../payload/#geo).
+Qdrant database clusters in Qdrant Cloud are priced based on CPU, memory, and disk storage usage. To get a clearer idea of the pricing structure based on the number of vectors you want to store, please use our [Pricing Calculator](https://cloud.qdrant.io/calculator).
-### Values count
+## Billing
-In addition to the direct value comparison, it is also possible to filter by the amount of values.
+You can pay for your Qdrant Cloud database clusters either with a credit card or through an AWS, GCP, or Azure Marketplace subscription.
-For example, given the data:
+Your payment method is charged at the beginning of each month for the previous month's usage. There is no difference in pricing between the different payment methods.
-```json
-[
- { ""id"": 1, ""name"": ""product A"", ""comments"": [""Very good!"", ""Excellent""] },
+If you choose to pay through a marketplace, the Qdrant Cloud usage costs are added as usage units to your existing billing for your cloud provider services. A detailed breakdown of your usage is available in the Qdrant Cloud Console.
- { ""id"": 2, ""name"": ""product B"", ""comments"": [""meh"", ""expected more"", ""ok""] }
-]
-```
+Note: Even if you pay using a marketplace subscription, your database clusters will still be deployed into Qdrant-owned infrastructure. The setup and management of Qdrant database clusters will also still be done via the Qdrant Cloud Console UI.
-We can perform the search only among the items with more than two comments:
+If you wish to deploy Qdrant database clusters into your own environment from Qdrant Cloud, we recommend our [Hybrid Cloud](/documentation/hybrid-cloud/) solution.
-```json
+![Payment Options](/documentation/cloud/payment-options.png)
-{
- ""key"": ""comments"",
- ""values_count"": {
+### Credit Card
- ""gt"": 2
- }
-}
+Credit card payments are processed through Stripe. To set up a credit card, go to the Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/), select **Stripe** as the payment method, and enter your credit card details.
-```
+### AWS Marketplace
-```python
-models.FieldCondition(
- key=""comments"",
+Our [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-rtphb42tydtzg) listing streamlines access to Qdrant for users who rely on Amazon Web Services for hosting and application development.
- values_count=models.ValuesCount(gt=2),
-)
-```
+To subscribe:
-```typescript
+1. Go to Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/)
-{
+2. Select **AWS Marketplace** as the payment method. You will be redirected to the AWS Marketplace listing for Qdrant.
- key: 'comments',
+3. Click the bright orange button - **View purchase options**.
- values_count: {gt: 2}
+4. On the next screen, under Purchase, click **Subscribe**.
-}
+5. Up top, on the green banner, click **Set up your account**.
-```
+You will be redirected to the Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/). From there you can start to create Qdrant database clusters.
-```rust
-Condition::values_count(
- ""comments"",
+### GCP Marketplace
- ValuesCount {
- gt: Some(2),
- ..Default::default()
+Our [GCP Marketplace](https://console.cloud.google.com/marketplace/product/qdrant-public/qdrant) listing streamlines access to Qdrant for users who rely on the Google Cloud Platform for hosting and application development.
- },
-)
-```
+To subscribe:
-```java
+1. Go to Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/)
-import static io.qdrant.client.ConditionFactory.valuesCount;
+2. Select **GCP Marketplace** as the payment method. You will be redirected to the GCP Marketplace listing for Qdrant.
+3. Select **Subscribe**. (If you have already subscribed, select **Manage on Provider**.)
+4. On the next screen, choose options as required, and select **Subscribe**.
-import io.qdrant.client.grpc.Points.ValuesCount;
+5. In the pop-up window that appears, select **Sign up with Qdrant**.
-valuesCount(""comments"", ValuesCount.newBuilder().setGt(2).build());
+You will be redirected to the Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/). From there you can start to create Qdrant database clusters.
-```
+### Azure Marketplace
-```csharp
-using Qdrant.Client.Grpc;
-using static Qdrant.Client.Grpc.Conditions;
+Our [Azure Marketplace](https://portal.azure.com/#view/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/qdrantsolutionsgmbh1698769709989.qdrant-db/selectionMode~/false/resourceGroupId//resourceGroupLocation//dontDiscardJourney~/false/selectedMenuId/home/launchingContext~/%7B%22galleryItemId%22%3A%22qdrantsolutionsgmbh1698769709989.qdrant-dbqdrant_cloud_unit%22%2C%22source%22%3A%5B%22GalleryFeaturedMenuItemPart%22%2C%22VirtualizedTileDetails%22%5D%2C%22menuItemId%22%3A%22home%22%2C%22subMenuItemId%22%3A%22Search%20results%22%2C%22telemetryId%22%3A%221df5537b-8b29-4200-80ce-0cd38c7e0e56%22%7D/searchTelemetryId/6b44fb90-7b9c-4286-aad8-59f88f3cc2ff) listing streamlines access to Qdrant for users who rely on Microsoft Azure for hosting and application development.
-ValuesCount(""comments"", new ValuesCount { Gt = 2 });
+To subscribe:
-```
+1. Go to Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/)
-The result would be:
+2. Select **Azure Marketplace** as the payment method. You will be redirected to the Azure Marketplace listing for Qdrant.
+3. Select **Subscribe**.
+4. On the next screen, choose options as required, and select **Review + Subscribe**.
-```json
+5. After reviewing all settings, select **Subscribe**.
-[{ ""id"": 2, ""name"": ""product B"", ""comments"": [""meh"", ""expected more"", ""ok""] }]
+6. Once the SaaS subscription is created, select **Configure account now**.
-```
+
+You will be redirected to the Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/). From there you can start to create Qdrant database clusters.
+",documentation/cloud/pricing-payments.md
+"---
+title: Upgrade Clusters
-If stored value is not an array - it is assumed that the amount of values is equals to 1.
+weight: 55
+---
-### Is Empty
+# Upgrading Qdrant Cloud Clusters
-Sometimes it is also useful to filter out records that are missing some value.
-The `IsEmpty` condition may help you with that:
+As soon as a new Qdrant version is available, Qdrant Cloud will show you an upgrade notification in the Cluster list and on the Cluster details page.
-```json
+To upgrade to a new version, go to the Cluster details page, choose the new version from the version dropdown and click **Upgrade**.
-{
- ""is_empty"": {
- ""key"": ""reports""
+![Cluster Upgrades](/documentation/cloud/cluster-upgrades.png)
- }
-}
-```
+If you have a multi-node cluster and if your collections have a replication factor of at least **2**, the upgrade process will be zero-downtime and done in a rolling fashion. You will be able to use your database cluster normally.
-```python
+If you have a single-node cluster or a collection with a replication factor of **1**, the upgrade process will require a short downtime period to restart your cluster with the new version.
+",documentation/cloud/cluster-upgrades.md
+"---
-models.IsEmptyCondition(
+title: Managed Cloud
- is_empty=models.PayloadField(key=""reports""),
+weight: 8
-)
+aliases:
-```
+ - /documentation/overview/qdrant-alternatives/documentation/cloud/
+---
-```typescript
-{
+# About Qdrant Managed Cloud
- is_empty: {
- key: ""reports"";
- }
+Qdrant Managed Cloud is our SaaS (software-as-a-service) solution, providing managed Qdrant database clusters on the cloud. We provide you with the same fast and reliable similarity search engine, but without the need to maintain your own infrastructure.
-}
-```
+Transitioning to the Managed Cloud version of Qdrant does not change how you interact with the service. All you need is a [Qdrant Cloud account](https://qdrant.to/cloud/) and an [API key](/documentation/cloud/authentication/) for each request.
-```rust
-Condition::is_empty(""reports"")
+You can also attach your own infrastructure as a Hybrid Cloud Environment. For details, see our [Hybrid Cloud](/documentation/hybrid-cloud/) documentation.
-```
+## Cluster configuration
-```java
-import static io.qdrant.client.ConditionFactory.isEmpty;
+Each database cluster comes pre-configured with the following tools, features, and support services:
-isEmpty(""reports"");
-```
+- Allows the creation of highly available clusters with automatic failover.
+- Supports upgrades to later versions of Qdrant as they are released.
+- Upgrades are zero-downtime on highly available clusters.
-```csharp
-
-using Qdrant.Client.Grpc;
-
-using static Qdrant.Client.Grpc.Conditions;
-
-
-
-IsEmpty(""reports"");
-
-```
-
+- Includes monitoring and logging to observe the health of each cluster.
+- Horizontally and vertically scalable.
-This condition will match all records where the field `reports` either does not exist, or has `null` or `[]` value.
+- Available natively on AWS, GCP, and Azure.
+- Available on your own infrastructure and other providers if you use the Hybrid Cloud.
-
+## Getting started with Qdrant Cloud
-### Is Null
+To get started with Qdrant Cloud:
-It is not possible to test for `NULL` values with the match condition.
-We have to use `IsNull` condition instead:
+1. [**Set up an account**](/documentation/cloud/qdrant-cloud-setup/)
+2. [**Create a Qdrant cluster**](/documentation/cloud/create-cluster/)
+",documentation/cloud/_index.md
+"---
+title: Storage
-```json
+weight: 80
-{
+aliases:
- ""is_null"": {
+ - ../storage
- ""key"": ""reports""
+---
- }
-}
-```
+# Storage
-```python
+All data within one collection is divided into segments.
-models.IsNullCondition(
+Each segment has its independent vector and payload storage as well as indexes.
- is_null=models.PayloadField(key=""reports""),
-)
-```
+Data stored in segments usually does not overlap.
+However, storing the same point in different segments will not cause problems since the search contains a deduplication mechanism.
-```typescript
-{
+The segments consist of vector and payload storages, vector and payload [indexes](../indexing/), and an id mapper, which stores the relationship between internal and external ids.
- is_null: {
- key: ""reports"";
- }
+A segment can be `appendable` or `non-appendable` depending on the type of storage and index used.
-}
+You can freely add, delete and query data in the `appendable` segment.
-```
+With a `non-appendable` segment, you can only read and delete data.
-```rust
+The configuration of the segments in the collection can be different and independent of one another, but at least one `appendable` segment must be present in a collection.
-Condition::is_null(""reports"")
-```
+## Vector storage
-```java
-import static io.qdrant.client.ConditionFactory.isNull;
+Depending on the requirements of the application, Qdrant can use one of the data storage options.
+The choice has to be made between the search speed and the size of the RAM used.
-isNull(""reports"");
-```
+**In-memory storage** - Stores all vectors in RAM, has the highest speed since disk access is required only for persistence.
-```csharp
+**Memmap storage** - Creates a virtual address space associated with the file on disk. [Wiki](https://en.wikipedia.org/wiki/Memory-mapped_file).
-using Qdrant.Client.Grpc;
+Mmapped files are not directly loaded into RAM. Instead, they use page cache to access the contents of the file.
-using static Qdrant.Client.Grpc.Conditions;
+This scheme allows flexible use of available memory. With sufficient RAM, it is almost as fast as in-memory storage.
-IsNull(""reports"");
-```
+### Configuring Memmap storage
-This condition will match all records where the field `reports` exists and has `NULL` value.
+There are two ways to configure the usage of memmap (also known as on-disk) storage:
+- Set up `on_disk` option for the vectors in the collection create API:
-### Has id
+*Available as of v1.2.0*
-This type of query is not related to payload, but can be very useful in some situations.
-For example, the user could mark some specific search results as irrelevant, or we want to search only among the specified points.
```http
-POST /collections/{collection_name}/points/scroll
+PUT /collections/{collection_name}
{
- ""filter"": {
+ ""vectors"": {
- ""must"": [
+ ""size"": 768,
- { ""has_id"": [1,3,5,7,9,11] }
+ ""distance"": ""Cosine"",
- ]
+ ""on_disk"": true
}
- ...
-
}
```
@@ -37173,17 +36248,21 @@ POST /collections/{collection_name}/points/scroll
```python
-client.scroll(
+from qdrant_client import QdrantClient, models
- collection_name=""{collection_name}"",
- scroll_filter=models.Filter(
- must=[
+client = QdrantClient(url=""http://localhost:6333"")
- models.HasIdCondition(has_id=[1, 3, 5, 7, 9, 11]),
- ],
+
+client.create_collection(
+ collection_name=""{collection_name}"",
+ vectors_config=models.VectorParams(
+ size=768, distance=models.Distance.COSINE, on_disk=True
),
@@ -37195,19 +36274,23 @@ client.scroll(
```typescript
-client.scroll(""{collection_name}"", {
+import { QdrantClient } from ""@qdrant/js-client-rest"";
- filter: {
- must: [
- {
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
- has_id: [1, 3, 5, 7, 9, 11],
- },
- ],
+client.createCollection(""{collection_name}"", {
+ vectors: {
+ size: 768,
+ distance: ""Cosine"",
+ on_disk: true,
},
@@ -37219,21 +36302,25 @@ client.scroll(""{collection_name}"", {
```rust
-use qdrant_client::qdrant::{Condition, Filter, ScrollPoints};
+use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder};
+use qdrant_client::Qdrant;
-client
- .scroll(&ScrollPoints {
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- collection_name: ""{collection_name}"".to_string(),
- filter: Some(Filter::must([Condition::has_id([1, 3, 5, 7, 9, 11])])),
- ..Default::default()
+client
- })
+ .create_collection(
+ CreateCollectionBuilder::new(""{collection_name}"")
+ .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine).on_disk(true)),
+ )
.await?;
@@ -37243,37 +36330,35 @@ client
```java
-import java.util.List;
-
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
-import static io.qdrant.client.ConditionFactory.hasId;
+import io.qdrant.client.grpc.Collections.Distance;
-import static io.qdrant.client.PointIdFactory.id;
+import io.qdrant.client.grpc.Collections.VectorParams;
-import io.qdrant.client.grpc.Points.Filter;
+QdrantClient client =
-import io.qdrant.client.grpc.Points.ScrollPoints;
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
client
- .scrollAsync(
-
- ScrollPoints.newBuilder()
+ .createCollectionAsync(
- .setCollectionName(""{collection_name}"")
+ ""{collection_name}"",
- .setFilter(
+ VectorParams.newBuilder()
- Filter.newBuilder()
+ .setSize(768)
- .addMust(hasId(List.of(id(1), id(3), id(5), id(7), id(9), id(11))))
+ .setDistance(Distance.Cosine)
- .build())
+ .setOnDisk(true)
.build())
@@ -37287,7 +36372,7 @@ client
using Qdrant.Client;
-using static Qdrant.Client.Grpc.Conditions;
+using Qdrant.Client.Grpc;
@@ -37295,363 +36380,385 @@ var client = new QdrantClient(""localhost"", 6334);
-await client.ScrollAsync(collectionName: ""{collection_name}"", filter: HasId([1, 3, 5, 7, 9, 11]));
+await client.CreateCollectionAsync(
-```
+ ""{collection_name}"",
+ new VectorParams
+ {
-Filtered points would be:
+ Size = 768,
+ Distance = Distance.Cosine,
+ OnDisk = true
-```json
+ }
-[
+);
- { ""id"": 1, ""city"": ""London"", ""color"": ""green"" },
+```
- { ""id"": 3, ""city"": ""London"", ""color"": ""blue"" },
- { ""id"": 5, ""city"": ""Moscow"", ""color"": ""green"" }
-]
+```go
-```
-",documentation/concepts/filtering.md
-"---
+import (
-title: Concepts
+ ""context""
-weight: 21
-# If the index.md file is empty, the link to the section will be hidden from the sidebar
----
+ ""github.com/qdrant/go-client/qdrant""
+)
-# Concepts
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
-Think of these concepts as a glossary. Each of these concepts include a link to
+ Port: 6334,
-detailed information, usually with examples. If you're new to AI, these concepts
+})
-can help you learn more about AI and the Qdrant approach.
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
-## Collections
+ CollectionName: ""{collection_name}"",
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
+ Size: 768,
-[Collections](/documentation/concepts/collections/) define a named set of points that you can use for your search.
+ Distance: qdrant.Distance_Cosine,
+ OnDisk: qdrant.PtrOf(true),
+ }),
-## Payload
+})
+```
-A [Payload](/documentation/concepts/payload/) describes information that you can store with vectors.
+This will create a collection with all vectors immediately stored in memmap storage.
+This is the recommended way if your Qdrant instance operates with fast disks and you are working with large collections.
-## Points
-[Points](/documentation/concepts/points/) are a record which consists of a vector and an optional payload.
+- Set up `memmap_threshold_kb` option (deprecated). This option will set the threshold after which the segment will be converted to memmap storage.
-## Search
+There are two ways to do this:
-[Search](/documentation/concepts/search/) describes _similarity search_, which set up related objects close to each other in vector space.
+1. You can set the threshold globally in the [configuration file](../../guides/configuration/). The parameter is called `memmap_threshold_kb`.
+2. You can set the threshold for each collection separately during [creation](../collections/#create-collection) or [update](../collections/#update-collection-parameters).
-## Explore
+```http
-[Explore](/documentation/concepts/explore/) includes several APIs for exploring data in your collections.
+PUT /collections/{collection_name}
+{
+ ""vectors"": {
-## Filtering
+ ""size"": 768,
+ ""distance"": ""Cosine""
+ },
-[Filtering](/documentation/concepts/filtering/) defines various database-style clauses, conditions, and more.
+ ""optimizers_config"": {
+ ""memmap_threshold"": 20000
+ }
-## Optimizer
+}
+```
-[Optimizer](/documentation/concepts/optimizer/) describes options to rebuild
-database structures for faster search. They include a vacuum, a merge, and an
+```python
-indexing optimizer.
+from qdrant_client import QdrantClient, models
-## Storage
+client = QdrantClient(url=""http://localhost:6333"")
-[Storage](/documentation/concepts/storage/) describes the configuration of storage in segments, which include indexes and an ID mapper.
+client.create_collection(
+ collection_name=""{collection_name}"",
+ vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
-## Indexing
+ optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
+)
+```
-[Indexing](/documentation/concepts/indexing/) lists and describes available indexes. They include payload, vector, sparse vector, and a filterable index.
+```typescript
-## Snapshots
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-[Snapshots](/documentation/concepts/snapshots/) describe the backup/restore process (and more) for each node at specific times.
-",documentation/concepts/_index.md
-"---
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-title: HowTos
-weight: 100
-draft: true
+client.createCollection(""{collection_name}"", {
----
+ vectors: {
+ size: 768,
+ distance: ""Cosine"",
-
+ },
-
+ optimizers_config: {
+ memmap_threshold: 20000,
-",documentation/tutorials/how-to.md
-"---
+ },
-title: ""Inference with Mighty""
+});
-short_description: ""Mighty offers a speedy scalable embedding, a perfect fit for the speedy scalable Qdrant search. Let's combine them!""
+```
-description: ""We combine Mighty and Qdrant to create a semantic search service in Rust with just a few lines of code.""
-weight: 17
-author: Andre Bogus
+```rust
-author_link: https://llogiq.github.io
+use qdrant_client::qdrant::{
-date: 2023-06-01T11:24:20+01:00
+ CreateCollectionBuilder, Distance, OptimizersConfigDiffBuilder, VectorParamsBuilder,
-keywords:
+};
- - vector search
+use qdrant_client::Qdrant;
- - embeddings
- - mighty
- - rust
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- - semantic search
----
+client
+ .create_collection(
-# Semantic Search with Mighty and Qdrant
+ CreateCollectionBuilder::new(""{collection_name}"")
+ .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
+ .optimizers_config(OptimizersConfigDiffBuilder::default().memmap_threshold(20000)),
-Much like Qdrant, the [Mighty](https://max.io/) inference server is written in Rust and promises to offer low latency and high scalability. This brief demo combines Mighty and Qdrant into a simple semantic search service that is efficient, affordable and easy to setup. We will use [Rust](https://rust-lang.org) and our [qdrant\_client crate](https://docs.rs/qdrant_client) for this integration.
+ )
+ .await?;
+```
-## Initial setup
+```java
-For Mighty, start up a [docker container](https://hub.docker.com/layers/maxdotio/mighty-sentence-transformers/0.9.9/images/sha256-0d92a89fbdc2c211d927f193c2d0d34470ecd963e8179798d8d391a4053f6caf?context=explore) with an open port 5050. Just loading the port in a window shows the following:
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Collections.CreateCollection;
-```json
+import io.qdrant.client.grpc.Collections.Distance;
-{
+import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
- ""name"": ""sentence-transformers/all-MiniLM-L6-v2"",
+import io.qdrant.client.grpc.Collections.VectorParams;
- ""architectures"": [
+import io.qdrant.client.grpc.Collections.VectorsConfig;
- ""BertModel""
- ],
- ""model_type"": ""bert"",
+QdrantClient client =
- ""max_position_embeddings"": 512,
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- ""labels"": null,
- ""named_entities"": null,
- ""image_size"": null,
+client
- ""source"": ""https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2""
+ .createCollectionAsync(
-}
+ CreateCollection.newBuilder()
-```
+ .setCollectionName(""{collection_name}"")
+ .setVectorsConfig(
+ VectorsConfig.newBuilder()
+ .setParams(
-Note that this uses the `MiniLM-L6-v2` model from Hugging Face. As per their website, the model ""maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search"". The distance measure to use is cosine similarity.
+ VectorParams.newBuilder()
+ .setSize(768)
+ .setDistance(Distance.Cosine)
-Verify that mighty works by calling `curl https://:5050/sentence-transformer?q=hello+mighty`. This will give you a result like (formatted via `jq`):
+ .build())
+ .build())
+ .setOptimizersConfig(
-```json
+ OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build())
-{
+ .build())
- ""outputs"": [
+ .get();
- [
+```
- -0.05019686743617058,
- 0.051746174693107605,
- 0.048117730766534805,
+```csharp
- ... (381 values skipped)
+using Qdrant.Client;
- ]
+using Qdrant.Client.Grpc;
- ],
- ""shape"": [
- 1,
+var client = new QdrantClient(""localhost"", 6334);
- 384
- ],
- ""texts"": [
+await client.CreateCollectionAsync(
- ""Hello mighty""
+ collectionName: ""{collection_name}"",
- ],
+ vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
- ""took"": 77
+ optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 }
-}
+);
```
-For Qdrant, follow our [cloud documentation](../../cloud/cloud-quick-start/) to spin up a [free tier](https://cloud.qdrant.io/). Make sure to retrieve an API key.
+```go
+import (
+ ""context""
-## Implement model API
+ ""github.com/qdrant/go-client/qdrant""
-For mighty, you will need a way to emit HTTP(S) requests. This version uses the [reqwest](https://docs.rs/reqwest) crate, so add the following to your `Cargo.toml`'s dependencies section:
+)
-```toml
+client, err := qdrant.NewClient(&qdrant.Config{
-[dependencies]
+ Host: ""localhost"",
-reqwest = { version = ""0.11.18"", default-features = false, features = [""json"", ""rustls-tls""] }
+ Port: 6334,
-```
+})
-Mighty offers a variety of model APIs which will download and cache the model on first use. For semantic search, use the `sentence-transformer` API (as in the above `curl` command). The Rust code to make the call is:
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
+ CollectionName: ""{collection_name}"",
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
-```rust
+ Size: 768,
-use anyhow::anyhow;
+ Distance: qdrant.Distance_Cosine,
-use reqwest::Client;
+ }),
-use serde::Deserialize;
+ OptimizersConfig: &qdrant.OptimizersConfigDiff{
-use serde_json::Value as JsonValue;
+  MemmapThreshold: qdrant.PtrOf(uint64(20000)),
+ },
+})
-#[derive(Deserialize)]
+```
-struct EmbeddingsResponse {
- pub outputs: Vec>,
-}
+The rule of thumb for setting the memmap threshold parameter is simple (a sketch of the second case follows the list):
-pub async fn get_mighty_embedding(
+- If you have a balanced use scenario, set the memmap threshold the same as `indexing_threshold` (default is 20000). In this case, the optimizer will not make any extra runs and will optimize all thresholds at once.
- client: &Client,
+- If you have a high write load and low RAM, set the memmap threshold lower than `indexing_threshold`, e.g. to 10000. In this case, the optimizer will convert the segments to memmap storage first and will only apply indexing after that.
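+
+A sketch of the second case (high write load, low RAM) using the Python client; the threshold values are examples only:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+client.create_collection(
+    collection_name=""{collection_name}"",
+    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+    optimizers_config=models.OptimizersConfigDiff(
+        memmap_threshold=10000,    # convert segments to memmap storage first
+        indexing_threshold=20000,  # build the index only after that
+    ),
+)
+```
+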
- url: &str,
- text: &str
-) -> anyhow::Result> {
+In addition, you can use memmap storage not only for vectors, but also for the HNSW index.
- let response = client.get(url).query(&[(""text"", text)]).send().await?;
+To enable this, you need to set the `hnsw_config.on_disk` parameter to `true` during collection [creation](../collections/#create-a-collection) or [updating](../collections/#update-collection-parameters).
- if !response.status().is_success() {
+```http
- return Err(anyhow!(
+PUT /collections/{collection_name}
- ""Mighty API returned status code {}"",
+{
- response.status()
+ ""vectors"": {
- ));
+ ""size"": 768,
- }
+ ""distance"": ""Cosine""
+ },
+ ""optimizers_config"": {
+ ""memmap_threshold"": 20000
- let embeddings: EmbeddingsResponse = response.json().await?;
+ },
- // ignore multiple embeddings at the moment
+ ""hnsw_config"": {
- embeddings.get(0).ok_or_else(|| anyhow!(""mighty returned empty embedding""))
+ ""on_disk"": true
+ }
}
@@ -37659,4959 +36766,5302 @@ pub async fn get_mighty_embedding(
-Note that mighty can return multiple embeddings (if the input is too long to fit the model, it is automatically split).
+```python
+from qdrant_client import QdrantClient, models
-## Create embeddings and run a query
+client = QdrantClient(url=""http://localhost:6333"")
-Use this code to create embeddings both for insertion and search. On the Qdrant side, take the embedding and run a query:
+client.create_collection(
+ collection_name=""{collection_name}"",
-```rust
+ vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
-use anyhow::anyhow;
+ optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
-use qdrant_client::prelude::*;
+ hnsw_config=models.HnswConfigDiff(on_disk=True),
+)
+```
-pub const SEARCH_LIMIT: u64 = 5;
-const COLLECTION_NAME: &str = ""mighty"";
+```typescript
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-pub async fn qdrant_search_embeddings(
- qdrant_client: &QdrantClient,
- vector: Vec,
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-) -> anyhow::Result> {
- qdrant_client
- .search_points(&SearchPoints {
+client.createCollection(""{collection_name}"", {
- collection_name: COLLECTION_NAME.to_string(),
+ vectors: {
- vector,
+ size: 768,
- limit: SEARCH_LIMIT,
+ distance: ""Cosine"",
- with_payload: Some(true.into()),
+ },
- ..Default::default()
+ optimizers_config: {
- })
+ memmap_threshold: 20000,
- .await
+ },
- .map_err(|err| anyhow!(""Failed to search Qdrant: {}"", err))
+ hnsw_config: {
-}
+ on_disk: true,
-```
+ },
+});
+```
-You can convert the [`ScoredPoint`](https://docs.rs/qdrant-client/latest/qdrant_client/qdrant/struct.ScoredPoint.html)s to fit your desired output format.",documentation/tutorials/mighty.md
-"---
-title: Bulk Upload Vectors
-weight: 13
+```rust
----
+use qdrant_client::qdrant::{
+ CreateCollectionBuilder, Distance, HnswConfigDiffBuilder, OptimizersConfigDiffBuilder,
+ VectorParamsBuilder,
-# Bulk upload a large number of vectors
+};
+use qdrant_client::Qdrant;
-Uploading a large-scale dataset fast might be a challenge, but Qdrant has a few tricks to help you with that.
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-The first important detail about data uploading is that the bottleneck is usually located on the client side, not on the server side.
-This means that if you are uploading a large dataset, you should prefer a high-performance client library.
+client
+ .create_collection(
+ CreateCollectionBuilder::new(""{collection_name}"")
-We recommend using our [Rust client library](https://github.com/qdrant/rust-client) for this purpose, as it is the fastest client library available for Qdrant.
+ .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine))
+ .optimizers_config(OptimizersConfigDiffBuilder::default().memmap_threshold(20000))
+ .hnsw_config(HnswConfigDiffBuilder::default().on_disk(true)),
-If you are not using Rust, you might want to consider parallelizing your upload process.
+ )
+ .await?;
+```
-## Disable indexing during upload
+```java
-In case you are doing an initial upload of a large dataset, you might want to disable indexing during upload.
+import io.qdrant.client.QdrantClient;
-It will enable to avoid unnecessary indexing of vectors, which will be overwritten by the next batch.
+import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Collections.CreateCollection;
+import io.qdrant.client.grpc.Collections.Distance;
-To disable indexing during upload, set `indexing_threshold` to `0`:
+import io.qdrant.client.grpc.Collections.HnswConfigDiff;
+import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
+import io.qdrant.client.grpc.Collections.VectorParams;
-```http
+import io.qdrant.client.grpc.Collections.VectorsConfig;
-PUT /collections/{collection_name}
-{
- ""vectors"": {
+QdrantClient client =
- ""size"": 768,
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- ""distance"": ""Cosine""
- },
- ""optimizers_config"": {
+client
- ""indexing_threshold"": 0
+ .createCollectionAsync(
- }
+ CreateCollection.newBuilder()
-}
+ .setCollectionName(""{collection_name}"")
-```
+ .setVectorsConfig(
+ VectorsConfig.newBuilder()
+ .setParams(
-```python
-
-from qdrant_client import QdrantClient, models
-
+ VectorParams.newBuilder()
+ .setSize(768)
-client = QdrantClient(""localhost"", port=6333)
+ .setDistance(Distance.Cosine)
+ .build())
+ .build())
-client.create_collection(
+ .setOptimizersConfig(
- collection_name=""{collection_name}"",
+ OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build())
- vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+ .setHnswConfig(HnswConfigDiff.newBuilder().setOnDisk(true).build())
- optimizers_config=models.OptimizersConfigDiff(
+ .build())
- indexing_threshold=0,
+ .get();
- ),
+```
-)
-```
+```csharp
+using Qdrant.Client;
-```typescript
+using Qdrant.Client.Grpc;
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+var client = new QdrantClient(""localhost"", 6334);
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+await client.CreateCollectionAsync(
-client.createCollection(""{collection_name}"", {
+ collectionName: ""{collection_name}"",
- vectors: {
+ vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine },
- size: 768,
+ optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 },
- distance: ""Cosine"",
+ hnswConfig: new HnswConfigDiff { OnDisk = true }
- },
+);
- optimizers_config: {
+```
- indexing_threshold: 0,
- },
-});
+```go
-```
+import (
+ ""context""
-After upload is done, you can enable indexing by setting `indexing_threshold` to a desired value (default is 20000):
+ ""github.com/qdrant/go-client/qdrant""
+)
-```http
-PATCH /collections/{collection_name}
-{
+client, err := qdrant.NewClient(&qdrant.Config{
- ""optimizers_config"": {
+ Host: ""localhost"",
- ""indexing_threshold"": 20000
+ Port: 6334,
- }
+})
-}
-```
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
+ CollectionName: ""{collection_name}"",
-```python
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
-from qdrant_client import QdrantClient, models
+ Size: 768,
+ Distance: qdrant.Distance_Cosine,
+ }),
-client = QdrantClient(""localhost"", port=6333)
+ OptimizersConfig: &qdrant.OptimizersConfigDiff{
+  MemmapThreshold: qdrant.PtrOf(uint64(20000)),
+ },
-client.update_collection(
+ HnswConfig: &qdrant.HnswConfigDiff{
- collection_name=""{collection_name}"",
+ OnDisk: qdrant.PtrOf(true),
- optimizer_config=models.OptimizersConfigDiff(indexing_threshold=20000),
+ },
-)
+})
```
-```typescript
+## Payload storage
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+Qdrant supports two types of payload storages: InMemory and OnDisk.
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+InMemory payload storage is organized in the same way as in-memory vectors.
-client.updateCollection(""{collection_name}"", {
+The payload data is loaded into RAM at service startup while disk and [RocksDB](https://rocksdb.org/) are used for persistence only.
- optimizers_config: {
+This type of storage works quite fast, but it may require a lot of space to keep all the data in RAM, especially if the payload has large values attached - abstracts of text or even images.
- indexing_threshold: 20000,
- },
-});
+In the case of large payload values, it might be better to use OnDisk payload storage.
-```
+This type of storage will read and write payload directly to RocksDB, so it won't require any significant amount of RAM to store the payload.
+The downside, however, is the access latency.
+If you need to query vectors with payload-based conditions, checking values stored on disk might take too much time.
-## Upload directly to disk
+In this scenario, we recommend creating a payload index for each field used in filtering conditions to avoid disk access.
+Once you create the field index, Qdrant will preserve all values of the indexed field in RAM regardless of the payload storage type.
-When the vectors you upload do not all fit in RAM, you likely want to use
-[memmap](../../concepts/storage/#configuring-memmap-storage)
+You can specify the desired type of payload storage in the [configuration file](../../guides/configuration/) or with the collection parameter `on_disk_payload` during [creation](../collections/#create-collection) of the collection.
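+
+A sketch of the collection-parameter approach with the Python client, combined with a payload index for a hypothetical `city` field used in filters:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Store payload values on disk for this collection
+client.create_collection(
+    collection_name=""{collection_name}"",
+    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+    on_disk_payload=True,
+)
+
+# Index the field used in filtering conditions so its values stay in RAM
+client.create_payload_index(
+    collection_name=""{collection_name}"",
+    field_name=""city"",
+    field_schema=models.PayloadSchemaType.KEYWORD,
+)
+```
+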
-support.
+## Versioning
-During collection
-[creation](../../concepts/collections/#create-collection),
-memmaps may be enabled on a per-vector basis using the `on_disk` parameter. This
+To ensure data integrity, Qdrant performs all data changes in two stages.
-will store vector data directly on disk at all times. It is suitable for
+In the first step, the data is written to the Write-ahead-log (WAL), which orders all operations and assigns them a sequential number.
-ingesting a large amount of data, essential for the billion scale benchmark.
+Once a change has been added to the WAL, it will not be lost even if a power loss occurs.
-Using `memmap_threshold_kb` is not recommended in this case. It would require
+Then the changes go into the segments.
-the [optimizer](../../concepts/optimizer/) to constantly
+Each segment stores the last version of the change applied to it as well as the version of each individual point.
-transform in-memory segments into memmap segments on disk. This process is
+If the new change has a sequential number less than the current version of the point, the updater will ignore the change.
-slower, and the optimizer can be a bottleneck when ingesting a large amount of
+This mechanism allows Qdrant to safely and efficiently restore the storage from the WAL in case of an abnormal shutdown.
+",documentation/concepts/storage.md
+"---
-data.
+title: Explore
+weight: 55
+aliases:
-Read more about this in
+ - ../explore
-[Configuring Memmap Storage](../../concepts/storage/#configuring-memmap-storage).
+---
-## Parallel upload into multiple shards
+# Explore the data
-In Qdrant, each collection is split into shards. Each shard has a separate Write-Ahead-Log (WAL), which is responsible for ordering operations.
+After mastering the concepts in [search](../search/), you can start exploring your data in other ways. Qdrant provides a stack of APIs that allow you to find similar vectors in a different fashion, as well as to find the most dissimilar ones. These are useful tools for recommendation systems, data exploration, and data cleaning.
-By creating multiple shards, you can parallelize upload of a large dataset. From 2 to 4 shards per one machine is a reasonable number.
+## Recommendation API
-```http
-PUT /collections/{collection_name}
-{
+In addition to the regular search, Qdrant also allows you to search based on multiple positive and negative examples. The API is called ***recommend***, and the examples can be point IDs, so that you can leverage the already encoded objects; and, as of v1.6, you can also use raw vectors as input, so that you can create your vectors on the fly without uploading them as points.
- ""vectors"": {
- ""size"": 768,
- ""distance"": ""Cosine""
+REST API - API Schema definition is available [here](https://api.qdrant.tech/api-reference/search/recommend-points)
- },
- ""shard_number"": 2
-}
+```http
-```
+POST /collections/{collection_name}/points/query
+{
+ ""query"": {
-```python
+ ""recommend"": {
-from qdrant_client import QdrantClient, models
+ ""positive"": [100, 231],
+ ""negative"": [718, [0.2, 0.3, 0.4, 0.5]],
+ ""strategy"": ""average_vector""
-client = QdrantClient(""localhost"", port=6333)
+ }
+ },
+ ""filter"": {
-client.create_collection(
+ ""must"": [
- collection_name=""{collection_name}"",
+ {
- vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+ ""key"": ""city"",
- shard_number=2,
+ ""match"": {
-)
+ ""value"": ""London""
-```
+ }
+ }
+ ]
-```typescript
+ }
-import { QdrantClient } from ""@qdrant/js-client-rest"";
+}
+```
-const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+```python
+from qdrant_client import QdrantClient, models
-client.createCollection(""{collection_name}"", {
- vectors: {
- size: 768,
+client = QdrantClient(url=""http://localhost:6333"")
- distance: ""Cosine"",
- },
- shard_number: 2,
+client.query_points(
-});
+ collection_name=""{collection_name}"",
-```
-",documentation/tutorials/bulk-upload.md
-"---
+ query=models.RecommendQuery(
-title: Aleph Alpha Search
+ recommend=models.RecommendInput(
-weight: 16
+ positive=[100, 231],
----
+ negative=[718, [0.2, 0.3, 0.4, 0.5]],
+ strategy=models.RecommendStrategy.AVERAGE_VECTOR,
+ )
-# Multimodal Semantic Search with Aleph Alpha
+ ),
+ query_filter=models.Filter(
+ must=[
-| Time: 30 min | Level: Beginner | | |
+ models.FieldCondition(
-| --- | ----------- | ----------- |----------- |
+ key=""city"",
+ match=models.MatchValue(
+ value=""London"",
-This tutorial shows you how to run a proper multimodal semantic search system with a few lines of code, without the need to annotate the data or train your networks.
+ ),
+ )
+ ]
-In most cases, semantic search is limited to homogenous data types for both documents and queries (text-text, image-image, audio-audio, etc.). With the recent growth of multimodal architectures, it is now possible to encode different data types into the same latent space. That opens up some great possibilities, as you can finally explore non-textual data, for example visual, with text queries.
+ ),
+ limit=3,
+)
-In the past, this would require labelling every image with a description of what it presents. Right now, you can rely on vector embeddings, which can represent all
+```
-the inputs in the same space.
+```typescript
-*Figure 1: Two examples of text-image pairs presenting a similar object, encoded by a multimodal network into the same
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-2D latent space. Both texts are examples of English [pangrams](https://en.wikipedia.org/wiki/Pangram).
-https://deepai.org generated the images with pangrams used as input prompts.*
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-![](/docs/integrations/aleph-alpha/2d_text_image_embeddings.png)
+client.query(""{collection_name}"", {
+ query: {
+ recommend: {
+ positive: [100, 231],
-## Sample dataset
+ negative: [718, [0.2, 0.3, 0.4, 0.5]],
+ strategy: ""average_vector""
+ }
-You will be using [COCO](https://cocodataset.org/), a large-scale object detection, segmentation, and captioning dataset. It provides
+ },
-various splits, 330,000 images in total. For demonstration purposes, this tutorials uses the
+ filter: {
-[2017 validation split](http://images.cocodataset.org/zips/train2017.zip) that contains 5000 images from different
+ must: [
-categories with total size about 19GB.
+ {
-```terminal
+ key: ""city"",
-wget http://images.cocodataset.org/zips/train2017.zip
+ match: {
-```
+ value: ""London"",
+ },
+ },
-## Prerequisites
+ ],
+ },
+ limit: 3
-There is no need to curate your datasets and train the models. [Aleph Alpha](https://www.aleph-alpha.com/), already has multimodality and multilinguality already built-in. There is an [official Python client](https://github.com/Aleph-Alpha/aleph-alpha-client) that simplifies the integration.
+});
+```
-In order to enable the search capabilities, you need to build the search index to query on. For this example,
-you are going to vectorize the images and store their embeddings along with the filenames. You can then return the most
+```rust
-similar files for given query.
+use qdrant_client::qdrant::{
+ Condition, Filter, QueryPointsBuilder, RecommendInputBuilder, RecommendStrategy,
+};
-There are two things you need to set up before you start:
+use qdrant_client::Qdrant;
-1. You need to have a Qdrant instance running. If you want to launch it locally,
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- [Docker is the fastest way to do that](https://qdrant.tech/documentation/quick_start/#installation).
+
-2. You need to have a registered [Aleph Alpha account](https://app.aleph-alpha.com/).
+client
-3. Upon registration, create an API key (see: [API Tokens](https://app.aleph-alpha.com/profile)).
+ .query(
+ QueryPointsBuilder::new(""{collection_name}"")
+ .query(
-Now you can store the Aleph Alpha API key in a variable and choose the model your are going to use.
+ RecommendInputBuilder::default()
+ .add_positive(100)
+ .add_positive(231)
-```python
+ .add_positive(vec![0.2, 0.3, 0.4, 0.5])
-aa_token = ""<< your_token >>""
+ .add_negative(718)
-model = ""luminous-base""
+ .strategy(RecommendStrategy::AverageVector)
-```
+ .build(),
+ )
+ .limit(3)
-## Vectorize the dataset
+ .filter(Filter::must([Condition::matches(
+ ""city"",
+ ""London"".to_string(),
-In this example, images have been extracted and are stored in the `val2017` directory:
+ )])),
+ )
+ .await?;
-```python
+```
-from aleph_alpha_client import (
- Prompt,
- AsyncClient,
+```java
- SemanticEmbeddingRequest,
+import java.util.List;
- SemanticRepresentation,
- Image,
-)
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Points.QueryPoints;
-from glob import glob
+import io.qdrant.client.grpc.Points.RecommendInput;
+import io.qdrant.client.grpc.Points.RecommendStrategy;
+import io.qdrant.client.grpc.Points.Filter;
-ids, vectors, payloads = [], [], []
-async with AsyncClient(token=aa_token) as client:
- for i, image_path in enumerate(glob(""./val2017/*.jpg"")):
+import static io.qdrant.client.ConditionFactory.matchKeyword;
- # Convert the JPEG file into the embedding by calling
+import static io.qdrant.client.VectorInputFactory.vectorInput;
- # Aleph Alpha API
+import static io.qdrant.client.QueryFactory.recommend;
- prompt = Image.from_file(image_path)
- prompt = Prompt.from_image(prompt)
- query_params = {
+QdrantClient client =
- ""prompt"": prompt,
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- ""representation"": SemanticRepresentation.Symmetric,
- ""compress_to_size"": 128,
- }
+client.queryAsync(QueryPoints.newBuilder()
- query_request = SemanticEmbeddingRequest(**query_params)
+ .setCollectionName(""{collection_name}"")
- query_response = await client.semantic_embed(request=query_request, model=model)
+ .setQuery(recommend(RecommendInput.newBuilder()
+ .addAllPositive(List.of(vectorInput(100), vectorInput(200), vectorInput(100.0f, 231.0f)))
+ .addAllNegative(List.of(vectorInput(718), vectorInput(0.2f, 0.3f, 0.4f, 0.5f)))
- # Finally store the id, vector and the payload
+ .setStrategy(RecommendStrategy.AverageVector)
- ids.append(i)
+ .build()))
- vectors.append(query_response.embedding)
+ .setFilter(Filter.newBuilder().addMust(matchKeyword(""city"", ""London"")))
- payloads.append({""filename"": image_path})
+ .setLimit(3)
+ .build()).get();
```
-## Load embeddings into Qdrant
+```csharp
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-Add all created embeddings, along with their ids and payloads into the `COCO` collection.
+using static Qdrant.Client.Grpc.Conditions;
-```python
+var client = new QdrantClient(""localhost"", 6334);
-import qdrant_client
-from qdrant_client.http.models import Batch, VectorParams, Distance
+await client.QueryAsync(
+ collectionName: ""{collection_name}"",
-qdrant_client = qdrant_client.QdrantClient()
+ query: new RecommendInput {
-qdrant_client.recreate_collection(
+ Positive = { 100, 231 },
- collection_name=""COCO"",
+ Negative = { 718 }
- vectors_config=VectorParams(
+ },
- size=len(vectors[0]),
+ filter: MatchKeyword(""city"", ""London""),
- distance=Distance.COSINE,
+ limit: 3
- ),
+);
-)
+```
-qdrant_client.upsert(
- collection_name=""COCO"",
- points=Batch(
+```go
- ids=ids,
+import (
- vectors=vectors,
+ ""context""
- payloads=payloads,
- ),
+
+ ""github.com/qdrant/go-client/qdrant""
)
-```
+client, err := qdrant.NewClient(&qdrant.Config{
-## Query the database
+ Host: ""localhost"",
+ Port: 6334,
+})
-The `luminous-base`, model can provide you the vectors for both texts and images, which means you can run both
-text queries and reverse image search. Assume you want to find images similar to the one below:
+client.Query(context.Background(), &qdrant.QueryPoints{
+ CollectionName: ""{collection_name}"",
-![An image used to query the database](/docs/integrations/aleph-alpha/visual_search_query.png)
+ Query: qdrant.NewQueryRecommend(&qdrant.RecommendInput{
+ Positive: []*qdrant.VectorInput{
+ qdrant.NewVectorInputID(qdrant.NewIDNum(100)),
-With the following code snippet create its vector embedding and then perform the lookup in Qdrant:
+ qdrant.NewVectorInputID(qdrant.NewIDNum(231)),
+ },
+ Negative: []*qdrant.VectorInput{
-```python
+ qdrant.NewVectorInputID(qdrant.NewIDNum(718)),
-async with AsyncCliet(token=aa_token) as client:
+ },
- prompt = ImagePrompt.from_file(""query.jpg"")
+ }),
- prompt = Prompt.from_image(prompt)
+ Filter: &qdrant.Filter{
+ Must: []*qdrant.Condition{
+ qdrant.NewMatch(""city"", ""London""),
- query_params = {
+ },
- ""prompt"": prompt,
+ },
- ""representation"": SemanticRepresentation.Symmetric,
+})
- ""compress_to_size"": 128,
+```
- }
- query_request = SemanticEmbeddingRequest(**query_params)
- query_response = await client.semantic_embed(request=query_request, model=model)
+An example result of this API would be:
- results = qdrant.search(
+```json
- collection_name=""COCO"",
+{
- query_vector=query_response.embedding,
+ ""result"": [
- limit=3,
+ { ""id"": 10, ""score"": 0.81 },
- )
+ { ""id"": 14, ""score"": 0.75 },
- print(results)
+ { ""id"": 11, ""score"": 0.73 }
-```
+ ],
+ ""status"": ""ok"",
+ ""time"": 0.001
-Here are the results:
+}
+```
-![Visual search results](/docs/integrations/aleph-alpha/visual_search_results.png)
+The algorithm used to get the recommendations is selected from the available `strategy` options. Each of them has its own strengths and weaknesses, so experiment and choose the one that works best for your case.
-**Note:** AlephAlpha models can provide embeddings for English, French, German, Italian
-and Spanish. Your search is not only multimodal, but also multilingual, without any need for translations.
+### Average vector strategy
-```python
+The default and first strategy added to Qdrant is called `average_vector`. It preprocesses the input examples to create a single vector that is used for the search. Since the preprocessing step happens very fast, the performance of this strategy is on-par with regular search. The intuition behind this kind of recommendation is that each vector component represents an independent feature of the data, so, by averaging the examples, we should get a good recommendation.
-text = ""Surfing""
+The way to produce the searching vector is by first averaging all the positive and negative examples separately, and then combining them into a single vector using the following formula:
-async with AsyncClient(token=aa_token) as client:
- query_params = {
- ""prompt"": Prompt.from_text(text),
+```rust
- ""representation"": SemanticRepresentation.Symmetric,
+avg_positive + avg_positive - avg_negative
- ""compres_to_size"": 128,
+```
- }
- query_request = SemanticEmbeddingRequest(**query_params)
- query_response = await client.semantic_embed(request=query_request, model=model)
+In the case of not having any negative examples, the search vector will simply be equal to `avg_positive`.
- results = qdrant.search(
+This is the default strategy that's going to be set implicitly, but you can explicitly define it by setting `""strategy"": ""average_vector""` in the recommendation request.
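+
+As a purely illustrative sketch (not the internal implementation), the formula above could be reproduced with NumPy as follows; the `positive_vectors` and `negative_vectors` arrays are hypothetical inputs standing in for the vectors Qdrant resolves from the example ids:
+
+```python
+import numpy as np
+
+# Hypothetical example embeddings; Qdrant resolves these from the provided point ids
+positive_vectors = np.array([[0.2, 0.9, 0.1], [0.4, 0.7, 0.3]])
+negative_vectors = np.array([[0.9, 0.1, 0.8]])
+
+avg_positive = positive_vectors.mean(axis=0)
+avg_negative = negative_vectors.mean(axis=0)
+
+# Same formula as above: avg_positive + avg_positive - avg_negative
+query_vector = avg_positive + avg_positive - avg_negative
+```
+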
- collection_name=""COCO"",
- query_vector=query_response.embedding,
- limit=3,
+### Best score strategy
- )
- print(results)
-```
+*Available as of v1.6.0*
-Here are the top 3 results for “Surfing”:
+A new strategy, introduced in v1.6, is called `best_score`. It is based on the idea that the best way to find similar vectors is to find the ones that are closer to a positive example, while avoiding the ones that are closer to a negative one.
+The way it works is that each candidate is measured against every example, and then the best positive and best negative scores are selected. The final score is chosen with this step formula:
-![Text search results](/docs/integrations/aleph-alpha/text_search_results.png)
-",documentation/tutorials/aleph-alpha-search.md
-"---
-title: Measure retrieval quality
+```rust
-weight: 21
+let score = if best_positive_score > best_negative_score {
----
+ best_positive_score
+} else {
+ -(best_negative_score * best_negative_score)
-# Measure retrieval quality
+};
+```
-| Time: 30 min | Level: Intermediate | | |
-|--------------|---------------------|--|----|
+
-Semantic search pipelines are as good as the embeddings they use. If your model cannot properly represent input data, similar objects might
-be far away from each other in the vector space. No surprise, that the search results will be poor in this case. There is, however, another
-component of the process which can also degrade the quality of the search results. It is the ANN algorithm itself.
+Since we are computing similarities to every example at each step of the search, the performance of this strategy is linearly impacted by the number of examples. This means that the more examples you provide, the slower the search will be. However, this strategy can be very powerful and should be more embedding-agnostic.
-In this tutorial, we will show how to measure the quality of the semantic retrieval and how to tune the parameters of the HNSW, the ANN
+
-## Embeddings quality
+To use this algorithm, you need to set `""strategy"": ""best_score""` in the recommendation request.
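+
+For instance, with the Python client a request could look roughly like this; it assumes a client version that exposes the `strategy` field on `RecommendInput` (as recent releases do), and the ids are purely illustrative:
+
+```python
+client.query_points(
+    collection_name=""{collection_name}"",
+    query=models.RecommendQuery(
+        recommend=models.RecommendInput(
+            positive=[100, 231],
+            negative=[718],
+            strategy=models.RecommendStrategy.BEST_SCORE,
+        )
+    ),
+    limit=10,
+)
+```
+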
-The quality of the embeddings is a topic for a separate tutorial. In a nutshell, it is usually measured and compared by benchmarks, such as
-[Massive Text Embedding Benchmark (MTEB)](https://huggingface.co/spaces/mteb/leaderboard). The evaluation process itself is pretty
+#### Using only negative examples
-straightforward and is based on a ground truth dataset built by humans. We have a set of queries and a set of the documents we would expect
-to receive for each of them. In the evaluation process, we take a query, find the most similar documents in the vector space and compare
-them with the ground truth. In that setup, **finding the most similar documents is implemented as full kNN search, without any approximation**.
+A beneficial side effect of the `best_score` strategy is that you can use it with only negative examples. This allows you to find the vectors most dissimilar to the ones you provide, which can be useful for finding outliers in your data, or the most dissimilar vectors to a given one.
-As a result, we can measure the quality of the embeddings themselves, without the influence of the ANN algorithm.
+Combining negative-only examples with filtering can be a powerful tool for data exploration and cleaning.
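+
+As a sketch under the same assumptions as above, a negative-only request with the Python client could look like this (the id is illustrative):
+
+```python
+client.query_points(
+    collection_name=""{collection_name}"",
+    query=models.RecommendQuery(
+        recommend=models.RecommendInput(
+            negative=[718],
+            strategy=models.RecommendStrategy.BEST_SCORE,
+        )
+    ),
+    limit=10,
+)
+```
+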
-## Retrieval quality
+### Multiple vectors
-Embeddings quality is indeed the most important factor in the semantic search quality. However, vector search engines, such as Qdrant, do not
-perform pure kNN search. Instead, they use **Approximate Nearest Neighbors** (ANN) algorithms, which are much faster than the exact search,
-but can return suboptimal results. We can also **measure the retrieval quality of that approximation** which also contributes to the overall
+*Available as of v0.10.0*
-search quality.
+If the collection was created with multiple vectors, the name of the vector should be specified in the recommendation request:
-### Quality metrics
+```http
-There are various ways of how quantify the quality of semantic search. Some of them, such as [Precision@k](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Precision_at_k),
+POST /collections/{collection_name}/points/query
-are based on the number of relevant documents in the top-k search results. Others, such as [Mean Reciprocal Rank (MRR)](https://en.wikipedia.org/wiki/Mean_reciprocal_rank),
+{
-take into account the position of the first relevant document in the search results. [DCG and NDCG](https://en.wikipedia.org/wiki/Discounted_cumulative_gain)
+ ""query"": {
-metrics are, in turn, based on the relevance score of the documents.
+ ""recommend"": {
+ ""positive"": [100, 231],
+ ""negative"": [718]
-If we treat the search pipeline as a whole, we could use them all. The same is true for the embeddings quality evaluation. However, for the
+ }
-ANN algorithm itself, anything based on the relevance score or ranking is not applicable. Ranking in vector search relies on the distance
+ },
-between the query and the document in the vector space, however distance is not going to change due to approximation, as the function is
+ ""using"": ""image"",
-still the same.
+ ""limit"": 10
+}
+```
-Therefore, it only makes sense to measure the quality of the ANN algorithm by the number of relevant documents in the top-k search results,
-such as `precision@k`. It is calculated as the number of relevant documents in the top-k search results divided by `k`. In case of testing
-just the ANN algorithm, we can use the exact kNN search as a ground truth, with `k` being fixed. It will be a measure on **how well the ANN
+```python
-algorithm approximates the exact search**.
+client.query_points(
+ collection_name=""{collection_name}"",
+ query=models.RecommendQuery(
-## Measure the quality of the search results
+ recommend=models.RecommendInput(
+ positive=[100, 231],
+ negative=[718],
-Let's build a quality evaluation of the ANN algorithm in Qdrant. We will, first, call the search endpoint in a standard way to obtain
+ )
-the approximate search results. Then, we will call the exact search endpoint to obtain the exact matches, and finally compare both results
+ ),
-in terms of precision.
+ using=""image"",
+ limit=10,
+)
-Before we start, let's create a collection, fill it with some data and then start our evaluation. We will use the same dataset as in the
+```
-[Loading a dataset from Hugging Face hub](/documentation/tutorials/huggingface-datasets/) tutorial, `Qdrant/arxiv-titles-instructorxl-embeddings`
-from the [Hugging Face hub](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings). Let's download it in a streaming
-mode, as we are only going to use part of it.
+```typescript
+client.query(""{collection_name}"", {
+ query: {
-```python
+ recommend: {
-from datasets import load_dataset
+ positive: [100, 231],
+ negative: [718],
+ }
-dataset = load_dataset(
+ },
- ""Qdrant/arxiv-titles-instructorxl-embeddings"", split=""train"", streaming=True
+ using: ""image"",
-)
+ limit: 10
+
+});
```
-We need some data to be indexed and another set for the testing purposes. Let's get the first 50000 items for the training and the next 1000
+```rust
-for the testing.
+use qdrant_client::qdrant::{QueryPointsBuilder, RecommendInputBuilder};
-```python
+client
-dataset_iterator = iter(dataset)
+ .query(
-train_dataset = [next(dataset_iterator) for _ in range(60000)]
+ QueryPointsBuilder::new(""{collection_name}"")
-test_dataset = [next(dataset_iterator) for _ in range(1000)]
+ .query(
-```
+ RecommendInputBuilder::default()
+ .add_positive(100)
+ .add_positive(231)
-Now, let's create a collection and index the training data. This collection will be created with the default configuration. Please be aware that
+ .add_negative(718)
-it might be different from your collection settings, and it's always important to test exactly the same configuration you are going to use later
+ .build(),
-in production.
+ )
+ .limit(10)
+ .using(""image""),
-
+```java
+import java.util.List;
-```python
-from qdrant_client import QdrantClient, models
+import io.qdrant.client.grpc.Points.QueryPoints;
+import io.qdrant.client.grpc.Points.RecommendInput;
-client = QdrantClient(""http://localhost:6333"")
-client.create_collection(
- collection_name=""arxiv-titles-instructorxl-embeddings"",
+import static io.qdrant.client.VectorInputFactory.vectorInput;
- vectors_config=models.VectorParams(
+import static io.qdrant.client.QueryFactory.recommend;
- size=768, # Size of the embeddings generated by InstructorXL model
- distance=models.Distance.COSINE,
- ),
+client.queryAsync(QueryPoints.newBuilder()
-)
+ .setCollectionName(""{collection_name}"")
-```
+ .setQuery(recommend(RecommendInput.newBuilder()
+ .addAllPositive(List.of(vectorInput(100), vectorInput(231)))
+ .addAllNegative(List.of(vectorInput(718)))
-We are now ready to index the training data. Uploading the records is going to trigger the indexing process, which will build the HNSW graph.
-
-The indexing process may take some time, depending on the size of the dataset, but your data is going to be available for search immediately
+ .build()))
-after receiving the response from the `upsert` endpoint. **As long as the indexing is not finished, and HNSW not built, Qdrant will perform
+ .setUsing(""image"")
-the exact search**. We have to wait until the indexing is finished to be sure that the approximate search is performed.
+ .setLimit(10)
+ .build()).get();
+```
-```python
-client.upload_records(
- collection_name=""arxiv-titles-instructorxl-embeddings"",
+```csharp
- records=[
+using Qdrant.Client;
- models.Record(
+using Qdrant.Client.Grpc;
- id=item[""id""],
- vector=item[""vector""],
- payload=item,
+var client = new QdrantClient(""localhost"", 6334);
- )
- for item in train_dataset
- ]
+await client.QueryAsync(
-)
+ collectionName: ""{collection_name}"",
+ query: new RecommendInput {
+ Positive = { 100, 231 },
-while True:
+ Negative = { 718 }
- collection_info = client.get_collection(collection_name=""arxiv-titles-instructorxl-embeddings"")
+ },
- if collection_info.status == models.CollectionStatus.GREEN:
+ usingVector: ""image"",
- # Collection status is green, which means the indexing is finished
+ limit: 10
- break
+);
```
-## Standard mode vs exact search
-
-
-
-Qdrant has a built-in exact search mode, which can be used to measure the quality of the search results. In this mode, Qdrant performs a
+```go
-full kNN search for each query, without any approximation. It is not suitable for production use with high load, but it is perfect for the
+import (
-evaluation of the ANN algorithm and its parameters. It might be triggered by setting the `exact` parameter to `True` in the search request.
+ ""context""
-We are simply going to use all the examples from the test dataset as queries and compare the results of the approximate search with the
-results of the exact search. Let's create a helper function with `k` being a parameter, so we can calculate the `precision@k` for different
-values of `k`.
+ ""github.com/qdrant/go-client/qdrant""
+)
-```python
-def avg_precision_at_k(k: int):
+client, err := qdrant.NewClient(&qdrant.Config{
- precisions = []
+ Host: ""localhost"",
- for item in test_dataset:
+ Port: 6334,
- ann_result = client.search(
+})
- collection_name=""arxiv-titles-instructorxl-embeddings"",
- query_vector=item[""vector""],
- limit=k,
+client.Query(context.Background(), &qdrant.QueryPoints{
- )
+ CollectionName: ""{collection_name}"",
-
+ Query: qdrant.NewQueryRecommend(&qdrant.RecommendInput{
- knn_result = client.search(
+ Positive: []*qdrant.VectorInput{
- collection_name=""arxiv-titles-instructorxl-embeddings"",
+ qdrant.NewVectorInputID(qdrant.NewIDNum(100)),
- query_vector=item[""vector""],
+ qdrant.NewVectorInputID(qdrant.NewIDNum(231)),
- limit=k,
+ },
- search_params=models.SearchParams(
+ Negative: []*qdrant.VectorInput{
- exact=True, # Turns on the exact search mode
+ qdrant.NewVectorInputID(qdrant.NewIDNum(718)),
- ),
+ },
- )
+ }),
+ Using: qdrant.PtrOf(""image""),
+})
- # We can calculate the precision@k by comparing the ids of the search results
+```
- ann_ids = set(item.id for item in ann_result)
- knn_ids = set(item.id for item in knn_result)
- precision = len(ann_ids.intersection(knn_ids)) / k
+The `using` parameter specifies which of the stored vectors to use for the recommendation.
- precisions.append(precision)
-
- return sum(precisions) / len(precisions)
+### Lookup vectors from another collection
-```
+*Available as of v0.11.6*
-Calculating the `precision@5` is as simple as calling the function with the corresponding parameter:
+If you have collections with vectors of the same dimensionality,
-```python
+and you want to look for recommendations in one collection based on the vectors of another collection,
-print(f""avg(precision@5) = {avg_precision_at_k(k=5)}"")
+you can use the `lookup_from` parameter.
-```
+It might be useful, for example, in an item-to-user recommendation scenario,
-Response:
+where user and item embeddings, although having the same vector parameters (distance type and dimensionality), are usually stored in different collections.
-```text
+```http
-avg(precision@5) = 0.9935999999999995
+POST /collections/{collection_name}/points/query
-```
+{
+ ""query"": {
+ ""recommend"": {
-As we can see, the precision of the approximate search vs exact search is pretty high. There are, however, some scenarios when we
+ ""positive"": [100, 231],
-need higher precision and can accept higher latency. HNSW is pretty tunable, and we can increase the precision by changing its parameters.
+ ""negative"": [718]
-
+ }
-## Tweaking the HNSW parameters
+ },
+ ""limit"": 10,
+ ""lookup_from"": {
-HNSW is a hierarchical graph, where each node has a set of links to other nodes. The number of edges per node is called the `m` parameter.
+ ""collection"": ""{external_collection_name}"",
-The larger the value of it, the higher the precision of the search, but more space required. The `ef_construct` parameter is the number of
+ ""vector"": ""{external_vector_name}""
-neighbours to consider during the index building. Again, the larger the value, the higher the precision, but the longer the indexing time.
+ }
-The default values of these parameters are `m=16` and `ef_construct=100`. Let's try to increase them to `m=32` and `ef_construct=200` and
+}
-see how it affects the precision. Of course, we need to wait until the indexing is finished before we can perform the search.
+```
```python
-client.update_collection(
+client.query_points(
- collection_name=""arxiv-titles-instructorxl-embeddings"",
+ collection_name=""{collection_name}"",
- hnsw_config=models.HnswConfigDiff(
+ query=models.RecommendQuery(
- m=32, # Increase the number of edges per node from the default 16 to 32
+ recommend=models.RecommendInput(
- ef_construct=200, # Increase the number of neighbours from the default 100 to 200
+ positive=[100, 231],
- )
+ negative=[718],
-)
+ )
+ ),
+ using=""image"",
-while True:
+ limit=10,
- collection_info = client.get_collection(collection_name=""arxiv-titles-instructorxl-embeddings"")
+ lookup_from=models.LookupLocation(
- if collection_info.status == models.CollectionStatus.GREEN:
+ collection=""{external_collection_name}"", vector=""{external_vector_name}""
- # Collection status is green, which means the indexing is finished
+ ),
- break
+)
```
-The same function can be used to calculate the average `precision@5`:
+```typescript
+client.query(""{collection_name}"", {
+ query: {
-```python
+ recommend: {
-print(f""avg(precision@5) = {avg_precision_at_k(k=5)}"")
+ positive: [100, 231],
-```
+ negative: [718],
+ }
+
+ },
+ using: ""image"",
-Response:
+ limit: 10,
+ lookup_from: {
+ collection: ""{external_collection_name}"",
-```text
+ vector: ""{external_vector_name}""
-avg(precision@5) = 0.9969999999999998
+ }
+
+});
```
-The precision has obviously increased, and we know how to control it. However, there is a trade-off between the precision and the search
+```rust
-latency and memory requirements. In some specific cases, we may want to increase the precision as much as possible, so now we know how
+use qdrant_client::qdrant::{LookupLocationBuilder, QueryPointsBuilder, RecommendInputBuilder};
-to do it.
+client
-## Wrapping up
+ .query(
+ QueryPointsBuilder::new(""{collection_name}"")
+ .query(
-Assessing the quality of retrieval is a critical aspect of evaluating semantic search performance. It is imperative to measure retrieval quality when aiming for optimal quality of.
+ RecommendInputBuilder::default()
-your search results. Qdrant provides a built-in exact search mode, which can be used to measure the quality of the ANN algorithm itself,
+ .add_positive(100)
-even in an automated way, as part of your CI/CD pipeline.
+ .add_positive(231)
+ .add_negative(718)
+ .build(),
-Again, **the quality of the embeddings is the most important factor**. HNSW does a pretty good job in terms of precision, and it is
+ )
-parameterizable and tunable, when required. There are some other ANN algorithms available out there, such as [IVF*](https://github.com/facebookresearch/faiss/wiki/Faiss-indexes#cell-probe-methods-indexivf-indexes),
+ .limit(10)
-but they usually [perform worse than HNSW in terms of quality and performance](https://nirantk.com/writing/pgvector-vs-qdrant/#correctness).
-",documentation/tutorials/retrieval-quality.md
-"---
+ .using(""image"")
-title: Neural Search Service
+ .lookup_from(
-weight: 1
+ LookupLocationBuilder::new(""{external_collection_name}"")
----
+ .vector_name(""{external_vector_name}""),
+ ),
+ )
-# Create a Simple Neural Search Service
+ .await?;
+```
-| Time: 30 min | Level: Beginner | Output: [GitHub](https://github.com/qdrant/qdrant_demo/tree/sentense-transformers) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) |
-| --- | ----------- | ----------- |----------- |
+```java
+import java.util.List;
-This tutorial shows you how to build and deploy your own neural search service to look through descriptions of companies from [startups-list.com](https://www.startups-list.com/) and pick the most similar ones to your query. The website contains the company names, descriptions, locations, and a picture for each entry.
+import io.qdrant.client.grpc.Points.LookupLocation;
+import io.qdrant.client.grpc.Points.QueryPoints;
-A neural search service uses artificial neural networks to improve the accuracy and relevance of search results. Besides offering simple keyword results, this system can retrieve results by meaning. It can understand and interpret complex search queries and provide more contextually relevant output, effectively enhancing the user's search experience.
+import io.qdrant.client.grpc.Points.RecommendInput;
-
+import static io.qdrant.client.VectorInputFactory.vectorInput;
+import static io.qdrant.client.QueryFactory.recommend;
+
+client.queryAsync(QueryPoints.newBuilder()
+ .setCollectionName(""{collection_name}"")
+ .setQuery(recommend(RecommendInput.newBuilder()
+ .addAllPositive(List.of(vectorInput(100), vectorInput(231)))
+ .addAllNegative(List.of(vectorInput(718)))
-## Workflow
+ .build()))
+ .setUsing(""image"")
+ .setLimit(10)
-To create a neural search service, you will need to transform your raw data and then create a search function to manipulate it. First, you will 1) download and prepare a sample dataset using a modified version of the BERT ML model. Then, you will 2) load the data into Qdrant, 3) create a neural search API and 4) serve it using FastAPI.
+ .setLookupFrom(
+ LookupLocation.newBuilder()
+ .setCollectionName(""{external_collection_name}"")
-![Neural Search Workflow](/docs/workflow-neural-search.png)
+ .setVectorName(""{external_vector_name}"")
+ .build())
+ .build()).get();
-> **Note**: The code for this tutorial can be found here: | [Step 1: Data Preparation Process](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) | [Step 2: Full Code for Neural Search](https://github.com/qdrant/qdrant_demo/tree/sentense-transformers). |
+```
-## Prerequisites
+```csharp
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-To complete this tutorial, you will need:
+var client = new QdrantClient(""localhost"", 6334);
-- Docker - The easiest way to use Qdrant is to run a pre-built Docker image.
-- [Raw parsed data](https://storage.googleapis.com/generall-shared-data/startups_demo.json) from startups-list.com.
-- Python version >=3.8
+await client.QueryAsync(
+ collectionName: ""{collection_name}"",
+ query: new RecommendInput {
-## Prepare sample dataset
+ Positive = { 100, 231 },
+ Negative = { 718 }
+ },
-To conduct a neural search on startup descriptions, you must first encode the description data into vectors. To process text, you can use a pre-trained models like [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)) or sentence transformers. The [sentence-transformers](https://github.com/UKPLab/sentence-transformers) library lets you conveniently download and use many pre-trained models, such as DistilBERT, MPNet, etc.
+ usingVector: ""image"",
+ limit: 10,
+ lookupFrom: new LookupLocation
-1. First you need to download the dataset.
+ {
+ CollectionName = ""{external_collection_name}"",
+ VectorName = ""{external_vector_name}"",
-```bash
+ }
-wget https://storage.googleapis.com/generall-shared-data/startups_demo.json
+);
```
-2. Install the SentenceTransformer library as well as other relevant packages.
-
+```go
+import (
-```bash
+ ""context""
-pip install sentence-transformers numpy pandas tqdm
-```
+ ""github.com/qdrant/go-client/qdrant""
+)
-3. Import all relevant models.
+client, err := qdrant.NewClient(&qdrant.Config{
-```python
+ Host: ""localhost"",
-from sentence_transformers import SentenceTransformer
+ Port: 6334,
-import numpy as np
+})
-import json
-import pandas as pd
-from tqdm.notebook import tqdm
+client.Query(context.Background(), &qdrant.QueryPoints{
-```
+ CollectionName: ""{collection_name}"",
+ Query: qdrant.NewQueryRecommend(&qdrant.RecommendInput{
+ Positive: []*qdrant.VectorInput{
-You will be using a pre-trained model called `all-MiniLM-L6-v2`.
+ qdrant.NewVectorInputID(qdrant.NewIDNum(100)),
-This is a performance-optimized sentence embedding model and you can read more about it and other available models [here](https://www.sbert.net/docs/pretrained_models.html).
+ qdrant.NewVectorInputID(qdrant.NewIDNum(231)),
+ },
+ Negative: []*qdrant.VectorInput{
+ qdrant.NewVectorInputID(qdrant.NewIDNum(718)),
+ },
-4. Download and create a pre-trained sentence encoder.
+ }),
+ Using: qdrant.PtrOf(""image""),
+ LookupFrom: &qdrant.LookupLocation{
-```python
+ CollectionName: ""{external_collection_name}"",
-model = SentenceTransformer(
+ VectorName: qdrant.PtrOf(""{external_vector_name}""),
- ""all-MiniLM-L6-v2"", device=""cuda""
+ },
-) # or device=""cpu"" if you don't have a GPU
+})
```
-5. Read the raw data file.
-
-```python
+Vectors are retrieved from the external collection by ids provided in the `positive` and `negative` lists.
-df = pd.read_json(""./startups_demo.json"", lines=True)
+These vectors are then used to perform the recommendation in the current collection, comparing against the ""using"" or default vector.
-```
-6. Encode all startup descriptions to create an embedding vector for each. Internally, the `encode` function will split the input into batches, which will significantly speed up the process.
-```python
+## Batch recommendation API
-vectors = model.encode(
- [row.alt + "". "" + row.description for row in df.itertuples()],
- show_progress_bar=True,
+*Available as of v0.10.0*
-)
-```
-All of the descriptions are now converted into vectors. There are 40474 vectors of 384 dimensions. The output layer of the model has this dimension
+Similar to the batch search API in terms of usage and advantages, it enables the batching of recommendation requests.
-```python
+```http
-vectors.shape
+POST /collections/{collection_name}/query/batch
-# > (40474, 384)
+{
-```
+ ""searches"": [
+ {
+ ""query"": {
-7. Download the saved vectors into a new file named `startup_vectors.npy`
+ ""recommend"": {
+ ""positive"": [100, 231],
+ ""negative"": [718]
-```python
+ }
-np.save(""startup_vectors.npy"", vectors, allow_pickle=False)
+ },
-```
+ ""filter"": {
+ ""must"": [
+ {
-## Run Qdrant in Docker
+ ""key"": ""city"",
+ ""match"": {
+ ""value"": ""London""
-Next, you need to manage all of your data using a vector engine. Qdrant lets you store, update or delete created vectors. Most importantly, it lets you search for the nearest vectors via a convenient API.
+ }
+ }
+ ]
-> **Note:** Before you begin, create a project directory and a virtual python environment in it.
+ },
+ ""limit"": 10
+ },
-1. Download the Qdrant image from DockerHub.
+ {
+ ""query"": {
+ ""recommend"": {
-```bash
+ ""positive"": [200, 67],
-docker pull qdrant/qdrant
+ ""negative"": [300]
-```
+ }
-2. Start Qdrant inside of Docker.
+ },
+ ""filter"": {
+ ""must"": [
-```bash
+ {
-docker run -p 6333:6333 \
+ ""key"": ""city"",
- -v $(pwd)/qdrant_storage:/qdrant/storage \
+ ""match"": {
- qdrant/qdrant
+ ""value"": ""London""
-```
+ }
-You should see output like this
+ }
+ ]
+ },
-```text
+ ""limit"": 10
-...
+ }
-[2021-02-05T00:08:51Z INFO actix_server::builder] Starting 12 workers
+ ]
-[2021-02-05T00:08:51Z INFO actix_server::builder] Starting ""actix-web-service-0.0.0.0:6333"" service on 0.0.0.0:6333
+}
```
-Test the service by going to [http://localhost:6333/](http://localhost:6333/). You should see the Qdrant version info in your browser.
+```python
+from qdrant_client import QdrantClient, models
-All data uploaded to Qdrant is saved inside the `./qdrant_storage` directory and will be persisted even if you recreate the container.
+client = QdrantClient(url=""http://localhost:6333"")
-## Upload data to Qdrant
+filter_ = models.Filter(
+ must=[
-1. Install the official Python client to best interact with Qdrant.
+ models.FieldCondition(
+ key=""city"",
+ match=models.MatchValue(
-```bash
+ value=""London"",
-pip install qdrant-client
+ ),
-```
+ )
+ ]
+)
-At this point, you should have startup records in the `startups_demo.json` file, encoded vectors in `startup_vectors.npy` and Qdrant running on a local machine.
+recommend_queries = [
-Now you need to write a script to upload all startup data and vectors into the search engine.
+ models.QueryRequest(
+ query=models.RecommendQuery(
+ recommend=models.RecommendInput(positive=[100, 231], negative=[718])
-2. Create a client object for Qdrant.
+ ),
+ filter=filter_,
+ limit=3,
-```python
+ ),
-# Import client library
+ models.QueryRequest(
-from qdrant_client import QdrantClient
+ query=models.RecommendQuery(
-from qdrant_client.models import VectorParams, Distance
+ recommend=models.RecommendInput(positive=[200, 67], negative=[300])
+ ),
+ filter=filter_,
-qdrant_client = QdrantClient(""http://localhost:6333"")
+ limit=3,
-```
+ ),
+]
-3. Related vectors need to be added to a collection. Create a new collection for your startup vectors.
+client.query_batch_points(
+ collection_name=""{collection_name}"", requests=recommend_queries
-```python
+)
-qdrant_client.recreate_collection(
+```
- collection_name=""startups"",
- vectors_config=VectorParams(size=384, distance=Distance.COSINE),
-)
+```typescript
-```
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+const filter = {
+  must: [
+    {
+      key: ""city"",
+      match: {
+        value: ""London"",
+      },
+    },
+  ],
+};
-4. Create an iterator over the startup data and vectors.
+const searches = [
-The Qdrant client library defines a special function that allows you to load datasets into the service.
+ {
-However, since there may be too much data to fit a single computer memory, the function takes an iterator over the data as input.
+ query: {
+ recommend: {
+ positive: [100, 231],
-```python
+ negative: [718]
-fd = open(""./startups_demo.json"")
+ }
+ },
+ filter,
-# payload is now an iterator over startup data
+ limit: 3,
-payload = map(json.loads, fd)
+ },
+ {
+ query: {
-# Load all vectors into memory, numpy array works as iterable for itself.
+ recommend: {
-# Other option would be to use Mmap, if you don't want to load all data into RAM
+ positive: [200, 67],
-vectors = np.load(""./startup_vectors.npy"")
+ negative: [300]
-```
+ }
+ },
+ filter,
-5. Upload the data
+ limit: 3,
+ },
+];
-```python
-qdrant_client.upload_collection(
- collection_name=""startups"",
+client.queryBatch(""{collection_name}"", {
- vectors=vectors,
+ searches,
- payload=payload,
+});
- ids=None, # Vector ids will be assigned automatically
+```
- batch_size=256, # How many vectors will be uploaded in a single request?
-)
-```
+```rust
+use qdrant_client::qdrant::{
+ Condition, Filter, QueryBatchPointsBuilder, QueryPointsBuilder,
-Vectors are now uploaded to Qdrant.
+ RecommendInputBuilder,
+};
+use qdrant_client::Qdrant;
-## Build the search API
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-Now that all the preparations are complete, let's start building a neural search class.
+let filter = Filter::must([Condition::matches(""city"", ""London"".to_string())]);
-In order to process incoming requests, neural search will need 2 things: 1) a model to convert the query into a vector and 2) the Qdrant client to perform search queries.
+let recommend_queries = vec![
-1. Create a file named `neural_searcher.py` and specify the following.
+ QueryPointsBuilder::new(""{collection_name}"")
+ .query(
+ RecommendInputBuilder::default()
-```python
+ .add_positive(100)
-from qdrant_client import QdrantClient
+ .add_positive(231)
-from sentence_transformers import SentenceTransformer
+ .add_negative(718)
+ .build(),
+ )
+ .filter(filter.clone())
+ .build(),
-class NeuralSearcher:
+ QueryPointsBuilder::new(""{collection_name}"")
- def __init__(self, collection_name):
+ .query(
- self.collection_name = collection_name
+ RecommendInputBuilder::default()
- # Initialize encoder model
+ .add_positive(200)
- self.model = SentenceTransformer(""all-MiniLM-L6-v2"", device=""cpu"")
+ .add_positive(67)
- # initialize Qdrant client
+ .add_negative(300)
- self.qdrant_client = QdrantClient(""http://localhost:6333"")
+ .build(),
-```
+ )
+ .filter(filter)
+ .build(),
-2. Write the search function.
+];
-```python
+client
-def search(self, text: str):
+ .query_batch(QueryBatchPointsBuilder::new(
- # Convert text query into vector
+ ""{collection_name}"",
- vector = self.model.encode(text).tolist()
+ recommend_queries,
+ ))
+ .await?;
- # Use `vector` for search for closest vectors in the collection
+```
- search_result = self.qdrant_client.search(
- collection_name=self.collection_name,
- query_vector=vector,
+```java
- query_filter=None, # If you don't want any filters for now
+import java.util.List;
- limit=5, # 5 the most closest results is enough
- )
- # `search_result` contains found vector ids with similarity scores along with the stored payload
+import io.qdrant.client.QdrantClient;
- # In this function you are interested in payload only
+import io.qdrant.client.QdrantGrpcClient;
- payloads = [hit.payload for hit in search_result]
+import io.qdrant.client.grpc.Points.Filter;
- return payloads
+import io.qdrant.client.grpc.Points.QueryPoints;
-```
+import io.qdrant.client.grpc.Points.RecommendInput;
-3. Add search filters.
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+import static io.qdrant.client.VectorInputFactory.vectorInput;
+import static io.qdrant.client.QueryFactory.recommend;
-With Qdrant it is also feasible to add some conditions to the search.
-For example, if you wanted to search for startups in a certain city, the search query could look like this:
+QdrantClient client =
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-```python
-from qdrant_client.models import Filter
+Filter filter = Filter.newBuilder().addMust(matchKeyword(""city"", ""London"")).build();
- ...
+List<QueryPoints> recommendQueries = List.of(
+ QueryPoints.newBuilder()
- city_of_interest = ""Berlin""
+ .setCollectionName(""{collection_name}"")
+ .setQuery(recommend(
+ RecommendInput.newBuilder()
- # Define a filter for cities
+ .addAllPositive(List.of(vectorInput(100), vectorInput(231)))
- city_filter = Filter(**{
+ .addAllNegative(List.of(vectorInput(718)))
- ""must"": [{
+ .build()))
- ""key"": ""city"", # Store city information in a field of the same name
+ .setFilter(filter)
- ""match"": { # This condition checks if payload field has the requested value
+ .setLimit(3)
- ""value"": city_of_interest
+ .build(),
- }
+ QueryPoints.newBuilder()
- }]
+ .setCollectionName(""{collection_name}"")
- })
+ .setQuery(recommend(
+ RecommendInput.newBuilder()
+ .addAllPositive(List.of(vectorInput(200), vectorInput(67)))
- search_result = self.qdrant_client.search(
+ .addAllNegative(List.of(vectorInput(300)))
- collection_name=self.collection_name,
+ .build()))
- query_vector=vector,
+ .setFilter(filter)
- query_filter=city_filter,
+ .setLimit(3)
- limit=5
+ .build());
- )
+
- ...
+client.queryBatchAsync(""{collection_name}"", recommendQueries).get();
```
-You have now created a class for neural search queries. Now wrap it up into a service.
+```csharp
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-## Deploy the search with FastAPI
+using static Qdrant.Client.Grpc.Conditions;
-To build the service you will use the FastAPI framework.
+var client = new QdrantClient(""localhost"", 6334);
-1. Install FastAPI.
+var filter = MatchKeyword(""city"", ""London"");
-To install it, use the command
+await client.QueryBatchAsync(
+ collectionName: ""{collection_name}"",
+ queries:
-```bash
+ [
-pip install fastapi uvicorn
+ new QueryPoints()
-```
+ {
+ CollectionName = ""{collection_name}"",
+ Query = new RecommendInput {
-2. Implement the service.
+ Positive = { 100, 231 },
+ Negative = { 718 },
+ },
-Create a file named `service.py` and specify the following.
+ Limit = 3,
+ Filter = filter,
+ },
-The service will have only one API endpoint and will look like this:
+ new QueryPoints()
+ {
+ CollectionName = ""{collection_name}"",
-```python
+ Query = new RecommendInput {
-from fastapi import FastAPI
+ Positive = { 200, 67 },
+ Negative = { 300 },
+ },
-# The file where NeuralSearcher is stored
+ Limit = 3,
-from neural_searcher import NeuralSearcher
+ Filter = filter,
+ }
+ ]
-app = FastAPI()
+);
+```
-# Create a neural searcher instance
-neural_searcher = NeuralSearcher(collection_name=""startups"")
+```go
+import (
+ ""context""
-@app.get(""/api/search"")
+ ""github.com/qdrant/go-client/qdrant""
-def search_startup(q: str):
+)
- return {""result"": neural_searcher.search(text=q)}
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
+ Port: 6334,
-if __name__ == ""__main__"":
+})
- import uvicorn
+filter := qdrant.Filter{
- uvicorn.run(app, host=""0.0.0.0"", port=8000)
+ Must: []*qdrant.Condition{
-```
+ qdrant.NewMatch(""city"", ""London""),
+ },
+}
-3. Run the service.
+client.QueryBatch(context.Background(), &qdrant.QueryBatchPoints{
+ CollectionName: ""{collection_name}"",
+ QueryPoints: []*qdrant.QueryPoints{
-```bash
+ {
-python service.py
+ CollectionName: ""{collection_name}"",
-```
+ Query: qdrant.NewQueryRecommend(&qdrant.RecommendInput{
+ Positive: []*qdrant.VectorInput{
+ qdrant.NewVectorInputID(qdrant.NewIDNum(100)),
-4. Open your browser at [http://localhost:8000/docs](http://localhost:8000/docs).
+ qdrant.NewVectorInputID(qdrant.NewIDNum(231)),
+ },
+ Negative: []*qdrant.VectorInput{
-You should be able to see a debug interface for your service.
+ qdrant.NewVectorInputID(qdrant.NewIDNum(718)),
+ },
+ },
-![FastAPI Swagger interface](/docs/fastapi_neural_search.png)
+ ),
+ Filter: &filter,
+ },
-Feel free to play around with it, make queries regarding the companies in our corpus, and check out the results.
+ {
+ CollectionName: ""{collection_name}"",
+ Query: qdrant.NewQueryRecommend(&qdrant.RecommendInput{
-## Next steps
+ Positive: []*qdrant.VectorInput{
+ qdrant.NewVectorInputID(qdrant.NewIDNum(200)),
+ qdrant.NewVectorInputID(qdrant.NewIDNum(67)),
-The code from this tutorial has been used to develop a [live online demo](https://qdrant.to/semantic-search-demo).
+ },
-You can try it to get an intuition for cases when the neural search is useful.
+ Negative: []*qdrant.VectorInput{
-The demo contains a switch that selects between neural and full-text searches.
+ qdrant.NewVectorInputID(qdrant.NewIDNum(300)),
-You can turn the neural search on and off to compare your result with a regular full-text search.
+ },
+ },
+ ),
-> **Note**: The code for this tutorial can be found here: | [Step 1: Data Preparation Process](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) | [Step 2: Full Code for Neural Search](https://github.com/qdrant/qdrant_demo/tree/sentense-transformers). |
+ Filter: &filter,
+ },
+ },
-Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, publish other examples of neural networks and neural search applications.
-",documentation/tutorials/neural-search.md
-"---
+},
-title: Semantic Search 101
+)
-weight: -100
+```
----
+The result of this API contains one array per recommendation request.
-# Semantic Search for Beginners
+```json
-| Time: 5 - 15 min | Level: Beginner | | |
+{
-| --- | ----------- | ----------- |----------- |
+ ""result"": [
+ [
+ { ""id"": 10, ""score"": 0.81 },
-
+ { ""id"": 14, ""score"": 0.75 },
+ { ""id"": 11, ""score"": 0.73 }
+ ],
-## Overview
+ [
+ { ""id"": 1, ""score"": 0.92 },
+ { ""id"": 3, ""score"": 0.89 },
-If you are new to vector databases, this tutorial is for you. In 5 minutes you will build a semantic search engine for science fiction books. After you set it up, you will ask the engine about an impending alien threat. Your creation will recommend books as preparation for a potential space attack.
+ { ""id"": 9, ""score"": 0.75 }
+ ]
+ ],
-Before you begin, you need to have a [recent version of Python](https://www.python.org/downloads/) installed. If you don't know how to run this code in a virtual environment, follow Python documentation for [Creating Virtual Environments](https://docs.python.org/3/tutorial/venv.html#creating-virtual-environments) first.
+ ""status"": ""ok"",
+ ""time"": 0.001
+}
-This tutorial assumes you're in the bash shell. Use the Python documentation to activate a virtual environment, with commands such as:
+```
-```bash
+## Discovery API
-source tutorial-env/bin/activate
-```
+*Available as of v1.7*
-## 1. Installation
+The REST API schema definition is available [here](https://api.qdrant.tech/api-reference/search/discover-points).
-You need to process your data so that the search engine can work with it. The [Sentence Transformers](https://www.sbert.net/) framework gives you access to common Large Language Models that turn raw data into embeddings.
+In this API, Qdrant introduces the concept of `context`, which is used for splitting the space. Context is a set of positive-negative pairs, and each pair divides the space into positive and negative zones. In that mode, the search operation prefers points based on how many positive zones they belong to (or how much they avoid negative zones).
-```bash
-pip install -U sentence-transformers
+The interface for providing context is similar to the recommendation API (ids or raw vectors). Still, in this case, they need to be provided in the form of positive-negative pairs.
-```
+Discovery API lets you do two new types of search:
-Once encoded, this data needs to be kept somewhere. Qdrant lets you store data as embeddings. You can also use Qdrant to run search queries against this data. This means that you can ask the engine to give you relevant answers that go way beyond keyword matching.
+- **Discovery search**: Uses the context (the pairs of positive-negative vectors) and a target to return the points more similar to the target, but constrained by the context.
+- **Context search**: Using only the context pairs, get the points that live in the best zone, where loss is minimized
-```bash
-pip install -U qdrant-client
+The way positive and negative examples are arranged in the context pairs is completely up to you, so you have the flexibility to try out different pairing permutations based on your model and data.
-```
+
-
+### Discovery search
-### Import the models
+This type of search works especially well for combining multimodal, vector-constrained searches. Qdrant already has extensive support for filters, which constrain the search based on the payload, but using discovery search, you can also constrain the vector space in which the search is performed.
-Once the two main frameworks are defined, you need to specify the exact models this engine will use. Before you do, activate the Python prompt (`>>>`) with the `python` command.
+![Discovery search](/docs/discovery-search.png)
-```python
+The formula for the discovery score can be expressed as:
-from qdrant_client import models, QdrantClient
-from sentence_transformers import SentenceTransformer
-```
+$$
+\text{rank}(v^+, v^-) = \begin{cases}
+ 1, &\quad s(v^+) \geq s(v^-) \\\\
-The [Sentence Transformers](https://www.sbert.net/index.html) framework contains many embedding models. However, [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) is the fastest encoder for this tutorial.
+ -1, &\quad s(v^+) < s(v^-)
+\end{cases}
+$$
-```python
+where $v^+$ represents a positive example, $v^-$ represents a negative example, and $s(v)$ is the similarity score of a vector $v$ to the target vector. The discovery score is then computed as:
-encoder = SentenceTransformer(""all-MiniLM-L6-v2"")
+$$
-```
+ \text{discovery score} = \text{sigmoid}(s(v_t)) + \sum \text{rank}(v_i^+, v_i^-),
+$$
+where $s(v)$ is the similarity function, $v_t$ is the target vector, and again $v_i^+$ and $v_i^-$ are the positive and negative examples, respectively. The sigmoid function is used to normalize the score between 0 and 1, and the sum of ranks is used to penalize vectors that are closer to the negative examples than to the positive ones. In other words, the sum of individual ranks determines how many positive zones a point is in, while the closeness hierarchy comes second.
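+
+As a rough illustration of this formula only (not Qdrant's internal implementation), and assuming cosine similarity as $s$, the discovery score could be computed like this; all names are hypothetical:
+
+```python
+import numpy as np
+
+def cosine(a, b):
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+def discovery_score(candidate, target, context_pairs):
+    # Sigmoid of the similarity to the target...
+    score = 1.0 / (1.0 + np.exp(-cosine(candidate, target)))
+    # ...plus +1 or -1 for every positive-negative pair in the context
+    for positive, negative in context_pairs:
+        score += 1.0 if cosine(candidate, positive) >= cosine(candidate, negative) else -1.0
+    return score
+```
+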
-## 2. Add the dataset
+Example:
-[all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) will encode the data you provide. Here you will list all the science fiction books in your library. Each book has metadata, a name, author, publication year and a short description.
+```http
-```python
+POST /collections/{collection_name}/points/query
-documents = [
+{
- {
+ ""query"": {
- ""name"": ""The Time Machine"",
+ ""discover"": {
- ""description"": ""A man travels through time and witnesses the evolution of humanity."",
+ ""target"": [0.2, 0.1, 0.9, 0.7],
- ""author"": ""H.G. Wells"",
+ ""context"": [
- ""year"": 1895,
+ {
- },
+ ""positive"": 100,
- {
+ ""negative"": 718
- ""name"": ""Ender's Game"",
+ },
- ""description"": ""A young boy is trained to become a military leader in a war against an alien race."",
+ {
- ""author"": ""Orson Scott Card"",
+ ""positive"": 200,
- ""year"": 1985,
+ ""negative"": 300
- },
+ }
- {
+ ]
- ""name"": ""Brave New World"",
+ }
- ""description"": ""A dystopian society where people are genetically engineered and conditioned to conform to a strict social hierarchy."",
+ },
- ""author"": ""Aldous Huxley"",
+ ""limit"": 10
- ""year"": 1932,
+}
- },
+```
- {
- ""name"": ""The Hitchhiker's Guide to the Galaxy"",
- ""description"": ""A comedic science fiction series following the misadventures of an unwitting human and his alien friend."",
+```python
- ""author"": ""Douglas Adams"",
+from qdrant_client import QdrantClient, models
- ""year"": 1979,
- },
- {
+client = QdrantClient(url=""http://localhost:6333"")
- ""name"": ""Dune"",
- ""description"": ""A desert planet is the site of political intrigue and power struggles."",
- ""author"": ""Frank Herbert"",
+discover_queries = [
- ""year"": 1965,
+ models.QueryRequest(
- },
+ query=models.DiscoverQuery(
- {
+ discover=models.DiscoverInput(
- ""name"": ""Foundation"",
+ target=[0.2, 0.1, 0.9, 0.7],
- ""description"": ""A mathematician develops a science to predict the future of humanity and works to save civilization from collapse."",
+ context=[
- ""author"": ""Isaac Asimov"",
+ models.ContextPair(
- ""year"": 1951,
+ positive=100,
- },
+ negative=718,
- {
+ ),
- ""name"": ""Snow Crash"",
+ models.ContextPair(
- ""description"": ""A futuristic world where the internet has evolved into a virtual reality metaverse."",
+ positive=200,
- ""author"": ""Neal Stephenson"",
+ negative=300,
- ""year"": 1992,
+ ),
- },
+ ],
- {
+ )
- ""name"": ""Neuromancer"",
+ ),
- ""description"": ""A hacker is hired to pull off a near-impossible hack and gets pulled into a web of intrigue."",
+ limit=10,
- ""author"": ""William Gibson"",
+ ),
- ""year"": 1984,
+]
- },
+```
- {
- ""name"": ""The War of the Worlds"",
- ""description"": ""A Martian invasion of Earth throws humanity into chaos."",
+```typescript
- ""author"": ""H.G. Wells"",
+import { QdrantClient } from ""@qdrant/js-client-rest"";
- ""year"": 1898,
- },
- {
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
- ""name"": ""The Hunger Games"",
- ""description"": ""A dystopian society where teenagers are forced to fight to the death in a televised spectacle."",
- ""author"": ""Suzanne Collins"",
+client.query(""{collection_name}"", {
- ""year"": 2008,
+ query: {
- },
+ discover: {
- {
+ target: [0.2, 0.1, 0.9, 0.7],
- ""name"": ""The Andromeda Strain"",
+ context: [
- ""description"": ""A deadly virus from outer space threatens to wipe out humanity."",
+ {
- ""author"": ""Michael Crichton"",
+ positive: 100,
- ""year"": 1969,
+ negative: 718,
- },
+ },
- {
+ {
- ""name"": ""The Left Hand of Darkness"",
+ positive: 200,
- ""description"": ""A human ambassador is sent to a planet where the inhabitants are genderless and can change gender at will."",
+ negative: 300,
- ""author"": ""Ursula K. Le Guin"",
+ },
- ""year"": 1969,
+ ],
- },
+ }
- {
+ },
- ""name"": ""The Three-Body Problem"",
+ limit: 10,
- ""description"": ""Humans encounter an alien civilization that lives in a dying system."",
+});
- ""author"": ""Liu Cixin"",
+```
- ""year"": 2008,
- },
-]
+```rust
-```
+use qdrant_client::qdrant::{ContextInputBuilder, DiscoverInputBuilder, QueryPointsBuilder};
+use qdrant_client::Qdrant;
-## 3. Define storage location
+client
+ .query(
-You need to tell Qdrant where to store embeddings. This is a basic demo, so your local computer will use its memory as temporary storage.
+ QueryPointsBuilder::new(""{collection_name}"").query(
+ DiscoverInputBuilder::new(
+ vec![0.2, 0.1, 0.9, 0.7],
-```python
+ ContextInputBuilder::default()
-qdrant = QdrantClient("":memory:"")
+ .add_pair(100, 718)
-```
+ .add_pair(200, 300),
+ )
+ .build(),
-## 4. Create a collection
+ ),
+ )
+ .await?;
-All data in Qdrant is organized by collections. In this case, you are storing books, so we are calling it `my_books`.
+```
-```python
+```java
-qdrant.recreate_collection(
+import java.util.List;
- collection_name=""my_books"",
- vectors_config=models.VectorParams(
- size=encoder.get_sentence_embedding_dimension(), # Vector size is defined by used model
+import io.qdrant.client.QdrantClient;
- distance=models.Distance.COSINE,
+import io.qdrant.client.QdrantGrpcClient;
- ),
+import io.qdrant.client.grpc.Points.ContextInput;
-)
+import io.qdrant.client.grpc.Points.ContextInputPair;
-```
+import io.qdrant.client.grpc.Points.DiscoverInput;
+import io.qdrant.client.grpc.Points.QueryPoints;
-- Use `recreate_collection` if you are experimenting and running the script several times. This function will first try to remove an existing collection with the same name.
+import static io.qdrant.client.VectorInputFactory.vectorInput;
+import static io.qdrant.client.QueryFactory.discover;
-- The `vector_size` parameter defines the size of the vectors for a specific collection. If their size is different, it is impossible to calculate the distance between them. 384 is the encoder output dimensionality. You can also use model.get_sentence_embedding_dimension() to get the dimensionality of the model you are using.
+QdrantClient client =
-- The `distance` parameter lets you specify the function used to measure the distance between two points.
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+client.queryAsync(QueryPoints.newBuilder()
+ .setCollectionName(""{collection_name}"")
-## 5. Upload data to collection
+ .setQuery(discover(DiscoverInput.newBuilder()
+ .setTarget(vectorInput(0.2f, 0.1f, 0.9f, 0.7f))
+ .setContext(ContextInput.newBuilder()
-Tell the database to upload `documents` to the `my_books` collection. This will give each record an id and a payload. The payload is just the metadata from the dataset.
+ .addAllPairs(List.of(
+ ContextInputPair.newBuilder()
+ .setPositive(vectorInput(100))
-```python
+ .setNegative(vectorInput(718))
-qdrant.upload_points(
+ .build(),
- collection_name=""my_books"",
+ ContextInputPair.newBuilder()
- points=[
+ .setPositive(vectorInput(200))
- models.PointStruct(
+ .setNegative(vectorInput(300))
- id=idx, vector=encoder.encode(doc[""description""]).tolist(), payload=doc
+ .build()))
- )
+ .build())
- for idx, doc in enumerate(documents)
+ .build()))
- ],
+ .setLimit(10)
-)
+ .build()).get();
```
-## 6. Ask the engine a question
+```csharp
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-Now that the data is stored in Qdrant, you can ask it questions and receive semantically relevant results.
+var client = new QdrantClient(""localhost"", 6334);
-```python
-hits = qdrant.search(
- collection_name=""my_books"",
+await client.QueryAsync(
- query_vector=encoder.encode(""alien invasion"").tolist(),
+ collectionName: ""{collection_name}"",
- limit=3,
+ query: new DiscoverInput {
-)
+ Target = new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
-for hit in hits:
+ Context = new ContextInput {
- print(hit.payload, ""score:"", hit.score)
+ Pairs = {
-```
+ new ContextInputPair {
+ Positive = 100,
+ Negative = 718
-**Response:**
+ },
+ new ContextInputPair {
+ Positive = 200,
-The search engine shows three of the most likely responses that have to do with the alien invasion. Each of the responses is assigned a score to show how close the response is to the original inquiry.
+ Negative = 300
+ },
+ }
-```text
+ },
-{'name': 'The War of the Worlds', 'description': 'A Martian invasion of Earth throws humanity into chaos.', 'author': 'H.G. Wells', 'year': 1898} score: 0.570093257022374
+ },
-{'name': ""The Hitchhiker's Guide to the Galaxy"", 'description': 'A comedic science fiction series following the misadventures of an unwitting human and his alien friend.', 'author': 'Douglas Adams', 'year': 1979} score: 0.5040468703143637
+ limit: 10
-{'name': 'The Three-Body Problem', 'description': 'Humans encounter an alien civilization that lives in a dying system.', 'author': 'Liu Cixin', 'year': 2008} score: 0.45902943411768216
+);
```
-### Narrow down the query
+```go
+import (
+ ""context""
-How about the most recent book from the early 2000s?
+ ""github.com/qdrant/go-client/qdrant""
-```python
+)
-hits = qdrant.search(
- collection_name=""my_books"",
- query_vector=encoder.encode(""alien invasion"").tolist(),
+client, err := qdrant.NewClient(&qdrant.Config{
- query_filter=models.Filter(
+ Host: ""localhost"",
- must=[models.FieldCondition(key=""year"", range=models.Range(gte=2000))]
+ Port: 6334,
- ),
+})
- limit=1,
-)
-for hit in hits:
+client.Query(context.Background(), &qdrant.QueryPoints{
- print(hit.payload, ""score:"", hit.score)
+ CollectionName: ""{collection_name}"",
-```
+ Query: qdrant.NewQueryDiscover(&qdrant.DiscoverInput{
+ Target: qdrant.NewVectorInput(0.2, 0.1, 0.9, 0.7),
+ Context: &qdrant.ContextInput{
-**Response:**
+ Pairs: []*qdrant.ContextInputPair{
+ {
+ Positive: qdrant.NewVectorInputID(qdrant.NewIDNum(100)),
-The query has been narrowed down to one result from 2008.
+ Negative: qdrant.NewVectorInputID(qdrant.NewIDNum(718)),
+ },
+ {
-```text
+ Positive: qdrant.NewVectorInputID(qdrant.NewIDNum(200)),
-{'name': 'The Three-Body Problem', 'description': 'Humans encounter an alien civilization that lives in a dying system.', 'author': 'Liu Cixin', 'year': 2008} score: 0.45902943411768216
+ Negative: qdrant.NewVectorInputID(qdrant.NewIDNum(300)),
-```
+ },
+ },
+ },
-## Next Steps
+ }),
+})
+```
-Congratulations, you have just created your very first search engine! Trust us, the rest of Qdrant is not that complicated, either. For your next tutorial you should try building an actual [Neural Search Service with a complete API and a dataset](../../tutorials/neural-search/).
+
-weight: 19
----
+### Context search
-# Loading a dataset from Hugging Face hub
+Conversely, in the absence of a target, a rigid integer-by-integer function doesn't provide much guidance for the search when utilizing a proximity graph like HNSW. Instead, context search employs a function derived from the [triplet-loss](/articles/triplet-loss/) concept, which is usually applied during model training. For context search, this function is adapted to steer the search towards areas with fewer negative examples.
-[Hugging Face](https://huggingface.co/) provides a platform for sharing and using ML models and
-datasets. [Qdrant](https://huggingface.co/Qdrant) also publishes datasets along with the
+![Context search](/docs/context-search.png)
-embeddings that you can use to practice with Qdrant and build your applications based on semantic
-search. **Please [let us know](https://qdrant.to/discord) if you'd like to see a specific dataset!**
+We can directly associate the score function with a loss function, where 0.0 is the maximum score a point can have, which means it is only in positive areas. As soon as a point is closer to a negative example, its loss will simply be the difference between the positive and negative similarities.
-## arxiv-titles-instructorxl-embeddings
+$$
+\text{context score} = \sum \min(s(v^+_i) - s(v^-_i), 0.0)
-[This dataset](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) contains
+$$
-embeddings generated from the paper titles only. Each vector has a payload with the title used to
-create it, along with the DOI (Digital Object Identifier).
+Where $v^+_i$ and $v^-_i$ are the positive and negative examples of each pair, and $s(v)$ is the similarity function.
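+
+To make the formula concrete, here is a small illustrative NumPy sketch of the scoring (not Qdrant code; the vectors and pairs below are made up, and cosine similarity stands in for $s(v)$):
+
+```python
+import numpy as np
+
+def cosine(a, b):
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+def context_score(candidate, pairs):
+    # 0.0 is the best possible score; each pair where the candidate is
+    # closer to the negative than to the positive example subtracts
+    # the difference of the two similarities.
+    return sum(
+        min(cosine(candidate, pos) - cosine(candidate, neg), 0.0)
+        for pos, neg in pairs
+    )
+
+candidate = np.array([0.2, 0.1, 0.9, 0.7])
+pairs = [
+    (np.array([0.3, 0.2, 0.8, 0.6]), np.array([0.9, 0.8, 0.1, 0.1])),
+    (np.array([0.1, 0.1, 0.9, 0.9]), np.array([0.7, 0.7, 0.2, 0.1])),
+]
+print(context_score(candidate, pairs))  # 0.0 here, since the candidate is closer to both positives
+```
+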
-```json
-{
+Using this kind of search, you can expect the output to not necessarily be around a single point, but rather, to be any point that isn’t closer to a negative example, which creates a constrained diverse result. So, even when the API is not called [`recommend`](#recommendation-api), recommendation systems can also use this approach and adapt it for their specific use-cases.
- ""title"": ""Nash Social Welfare for Indivisible Items under Separable, Piecewise-Linear Concave Utilities"",
- ""DOI"": ""1612.05191""
-}
+Example:
-```
+```http
-You can find a detailed description of the dataset in the [Practice Datasets](/documentation/datasets/#journal-article-titles)
+POST /collections/{collection_name}/points/query
-section. If you prefer loading the dataset from a Qdrant snapshot, it also linked there.
+{
+ ""query"": {
+ ""context"": [
-Loading the dataset is as simple as using the `load_dataset` function from the `datasets` library:
+ {
+ ""positive"": 100,
+ ""negative"": 718
-```python
+ },
-from datasets import load_dataset
+ {
+ ""positive"": 200,
+ ""negative"": 300
-dataset = load_dataset(""Qdrant/arxiv-titles-instructorxl-embeddings"")
+ }
-```
+ ]
+ },
+ ""limit"": 10
-
+}
+```
-The dataset contains 2,250,000 vectors. This is how you can check the list of the features in the dataset:
+```python
+from qdrant_client import QdrantClient, models
-```python
-dataset.features
-```
+client = QdrantClient(url=""http://localhost:6333"")
-### Streaming the dataset
+client.query_points(
+    collection_name=""{collection_name}"",
+    query=models.ContextQuery(
-Dataset streaming lets you work with a dataset without downloading it. The data is streamed as
+ context=[
-you iterate over the dataset. You can read more about it in the [Hugging Face
+ models.ContextPair(
-documentation](https://huggingface.co/docs/datasets/stream).
+ positive=100,
+ negative=718,
+ ),
-```python
+ models.ContextPair(
-from datasets import load_dataset
+ positive=200,
+ negative=300,
+ ),
-dataset = load_dataset(
+ ],
- ""Qdrant/arxiv-titles-instructorxl-embeddings"", split=""train"", streaming=True
+ ),
-)
+    limit=10,
+)
```
-### Loading the dataset into Qdrant
+```typescript
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-You can load the dataset into Qdrant using the [Python SDK](https://github.com/qdrant/qdrant-client).
-The embeddings are already precomputed, so you can store them in a collection, that we're going
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-to create in a second:
+client.query(""{collection_name}"", {
-```python
+ query: {
-from qdrant_client import QdrantClient, models
+ context: [
+ {
+ positive: 100,
-client = QdrantClient(""http://localhost:6333"")
+ negative: 718,
+ },
+ {
-client.create_collection(
+ positive: 200,
- collection_name=""arxiv-titles-instructorxl-embeddings"",
+ negative: 300,
- vectors_config=models.VectorParams(
+ },
- size=768,
+ ]
- distance=models.Distance.COSINE,
+ },
- ),
+ limit: 10,
-)
+});
```
-It is always a good idea to use batching, while loading a large dataset, so let's do that.
+```rust
-We are going to need a helper function to split the dataset into batches:
+use qdrant_client::qdrant::{ContextInputBuilder, QueryPointsBuilder};
+use qdrant_client::Qdrant;
-```python
-from itertools import islice
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-def batched(iterable, n):
+client
- iterator = iter(iterable)
+ .query(
- while batch := list(islice(iterator, n)):
+ QueryPointsBuilder::new(""{collection_name}"").query(
- yield batch
+ ContextInputBuilder::default()
-```
+ .add_pair(100, 718)
+ .add_pair(200, 300)
+ .build(),
-If you are a happy user of Python 3.12+, you can use the [`batched` function from the `itertools`
+ ),
-](https://docs.python.org/3/library/itertools.html#itertools.batched) package instead.
+ )
+ .await?;
+```
-No matter what Python version you are using, you can use the `upsert` method to load the dataset,
-batch by batch, into Qdrant:
+```java
+import java.util.List;
-```python
-batch_size = 100
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
-for batch in batched(dataset, batch_size):
+import io.qdrant.client.grpc.Points.ContextInput;
- ids = [point.pop(""id"") for point in batch]
+import io.qdrant.client.grpc.Points.ContextInputPair;
- vectors = [point.pop(""vector"") for point in batch]
+import io.qdrant.client.grpc.Points.QueryPoints;
- client.upsert(
+import static io.qdrant.client.VectorInputFactory.vectorInput;
- collection_name=""arxiv-titles-instructorxl-embeddings"",
+import static io.qdrant.client.QueryFactory.context;
- points=models.Batch(
- ids=ids,
- vectors=vectors,
+QdrantClient client =
- payloads=batch,
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- ),
- )
-```
+client.queryAsync(QueryPoints.newBuilder()
+ .setCollectionName(""{collection_name}"")
+ .setQuery(context(ContextInput.newBuilder()
-Your collection is ready to be used for search! Please [let us know using Discord](https://qdrant.to/discord)
+ .addAllPairs(List.of(
-if you would like to see more datasets published on Hugging Face hub.
-",documentation/tutorials/huggingface-datasets.md
-"---
+ ContextInputPair.newBuilder()
-title: Neural Search with Fastembed
+ .setPositive(vectorInput(100))
-weight: 2
+ .setNegative(vectorInput(718))
----
+ .build(),
+ ContextInputPair.newBuilder()
+ .setPositive(vectorInput(200))
-# Create a Neural Search Service with Fastembed
+ .setNegative(vectorInput(300))
+ .build()))
+ .build()))
-| Time: 20 min | Level: Beginner | Output: [GitHub](https://github.com/qdrant/qdrant_demo/) |
+ .setLimit(10)
-| --- | ----------- | ----------- |----------- |
+ .build()).get();
+```
-This tutorial shows you how to build and deploy your own neural search service to look through descriptions of companies from [startups-list.com](https://www.startups-list.com/) and pick the most similar ones to your query.
-The website contains the company names, descriptions, locations, and a picture for each entry.
+```csharp
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-Alternatively, you can use datasources such as [Crunchbase](https://www.crunchbase.com/), but that would require obtaining an API key from them.
+var client = new QdrantClient(""localhost"", 6334);
-Our neural search service will use [Fastembed](https://github.com/qdrant/fastembed) package to generate embeddings of text descriptions and [FastAPI](https://fastapi.tiangolo.com/) to serve the search API.
-Fastembed natively integrates with Qdrant client, so you can easily upload the data into Qdrant and perform search queries.
+await client.QueryAsync(
+ collectionName: ""{collection_name}"",
+ query: new ContextInput {
+ Pairs = {
-
+        new ContextInputPair {
+            Positive = 100,
+            Negative = 718
+        },
+ new ContextInputPair {
+ Positive = 200,
+ Negative = 300
+ },
-## Workflow
+ }
+ },
+ limit: 10
-To create a neural search service, you will need to transform your raw data and then create a search function to manipulate it.
+);
-First, you will 1) download and prepare a sample dataset using a modified version of the BERT ML model. Then, you will 2) load the data into Qdrant, 3) create a neural search API and 4) serve it using FastAPI.
+```
-![Neural Search Workflow](/docs/workflow-neural-search.png)
+```go
+import (
+ ""context""
-> **Note**: The code for this tutorial can be found here: [Step 2: Full Code for Neural Search](https://github.com/qdrant/qdrant_demo/).
+ ""github.com/qdrant/go-client/qdrant""
-## Prerequisites
+)
-To complete this tutorial, you will need:
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
+ Port: 6334,
-- Docker - The easiest way to use Qdrant is to run a pre-built Docker image.
+})
-- [Raw parsed data](https://storage.googleapis.com/generall-shared-data/startups_demo.json) from startups-list.com.
-- Python version >=3.8
+client.Query(context.Background(), &qdrant.QueryPoints{
+ CollectionName: ""{collection_name}"",
-## Prepare sample dataset
+ Query: qdrant.NewQueryContext(&qdrant.ContextInput{
+ Pairs: []*qdrant.ContextInputPair{
+ {
-To conduct a neural search on startup descriptions, you must first encode the description data into vectors.
+ Positive: qdrant.NewVectorInputID(qdrant.NewIDNum(100)),
-Fastembed integration into qdrant client combines encoding and uploading into a single step.
+ Negative: qdrant.NewVectorInputID(qdrant.NewIDNum(718)),
+ },
+ {
-It also takes care of batching and parallelization, so you don't have to worry about it.
+ Positive: qdrant.NewVectorInputID(qdrant.NewIDNum(200)),
+ Negative: qdrant.NewVectorInputID(qdrant.NewIDNum(300)),
+ },
-Let's start by downloading the data and installing the necessary packages.
+ },
+ }),
+})
+```
-1. First you need to download the dataset.
+
+",documentation/concepts/explore.md
+"---
-Next, you need to manage all of your data using a vector engine. Qdrant lets you store, update or delete created vectors. Most importantly, it lets you search for the nearest vectors via a convenient API.
+title: Optimizer
+weight: 70
+aliases:
-> **Note:** Before you begin, create a project directory and a virtual python environment in it.
+ - ../optimizer
+---
-1. Download the Qdrant image from DockerHub.
+# Optimizer
-```bash
-docker pull qdrant/qdrant
+It is much more efficient to apply changes in batches than perform each change individually, as many other databases do. Qdrant here is no exception. Since Qdrant operates with data structures that are not always easy to change, it is sometimes necessary to rebuild those structures completely.
-```
-2. Start Qdrant inside of Docker.
+Storage optimization in Qdrant occurs at the segment level (see [storage](../storage/)).
+In this case, the segment to be optimized remains readable for the time of the rebuild.
-```bash
-docker run -p 6333:6333 \
- -v $(pwd)/qdrant_storage:/qdrant/storage \
+![Segment optimization](/docs/optimization.svg)
- qdrant/qdrant
-```
-You should see output like this
+The availability is achieved by wrapping the segment into a proxy that transparently handles data changes.
+Changed data is placed in the copy-on-write segment, which has priority for retrieval and subsequent updates.
-```text
-...
+## Vacuum Optimizer
-[2021-02-05T00:08:51Z INFO actix_server::builder] Starting 12 workers
-[2021-02-05T00:08:51Z INFO actix_server::builder] Starting ""actix-web-service-0.0.0.0:6333"" service on 0.0.0.0:6333
-```
+The simplest example of a case where you need to rebuild a segment repository is to remove points.
+Like many other databases, Qdrant does not delete entries immediately after a query.
+Instead, it marks records as deleted and ignores them for future queries.
-Test the service by going to [http://localhost:6333/](http://localhost:6333/). You should see the Qdrant version info in your browser.
+This strategy allows us to minimize disk access - one of the slowest operations.
-All data uploaded to Qdrant is saved inside the `./qdrant_storage` directory and will be persisted even if you recreate the container.
+However, a side effect of this strategy is that, over time, deleted records accumulate, occupy memory and slow down the system.
+To avoid these adverse effects, Vacuum Optimizer is used.
+It is used if the segment has accumulated too many deleted records.
-## Upload data to Qdrant
+The criteria for starting the optimizer are defined in the configuration file.
-1. Install the official Python client to best interact with Qdrant.
+Here is an example of parameter values:
-```bash
-pip install qdrant-client[fastembed]
-```
+```yaml
+storage:
+ optimizers:
-Note, that you need to install the `fastembed` extra to enable Fastembed integration.
+ # The minimal fraction of deleted vectors in a segment, required to perform segment optimization
-At this point, you should have startup records in the `startups_demo.json` file and Qdrant running on a local machine.
+ deleted_threshold: 0.2
+ # The minimal number of vectors in a segment, required to perform segment optimization
+ vacuum_min_vector_number: 1000
-Now you need to write a script to upload all startup data and vectors into the search engine.
+```
-2. Create a client object for Qdrant.
+## Merge Optimizer
-```python
+The service may require the creation of temporary segments.
-# Import client library
+Such segments, for example, are created as copy-on-write segments during optimization itself.
-from qdrant_client import QdrantClient
+It is also essential to have at least one small segment that Qdrant will use to store frequently updated data.
-qdrant_client = QdrantClient(""http://localhost:6333"")
+On the other hand, too many small segments lead to suboptimal search performance.
-```
+There is the Merge Optimizer, which combines the smallest segments into one large segment. It is used if too many segments are created.
-3. Select model to encode your data.
+The criteria for starting the optimizer are defined in the configuration file.
-You will be using a pre-trained model called `sentence-transformers/all-MiniLM-L6-v2`.
+Here is an example of parameter values:
-```python
-qdrant_client.set_model(""sentence-transformers/all-MiniLM-L6-v2"")
-```
+```yaml
+storage:
+ optimizers:
+ # If the number of segments exceeds this value, the optimizer will merge the smallest segments.
+ max_segment_number: 5
-4. Related vectors need to be added to a collection. Create a new collection for your startup vectors.
+```
-```python
+## Indexing Optimizer
-qdrant_client.recreate_collection(
- collection_name=""startups"",
- vectors_config=qdrant_client.get_fastembed_vector_params(),
+Qdrant allows you to choose the type of indexes and data storage methods used depending on the number of records.
-)
+So, for example, if the number of points is less than 10000, using any index would be less efficient than a brute force scan.
-```
+The Indexing Optimizer is used to enable indexing and memmap storage once the minimal number of records is reached.
-Note, that we use `get_fastembed_vector_params` to get the vector size and distance function from the model.
-This method automatically generates configuration, compatible with the model you are using.
-Without fastembed integration, you would need to specify the vector size and distance function manually. Read more about it [here](/documentation/tutorials/neural-search).
+The criteria for starting the optimizer are defined in the configuration file.
-Additionally, you can specify extended configuration for our vectors, like `quantization_config` or `hnsw_config`.
+Here is an example of parameter values:
+```yaml
+storage:
-5. Read data from the file.
+ optimizers:
+ # Maximum size (in kilobytes) of vectors to store in-memory per segment.
+    # Segments larger than this threshold will be stored as a read-only memmapped file.
-```python
+ # Memmap storage is disabled by default, to enable it, set this threshold to a reasonable value.
-payload_path = os.path.join(DATA_DIR, ""startups_demo.json"")
+ # To disable memmap storage, set this to `0`.
-metadata = []
+ # Note: 1Kb = 1 vector of size 256
-documents = []
+ memmap_threshold_kb: 200000
-with open(payload_path) as fd:
+ # Maximum size (in kilobytes) of vectors allowed for plain index, exceeding this threshold will enable vector indexing
- for line in fd:
+ # Default value is 20,000, based on .
- obj = json.loads(line)
+ # To disable vector indexing, set to `0`.
- documents.append(obj.pop(""description""))
+ # Note: 1kB = 1 vector of size 256.
- metadata.append(obj)
+ indexing_threshold_kb: 20000
```
-In this block of code, we read data we read data from `startups_demo.json` file and split it into 2 lists: `documents` and `metadata`.
+In addition to the configuration file, you can also set optimizer parameters separately for each [collection](../collections/).
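+
+As a sketch of such a per-collection override (and of the bulk-load pattern described in the next paragraph), assuming the Python client's `update_collection` and `OptimizersConfigDiff` - adjust the names and values to your setup:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Disable vector indexing for this collection while bulk-loading points
+client.update_collection(
+    collection_name=""{collection_name}"",
+    optimizer_config=models.OptimizersConfigDiff(indexing_threshold=0),
+)
+
+# ... upsert the points here ...
+
+# Re-enable indexing once the upload has finished
+client.update_collection(
+    collection_name=""{collection_name}"",
+    optimizer_config=models.OptimizersConfigDiff(indexing_threshold=20000),
+)
+```
+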
-Documents are the raw text descriptions of startups. Metadata is the payload associated with each startup, such as the name, location, and picture.
-
-We will use `documents` to encode the data into vectors.
+Dynamic parameter updates may be useful, for example, for more efficient initial loading of points. You can disable indexing during the upload process with these settings and enable it immediately after it is finished. As a result, you will not waste extra computation resources on rebuilding the index.",documentation/concepts/optimizer.md
+"---
+title: Search
+weight: 50
-6. Encode and upload data.
+aliases:
+ - ../search
+---
-```python
-client.add(
- collection_name=""startups"",
+# Similarity search
- documents=documents,
- metadata=metadata,
- parallel=0, # Use all available CPU cores to encode data
+Searching for the nearest vectors is at the core of many representational learning applications.
-)
+Modern neural networks are trained to transform objects into vectors so that objects close in the real world appear close in vector space.
-```
+It could be, for example, texts with similar meanings, visually similar pictures, or songs of the same genre.
-The `add` method will encode all documents and upload them to Qdrant.
+{{< figure src=""/docs/encoders.png"" caption=""This is how vector similarity works"" width=""70%"" >}}
-This is one of two fastembed-specific methods, that combines encoding and uploading into a single step.
+## Query API
-The `parallel` parameter controls the number of CPU cores used to encode data.
+*Available as of v1.10.0*
-Additionally, you can specify ids for each document, if you want to use them later to update or delete documents.
-If you don't specify ids, they will be generated automatically and returned as a result of the `add` method.
+Qdrant provides a single interface for all kinds of search and exploration requests - the `Query API`.
+Here is a reference list of what kind of queries you can perform with the `Query API` in Qdrant:
-You can monitor the progress of the encoding by passing tqdm progress bar to the `add` method.
+Depending on the `query` parameter, Qdrant might prefer different strategies for the search.
-```python
-from tqdm import tqdm
+| | |
+| --- | --- |
-client.add(
+| Nearest Neighbors Search | Vector Similarity Search, also known as k-NN |
- collection_name=""startups"",
+| Search By Id | Search by an already stored vector - skip embedding model inference |
- documents=documents,
+| [Recommendations](../explore/#recommendation-api) | Provide positive and negative examples |
- metadata=metadata,
+| [Discovery Search](../explore/#discovery-api) | Guide the search using context as a one-shot training set |
- ids=tqdm(range(len(documents))),
+| [Scroll](../points/#scroll-points) | Get all points with optional filtering |
-)
+| [Grouping](../search/#grouping-api) | Group results by a certain field |
-```
+| [Order By](../hybrid-queries/#re-ranking-with-stored-values) | Order points by payload key |
+| [Hybrid Search](../hybrid-queries/#hybrid-search) | Combine multiple queries to get better results |
+| [Multi-Stage Search](../hybrid-queries/#multi-stage-queries) | Optimize performance for large embeddings |
-> **Note**: See the full code for this step [here](https://github.com/qdrant/qdrant_demo/blob/master/qdrant_demo/init_collection_startups.py).
+| [Random Sampling](#random-sampling) | Get random points from the collection |
+**Nearest Neighbors Search**
-## Build the search API
+```http
+POST /collections/{collection_name}/points/query
-Now that all the preparations are complete, let's start building a neural search class.
+{
+ ""query"": [0.2, 0.1, 0.9, 0.7] // <--- Dense vector
+}
-In order to process incoming requests, neural search will need 2 things: 1) a model to convert the query into a vector and 2) the Qdrant client to perform search queries.
+```
-Fastembed integration into qdrant client combines encoding and uploading into a single method call.
+```python
+client.query_points(
+ collection_name=""{collection_name}"",
-1. Create a file named `neural_searcher.py` and specify the following.
+ query=[0.2, 0.1, 0.9, 0.7], # <--- Dense vector
+)
+```
-```python
-from qdrant_client import QdrantClient
+```typescript
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-class NeuralSearcher:
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
- def __init__(self, collection_name):
- self.collection_name = collection_name
- # initialize Qdrant client
+client.query(""{collection_name}"", {
- self.qdrant_client = QdrantClient(""http://localhost:6333"")
+ query: [0.2, 0.1, 0.9, 0.7], // <--- Dense vector
- self.qdrant_client.set_model(""sentence-transformers/all-MiniLM-L6-v2"")
+});
```
-2. Write the search function.
-
+```rust
+use qdrant_client::Qdrant;
-```python
+use qdrant_client::qdrant::{Condition, Filter, Query, QueryPointsBuilder};
-def search(self, text: str):
- search_result = self.qdrant_client.query(
- collection_name=self.collection_name,
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- query_text=text,
- query_filter=None, # If you don't want any filters for now
- limit=5, # 5 the closest results are enough
+client
- )
+ .query(
- # `search_result` contains found vector ids with similarity scores along with the stored payload
+ QueryPointsBuilder::new(""{collection_name}"")
- # In this function you are interested in payload only
+ .query(Query::new_nearest(vec![0.2, 0.1, 0.9, 0.7]))
- metadata = [hit.metadata for hit in search_result]
+ )
- return metadata
+ .await?;
```
-3. Add search filters.
+```java
+import java.util.List;
-With Qdrant it is also feasible to add some conditions to the search.
-For example, if you wanted to search for startups in a certain city, the search query could look like this:
+import static io.qdrant.client.QueryFactory.nearest;
-```python
+import io.qdrant.client.QdrantClient;
-from qdrant_client.models import Filter
+import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Points.QueryPoints;
- ...
+QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- city_of_interest = ""Berlin""
+client.queryAsync(QueryPoints.newBuilder()
+ .setCollectionName(""{collectionName}"")
- # Define a filter for cities
+ .setQuery(nearest(List.of(0.2f, 0.1f, 0.9f, 0.7f)))
- city_filter = Filter(**{
+ .build()).get();
- ""must"": [{
+```
- ""key"": ""city"", # Store city information in a field of the same name
- ""match"": { # This condition checks if payload field has the requested value
- ""value"": ""city_of_interest""
+```csharp
- }
+using Qdrant.Client;
- }]
- })
+var client = new QdrantClient(""localhost"", 6334);
- search_result = self.qdrant_client.query(
- collection_name=self.collection_name,
+await client.QueryAsync(
- query_text=text,
+ collectionName: ""{collection_name}"",
- query_filter=city_filter,
+ query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }
- limit=5
+);
- )
+```
- ...
-```
+```go
+import (
-You have now created a class for neural search queries. Now wrap it up into a service.
+ ""context""
-## Deploy the search with FastAPI
+ ""github.com/qdrant/go-client/qdrant""
+)
-To build the service you will use the FastAPI framework.
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
-1. Install FastAPI.
+ Port: 6334,
+})
-To install it, use the command
+client.Query(context.Background(), &qdrant.QueryPoints{
+ CollectionName: ""{collection_name}"",
-```bash
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
-pip install fastapi uvicorn
+})
```
-2. Implement the service.
+**Search By Id**
-Create a file named `service.py` and specify the following.
+```http
+POST /collections/{collection_name}/points/query
+{
-The service will have only one API endpoint and will look like this:
+ ""query"": ""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"" // <--- point id
+}
+```
-```python
-from fastapi import FastAPI
+```python
+client.query_points(
-# The file where NeuralSearcher is stored
+ collection_name=""{collection_name}"",
-from neural_searcher import NeuralSearcher
+ query=""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"", # <--- point id
+)
+```
-app = FastAPI()
+```typescript
-# Create a neural searcher instance
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-neural_searcher = NeuralSearcher(collection_name=""startups"")
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-@app.get(""/api/search"")
+client.query(""{collection_name}"", {
-def search_startup(q: str):
+ query: '43cf51e2-8777-4f52-bc74-c2cbde0c8b04', // <--- point id
- return {""result"": neural_searcher.search(text=q)}
+});
+```
+```rust
-if __name__ == ""__main__"":
+use qdrant_client::Qdrant;
- import uvicorn
+use qdrant_client::qdrant::{Condition, Filter, PointId, Query, QueryPointsBuilder};
- uvicorn.run(app, host=""0.0.0.0"", port=8000)
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-```
+client
-3. Run the service.
+ .query(
+ QueryPointsBuilder::new(""{collection_name}"")
+ .query(Query::new_nearest(PointId::new(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"")))
-```bash
+ )
-python service.py
+ .await?;
```
-4. Open your browser at [http://localhost:8000/docs](http://localhost:8000/docs).
+```java
+import java.util.UUID;
-You should be able to see a debug interface for your service.
+import static io.qdrant.client.QueryFactory.nearest;
-![FastAPI Swagger interface](/docs/fastapi_neural_search.png)
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
-Feel free to play around with it, make queries regarding the companies in our corpus, and check out the results.
+import io.qdrant.client.grpc.Points.QueryPoints;
-## Next steps
+QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-The code from this tutorial has been used to develop a [live online demo](https://qdrant.to/semantic-search-demo).
+client.queryAsync(QueryPoints.newBuilder()
-You can try it to get an intuition for cases when the neural search is useful.
+ .setCollectionName(""{collectionName}"")
-The demo contains a switch that selects between neural and full-text searches.
+ .setQuery(nearest(UUID.fromString(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"")))
-You can turn the neural search on and off to compare your result with a regular full-text search.
+ .build()).get();
+```
-> **Note**: The code for this tutorial can be found here: [Full Code for Neural Search](https://github.com/qdrant/qdrant_demo/).
+```csharp
+using Qdrant.Client;
-Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, publish other examples of neural networks and neural search applications.
-",documentation/tutorials/neural-search-fastembed.md
-"---
-title: Asynchronous API
-weight: 14
+var client = new QdrantClient(""localhost"", 6334);
----
+await client.QueryAsync(
-# Using Qdrant asynchronously
+ collectionName: ""{collection_name}"",
+ query: Guid.Parse(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"")
+);
-Asynchronous programming is being broadly adopted in the Python ecosystem. Tools such as FastAPI [have embraced this new
+```
-paradigm](https://fastapi.tiangolo.com/async/), but it is also becoming a standard for ML models served as SaaS. For example, the Cohere SDK
-[provides an async client](https://cohere-sdk.readthedocs.io/en/latest/cohere.html#asyncclient) next to its synchronous counterpart.
+```go
+import (
-Databases are often launched as separate services and are accessed via a network. All the interactions with them are IO-bound and can
+ ""context""
-be performed asynchronously so as not to waste time actively waiting for a server response. In Python, this is achieved by
-using [`async/await`](https://docs.python.org/3/library/asyncio-task.html) syntax. That lets the interpreter switch to another task
-while waiting for a response from the server.
+ ""github.com/qdrant/go-client/qdrant""
+)
-## When to use async API
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
-There is no need to use async API if the application you are writing will never support multiple users at once (e.g it is a script that runs once per day). However, if you are writing a web service that multiple users will use simultaneously, you shouldn't be
+ Port: 6334,
-blocking the threads of the web server as it limits the number of concurrent requests it can handle. In this case, you should use
+})
-the async API.
+client.Query(context.Background(), &qdrant.QueryPoints{
-Modern web frameworks like [FastAPI](https://fastapi.tiangolo.com/) and [Quart](https://quart.palletsprojects.com/en/latest/) support
+ CollectionName: ""{collection_name}"",
-async API out of the box. Mixing asynchronous code with an existing synchronous codebase might be a challenge. The `async/await` syntax
+ Query: qdrant.NewQueryID(qdrant.NewID(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"")),
-cannot be used in synchronous functions. On the other hand, calling an IO-bound operation synchronously in async code is considered
+})
-an antipattern. Therefore, if you build an async web service, exposed through an [ASGI](https://asgi.readthedocs.io/en/latest/) server,
+```
-you should use the async API for all the interactions with Qdrant.
+## Metrics
-
+There are many ways to estimate the similarity of vectors with each other. In Qdrant terms, these ways are called metrics.
+The choice of metric depends on the vectors obtained and, in particular, on the neural network encoder training method.
-### Using Qdrant asynchronously
+Qdrant supports these most popular types of metrics:
-The simplest way of running asynchronous code is to use define `async` function and use the `asyncio.run` in the following way to run it:
+* Dot product: `Dot`
+* Cosine similarity: `Cosine`
-```python
+* Euclidean distance: `Euclid`
-from qdrant_client import models
+* Manhattan distance: `Manhattan` - *Available as of v1.7*
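+
+The metric is fixed when a collection (or a named vector inside it) is created; for example, a minimal Python sketch with a placeholder collection name:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+client.create_collection(
+    collection_name=""{collection_name}"",
+    vectors_config=models.VectorParams(
+        size=4,
+        distance=models.Distance.COSINE,  # or DOT, EUCLID, MANHATTAN
+    ),
+)
+```
+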
-import qdrant_client
+The most typical metric used in similarity learning models is the cosine metric.
-import asyncio
+![Embeddings](/docs/cos.png)
-async def main():
+Qdrant computes this metric in two steps, which achieves a higher search speed.
- client = qdrant_client.AsyncQdrantClient(""localhost"")
+The first step is to normalize the vector when adding it to the collection.
+It happens only once for each vector.
- # Create a collection
- await client.create_collection(
+The second step is the comparison of vectors.
- collection_name=""my_collection"",
+In this case, it becomes equivalent to a dot product - a very fast operation thanks to SIMD.
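+
+A tiny NumPy sketch of why this works (illustrative only, not Qdrant internals): the cosine similarity of two vectors equals the dot product of their normalized counterparts, so normalizing once at upload time leaves only a dot product per comparison at query time.
+
+```python
+import numpy as np
+
+stored = np.array([0.9, 0.1, 0.1, 0.5])
+query = np.array([0.2, 0.1, 0.9, 0.7])
+
+# Step 1: normalize the stored vector once, when it is added to the collection
+stored_norm = stored / np.linalg.norm(stored)
+
+# Step 2: at query time, compare normalized vectors with a plain dot product
+query_norm = query / np.linalg.norm(query)
+fast_score = float(np.dot(stored_norm, query_norm))
+
+# Equivalent to computing the cosine similarity directly
+direct_score = float(np.dot(stored, query) / (np.linalg.norm(stored) * np.linalg.norm(query)))
+assert abs(fast_score - direct_score) < 1e-9
+```
+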
- vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
- )
+Depending on the query configuration, Qdrant might prefer different strategies for the search.
+Read more about it in the [query planning](#query-planning) section.
- # Insert a vector
- await client.upsert(
- collection_name=""my_collection"",
+## Search API
- points=[
- models.PointStruct(
- id=""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"",
+Let's look at an example of a search query.
- payload={
- ""color"": ""red"",
- },
+REST API - API Schema definition is available [here](https://api.qdrant.tech/api-reference/search/query-points)
- vector=[0.9, 0.1, 0.1, 0.5],
- ),
- ],
+```http
- )
+POST /collections/{collection_name}/points/query
+{
+ ""query"": [0.2, 0.1, 0.9, 0.79],
- # Search for nearest neighbors
+ ""filter"": {
- points = await client.search(
+ ""must"": [
- collection_name=""my_collection"",
+ {
- query_vector=[0.9, 0.1, 0.1, 0.5],
+ ""key"": ""city"",
- limit=2,
+ ""match"": {
- )
+ ""value"": ""London""
+ }
+ }
- # Your async code using AsyncQdrantClient might be put here
+ ]
- # ...
+ },
+ ""params"": {
+ ""hnsw_ef"": 128,
+ ""exact"": false
+ },
-asyncio.run(main())
+ ""limit"": 3
+
+}
```
-The `AsyncQdrantClient` provides the same methods as the synchronous counterpart `QdrantClient`. If you already have a synchronous
+```python
-codebase, switching to async API is as simple as replacing `QdrantClient` with `AsyncQdrantClient` and adding `await` before each
+from qdrant_client import QdrantClient, models
-method call.
+client = QdrantClient(url=""http://localhost:6333"")
-
+client.query_points(
+    collection_name=""{collection_name}"",
+ query=[0.2, 0.1, 0.9, 0.7],
+ query_filter=models.Filter(
-## Supported Python libraries
+ must=[
+ models.FieldCondition(
+ key=""city"",
-Qdrant integrates with numerous Python libraries. Until recently, only [Langchain](https://python.langchain.com) provided async Python API support.
+ match=models.MatchValue(
-Qdrant is the only vector database with full coverage of async API in Langchain. Their documentation [describes how to use
+ value=""London"",
-it](https://python.langchain.com/docs/modules/data_connection/vectorstores/#asynchronous-operations).
-",documentation/tutorials/async-api.md
-"---
+ ),
-title: Create and restore from snapshot
+ )
-weight: 14
+ ]
----
+ ),
+ search_params=models.SearchParams(hnsw_ef=128, exact=False),
+ limit=3,
-# Create and restore collections from snapshot
+)
+```
-| Time: 20 min | Level: Beginner | | |
-|--------------|-----------------|--|----|
+```typescript
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-A collection is a basic unit of data storage in Qdrant. It contains vectors, their IDs, and payloads. However, keeping the search efficient requires additional data structures to be built on top of the data. Building these data structures may take a while, especially for large collections.
-That's why using snapshots is the best way to export and import Qdrant collections, as they contain all the bits and pieces required to restore the entire collection efficiently.
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-This tutorial will show you how to create a snapshot of a collection and restore it. Since working with snapshots in a distributed environment might be thought to be a bit more complex, we will use a 3-node Qdrant cluster. However, the same approach applies to a single-node setup.
+client.query(""{collection_name}"", {
+ query: [0.2, 0.1, 0.9, 0.7],
+ filter: {
-
+ must: [
+ {
+ key: ""city"",
-## Prerequisites
+ match: {
+ value: ""London"",
+ },
-Let's assume you already have a running Qdrant instance or a cluster. If not, you can follow the [installation guide](/documentation/guides/installation) to set up a local Qdrant instance or use [Qdrant Cloud](https://cloud.qdrant.io/) to create a cluster in a few clicks.
+ },
+ ],
+ },
-Once the cluster is running, let's install the required dependencies:
+ params: {
+ hnsw_ef: 128,
+ exact: false,
-```shell
+ },
-pip install qdrant-client datasets
+ limit: 3,
+
+});
```
-### Establish a connection to Qdrant
+```rust
+use qdrant_client::qdrant::{Condition, Filter, QueryPointsBuilder, SearchParamsBuilder};
+use qdrant_client::Qdrant;
-We are going to use the Python SDK and raw HTTP calls to interact with Qdrant. Since we are going to use a 3-node cluster, we need to know the URLs of all the nodes. For the simplicity, let's keep them all in constants, along with the API key, so we can refer to them later:
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+client
-```python
+ .query(
-QDRANT_MAIN_URL = ""https://my-cluster.com:6333""
+ QueryPointsBuilder::new(""{collection_name}"")
-QDRANT_NODES = (
+ .query(vec![0.2, 0.1, 0.9, 0.7])
- ""https://node-0.my-cluster.com:6333"",
+ .limit(3)
- ""https://node-1.my-cluster.com:6333"",
+ .filter(Filter::must([Condition::matches(
- ""https://node-2.my-cluster.com:6333"",
+ ""city"",
-)
+ ""London"".to_string(),
-QDRANT_API_KEY = ""my-api-key""
+ )]))
-```
+ .params(SearchParamsBuilder::default().hnsw_ef(128).exact(false)),
+ )
+ .await?;
-
+```
-We can now create a client instance:
+```java
+import java.util.List;
-```python
-from qdrant_client import QdrantClient
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+import static io.qdrant.client.QueryFactory.nearest;
-client = QdrantClient(QDRANT_MAIN_URL, api_key=QDRANT_API_KEY)
-```
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Points.Filter;
-First of all, we are going to create a collection from a precomputed dataset. If you already have a collection, you can skip this step and start by [creating a snapshot](#create-and-download-snapshots).
+import io.qdrant.client.grpc.Points.QueryPoints;
+import io.qdrant.client.grpc.Points.SearchParams;
-
- (Optional) Create collection and import data
+QdrantClient client =
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-### Load the dataset
+client.queryAsync(QueryPoints.newBuilder()
+ .setCollectionName(""{collection_name}"")
-We are going to use a dataset with precomputed embeddings, available on Hugging Face Hub. The dataset is called [Qdrant/arxiv-titles-instructorxl-embeddings](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) and was created using the [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) model. It contains 2.25M embeddings for the titles of the papers from the [arXiv](https://arxiv.org/) dataset.
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
+ .setFilter(Filter.newBuilder().addMust(matchKeyword(""city"", ""London"")).build())
+ .setParams(SearchParams.newBuilder().setExact(false).setHnswEf(128).build())
-Loading the dataset is as simple as:
+ .setLimit(3)
+ .build()).get();
+```
-```python
-from datasets import load_dataset
+```csharp
+using Qdrant.Client;
-dataset = load_dataset(
+using Qdrant.Client.Grpc;
- ""Qdrant/arxiv-titles-instructorxl-embeddings"", split=""train"", streaming=True
+using static Qdrant.Client.Grpc.Conditions;
-)
-```
+var client = new QdrantClient(""localhost"", 6334);
-We used the streaming mode, so the dataset is not loaded into memory. Instead, we can iterate through it and extract the id and vector embedding:
+await client.QueryAsync(
+ collectionName: ""{collection_name}"",
-```python
+ query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
-for payload in dataset:
+ filter: MatchKeyword(""city"", ""London""),
- id = payload.pop(""id"")
+ searchParams: new SearchParams { Exact = false, HnswEf = 128 },
- vector = payload.pop(""vector"")
+ limit: 3
- print(id, vector, payload)
+);
```
-A single payload looks like this:
+```go
+import (
+ ""context""
-```json
-{
- 'title': 'Dynamics of partially localized brane systems',
+ ""github.com/qdrant/go-client/qdrant""
- 'DOI': '1109.1415'
+)
-}
-```
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
+ Port: 6334,
+})
-### Create a collection
+client.Query(context.Background(), &qdrant.QueryPoints{
-First things first, we need to create our collection. We're not going to play with the configuration of it, but it makes sense to do it right now.
+ CollectionName: ""{collection_name}"",
-The configuration is also a part of the collection snapshot.
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
+ Filter: &qdrant.Filter{
+ Must: []*qdrant.Condition{
-```python
+ qdrant.NewMatch(""city"", ""London""),
-from qdrant_client import models
+ },
+ },
+ Params: &qdrant.SearchParams{
-client.recreate_collection(
+ Exact: qdrant.PtrOf(false),
- collection_name=""test_collection"",
+ HnswEf: qdrant.PtrOf(uint64(128)),
- vectors_config=models.VectorParams(
+ },
- size=768, # Size of the embedding vector generated by the InstructorXL model
+})
- distance=models.Distance.COSINE
+```
- ),
-)
-```
+In this example, we are looking for vectors similar to vector `[0.2, 0.1, 0.9, 0.7]`.
+Parameter `limit` (or its alias `top`) specifies the number of most similar results we would like to retrieve.
-### Upload the dataset
+Values under the key `params` specify custom parameters for the search.
+Currently, it could be:
-Calculating the embeddings is usually a bottleneck of the vector search pipelines, but we are happy to have them in place already. Since the goal of this tutorial is to show how to create a snapshot, **we are going to upload only a small part of the dataset**.
+* `hnsw_ef` - value that specifies `ef` parameter of the HNSW algorithm.
-```python
+* `exact` - option to not use the approximate search (ANN). If set to true, the search may run for a long time as it performs a full scan to retrieve exact results.
-ids, vectors, payloads = [], [], []
+* `indexed_only` - With this option you can disable the search in those segments where the vector index is not built yet. This may be useful if you want to minimize the impact on search performance while the collection is also being updated. Using this option may lead to partial results if the collection is not fully indexed yet, so consider using it only if eventual consistency is acceptable for your use case.
-for payload in dataset:
- id = payload.pop(""id"")
- vector = payload.pop(""vector"")
+Since the `filter` parameter is specified, the search is performed only among those points that satisfy the filter condition.
+See details of possible filters and their work in the [filtering](../filtering/) section.
- ids.append(id)
- vectors.append(vector)
+Example result of this API would be
- payloads.append(payload)
+```json
- # We are going to upload only 1000 vectors
+{
- if len(ids) == 1000:
+ ""result"": [
- break
+ { ""id"": 10, ""score"": 0.81 },
+ { ""id"": 14, ""score"": 0.75 },
+ { ""id"": 11, ""score"": 0.73 }
-client.upsert(
+ ],
- collection_name=""test_collection"",
+ ""status"": ""ok"",
- points=models.Batch(
+ ""time"": 0.001
- ids=ids,
+}
- vectors=vectors,
+```
- payloads=payloads,
- ),
-)
+The `result` contains a list of found point ids ordered by `score`.
-```
+Note that payload and vector data is missing in these results by default.
-Our collection is now ready to be used for search. Let's create a snapshot of it.
+See [payload and vector in the result](#payload-and-vector-in-the-result) on how
+to include it.
-
+*Available as of v0.10.0*
-If you already have a collection, you can skip the previous step and start by [creating a snapshot](#create-and-download-snapshots).
+If the collection was created with multiple vectors, the name of the vector to use for searching should be provided:
-## Create and download snapshots
+```http
+POST /collections/{collection_name}/points/query
-Qdrant exposes an HTTP endpoint to request creating a snapshot, but we can also call it with the Python SDK.
+{
-Our setup consists of 3 nodes, so we need to call the endpoint **on each of them** and create a snapshot on each node. While using Python SDK, that means creating a separate client instance for each node.
+ ""query"": [0.2, 0.1, 0.9, 0.7],
+ ""using"": ""image"",
+ ""limit"": 3
+}
+```
-
+```python
+from qdrant_client import QdrantClient
-```python
-snapshot_urls = []
+client = QdrantClient(url=""http://localhost:6333"")
-for node_url in QDRANT_NODES:
- node_client = QdrantClient(node_url, api_key=QDRANT_API_KEY)
- snapshot_info = node_client.create_snapshot(collection_name=""test_collection"")
+client.query_points(
+ collection_name=""{collection_name}"",
+ query=[0.2, 0.1, 0.9, 0.7],
- snapshot_url = f""{node_url}/collections/test_collection/snapshots/{snapshot_info.name}""
+ using=""image"",
- snapshot_urls.append(snapshot_url)
+ limit=3,
-```
+)
+```
-```http
-// for `https://node-0.my-cluster.com:6333`
+```typescript
-POST /collections/test_collection/snapshots
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-// for `https://node-1.my-cluster.com:6333`
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-POST /collections/test_collection/snapshots
+client.query(""{collection_name}"", {
-// for `https://node-2.my-cluster.com:6333`
+ query: [0.2, 0.1, 0.9, 0.7],
-POST /collections/test_collection/snapshots
+ using: ""image"",
-```
+ limit: 3,
+});
+```
-
- Response
+```rust
+use qdrant_client::qdrant::QueryPointsBuilder;
-```json
+use qdrant_client::Qdrant;
-{
- ""result"": {
- ""name"": ""test_collection-559032209313046-2024-01-03-13-20-11.snapshot"",
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- ""creation_time"": ""2024-01-03T13:20:11"",
- ""size"": 18956800
- },
+client
- ""status"": ""ok"",
+ .query(
- ""time"": 0.307644965
+ QueryPointsBuilder::new(""{collection_name}"")
-}
+ .query(vec![0.2, 0.1, 0.9, 0.7])
-```
+ .limit(3)
-
+ .using(""image""),
+ )
+ .await?;
+```
+```java
-Once we have the snapshot URLs, we can download them. Please make sure to include the API key in the request headers.
+import java.util.List;
-Downloading the snapshot **can be done only through the HTTP API**, so we are going to use the `requests` library.
+import io.qdrant.client.QdrantClient;
-```python
+import io.qdrant.client.QdrantGrpcClient;
-import requests
+import io.qdrant.client.grpc.Points.QueryPoints;
-import os
+import static io.qdrant.client.QueryFactory.nearest;
-# Create a directory to store snapshots
-os.makedirs(""snapshots"", exist_ok=True)
+QdrantClient client =
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-local_snapshot_paths = []
-for snapshot_url in snapshot_urls:
- snapshot_name = os.path.basename(snapshot_url)
+client.queryAsync(QueryPoints.newBuilder()
- local_snapshot_path = os.path.join(""snapshots"", snapshot_name)
+ .setCollectionName(""{collection_name}"")
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
+ .setUsing(""image"")
- response = requests.get(
+ .setLimit(3)
- snapshot_url, headers={""api-key"": QDRANT_API_KEY}
+ .build()).get();
- )
+```
- with open(local_snapshot_path, ""wb"") as f:
- response.raise_for_status()
- f.write(response.content)
+```csharp
+using Qdrant.Client;
- local_snapshot_paths.append(local_snapshot_path)
-```
+var client = new QdrantClient(""localhost"", 6334);
-Alternatively, you can use the `wget` command:
+await client.QueryAsync(
+ collectionName: ""{collection_name}"",
+ query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
-```bash
+ usingVector: ""image"",
-wget https://node-0.my-cluster.com:6333/collections/test_collection/snapshots/test_collection-559032209313046-2024-01-03-13-20-11.snapshot \
+ limit: 3
- --header=""api-key: ${QDRANT_API_KEY}"" \
+);
- -O node-0-shapshot.snapshot
+```
-wget https://node-1.my-cluster.com:6333/collections/test_collection/snapshots/test_collection-559032209313047-2024-01-03-13-20-12.snapshot \
+```go
- --header=""api-key: ${QDRANT_API_KEY}"" \
+import (
- -O node-1-shapshot.snapshot
+ ""context""
-wget https://node-2.my-cluster.com:6333/collections/test_collection/snapshots/test_collection-559032209313048-2024-01-03-13-20-13.snapshot \
+ ""github.com/qdrant/go-client/qdrant""
- --header=""api-key: ${QDRANT_API_KEY}"" \
+)
- -O node-2-shapshot.snapshot
-```
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
-The snapshots are now stored locally. We can use them to restore the collection to a different Qdrant instance, or treat them as a backup. We will create another collection using the same data on the same cluster.
+ Port: 6334,
+})
-## Restore from snapshot
+client.Query(context.Background(), &qdrant.QueryPoints{
+ CollectionName: ""{collection_name}"",
-Our brand-new snapshot is ready to be restored. Typically, it is used to move a collection to a different Qdrant instance, but we are going to use it to create a new collection on the same cluster.
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
-It is just going to have a different name, `test_collection_import`. We do not need to create a collection first, as it is going to be created automatically.
+ Using: qdrant.PtrOf(""image""),
+})
+```
-Restoring collection is also done separately on each node, but our Python SDK does not support it yet. We are going to use the HTTP API instead,
-and send a request to each node using `requests` library.
+The search is performed only among vectors with the same name.
-```python
-for node_url, snapshot_path in zip(QDRANT_NODES, local_snapshot_paths):
+*Available as of v1.7.0*
- snapshot_name = os.path.basename(snapshot_path)
- requests.post(
- f""{node_url}/collections/test_collection_import/snapshots/upload?priority=snapshot"",
+If the collection was created with sparse vectors, the name of the sparse vector to use for searching should be provided:
- headers={
- ""api-key"": QDRANT_API_KEY,
- },
+You can still use payload filtering and other features of the search API with sparse vectors.
- files={""snapshot"": (snapshot_name, open(snapshot_path, ""rb""))},
- )
-```
+There are however important differences between dense and sparse vector search:
-Alternatively, you can use the `curl` command:
+| Index | Sparse Query | Dense Query |
+| --- | --- | --- |
+| Scoring Metric | Default is `Dot product`, no need to specify it | `Distance` has supported metrics e.g. Dot, Cosine |
-```bash
+| Search Type | Always exact in Qdrant | HNSW is an approximate NN |
-curl -X POST 'https://node-0.my-cluster.com:6333/collections/test_collection_import/snapshots/upload?priority=snapshot' \
+| Return Behaviour | Returns only vectors with non-zero values in the same indices as the query vector | Returns `limit` vectors |
- -H 'api-key: ${QDRANT_API_KEY}' \
- -H 'Content-Type:multipart/form-data' \
- -F 'snapshot=@node-0-shapshot.snapshot'
+In general, the speed of the search is proportional to the number of non-zero values in the query vector.
-curl -X POST 'https://node-1.my-cluster.com:6333/collections/test_collection_import/snapshots/upload?priority=snapshot' \
+```http
- -H 'api-key: ${QDRANT_API_KEY}' \
+POST /collections/{collection_name}/points/query
- -H 'Content-Type:multipart/form-data' \
+{
- -F 'snapshot=@node-1-shapshot.snapshot'
+ ""query"": {
+ ""indices"": [6, 7],
+ ""values"": [1, 2]
-curl -X POST 'https://node-2.my-cluster.com:6333/collections/test_collection_import/snapshots/upload?priority=snapshot' \
+ },
- -H 'api-key: ${QDRANT_API_KEY}' \
+ ""using"": ""text"",
- -H 'Content-Type:multipart/form-data' \
+ ""limit"": 3
- -F 'snapshot=@node-2-shapshot.snapshot'
+}
```
+```python
+from qdrant_client import QdrantClient, models
-**Important:** We selected `priority=snapshot` to make sure that the snapshot is preferred over the data stored on the node. You can read mode about the priority in the [documentation](/documentation/concepts/snapshots/#snapshot-priority).
-",documentation/tutorials/create-snapshot.md
-"---
-title: Multitenancy with LlamaIndex
-weight: 18
+client = QdrantClient(url=""http://localhost:6333"")
----
+client.query_points(
-# Multitenancy with LlamaIndex
+ collection_name=""{collection_name}"",
+ query=models.SparseVector(
+ indices=[1, 7],
-If you are building a service that serves vectors for many independent users, and you want to isolate their
+ values=[2.0, 1.0],
-data, the best practice is to use a single collection with payload-based partitioning. This approach is
+ ),
-called **multitenancy**. Our guide on the [Separate Partitions](/documentation/guides/multiple-partitions/) describes
+ using=""text"",
-how to set it up in general, but if you use [LlamaIndex](/documentation/integrations/llama-index/) as a
+ limit=3,
-backend, you may prefer reading a more specific instruction. So here it is!
+)
+```
-## Prerequisites
+```typescript
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-This tutorial assumes that you have already installed Qdrant and LlamaIndex. If you haven't, please run the
-following commands:
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-```bash
-pip install qdrant-client llama-index
+client.query(""{collection_name}"", {
-```
+ query: {
+ indices: [1, 7],
+ values: [2.0, 1.0]
-We are going to use a local Docker-based instance of Qdrant. If you want to use a remote instance, please
+ },
-adjust the code accordingly. Here is how we can start a local instance:
+ using: ""text"",
+ limit: 3,
+});
-```bash
+```
-docker run -d --name qdrant -p 6333:6333 -p 6334:6334 qdrant/qdrant:latest
-```
+```rust
+use qdrant_client::qdrant::QueryPointsBuilder;
-## Setting up LlamaIndex pipeline
+use qdrant_client::Qdrant;
-We are going to implement an end-to-end example of multitenant application using LlamaIndex. We'll be
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-indexing the documentation of different Python libraries, and we definitely don't want any users to see the
-results coming from a library they are not interested in. In real case scenarios, this is even more dangerous,
-as the documents may contain sensitive information.
+client
+ .query(
+ QueryPointsBuilder::new(""{collection_name}"")
-### Creating vector store
+ .query(vec![(1, 2.0), (7, 1.0)])
+ .limit(3)
+ .using(""text""),
-[QdrantVectorStore](https://docs.llamaindex.ai/en/stable/examples/vector_stores/QdrantIndexDemo.html) is a
+ )
-wrapper around Qdrant that provides all the necessary methods to work with your vector database in LlamaIndex.
+ .await?;
-Let's create a vector store for our collection. It requires setting a collection name and passing an instance
+```
-of `QdrantClient`.
+```java
-```python
+import java.util.List;
-from qdrant_client import QdrantClient
-from llama_index.vector_stores import QdrantVectorStore
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
-client = QdrantClient(""http://localhost:6333"")
+import io.qdrant.client.grpc.Points.QueryPoints;
-vector_store = QdrantVectorStore(
+import static io.qdrant.client.QueryFactory.nearest;
- collection_name=""my_collection"",
- client=client,
-)
+QdrantClient client =
-```
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-### Defining chunking strategy and embedding model
+client.queryAsync(
+ QueryPoints.newBuilder()
+ .setCollectionName(""{collection_name}"")
-Any semantic search application requires a way to convert text queries into vectors - an embedding model.
+ .setUsing(""text"")
-`ServiceContext` is a bundle of commonly used resources used during the indexing and querying stage in any
+ .setQuery(nearest(List.of(2.0f, 1.0f), List.of(1, 7)))
-LlamaIndex application. We can also use it to set up an embedding model - in our case, a local
+ .setLimit(3)
-[BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5).
+ .build())
-set up
+ .get();
+```
-```python
-from llama_index import ServiceContext
+```csharp
+using Qdrant.Client;
-service_context = ServiceContext.from_defaults(
- embed_model=""local:BAAI/bge-small-en-v1.5"",
+var client = new QdrantClient(""localhost"", 6334);
-)
-```
+await client.QueryAsync(
+ collectionName: ""{collection_name}"",
-We can also control how our documents are split into chunks, or nodes using LLamaIndex's terminology.
+ query: new (float, uint)[] { (2.0f, 1), (1.0f, 2) },
-The `SimpleNodeParser` splits documents into fixed length chunks with an overlap. The defaults are
+ usingVector: ""text"",
-reasonable, but we can also adjust them if we want to. Both values are defined in tokens.
+ limit: 3
+);
+```
-```python
-from llama_index.node_parser import SimpleNodeParser
+```go
+import (
-node_parser = SimpleNodeParser.from_defaults(chunk_size=512, chunk_overlap=32)
+ ""context""
-```
+ ""github.com/qdrant/go-client/qdrant""
-Now we also need to inform the `ServiceContext` about our choices:
+)
-```python
+client, err := qdrant.NewClient(&qdrant.Config{
-service_context = ServiceContext.from_defaults(
+ Host: ""localhost"",
- embed_model=""local:BAAI/bge-large-en-v1.5"",
+ Port: 6334,
- node_parser=node_parser,
+})
-)
-```
+client.Query(context.Background(), &qdrant.QueryPoints{
+ CollectionName: ""{collection_name}"",
-Both embedding model and selected node parser will be implicitly used during the indexing and querying.
+ Query: qdrant.NewQuerySparse(
+ []uint32{1, 2},
+ []float32{2.0, 1.0}),
-### Combining everything together
+ Using: qdrant.PtrOf(""text""),
+})
+```
-The last missing piece, before we can start indexing, is the `VectorStoreIndex`. It is a wrapper around
-`VectorStore` that provides a convenient interface for indexing and querying. It also requires a
-`ServiceContext` to be initialized.
+### Filtering results by score
-```python
+In addition to payload filtering, it might be useful to filter out results with a low similarity score.
-from llama_index import VectorStoreIndex
+For example, if you know the minimal acceptance score for your model and do not want any results which are less similar than the threshold.
+In this case, you can use `score_threshold` parameter of the search query.
+It will exclude all results with a score worse than the given one.
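+
+For example, a short Python sketch (the threshold value of 0.5 is arbitrary and depends on your model and metric):
+
+```python
+from qdrant_client import QdrantClient
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+client.query_points(
+    collection_name=""{collection_name}"",
+    query=[0.2, 0.1, 0.9, 0.7],
+    score_threshold=0.5,  # results scoring worse than 0.5 are dropped
+    limit=3,
+)
+```
+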
-index = VectorStoreIndex.from_vector_store(
- vector_store=vector_store, service_context=service_context
-)
+
-```
+### Payload and vector in the result
-## Indexing documents
+By default, retrieval methods do not return any stored information such as
-No matter how our documents are generated, LlamaIndex will automatically split them into nodes, if
+payload and vectors. Additional parameters `with_vectors` and `with_payload`
-required, encode using selected embedding model, and then store in the vector store. Let's define
+alter this behavior.
-some documents manually and insert them into Qdrant collection. Our documents are going to have
-a single metadata attribute - a library name they belong to.
+Example:
-```python
-from llama_index.schema import Document
+```http
+POST /collections/{collection_name}/points/query
+{
-documents = [
+ """": [0.2, 0.1, 0.9, 0.7],
- Document(
+ ""with_vectors"": true,
- text=""LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models."",
+ ""with_payload"": true
- metadata={
+}
- ""library"": ""llama-index"",
+```
- },
- ),
- Document(
+```python
- text=""Qdrant is a vector database & vector similarity search engine."",
+client.query_points(
- metadata={
+ collection_name=""{collection_name}"",
- ""library"": ""qdrant"",
+ query=[0.2, 0.1, 0.9, 0.7],
- },
+ with_vectors=True,
- ),
+ with_payload=True,
-]
+)
```
-Now we can index them using our `VectorStoreIndex`:
+```typescript
+client.query(""{collection_name}"", {
+ query: [0.2, 0.1, 0.9, 0.7],
-```python
+ with_vector: true,
-for document in documents:
+ with_payload: true,
- index.insert(document)
+});
```
-### Performance considerations
+```rust
+use qdrant_client::qdrant::QueryPointsBuilder;
+use qdrant_client::Qdrant;
-Our documents have been split into nodes, encoded using the embedding model, and stored in the vector
-store. However, we don't want to allow our users to search for all the documents in the collection,
-but only for the documents that belong to a library they are interested in. For that reason, we need
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-to set up the Qdrant [payload index](/documentation/concepts/indexing/#payload-index), so the search
-is more efficient.
+client
+ .query(
-```python
+ QueryPointsBuilder::new(""{collection_name}"")
-from qdrant_client import models
+ .query(vec![0.2, 0.1, 0.9, 0.7])
+ .limit(3)
+ .with_payload(true)
-client.create_payload_index(
+ .with_vectors(true),
- collection_name=""my_collection"",
+ )
- field_name=""metadata.library"",
+ .await?;
- field_type=models.PayloadSchemaType.KEYWORD,
+```
-)
-```
+```java
+
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
-The payload index is not the only thing we want to change. Since none of the search
+import io.qdrant.client.WithVectorsSelectorFactory;
-queries will be executed on the whole collection, we can also change its configuration, so the HNSW
+import io.qdrant.client.grpc.Points.QueryPoints;
-graph is not built globally. This is also done due to [performance reasons](/documentation/guides/multiple-partitions/#calibrate-performance).
-**You should not be changing these parameters, if you know there will be some global search operations
-done on the collection.**
+import static io.qdrant.client.QueryFactory.nearest;
+import static io.qdrant.client.WithPayloadSelectorFactory.enable;
-```python
-client.update_collection(
- collection_name=""my_collection"",
- hnsw_config=models.HnswConfigDiff(payload_m=16, m=0),
+QdrantClient client =
-)
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-```
+client.queryAsync(
-Once both operations are completed, we can start searching for our documents.
+ QueryPoints.newBuilder()
+ .setCollectionName(""{collection_name}"")
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
-
+ .setWithPayload(enable(true))
+ .setWithVectors(WithVectorsSelectorFactory.enable(true))
+ .setLimit(3)
-## Querying documents with constraints
+ .build())
+ .get();
+```
-Let's assume we are searching for some information about large language models, but are only allowed to
-use Qdrant documentation. LlamaIndex has a concept of retrievers, responsible for finding the most
-relevant nodes for a given query. Our `VectorStoreIndex` can be used as a retriever, with some additional
+```csharp
-constraints - in our case value of the `library` metadata attribute.
+using Qdrant.Client;
-```python
+var client = new QdrantClient(""localhost"", 6334);
-from llama_index.vector_stores.types import MetadataFilters, ExactMatchFilter
+await client.QueryAsync(
-qdrant_retriever = index.as_retriever(
+ collectionName: ""{collection_name}"",
- filters=MetadataFilters(
+ query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
- filters=[
+ payloadSelector: true,
- ExactMatchFilter(
+ vectorsSelector: true,
- key=""library"",
+ limit: 3
- value=""qdrant"",
+);
- )
+```
- ]
- )
-)
+```go
+import (
+ ""context""
-nodes_with_scores = qdrant_retriever.retrieve(""large language models"")
-for node in nodes_with_scores:
- print(node.text, node.score)
+ ""github.com/qdrant/go-client/qdrant""
-# Output: Qdrant is a vector database & vector similarity search engine. 0.60551536
+)
-```
+client, err := qdrant.NewClient(&qdrant.Config{
-The description of Qdrant was the best match, even though it didn't mention large language models
+ Host: ""localhost"",
-at all. However, it was the only document that belonged to the `qdrant` library, so there was no
+ Port: 6334,
-other choice. Let's try to search for something that is not present in the collection.
+})
-Let's define another retrieve, this time for the `llama-index` library:
+client.Query(context.Background(), &qdrant.QueryPoints{
+ CollectionName: ""{collection_name}"",
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
-```python
+ WithPayload: qdrant.NewWithPayload(true),
-llama_index_retriever = index.as_retriever(
+ WithVectors: qdrant.NewWithVectors(true),
- filters=MetadataFilters(
+})
- filters=[
+```
- ExactMatchFilter(
- key=""library"",
- value=""llama-index"",
+You can use `with_payload` to scope to or filter a specific payload subset.
- )
+You can even specify an array of items to include, such as `city`,
- ]
+`village`, and `town`:
- )
-)
+```http
+POST /collections/{collection_name}/points/query
-nodes_with_scores = llama_index_retriever.retrieve(""large language models"")
+{
-for node in nodes_with_scores:
+ ""query"": [0.2, 0.1, 0.9, 0.7],
- print(node.text, node.score)
+ ""with_payload"": [""city"", ""village"", ""town""]
-# Output: LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models. 0.63576734
+}
```
-The results returned by both retrievers are different, due to the different constraints, so we implemented
+```python
-a real multitenant search application!",documentation/tutorials/llama-index-multitenancy.md
-"---
+from qdrant_client import QdrantClient
-title: Tutorials
-weight: 23
-# If the index.md file is empty, the link to the section will be hidden from the sidebar
+client = QdrantClient(url=""http://localhost:6333"")
-is_empty: false
-aliases:
- - how-to
+client.query_points(
- - tutorials
+ collection_name=""{collection_name}"",
----
+ query=[0.2, 0.1, 0.9, 0.7],
+ with_payload=[""city"", ""village"", ""town""],
+)
-# Tutorials
+```
-These tutorials demonstrate different ways you can build vector search into your applications.
+```typescript
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-| Tutorial | Description | Stack |
-|------------------------------------------------------------------------|-------------------------------------------------------------------|----------------------------|
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-| [Configure Optimal Use](../tutorials/optimize/) | Configure Qdrant collections for best resource use. | Qdrant |
-| [Separate Partitions](../tutorials/multiple-partitions/) | Serve vectors for many independent users. | Qdrant |
-| [Bulk Upload Vectors](../tutorials/bulk-upload/) | Upload a large scale dataset. | Qdrant |
+client.query(""{collection_name}"", {
-| [Create Dataset Snapshots](../tutorials/create-snapshot/) | Turn a dataset into a snapshot by exporting it from a collection. | Qdrant |
+ query: [0.2, 0.1, 0.9, 0.7],
-| [Semantic Search for Beginners](../tutorials/search-beginners/) | Create a simple search engine locally in minutes. | Qdrant |
+ with_payload: [""city"", ""village"", ""town""],
-| [Simple Neural Search](../tutorials/neural-search/) | Build and deploy a neural search that browses startup data. | Qdrant, BERT, FastAPI |
+});
-| [Aleph Alpha Search](../tutorials/aleph-alpha-search/) | Build a multimodal search that combines text and image data. | Qdrant, Aleph Alpha |
+```
-| [Mighty Semantic Search](../tutorials/mighty/) | Build a simple semantic search with an on-demand NLP service. | Qdrant, Mighty |
-| [Asynchronous API](../tutorials/async-api/) | Communicate with Qdrant server asynchronously with Python SDK. | Qdrant, Python |
-| [Multitenancy with LlamaIndex](../tutorials/llama-index-multitenancy/) | Handle data coming from multiple users in LlamaIndex. | Qdrant, Python, LlamaIndex |
+```rust
-| [HuggingFace datasets](../tutorials/huggingface-datasets/) | Load a Hugging Face dataset to Qdrant | Qdrant, Python, datasets |
+use qdrant_client::qdrant::{with_payload_selector::SelectorOptions, QueryPointsBuilder};
-| [Measure retrieval quality](../tutorials/retrieval-quality/) | Measure and fine-tune the retrieval quality | Qdrant, Python, datasets |
+use qdrant_client::Qdrant;
-| [Troubleshooting](../tutorials/common-errors/) | Solutions to common errors and fixes | Qdrant |
-",documentation/tutorials/_index.md
-"---
-title: Airbyte
-weight: 1000
+client
-aliases: [ ../integrations/airbyte/ ]
+ .query(
----
+ QueryPointsBuilder::new(""{collection_name}"")
+ .query(vec![0.2, 0.1, 0.9, 0.7])
+ .limit(3)
-# Airbyte
+ .with_payload(SelectorOptions::Include(
+ vec![
+ ""city"".to_string(),
-[Airbyte](https://airbyte.com/) is an open-source data integration platform that helps you replicate your data
+ ""village"".to_string(),
-between different systems. It has a [growing list of connectors](https://docs.airbyte.io/integrations) that can
+ ""town"".to_string(),
-be used to ingest data from multiple sources. Building data pipelines is also crucial for managing the data in
+ ]
-Qdrant, and Airbyte is a great tool for this purpose.
+ .into(),
+ ))
+ .with_vectors(true),
-Airbyte may take care of the data ingestion from a selected source, while Qdrant will help you to build a search
+ )
-engine on top of it. There are three supported modes of how the data can be ingested into Qdrant:
+ .await?;
+```
-* **Full Refresh Sync**
-* **Incremental - Append Sync**
+```java
-* **Incremental - Append + Deduped**
+import java.util.List;
-You can read more about these modes in the [Airbyte documentation](https://docs.airbyte.io/integrations/destinations/qdrant).
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Points.QueryPoints;
-## Prerequisites
+import static io.qdrant.client.QueryFactory.nearest;
-Before you start, make sure you have the following:
+import static io.qdrant.client.WithPayloadSelectorFactory.include;
-1. Airbyte instance, either [Open Source](https://airbyte.com/solutions/airbyte-open-source),
+QdrantClient client =
- [Self-Managed](https://airbyte.com/solutions/airbyte-enterprise), or [Cloud](https://airbyte.com/solutions/airbyte-cloud).
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-2. Running instance of Qdrant. It has to be accessible by URL from the machine where Airbyte is running.
- You can follow the [installation guide](/documentation/guides/installation/) to set up Qdrant.
+client.queryAsync(
+ QueryPoints.newBuilder()
-## Setting up Qdrant as a destination
+ .setCollectionName(""{collection_name}"")
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
+ .setWithPayload(include(List.of(""city"", ""village"", ""town"")))
-Once you have a running instance of Airbyte, you can set up Qdrant as a destination directly in the UI.
+ .setLimit(3)
-Airbyte's Qdrant destination is connected with a single collection in Qdrant.
+ .build())
+ .get();
+```
-![Airbyte Qdrant destination](/documentation/frameworks/airbyte/qdrant-destination.png)
+```csharp
-### Text processing
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-Airbyte has some built-in mechanisms to transform your texts into embeddings. You can choose how you want to
-chunk your fields into pieces before calculating the embeddings, but also which fields should be used to
+var client = new QdrantClient(""localhost"", 6334);
-create the point payload.
+await client.QueryAsync(
-![Processing settings](/documentation/frameworks/airbyte/processing.png)
+ collectionName: ""{collection_name}"",
+ query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
+ payloadSelector: new WithPayloadSelector
-### Embeddings
+ {
+ Include = new PayloadIncludeSelector
+ {
-You can choose the model that will be used to calculate the embeddings. Currently, Airbyte supports multiple
+ Fields = { new string[] { ""city"", ""village"", ""town"" } }
-models, including OpenAI and Cohere.
+ }
+ },
+ limit: 3
-![Embeddings settings](/documentation/frameworks/airbyte/embedding.png)
+);
+```
-Using some precomputed embeddings from your data source is also possible. In this case, you can pass the field
-name containing the embeddings and their dimensionality.
+```go
+import (
+ ""context""
-![Precomputed embeddings settings](/documentation/frameworks/airbyte/precomputed-embedding.png)
+ ""github.com/qdrant/go-client/qdrant""
-### Qdrant connection details
+)
-Finally, we can configure the target Qdrant instance and collection. In case you use the built-in authentication
+client, err := qdrant.NewClient(&qdrant.Config{
-mechanism, here is where you can pass the token.
+ Host: ""localhost"",
+ Port: 6334,
+})
-![Qdrant connection details](/documentation/frameworks/airbyte/qdrant-config.png)
+client.Query(context.Background(), &qdrant.QueryPoints{
-Once you confirm creating the destination, Airbyte will test if a specified Qdrant cluster is accessible and
+ CollectionName: ""{collection_name}"",
-might be used as a destination.
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
+ WithPayload: qdrant.NewWithPayloadInclude(""city"", ""village"", ""town""),
+})
-## Setting up connection
+```
-Airbyte combines sources and destinations into a single entity called a connection. Once you have a destination
+Or use `include` or `exclude` explicitly. For example, to exclude `city`:
-configured and a source, you can create a connection between them. It doesn't matter what source you use, as
-long as Airbyte supports it. The process is pretty straightforward, but depends on the source you use.
+```http
+POST /collections/{collection_name}/points/query
-![Airbyte connection](/documentation/frameworks/airbyte/connection.png)
+{
+ ""query"": [0.2, 0.1, 0.9, 0.7],
+ ""with_payload"": {
-More information about creating connections can be found in the
+ ""exclude"": [""city""]
-[Airbyte documentation](https://docs.airbyte.com/understanding-airbyte/connections/).
-",documentation/frameworks/airbyte.md
-"---
+ }
-title: Stanford DSPy
+}
-weight: 1500
+```
-aliases: [ ../integrations/dspy/ ]
----
+```python
+from qdrant_client import QdrantClient, models
-# Stanford DSPy
+client = QdrantClient(url=""http://localhost:6333"")
-[DSPy](https://github.com/stanfordnlp/dspy) is the framework for solving advanced tasks with language models (LMs) and retrieval models (RMs). It unifies techniques for prompting and fine-tuning LMs — and approaches for reasoning, self-improvement, and augmentation with retrieval and tools.
+client.query_points(
-- Provides composable and declarative modules for instructing LMs in a familiar Pythonic syntax.
+ collection_name=""{collection_name}"",
+ query=[0.2, 0.1, 0.9, 0.7],
+ with_payload=models.PayloadSelectorExclude(
-- Introduces an automatic compiler that teaches LMs how to conduct the declarative steps in your program.
+ exclude=[""city""],
+ ),
+)
-Qdrant can be used as a retrieval mechanism in the DSPy flow.
+```
-## Installation
+```typescript
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-For the Qdrant retrieval integration, include `dspy-ai` with the `qdrant` extra:
-```bash
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-pip install dspy-ai[qdrant]
-```
+client.query(""{collection_name}"", {
+ query: [0.2, 0.1, 0.9, 0.7],
-## Usage
+ with_payload: {
+ exclude: [""city""],
+ },
-We can configure `DSPy` settings to use the Qdrant retriever model like so:
+});
-```python
+```
-import dspy
-from dspy.retrieve.qdrant_rm import QdrantRM
+```rust
+use qdrant_client::qdrant::{with_payload_selector::SelectorOptions, QueryPointsBuilder};
-from qdrant_client import QdrantClient
+use qdrant_client::Qdrant;
-turbo = dspy.OpenAI(model=""gpt-3.5-turbo"")
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-qdrant_client = QdrantClient() # Defaults to a local instance at http://localhost:6333/
-qdrant_retriever_model = QdrantRM(""collection-name"", qdrant_client, k=3)
+client
+ .query(
-dspy.settings.configure(lm=turbo, rm=qdrant_retriever_model)
+ QueryPointsBuilder::new(""{collection_name}"")
-```
+ .query(vec![0.2, 0.1, 0.9, 0.7])
-Using the retriever is pretty simple. The `dspy.Retrieve(k)` module will search for the top-k passages that match a given query.
+ .limit(3)
+ .with_payload(SelectorOptions::Exclude(vec![""city"".to_string()].into()))
+ .with_vectors(true),
-```python
+ )
-retrieve = dspy.Retrieve(k=3)
+ .await?;
-question = ""Some question about my data""
+```
-topK_passages = retrieve(question).passages
+```java
-print(f""Top {retrieve.k} passages for question: {question} \n"", ""\n"")
+import java.util.List;
-for idx, passage in enumerate(topK_passages):
+import io.qdrant.client.QdrantClient;
- print(f""{idx+1}]"", passage, ""\n"")
+import io.qdrant.client.QdrantGrpcClient;
-```
+import io.qdrant.client.grpc.Points.QueryPoints;
-With Qdrant configured as the retriever for contexts, you can set up a DSPy module like so:
+import static io.qdrant.client.QueryFactory.nearest;
-```python
+import static io.qdrant.client.WithPayloadSelectorFactory.exclude;
-class RAG(dspy.Module):
- def __init__(self, num_passages=3):
- super().__init__()
+QdrantClient client =
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- self.retrieve = dspy.Retrieve(k=num_passages)
- ...
+client.queryAsync(
+ QueryPoints.newBuilder()
+ .setCollectionName(""{collection_name}"")
- def forward(self, question):
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
- context = self.retrieve(question).passages
+ .setWithPayload(exclude(List.of(""city"")))
- ...
+ .setLimit(3)
+ .build())
+ .get();
```
-With the generic RAG blueprint now in place, you can add the many interactions offered by DSPy with context retrieval powered by Qdrant.
-
+```csharp
+using Qdrant.Client;
-## Next steps
+using Qdrant.Client.Grpc;
-Find DSPy usage docs and examples [here](https://github.com/stanfordnlp/dspy#4-documentation--tutorials).
-",documentation/frameworks/dspy.md
-"---
+var client = new QdrantClient(""localhost"", 6334);
-title: Apache Spark
-weight: 1400
-aliases: [ ../integrations/spark/ ]
+await client.QueryAsync(
----
+ collectionName: ""{collection_name}"",
+ query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
+ payloadSelector: new WithPayloadSelector
-# Apache Spark
+ {
+ Exclude = new PayloadExcludeSelector { Fields = { new string[] { ""city"" } } }
+ },
-[Spark](https://spark.apache.org/) is a leading distributed computing framework that empowers you to work with massive datasets efficiently. When it comes to leveraging the power of Spark for your data processing needs, the [Qdrant-Spark Connector](https://github.com/qdrant/qdrant-spark) is to be considered. This connector enables Qdrant to serve as a storage destination in Spark, offering a seamless bridge between the two.
+ limit: 3
+);
+```
-## Installation
+```go
-You can set up the Qdrant-Spark Connector in a few different ways, depending on your preferences and requirements.
+import (
+ ""context""
-### GitHub Releases
+ ""github.com/qdrant/go-client/qdrant""
+)
-The simplest way to get started is by downloading pre-packaged JAR file releases from the [Qdrant-Spark GitHub releases page](https://github.com/qdrant/qdrant-spark/releases). These JAR files come with all the necessary dependencies to get you going.
+client, err := qdrant.NewClient(&qdrant.Config{
-### Building from Source
+ Host: ""localhost"",
+ Port: 6334,
+})
-If you prefer to build the JAR from source, you'll need [JDK 8](https://www.azul.com/downloads/#zulu) and [Maven](https://maven.apache.org/) installed on your system. Once you have the prerequisites in place, navigate to the project's root directory and run the following command:
+client.Query(context.Background(), &qdrant.QueryPoints{
-```bash
+ CollectionName: ""{collection_name}"",
-mvn package
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
-```
+ WithPayload: qdrant.NewWithPayloadExclude(""city""),
-This command will compile the source code and generate a fat JAR, which will be stored in the `target` directory by default.
+})
+```
-### Maven Central
+It is possible to target nested fields using a dot notation:
+* `payload.nested_field` - for a nested field
-For Java and Scala projects, you can also obtain the Qdrant-Spark Connector from [Maven Central](https://central.sonatype.com/artifact/io.qdrant/spark).
+* `payload.nested_array[].sub_field` - for projecting nested fields within an array
-```xml
+Accessing array elements by index is currently not supported.
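+
+As a minimal illustration in Python (the nested `""address.city""` field is a hypothetical payload layout, not part of the example data above):
+
+```python
+from qdrant_client import QdrantClient
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+client.query_points(
+    collection_name=""{collection_name}"",
+    query=[0.2, 0.1, 0.9, 0.7],
+    # Dot notation: include only the nested ""city"" field of the ""address"" object
+    with_payload=[""address.city""],
+)
+```
+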
-
- io.qdrant
- spark
+## Batch search API
- 2.0.0
-
-```
+*Available as of v0.10.0*
-## Getting Started
+The batch search API enables you to perform multiple search requests via a single request.
-After successfully installing the Qdrant-Spark Connector, you can start integrating Qdrant with your Spark applications. Below, we'll walk through the basic steps of creating a Spark session with Qdrant support and loading data into Qdrant.
+Its semantics are straightforward: `n` batched search requests are equivalent to `n` singular search requests.
-### Creating a single-node Spark session with Qdrant Support
+This approach has several advantages. Logically, fewer network connections are required, which can be very beneficial on its own.
-To begin, import the necessary libraries and create a Spark session with Qdrant support. Here's how:
+More importantly, batched requests will be efficiently processed via the query planner, which can detect and optimize requests that share the same `filter`.
-```python
+This can have a great effect on latency for non-trivial filters, as the intermediary results can be shared among the requests.
-from pyspark.sql import SparkSession
+In order to use it, simply pack together your search requests. All the regular attributes of a search request are of course available.
-spark = SparkSession.builder.config(
- ""spark.jars"",
- ""spark-2.0.jar"", # Specify the downloaded JAR file
+```http
- )
+POST /collections/{collection_name}/points/query/batch
- .master(""local[*]"")
+{
- .appName(""qdrant"")
+ ""searches"": [
- .getOrCreate()
+ {
-```
+ ""query"": [0.2, 0.1, 0.9, 0.7],
+ ""filter"": {
+ ""must"": [
-```scala
+ {
-import org.apache.spark.sql.SparkSession
+ ""key"": ""city"",
+ ""match"": {
+ ""value"": ""London""
-val spark = SparkSession.builder
+ }
- .config(""spark.jars"", ""spark-2.0.jar"") // Specify the downloaded JAR file
+ }
- .master(""local[*]"")
+ ]
- .appName(""qdrant"")
+ },
- .getOrCreate()
+ ""limit"": 3
-```
+ },
+ {
+ ""query"": [0.5, 0.3, 0.2, 0.3],
-```java
+ ""filter"": {
-import org.apache.spark.sql.SparkSession;
+ ""must"": [
+ {
+ ""key"": ""city"",
-public class QdrantSparkJavaExample {
+ ""match"": {
- public static void main(String[] args) {
+ ""value"": ""London""
- SparkSession spark = SparkSession.builder()
+ }
- .config(""spark.jars"", ""spark-2.0.jar"") // Specify the downloaded JAR file
+ }
- .master(""local[*]"")
+ ]
- .appName(""qdrant"")
+ },
- .getOrCreate();
+ ""limit"": 3
- ...
+ }
- }
+ ]
}
@@ -42619,802 +42069,865 @@ public class QdrantSparkJavaExample {
-### Loading Data into Qdrant
+```python
+from qdrant_client import QdrantClient, models
-
+client = QdrantClient(url=""http://localhost:6333"")
-Here's how you can use the Qdrant-Spark Connector to upsert data:
+filter_ = models.Filter(
+ must=[
-```python
+ models.FieldCondition(
-
+ key=""city"",
- .write
+ match=models.MatchValue(
- .format(""io.qdrant.spark.Qdrant"")
+ value=""London"",
- .option(""qdrant_url"", ) # REST URL of the Qdrant instance
+ ),
- .option(""collection_name"", ) # Name of the collection to write data into
+ )
- .option(""embedding_field"", ) # Name of the field holding the embeddings
+ ]
- .option(""schema"", .schema.json()) # JSON string of the dataframe schema
+)
- .mode(""append"")
- .save()
-```
+search_queries = [
+ models.QueryRequest(query=[0.2, 0.1, 0.9, 0.7], filter=filter_, limit=3),
+ models.QueryRequest(query=[0.5, 0.3, 0.2, 0.3], filter=filter_, limit=3),
-```scala
+]
-
- .write
- .format(""io.qdrant.spark.Qdrant"")
+client.query_batch_points(collection_name=""{collection_name}"", requests=search_queries)
- .option(""qdrant_url"", QDRANT_GRPC_URL) // REST URL of the Qdrant instance
+```
- .option(""collection_name"", QDRANT_COLLECTION_NAME) // Name of the collection to write data into
- .option(""embedding_field"", EMBEDDING_FIELD_NAME) // Name of the field holding the embeddings
- .option(""schema"", .schema.json()) // JSON string of the dataframe schema
+```typescript
- .mode(""append"")
+import { QdrantClient } from ""@qdrant/js-client-rest"";
- .save()
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-```
+const filter = {
-```java
+ must: [
-
+ {
- .write()
+ key: ""city"",
- .format(""io.qdrant.spark.Qdrant"")
+ match: {
- .option(""qdrant_url"", QDRANT_GRPC_URL) // REST URL of the Qdrant instance
+ value: ""London"",
- .option(""collection_name"", QDRANT_COLLECTION_NAME) // Name of the collection to write data into
+ },
- .option(""embedding_field"", EMBEDDING_FIELD_NAME) // Name of the field holding the embeddings
+ },
- .option(""schema"", .schema().json()) // JSON string of the dataframe schema
+ ],
- .mode(""append"")
+};
- .save();
-```
+const searches = [
+ {
-## Databricks
+ query: [0.2, 0.1, 0.9, 0.7],
-You can use the `qdrant-spark` connector as a library in [Databricks](https://www.databricks.com/) to ingest data into Qdrant.
+ filter,
-- Go to the `Libraries` section in your cluster dashboard.
+ limit: 3,
-- Select `Install New` to open the library installation modal.
+ },
-- Search for `io.qdrant:spark:2.0.0` in the Maven packages and click `Install`.
+ {
+ query: [0.5, 0.3, 0.2, 0.3],
+ filter,
-![Databricks](/documentation/frameworks/spark/databricks.png)
+ limit: 3,
+ },
+];
-## Datatype Support
+client.queryBatch(""{collection_name}"", {
-Qdrant supports all the Spark data types, and the appropriate data types are mapped based on the provided schema.
+ searches,
+});
+```
-## Options and Spark Types
+```rust
-The Qdrant-Spark Connector provides a range of options to fine-tune your data integration process. Here's a quick reference:
+use qdrant_client::qdrant::{Condition, Filter, QueryBatchPointsBuilder, QueryPointsBuilder};
+use qdrant_client::Qdrant;
-| Option | Description | DataType | Required |
-| :---------------- | :------------------------------------------------------------------------ | :--------------------- | :------- |
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-| `qdrant_url` | GRPC URL of the Qdrant instance. Eg: | `StringType` | ✅ |
-| `collection_name` | Name of the collection to write data into | `StringType` | ✅ |
-| `embedding_field` | Name of the field holding the embeddings | `ArrayType(FloatType)` | ✅ |
+let filter = Filter::must([Condition::matches(""city"", ""London"".to_string())]);
-| `schema` | JSON string of the dataframe schema | `StringType` | ✅ |
-| `id_field` | Name of the field holding the point IDs. Default: Generates a random UUId | `StringType` | ❌ |
-| `batch_size` | Max size of the upload batch. Default: 100 | `IntType` | ❌ |
+let searches = vec![
-| `retries` | Number of upload retries. Default: 3 | `IntType` | ❌ |
+ QueryPointsBuilder::new(""{collection_name}"")
-| `api_key` | Qdrant API key to be sent in the header. Default: null | `StringType` | ❌ |
+ .query(vec![0.1, 0.2, 0.3, 0.4])
-| `vector_name` | Name of the vector in the collection. Default: null | `StringType` | ❌
+ .limit(3)
+ .filter(filter.clone())
+ .build(),
+ QueryPointsBuilder::new(""{collection_name}"")
+ .query(vec![0.5, 0.3, 0.2, 0.3])
-For more information, be sure to check out the [Qdrant-Spark GitHub repository](https://github.com/qdrant/qdrant-spark). The Apache Spark guide is available [here](https://spark.apache.org/docs/latest/quick-start.html). Happy data processing!
-",documentation/frameworks/spark.md
-"---
+ .limit(3)
-title: Make.com
+ .filter(filter)
-weight: 1800
+ .build(),
----
+];
-# Make.com
+client
+ .query_batch(QueryBatchPointsBuilder::new(""{collection_name}"", searches))
+ .await?;
-[Make](https://www.make.com/) is a platform for anyone to design, build, and automate anything—from tasks and workflows to apps and systems without code.
+```
-Find the comprehensive list of available Make apps [here](https://www.make.com/en/integrations).
+```java
+import java.util.List;
-Qdrant is available as an [app](https://www.make.com/en/integrations/qdrant) within Make to add to your scenarios.
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
-![Qdrant Make hero](/documentation/frameworks/make/hero-page.png)
+import io.qdrant.client.grpc.Points.Filter;
+import io.qdrant.client.grpc.Points.QueryPoints;
-## Prerequisites
+import static io.qdrant.client.QueryFactory.nearest;
+import static io.qdrant.client.ConditionFactory.matchKeyword;
-Before you start, make sure you have the following:
+QdrantClient client =
-1. A Qdrant instance to connect to. You can get free cloud instance [cloud.qdrant.io](https://cloud.qdrant.io/).
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-2. An account at Make.com. You can register yourself [here](https://www.make.com/en/register).
+Filter filter = Filter.newBuilder().addMust(matchKeyword(""city"", ""London"")).build();
-## Setting up a connection
+List<QueryPoints> searches = List.of(
-Navigate to your scenario on the Make dashboard and select a Qdrant app module to start a connection.
+ QueryPoints.newBuilder()
-![Qdrant Make connection](/documentation/frameworks/make/connection.png)
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
+ .setFilter(filter)
+ .setLimit(3)
-You can now establish a connection to Qdrant using your [instance credentials](https://qdrant.tech/documentation/cloud/authentication/).
+ .build(),
+ QueryPoints.newBuilder()
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
-![Qdrant Make form](/documentation/frameworks/make/connection-form.png)
+ .setFilter(filter)
+ .setLimit(3)
+ .build());
-## Modules
-
- Modules represent actions that Make performs with an app.
+client.queryBatchAsync(""{collection_name}"", searches).get();
+```
-The Qdrant Make app enables you to trigger the following app modules.
-![Qdrant Make modules](/documentation/frameworks/make/modules.png)
+```csharp
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-The modules support mapping to connect the data retrieved by one module to another module to perform the desired action. You can read more about the data processing options available for the modules in the [Make reference](https://www.make.com/en/help/modules).
+using static Qdrant.Client.Grpc.Conditions;
-## Next steps
+var client = new QdrantClient(""localhost"", 6334);
-- Find a list of Make workflow templates to connect with Qdrant [here](https://www.make.com/en/templates).
+var filter = MatchKeyword(""city"", ""London"");
-- Make scenario reference docs can be found [here](https://www.make.com/en/help/scenarios).",documentation/frameworks/make.md
-"---
+var queries = new List<QueryPoints>
-title: FiftyOne
+{
-weight: 600
+ new()
-aliases: [ ../integrations/fifty-one ]
+ {
----
+ CollectionName = ""{collection_name}"",
+ Query = new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
+ Filter = filter,
-# FiftyOne
+ Limit = 3
+ },
+ new()
-[FiftyOne](https://voxel51.com/) is an open-source toolkit designed to enhance computer vision workflows by optimizing dataset quality
+ {
-and providing valuable insights about your models. FiftyOne 0.20, which includes a native integration with Qdrant, supporting workflows
+ CollectionName = ""{collection_name}"",
-like [image similarity search](https://docs.voxel51.com/user_guide/brain.html#image-similarity) and
+ Query = new float[] { 0.5f, 0.3f, 0.2f, 0.3f },
-[text search](https://docs.voxel51.com/user_guide/brain.html#text-similarity).
+ Filter = filter,
+
+ Limit = 3
+ }
+};
-Qdrant helps FiftyOne to find the most similar images in the dataset using vector embeddings.
+await client.QueryBatchAsync(collectionName: ""{collection_name}"", queries: queries);
-FiftyOne is available as a Python package that might be installed in the following way:
+```
-```bash
+```go
-pip install fiftyone
+import (
-```
+ ""context""
-Please check out the documentation of FiftyOne on [Qdrant integration](https://docs.voxel51.com/integrations/qdrant.html).
+ ""github.com/qdrant/go-client/qdrant""
+)
-",documentation/frameworks/fifty-one.md
-"---
-title: Langchain Go
-weight: 120
+client, err := qdrant.NewClient(&qdrant.Config{
----
+ Host: ""localhost"",
+ Port: 6334,
+})
-# Langchain Go
+filter := qdrant.Filter{
-[Langchain Go](https://tmc.github.io/langchaingo/docs/) is a framework for developing data-aware applications powered by language models in Go.
+ Must: []*qdrant.Condition{
+ qdrant.NewMatch(""city"", ""London""),
+ },
-You can use Qdrant as a vector store in Langchain Go.
+}
-## Setup
+client.QueryBatch(context.Background(), &qdrant.QueryBatchPoints{
+ CollectionName: ""{collection_name}"",
+ QueryPoints: []*qdrant.QueryPoints{
-Install the `langchain-go` project dependency
+ {
+ CollectionName: ""{collection_name}"",
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
-```bash
+ Filter: &filter,
-go get -u github.com/tmc/langchaingo
+ },
-```
+ {
+ CollectionName: ""{collection_name}"",
+ Query: qdrant.NewQuery(0.5, 0.3, 0.2, 0.3),
-## Usage
+ Filter: &filter,
+ },
+ },
-Before you use the following code sample, customize the following values for your configuration:
+})
+```
-- `YOUR_QDRANT_REST_URL`: If you've set up Qdrant using the [Quick Start](/documentation/quick-start/) guide,
- set this value to `http://localhost:6333`.
+The result of this API contains one array per search request.
-- `YOUR_COLLECTION_NAME`: Use our [Collections](/documentation/concepts/collections) guide to create or
- list collections.
+```json
+{
-```go
+ ""result"": [
-import (
+ [
- ""fmt""
+ { ""id"": 10, ""score"": 0.81 },
- ""log""
+ { ""id"": 14, ""score"": 0.75 },
+ { ""id"": 11, ""score"": 0.73 }
+ ],
- ""github.com/tmc/langchaingo/embeddings""
+ [
- ""github.com/tmc/langchaingo/llms/openai""
+ { ""id"": 1, ""score"": 0.92 },
- ""github.com/tmc/langchaingo/vectorstores""
+ { ""id"": 3, ""score"": 0.89 },
- ""github.com/tmc/langchaingo/vectorstores/qdrant""
+ { ""id"": 9, ""score"": 0.75 }
-)
+ ]
+ ],
+ ""status"": ""ok"",
- llm, err := openai.New()
+ ""time"": 0.001
- if err != nil {
+}
- log.Fatal(err)
+```
- }
+## Pagination
- e, err := embeddings.NewEmbedder(llm)
- if err != nil {
- log.Fatal(err)
+*Available as of v0.8.3*
- }
+Search and [recommendation](../explore/#recommendation-api) APIs allow you to skip the first results of the search and return only the results starting from a specified offset:
- url, err := url.Parse(""YOUR_QDRANT_REST_URL"")
- if err != nil {
- log.Fatal(err)
+Example:
- }
+```http
- store, err := qdrant.New(
+POST /collections/{collection_name}/points/query
- qdrant.WithURL(*url),
+{
- qdrant.WithCollectionName(""YOUR_COLLECTION_NAME""),
+ ""query"": [0.2, 0.1, 0.9, 0.7],
- qdrant.WithEmbedder(e),
+ ""with_vectors"": true,
- )
+ ""with_payload"": true,
- if err != nil {
+ ""limit"": 10,
- log.Fatal(err)
+ ""offset"": 100
- }
+}
```
-## Further Reading
+```python
+from qdrant_client import QdrantClient
-- You can find usage examples of Langchain Go [here](https://github.com/tmc/langchaingo/tree/main/examples).
-",documentation/frameworks/langchain-go.md
-"---
-title: Langchain4J
+client = QdrantClient(url=""http://localhost:6333"")
-weight: 110
----
+client.query_points(
+ collection_name=""{collection_name}"",
-# LangChain for Java
+ query=[0.2, 0.1, 0.9, 0.7],
+ with_vectors=True,
+ with_payload=True,
-LangChain for Java, also known as [Langchain4J](https://github.com/langchain4j/langchain4j), is a community port of [Langchain](https://www.langchain.com/) for building context-aware AI applications in Java
+ limit=10,
+ offset=100,
+)
-You can use Qdrant as a vector store in Langchain4J through the [`langchain4j-qdrant`](https://central.sonatype.com/artifact/dev.langchain4j/langchain4j-qdrant) module.
+```
-## Setup
+```typescript
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-Add the `langchain4j-qdrant` to your project dependencies.
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-```xml
-
+client.query(""{collection_name}"", {
- dev.langchain4j
+ query: [0.2, 0.1, 0.9, 0.7],
- langchain4j-qdrant
+ with_vector: true,
- VERSION
+ with_payload: true,
-
+ limit: 10,
-```
+ offset: 100,
+});
+```
-## Usage
+```rust
-Before you use the following code sample, customize the following values for your configuration:
+use qdrant_client::qdrant::QueryPointsBuilder;
+use qdrant_client::Qdrant;
-- `YOUR_COLLECTION_NAME`: Use our [Collections](/documentation/concepts/collections) guide to create or
- list collections.
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-- `YOUR_HOST_URL`: Use the GRPC URL for your system. If you used the [Quick Start](/documentation/quick-start/) guide,
- it may be http://localhost:6334. If you've deployed in the [Qdrant Cloud](/documentation/cloud/), you may have a
- longer URL such as `https://example.location.cloud.qdrant.io:6334`.
+client
-- `YOUR_API_KEY`: Substitute the API key associated with your configuration.
+ .query(
-```java
+ QueryPointsBuilder::new(""{collection_name}"")
-import dev.langchain4j.store.embedding.EmbeddingStore;
+ .query(vec![0.2, 0.1, 0.9, 0.7])
-import dev.langchain4j.store.embedding.qdrant.QdrantEmbeddingStore;
+ .with_payload(true)
+ .with_vectors(true)
+ .limit(10)
-EmbeddingStore embeddingStore =
+ .offset(100),
- QdrantEmbeddingStore.builder()
+ )
- // Ensure the collection is configured with the appropriate dimensions
+ .await?;
- // of the embedding model.
+```
- // Reference https://qdrant.tech/documentation/concepts/collections/
- .collectionName(""YOUR_COLLECTION_NAME"")
- .host(""YOUR_HOST_URL"")
+```java
- // GRPC port of the Qdrant server
+import java.util.List;
- .port(6334)
- .apiKey(""YOUR_API_KEY"")
- .build();
+import io.qdrant.client.QdrantClient;
-```
+import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.WithVectorsSelectorFactory;
+import io.qdrant.client.grpc.Points.QueryPoints;
-`QdrantEmbeddingStore` supports all the semantic features of Langchain4J.
+import static io.qdrant.client.QueryFactory.nearest;
-## Further Reading
+import static io.qdrant.client.WithPayloadSelectorFactory.enable;
-- You can refer to the [Langchain4J examples](https://github.com/langchain4j/langchain4j-examples/) to get started.
-",documentation/frameworks/langchain4j.md
-"---
+QdrantClient client =
-title: OpenLLMetry
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-weight: 2300
----
+client.queryAsync(
+ QueryPoints.newBuilder()
-# OpenLLMetry
+ .setCollectionName(""{collection_name}"")
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
+ .setWithPayload(enable(true))
-OpenLLMetry from [Traceloop](https://www.traceloop.com/) is a set of extensions built on top of [OpenTelemetry](https://opentelemetry.io/) that gives you complete observability over your LLM application.
+ .setWithVectors(WithVectorsSelectorFactory.enable(true))
+ .setLimit(10)
+ .setOffset(100)
-OpenLLMetry supports instrumenting the `qdrant_client` Python library and exporting the traces to various observability platforms, as described in their [Integrations catalog](https://www.traceloop.com/docs/openllmetry/integrations/introduction#the-integrations-catalog).
+ .build())
+ .get();
+```
-This page assumes you're using `qdrant-client` version 1.7.3 or above.
-## Usage
+```csharp
+using Qdrant.Client;
-To set up OpenLLMetry, follow these steps:
+var client = new QdrantClient(""localhost"", 6334);
-1. Install the SDK:
+await client.QueryAsync(
-```console
+ collectionName: ""{collection_name}"",
-pip install traceloop-sdk
+ query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
-```
+ payloadSelector: true,
+ vectorsSelector: true,
+ limit: 10,
-1. Instantiate the SDK:
+ offset: 100
+);
+```
-```python
-from traceloop.sdk import Traceloop
+```go
+import (
-Traceloop.init()
+ ""context""
-```
+ ""github.com/qdrant/go-client/qdrant""
-You're now tracing your `qdrant_client` usage with OpenLLMetry!
+)
-## Without the SDK
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
+ Port: 6334,
-Since Traceloop provides standard OpenTelemetry instrumentations, you can use them as standalone packages. To do so, follow these steps:
+})
-1. Install the package:
+client.Query(context.Background(), &qdrant.QueryPoints{
+ CollectionName: ""{collection_name}"",
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
-```console
+ WithPayload: qdrant.NewWithPayload(true),
-pip install opentelemetry-instrumentation-qdrant
+ WithVectors: qdrant.NewWithVectors(true),
-```
+ Offset: qdrant.PtrOf(uint64(100)),
+})
+```
-1. Instantiate the `QdrantInstrumentor`.
+This is equivalent to retrieving the 11th page with 10 records per page.
-```python
-from opentelemetry.instrumentation.qdrant import QdrantInstrumentor
+
-QdrantInstrumentor().instrument()
-```
+Vector-based retrieval in general, and the HNSW index in particular, are not designed to be paginated.
+It is impossible to retrieve the Nth closest vector without retrieving the first N vectors first.
-## Further Reading
+However, using the offset parameter saves resources by reducing network traffic and the number of times the storage is accessed.
-- 📚 OpenLLMetry [API reference](https://www.traceloop.com/docs/api-reference/introduction)
-",documentation/frameworks/openllmetry.md
-"---
-title: LangChain
+Using an `offset` parameter requires internally retrieving `offset + limit` points, but payload and vector data are only read from storage for those points which are actually going to be returned.
-weight: 100
-aliases: [ ../integrations/langchain/ ]
----
+## Grouping API
-# LangChain
+*Available as of v1.2.0*
-LangChain is a library that makes developing Large Language Models based applications much easier. It unifies the interfaces
+It is possible to group results by a certain field. This is useful when you have multiple points for the same item, and you want to avoid redundancy of the same item in the results.
-to different libraries, including major embedding providers and Qdrant. Using LangChain, you can focus on the business value
-instead of writing the boilerplate.
+For example, if you have a large document split into multiple chunks, and you want to search or [recommend](../explore/#recommendation-api) on a per-document basis, you can group the results by the document ID.
-Langchain comes with the Qdrant integration by default. It might be installed with pip:
+Consider having points with the following payloads:
-```bash
-pip install langchain
+```json
-```
+[
+ {
+ ""id"": 0,
-Qdrant acts as a vector index that may store the embeddings with the documents used to generate them. There are various ways
+ ""payload"": {
-how to use it, but calling `Qdrant.from_texts` is probably the most straightforward way how to get started:
+ ""chunk_part"": 0,
+ ""document_id"": ""a""
+ },
-```python
+ ""vector"": [0.91]
-from langchain.vectorstores import Qdrant
+ },
-from langchain.embeddings import HuggingFaceEmbeddings
+ {
+ ""id"": 1,
+ ""payload"": {
-embeddings = HuggingFaceEmbeddings(
+ ""chunk_part"": 1,
- model_name=""sentence-transformers/all-mpnet-base-v2""
+ ""document_id"": [""a"", ""b""]
-)
+ },
-doc_store = Qdrant.from_texts(
+ ""vector"": [0.8]
- texts, embeddings, url="""", api_key="""", collection_name=""texts""
+ },
-)
+ {
-```
+ ""id"": 2,
+ ""payload"": {
+ ""chunk_part"": 2,
-Calling `Qdrant.from_documents` or `Qdrant.from_texts` will always recreate the collection and remove all the existing points.
+ ""document_id"": ""a""
-That's fine for some experiments, but you'll prefer not to start from scratch every single time in a real-world scenario.
+ },
-If you prefer reusing an existing collection, you can create an instance of Qdrant on your own:
+ ""vector"": [0.2]
+ },
+ {
-```python
+ ""id"": 3,
-import qdrant_client
+ ""payload"": {
+ ""chunk_part"": 0,
+ ""document_id"": 123
-embeddings = HuggingFaceEmbeddings(
+ },
- model_name=""sentence-transformers/all-mpnet-base-v2""
+ ""vector"": [0.79]
-)
+ },
+ {
+ ""id"": 4,
-client = qdrant_client.QdrantClient(
+ ""payload"": {
- """",
+ ""chunk_part"": 1,
- api_key="""", # For Qdrant Cloud, None for local instance
+ ""document_id"": 123
-)
+ },
+ ""vector"": [0.75]
+ },
-doc_store = Qdrant(
+ {
- client=client, collection_name=""texts"",
+ ""id"": 5,
- embeddings=embeddings,
+ ""payload"": {
-)
+ ""chunk_part"": 0,
-```
+ ""document_id"": -10
+ },
+ ""vector"": [0.6]
-## Local mode
+ }
+]
+```
-Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things
-out and debugging or if you plan to store just a small amount of vectors. The embeddings might be fully kept in memory or
-persisted on disk.
+With the ***groups*** API, you will be able to get the best *N* points for each document, assuming that the payload of the points contains the document ID. Of course, there will be times when the best *N* points cannot be fulfilled due to a lack of points or a large distance with respect to the query. In every case, `group_size` is a best-effort parameter, akin to the `limit` parameter.
-### In-memory
+### Search groups
-For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the
+REST API ([Schema](https://api.qdrant.tech/api-reference/search/query-points-groups)):
-client is destroyed - usually at the end of your script/notebook.
+```http
-```python
+POST /collections/{collection_name}/points/query/groups
-qdrant = Qdrant.from_documents(
+{
- docs, embeddings,
+ // Same as in the regular query API
- location="":memory:"", # Local mode with in-memory storage only
+ ""query"": [1.1],
- collection_name=""my_documents"",
+ // Grouping parameters
-)
+ ""group_by"": ""document_id"", // Path of the field to group by
-```
+ ""limit"": 4, // Max amount of groups
+ ""group_size"": 2 // Max amount of points per group
+}
-### On-disk storage
+```
-Local mode, without using the Qdrant server, may also store your vectors on disk so they’re persisted between runs.
+```python
+client.query_points_groups(
+ collection_name=""{collection_name}"",
-```python
+ # Same as in the regular query_points() API
-qdrant = Qdrant.from_documents(
+ query=[1.1],
- docs, embeddings,
+ # Grouping parameters
- path=""/tmp/local_qdrant"",
+ group_by=""document_id"", # Path of the field to group by
- collection_name=""my_documents"",
+ limit=4, # Max amount of groups
+
+ group_size=2, # Max amount of points per group
)
@@ -43422,9859 +42935,70186 @@ qdrant = Qdrant.from_documents(
-### On-premise server deployment
-
-
+```typescript
-No matter if you choose to launch Qdrant locally with [a Docker container](/documentation/guides/installation/), or
+client.queryGroups(""{collection_name}"", {
-select a Kubernetes deployment with [the official Helm chart](https://github.com/qdrant/qdrant-helm), the way you're
+ query: [1.1],
-going to connect to such an instance will be identical. You'll need to provide a URL pointing to the service.
+ group_by: ""document_id"",
+ limit: 4,
+ group_size: 2,
-```python
+});
-url = ""<---qdrant url here --->""
+```
-qdrant = Qdrant.from_documents(
- docs,
- embeddings,
+```rust
- url,
+use qdrant_client::qdrant::QueryPointGroupsBuilder;
- prefer_grpc=True,
- collection_name=""my_documents"",
-)
+client
-```
+ .query_groups(
+ QueryPointGroupsBuilder::new(""{collection_name}"", ""document_id"")
+ .query(vec![0.2, 0.1, 0.9, 0.7])
-## Next steps
+ .group_size(2u64)
+ .with_payload(true)
+ .with_vectors(true)
-If you'd like to know more about running Qdrant in a LangChain-based application, please read our article
+ .limit(4u64),
-[Question Answering with LangChain and Qdrant without boilerplate](/articles/langchain-integration/). Some more information
+ )
-might also be found in the [LangChain documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant).
-",documentation/frameworks/langchain.md
-"---
+ .await?;
-title: LlamaIndex
+```
-weight: 200
-aliases: [ ../integrations/llama-index/ ]
----
+```java
+import java.util.List;
-# LlamaIndex (GPT Index)
+import io.qdrant.client.grpc.Points.QueryPointGroups;
-LlamaIndex (formerly GPT Index) acts as an interface between your external data and Large Language Models. So you can bring your
-private data and augment LLMs with it. LlamaIndex simplifies data ingestion and indexing, integrating Qdrant as a vector index.
+client.queryGroupsAsync(
+ QueryPointGroups.newBuilder()
+ .setCollectionName(""{collection_name}"")
-Installing LlamaIndex is straightforward if we use pip as a package manager. Qdrant is not installed by default, so we need to
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
-install it separately:
+ .setGroupBy(""document_id"")
+ .setLimit(4)
+ .setGroupSize(2)
-```bash
+ .build())
-pip install llama-index qdrant-client
+ .get();
```
-LlamaIndex requires providing an instance of `QdrantClient`, so it can interact with Qdrant server.
+```csharp
+using Qdrant.Client;
-```python
-from llama_index.vector_stores.qdrant import QdrantVectorStore
+var client = new QdrantClient(""localhost"", 6334);
-import qdrant_client
+await client.QueryGroupsAsync(
+ collectionName: ""{collection_name}"",
+ query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f },
-client = qdrant_client.QdrantClient(
+ groupBy: ""document_id"",
- """",
+ limit: 4,
- api_key="""", # For Qdrant Cloud, None for local instance
+ groupSize: 2
-)
+);
+```
-vector_store = QdrantVectorStore(client=client, collection_name=""documents"")
-index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
+```go
+import (
+ ""context""
-```
+ ""github.com/qdrant/go-client/qdrant""
-The library [comes with a notebook](https://github.com/jerryjliu/llama_index/blob/main/docs/examples/vector_stores/QdrantIndexDemo.ipynb)
+)
-that shows an end-to-end example of how to use Qdrant within LlamaIndex.
-",documentation/frameworks/llama-index.md
-"---
-title: DLT
-weight: 1300
+client, err := qdrant.NewClient(&qdrant.Config{
-aliases: [ ../integrations/dlt/ ]
+ Host: ""localhost"",
----
+ Port: 6334,
+})
-# DLT(Data Load Tool)
+client.QueryGroups(context.Background(), &qdrant.QueryPointGroups{
+ CollectionName: ""{collection_name}"",
-[DLT](https://dlthub.com/) is an open-source library that you can add to your Python scripts to load data from various and often messy data sources into well-structured, live datasets.
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
+ GroupBy: ""document_id"",
+ GroupSize: qdrant.PtrOf(uint64(2)),
-With the DLT-Qdrant integration, you can now select Qdrant as a DLT destination to load data into.
+})
+```
-**DLT Enables**
+The output of a ***groups*** call looks like this:
-- Automated maintenance - with schema inference, alerts and short declarative code, maintenance becomes simple.
-- Run it where Python runs - on Airflow, serverless functions, notebooks. Scales on micro and large infrastructure alike.
+```json
-- User-friendly, declarative interface that removes knowledge obstacles for beginners while empowering senior professionals.
+{
+ ""result"": {
+ ""groups"": [
-## Usage
+ {
+ ""id"": ""a"",
+ ""hits"": [
-To get started, install `dlt` with the `qdrant` extra.
+ { ""id"": 0, ""score"": 0.91 },
+ { ""id"": 1, ""score"": 0.85 }
+ ]
-```bash
+ },
-pip install ""dlt[qdrant]""
+ {
-```
+ ""id"": ""b"",
+ ""hits"": [
+ { ""id"": 1, ""score"": 0.85 }
-Configure the destination in the DLT secrets file. The file is located at `~/.dlt/secrets.toml` by default. Add the following section to the secrets file.
+ ]
+ },
+ {
-```toml
+ ""id"": 123,
-[destination.qdrant.credentials]
+ ""hits"": [
-location = ""https://your-qdrant-url""
+ { ""id"": 3, ""score"": 0.79 },
-api_key = ""your-qdrant-api-key""
+ { ""id"": 4, ""score"": 0.75 }
-```
+ ]
+ },
+ {
-The location will default to `http://localhost:6333` and `api_key` is not defined - which are the defaults for a local Qdrant instance.
+ ""id"": -10,
-Find more information about DLT configurations [here](https://dlthub.com/docs/general-usage/credentials).
+ ""hits"": [
+ { ""id"": 5, ""score"": 0.6 }
+ ]
-Define the source of the data.
+ }
+ ]
+ },
-```python
+ ""status"": ""ok"",
-import dlt
+ ""time"": 0.001
-from dlt.destinations.qdrant import qdrant_adapter
+}
+```
-movies = [
- {
+The groups are ordered by the score of the top point in the group. Inside each group the points are sorted too.
- ""title"": ""Blade Runner"",
- ""year"": 1982,
- ""description"": ""The film is about a dystopian vision of the future that combines noir elements with sci-fi imagery.""
+If the `group_by` field of a point is an array (e.g. `""document_id"": [""a"", ""b""]`), the point can be included in multiple groups (e.g. `""document_id"": ""a""` and `""document_id"": ""b""`).
- },
- {
- ""title"": ""Ghost in the Shell"",
+
- ""year"": 1995,
- ""description"": ""The film is about a cyborg policewoman and her partner who set out to find the main culprit behind brain hacking, the Puppet Master.""
- },
+**Limitations**:
- {
- ""title"": ""The Matrix"",
- ""year"": 1999,
+* Only [keyword](../payload/#keyword) and [integer](../payload/#integer) payload values are supported for the `group_by` parameter. Payload values with other types will be ignored.
- ""description"": ""The movie is set in the 22nd century and tells the story of a computer hacker who joins an underground group fighting the powerful computers that rule the earth.""
+* At the moment, pagination is not enabled when using **groups**, so the `offset` parameter is not allowed.
- }
-]
-```
+### Lookup in groups
-
+Having multiple points for parts of the same item often introduces redundancy in the stored data. This may be fine if the information shared by the points is small, but it can become a problem if the payload is large, because it multiplies the storage space needed to store the points by the number of points per group.
-Define the pipeline.
+One way of optimizing storage when using groups is to store the information shared by the points with the same group id in a single point in another collection. Then, when using the [**groups** API](#grouping-api), add the `with_lookup` parameter to bring the information from those points into each group.
-```python
-pipeline = dlt.pipeline(
+![Group id matches point id](/docs/lookup_id_linking.png)
- pipeline_name=""movies"",
- destination=""qdrant"",
- dataset_name=""movies_dataset"",
+This has the extra benefit of having a single point to update when the information shared by the points in a group changes.
-)
-```
+For example, if you have a collection of documents, you may want to chunk them and store the points for the chunks in a separate collection, making sure that you store the point id of the document each chunk belongs to in the payload of the chunk point.
-Run the pipeline.
+In this case, to bring the information from the documents into the chunks grouped by the document id, you can use the `with_lookup` parameter:
-```python
-info = pipeline.run(
+```http
- qdrant_adapter(
+POST /collections/chunks/points/query/groups
- movies,
+{
- embed=[""title"", ""description""]
+ // Same as in the regular query API
- )
+ ""query"": [1.1],
-)
-```
+ // Grouping parameters
+ ""group_by"": ""document_id"",
-The data is now loaded into Qdrant.
+ ""limit"": 2,
+ ""group_size"": 2,
-To use vector search after the data has been loaded, you must specify which fields Qdrant needs to generate embeddings for. You do that by wrapping the data (or [DLT resource](https://dlthub.com/docs/general-usage/resource)) with the `qdrant_adapter` function.
+ // Lookup parameters
+ ""with_lookup"": {
-## Write disposition
+ // Name of the collection to look up points in
+ ""collection"": ""documents"",
-A DLT [write disposition](https://dlthub.com/docs/dlt-ecosystem/destinations/qdrant/#write-disposition) defines how the data should be written to the destination. All write dispositions are supported by the Qdrant destination.
+ // Options for specifying what to bring from the payload
+ // of the looked up point, true by default
-## DLT Sync
+ ""with_payload"": [""title"", ""text""],
-Qdrant destination supports syncing of the [`DLT` state](https://dlthub.com/docs/general-usage/state#syncing-state-with-destination).
+ // Options for specifying what to bring from the vector(s)
+ // of the looked up point, true by default
+ ""with_vectors"": false
-## Next steps
+ }
+}
+```
-- The comprehensive Qdrant DLT destination documentation can be found [here](https://dlthub.com/docs/dlt-ecosystem/destinations/qdrant/).
-",documentation/frameworks/dlt.md
-"---
-title: Apache Airflow
-weight: 2100
+```python
----
+client.query_points_groups(
+ collection_name=""chunks"",
+ # Same as in the regular search() API
-# Apache Airflow
+ query=[1.1],
+ # Grouping parameters
+ group_by=""document_id"", # Path of the field to group by
-[Apache Airflow](https://airflow.apache.org/) is an open-source platform for authoring, scheduling and monitoring data and computing workflows. Airflow uses Python to create workflows that can be easily scheduled and monitored.
+ limit=2, # Max amount of groups
+ group_size=2, # Max amount of points per group
+ # Lookup parameters
-Qdrant is available as a [provider](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/index.html) in Airflow to interface with the database.
+ with_lookup=models.WithLookup(
+ # Name of the collection to look up points in
+ collection=""documents"",
-## Prerequisites
+ # Options for specifying what to bring from the payload
+ # of the looked up point, True by default
+ with_payload=[""title"", ""text""],
-Before configuring Airflow, you need:
+ # Options for specifying what to bring from the vector(s)
+ # of the looked up point, True by default
+ with_vectors=False,
-1. A Qdrant instance to connect to. You can set one up in our [installation guide](https://qdrant.tech/documentation/guides/installation).
+ ),
+)
+```
-2. A running Airflow instance. You can use their [Quick Start Guide](https://airflow.apache.org/docs/apache-airflow/stable/start.html).
+```typescript
-## Setting up a connection
+client.queryGroups(""{collection_name}"", {
+ query: [1.1],
+ group_by: ""document_id"",
-Open the `Admin-> Connections` section of the Airflow UI. Click the `Create` link to create a new [Qdrant connection](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/connections.html).
+ limit: 2,
+ group_size: 2,
+ with_lookup: {
-![Qdrant connection](/documentation/frameworks/airflow/connection.png)
+ collection: ""documents"",
+ with_payload: [""title"", ""text""],
+ with_vectors: false,
-You can also set up a connection using [environment variables](https://airflow.apache.org/docs/apache-airflow/stable/howto/connection.html#environment-variables-connections) or an [external secret backend](https://airflow.apache.org/docs/apache-airflow/stable/security/secrets/secrets-backend/index.html).
+ },
+});
+```
-## Qdrant hook
+```rust
-An Airflow hook is an abstraction of a specific API that allows Airflow to interact with an external system.
+use qdrant_client::qdrant::{with_payload_selector::SelectorOptions, QueryPointGroupsBuilder, WithLookupBuilder};
-```python
+client
-from airflow.providers.qdrant.hooks.qdrant import QdrantHook
+ .query_groups(
+ QueryPointGroupsBuilder::new(""{collection_name}"", ""document_id"")
+ .query(vec![0.2, 0.1, 0.9, 0.7])
-hook = QdrantHook(conn_id=""qdrant_connection"")
+ .limit(2u64)
+ .group_size(2u64)
+ .with_lookup(
-hook.verify_connection()
+ WithLookupBuilder::new(""documents"")
-```
+ .with_payload(SelectorOptions::Include(
+ vec![""title"".to_string(), ""text"".to_string()].into(),
+ ))
-A [`qdrant_client#QdrantClient`](https://pypi.org/project/qdrant-client/) instance is available via `@property conn` of the `QdrantHook` instance for use within your Airflow workflows.
+ .with_vectors(false),
+ ),
+ )
-```python
+ .await?;
-from qdrant_client import models
+```
-hook.conn.count("""")
+```java
+import java.util.List;
-hook.conn.upsert(
- """",
+import io.qdrant.client.grpc.Points.QueryPointGroups;
- points=[
+import io.qdrant.client.grpc.Points.WithLookup;
- models.PointStruct(id=32, vector=[0.32, 0.12, 0.123], payload={""color"": ""red""})
- ],
-)
+import static io.qdrant.client.QueryFactory.nearest;
+import static io.qdrant.client.WithVectorsSelectorFactory.enable;
+import static io.qdrant.client.WithPayloadSelectorFactory.include;
-```
+client.queryGroupsAsync(
-## Qdrant Ingest Operator
+ QueryPointGroups.newBuilder()
+ .setCollectionName(""{collection_name}"")
+ .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f))
-The Qdrant provider also provides a convenience operator for uploading data to a Qdrant collection that internally uses the Qdrant hook.
+ .setGroupBy(""document_id"")
+ .setLimit(2)
+ .setGroupSize(2)
-```python
+ .setWithLookup(
-from airflow.providers.qdrant.operators.qdrant import QdrantIngestOperator
+ WithLookup.newBuilder()
+ .setCollection(""documents"")
+ .setWithPayload(include(List.of(""title"", ""text"")))
-vectors = [
+ .setWithVectors(enable(false))
- [0.11, 0.22, 0.33, 0.44],
+ .build())
- [0.55, 0.66, 0.77, 0.88],
+ .build())
- [0.88, 0.11, 0.12, 0.13],
+ .get();
-]
+```
-ids = [32, 21, ""b626f6a9-b14d-4af9-b7c3-43d8deb719a6""]
-payload = [{""meta"": ""data""}, {""meta"": ""data_2""}, {""meta"": ""data_3"", ""extra"": ""data""}]
+```csharp
+using Qdrant.Client;
-QdrantIngestOperator(
+using Qdrant.Client.Grpc;
- conn_id=""qdrant_connection""
- task_id=""qdrant_ingest"",
- collection_name="""",
+var client = new QdrantClient(""localhost"", 6334);
- vectors=vectors,
- ids=ids,
- payload=payload,
+await client.SearchGroupsAsync(
-)
+ collectionName: ""{collection_name}"",
-```
+ vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f},
+ groupBy: ""document_id"",
+ limit: 2,
-## Reference
+ groupSize: 2,
+ withLookup: new WithLookup
+ {
-- 📦 [Provider package PyPI](https://pypi.org/project/apache-airflow-providers-qdrant/)
+ Collection = ""documents"",
-- 📚 [Provider docs](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/index.html)
-",documentation/frameworks/airflow.md
-"---
+ WithPayload = new WithPayloadSelector
-title: PrivateGPT
+ {
-weight: 1600
+ Include = new PayloadIncludeSelector { Fields = { new string[] { ""title"", ""text"" } } }
-aliases: [ ../integrations/privategpt/ ]
+ },
----
+ WithVectors = false
+ }
+);
-# PrivateGPT
+```
-[PrivateGPT](https://docs.privategpt.dev/) is a production-ready AI project that allows you to inquire about your documents using Large Language Models (LLMs) with offline support.
+```go
+import (
+ ""context""
-PrivateGPT uses Qdrant as the default vectorstore for ingesting and retrieving documents.
+ ""github.com/qdrant/go-client/qdrant""
-## Configuration
+)
-Qdrant settings can be configured by setting values to the qdrant property in the `settings.yaml` file. By default, Qdrant tries to connect to an instance at http://localhost:3000.
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
+ Port: 6334,
-Example:
+})
-```yaml
-qdrant:
- url: ""https://xyz-example.eu-central.aws.cloud.qdrant.io:6333""
+client.QueryGroups(context.Background(), &qdrant.QueryPointGroups{
- api_key: """"
+ CollectionName: ""{collection_name}"",
-```
+ Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7),
+ GroupBy: ""document_id"",
+ GroupSize: qdrant.PtrOf(uint64(2)),
-The available [configuration options](https://docs.privategpt.dev/manual/storage/vector-stores#qdrant-configuration) are:
+ WithLookup: &qdrant.WithLookup{
-| Field | Description |
+ Collection: ""documents"",
-|--------------|-------------|
+ WithPayload: qdrant.NewWithPayloadInclude(""title"", ""text""),
-| location | If `:memory:` - use in-memory Qdrant instance. If `str` - use it as a `url` parameter.|
+ },
-| url | Either host or str of `Optional[scheme], host, Optional[port], Optional[prefix]`. Eg. `http://localhost:6333` |
+})
-| port | Port of the REST API interface. Default: `6333` |
+```
-| grpc_port | Port of the gRPC interface. Default: `6334` |
-| prefer_grpc | If `true` - use gRPC interface whenever possible in custom methods. |
-| https | If `true` - use HTTPS(SSL) protocol.|
+For the `with_lookup` parameter, you can also use the shorthand `with_lookup=""documents""` to bring the whole payload and vector(s) without explicitly specifying it.
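+
+For instance, a minimal sketch of the shorthand form, reusing the `chunks` and `documents` collections from the example above, could look like this:
+
+```python
+client.query_points_groups(
+    collection_name=""chunks"",
+    query=[1.1],
+    group_by=""document_id"",
+    limit=2,
+    group_size=2,
+    # shorthand: bring the whole payload and vector(s) of the looked up points
+    with_lookup=""documents"",
+)
+```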
-| api_key | API key for authentication in Qdrant Cloud.|
-| prefix | If set, add `prefix` to the REST URL path. Example: `service/v1` will result in `http://localhost:6333/service/v1/{qdrant-endpoint}` for REST API.|
-| timeout | Timeout for REST and gRPC API requests. Default: 5.0 seconds for REST and unlimited for gRPC |
+The looked up result will show up under `lookup` in each group.
-| host | Host name of Qdrant service. If url and host are not set, defaults to 'localhost'.|
-| path | Persistence path for QdrantLocal. Eg. `local_data/private_gpt/qdrant`|
-| force_disable_check_same_thread | Force disable check_same_thread for QdrantLocal sqlite connection.|
+```json
+{
+ ""result"": {
-## Next steps
+ ""groups"": [
+ {
+ ""id"": 1,
-Find the PrivateGPT docs [here](https://docs.privategpt.dev/).
-",documentation/frameworks/privategpt.md
-"---
+ ""hits"": [
-title: DocArray
+ { ""id"": 0, ""score"": 0.91 },
-weight: 300
+ { ""id"": 1, ""score"": 0.85 }
-aliases: [ ../integrations/docarray/ ]
+ ],
----
+ ""lookup"": {
+ ""id"": 1,
+ ""payload"": {
-# DocArray
+ ""title"": ""Document A"",
-You can use Qdrant natively in DocArray, where Qdrant serves as a high-performance document store to enable scalable vector search.
+ ""text"": ""This is document A""
+ }
+ }
-DocArray is a library from Jina AI for nested, unstructured data in transit, including text, image, audio, video, 3D mesh, etc.
+ },
-It allows deep-learning engineers to efficiently process, embed, search, recommend, store, and transfer the data with a Pythonic API.
+ {
+ ""id"": 2,
+ ""hits"": [
+ { ""id"": 1, ""score"": 0.85 }
+ ],
-To install DocArray with Qdrant support, please do
+ ""lookup"": {
+ ""id"": 2,
+ ""payload"": {
-```bash
+ ""title"": ""Document B"",
-pip install ""docarray[qdrant]""
+ ""text"": ""This is document B""
-```
+ }
+ }
+ }
-More information can be found in [DocArray's documentations](https://docarray.jina.ai/advanced/document-store/qdrant/).
-",documentation/frameworks/docarray.md
-"---
+ ]
-title: MindsDB
+ },
-weight: 1100
+ ""status"": ""ok"",
-aliases: [ ../integrations/mindsdb/ ]
+ ""time"": 0.001
----
+}
+```
-# MindsDB
+Since the lookup is done by matching directly with the point id, any group id that is not an existing (and valid) point id in the lookup collection will be ignored, and the `lookup` field will be empty.
-[MindsDB](https://mindsdb.com) is an AI automation platform for building AI/ML powered features and applications. It works by connecting any source of data with any AI/ML model or framework and automating how real-time data flows between them.
+## Random Sampling
-With the MindsDB-Qdrant integration, you can now select Qdrant as a database to load into and retrieve from with semantic search and filtering.
+*Available as of v1.11.0*
-**MindsDB allows you to easily**:
+In some cases it might be useful to retrieve a random sample of points from the collection, for example for debugging, testing, or providing entry points for exploring the data.
-- Connect to any store of data or end-user application.
-- Pass data to an AI model from any store of data or end-user application.
+The random sampling API is part of the [Universal Query API](#query-api) and can be used in the same way as the regular search API.
-- Plug the output of an AI model into any store of data or end-user application.
-- Fully automate these workflows to build AI-powered features and applications
+```http
+POST /collections/{collection_name}/points/query
+{
-## Usage
+ ""collection_name"": ""{collection_name}"",
+ ""query"": {
+ ""sample"": ""random""
-To get started with Qdrant and MindsDB, the following syntax can be used.
+ }
+}
+```
-```sql
-CREATE DATABASE qdrant_test
-WITH ENGINE = ""qdrant"",
+```python
-PARAMETERS = {
+from qdrant_client import QdrantClient, models
- ""location"": "":memory:"",
- ""collection_config"": {
- ""size"": 386,
- ""distance"": ""Cosine""
- }
+sampled = client.query_points(
-}
+ collection_name=""{collection_name}"",
-```
+ query=models.SampleQuery(sample=models.Sample.Random)
+)
+```
-The available arguments for instantiating Qdrant can be found [here](https://github.com/mindsdb/mindsdb/blob/23a509cb26bacae9cc22475497b8644e3f3e23c3/mindsdb/integrations/handlers/qdrant_handler/qdrant_handler.py#L408-L468).
+```typescript
-## Creating a new table
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-- Qdrant options for creating a collection can be specified as `collection_config` in the `CREATE DATABASE` parameters.
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-- By default, UUIDs are set as collection IDs. You can provide your own IDs under the `id` column.
+const sampled = await client.query(""{collection_name}"", {
-```sql
+ query: {
-CREATE TABLE qdrant_test.test_table (
+ sample: ""random"",
- SELECT embeddings,'{""source"": ""bbc""}' as metadata FROM mysql_demo_db.test_embeddings
+ },
-);
+});
```
-## Querying the database
+```rust
+use qdrant_client::Qdrant;
+use qdrant_client::qdrant::{Query, QueryPointsBuilder, Sample};
-#### Perform a full retrieval using the following syntax.
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-```sql
+let sampled = client
-SELECT * FROM qdrant_test.test_table
+ .query(
-```
+ QueryPointsBuilder::new(""{collection_name}"")
+ .query(Query::new_sample(Sample::Random))
+ )
-By default, the `LIMIT` is set to 10 and the `OFFSET` is set to 0.
+ .await?;
-#### Perform a similarity search using your embeddings
+```
-
+```java
+import static io.qdrant.client.QueryFactory.sample;
-```sql
-SELECT * FROM qdrant_test.test_table
+import io.qdrant.client.QdrantClient;
-WHERE search_vector = (select embeddings from mysql_demo_db.test_embeddings limit 1)
+import io.qdrant.client.QdrantGrpcClient;
-```
+import io.qdrant.client.grpc.Points.QueryPoints;
+import io.qdrant.client.grpc.Points.Sample;
-#### Perform a search using filters
-```sql
+QdrantClient client =
-SELECT * FROM qdrant_test.test_table
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-WHERE `metadata.source` = 'bbc';
-```
-#### Delete entries using IDs
+client
+ .queryAsync(
+ QueryPoints.newBuilder()
-```sql
+ .setCollectionName(""{collection_name}"")
-DELETE FROM qtest.test_table_6
+ .setQuery(sample(Sample.Random))
-WHERE id = 2
+ .build())
+
+ .get();
```
-#### Delete entries using filters
+```csharp
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-```sql
-DELETE * FROM qdrant_test.test_table
-WHERE `metadata.source` = 'bbc';
+var client = new QdrantClient(""localhost"", 6334);
-```
+await client.QueryAsync(collectionName: ""{collection_name}"", query: Sample.Random);
-#### Drop a table
+```
-```sql
+```go
- DROP TABLE qdrant_test.test_table;
+import (
-```
+ ""context""
-## Next steps
+ ""github.com/qdrant/go-client/qdrant""
+)
-You can find more information pertaining to MindsDB and its datasources [here](https://docs.mindsdb.com/).
-",documentation/frameworks/mindsdb.md
-"---
-title: Autogen
+client, err := qdrant.NewClient(&qdrant.Config{
-weight: 1200
+ Host: ""localhost"",
-aliases: [ ../integrations/autogen/ ]
+ Port: 6334,
----
+})
-# Microsoft Autogen
+client.Query(context.Background(), &qdrant.QueryPoints{
+ CollectionName: ""{collection_name}"",
+ Query: qdrant.NewQuerySample(qdrant.Sample_Random),
-[AutoGen](https://github.com/microsoft/autogen) is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.
+})
+```
-- Multi-agent conversations: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
-- Customization: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
+## Query planning
-- Human participation: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
+Depending on the filter used in the search - there are several possible scenarios for query execution.
-With the Autogen-Qdrant integration, you can use the `QdrantRetrieveUserProxyAgent` from autogen to build retrieval augmented generation(RAG) services with ease.
+Qdrant chooses one of the query execution options depending on the available indexes, the complexity of the conditions and the cardinality of the filtering result.
+This process is called query planning.
-## Installation
+The strategy selection process relies heavily on heuristics and can vary from release to release.
+However, the general principles are:
-```bash
-pip install ""pyautogen[retrievechat]"" ""qdrant_client[fastembed]""
-```
+* planning is performed for each segment independently (see [storage](../storage/) for more information about segments)
+* prefer a full scan if the amount of points is below a threshold
+* estimate the cardinality of a filtered result before selecting a strategy
-## Usage
+* retrieve points using payload index (see [indexing](../indexing/)) if cardinality is below threshold
+* use filterable vector index if the cardinality is above a threshold
-A demo application that generates code based on context w/o human feedback
+You can adjust the threshold using a [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml), as well as independently for each collection.
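+
+As a sketch of the per-collection option (using the `full_scan_threshold` parameter of the HNSW configuration; the value here is illustrative only):
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+client.create_collection(
+    collection_name=""{collection_name}"",
+    vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
+    # if the vector data of a segment is smaller than this threshold (in KB),
+    # a full scan is preferred over the vector index
+    hnsw_config=models.HnswConfigDiff(full_scan_threshold=10000),
+)
+```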
+",documentation/concepts/search.md
+"---
+title: Payload
-#### Set your API Endpoint
+weight: 45
+aliases:
+ - ../payload
-The config_list_from_json function loads a list of configurations from an environment variable or a JSON file.
+---
-```python
+# Payload
-from autogen import config_list_from_json
-from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent
-from autogen.agentchat.contrib.qdrant_retrieve_user_proxy_agent import QdrantRetrieveUserProxyAgent
+One of the significant features of Qdrant is the ability to store additional information along with vectors.
-from qdrant_client import QdrantClient
+This information is called `payload` in Qdrant terminology.
-config_list = config_list_from_json(
+Qdrant allows you to store any information that can be represented using JSON.
- env_or_file=""OAI_CONFIG_LIST"",
- file_location="".""
-)
+Here is an example of a typical payload:
-```
+```json
-It first looks for the environment variable ""OAI_CONFIG_LIST"" which needs to be a valid JSON string. If that variable is not found, it then looks for a JSON file named ""OAI_CONFIG_LIST"". The file structure sample can be found [here](https://github.com/microsoft/autogen/blob/main/OAI_CONFIG_LIST_sample).
+{
+ ""name"": ""jacket"",
+ ""colors"": [""red"", ""blue""],
-#### Construct agents for RetrieveChat
+ ""count"": 10,
+ ""price"": 11.99,
+ ""locations"": [
-We start by initializing the RetrieveAssistantAgent and QdrantRetrieveUserProxyAgent. The system message needs to be set to ""You are a helpful assistant."" for RetrieveAssistantAgent. The detailed instructions are given in the user message.
+ {
+ ""lon"": 52.5200,
+ ""lat"": 13.4050
-```python
+ }
-# Print the generation steps
+ ],
-autogen.ChatCompletion.start_logging()
+ ""reviews"": [
+ {
+ ""user"": ""alice"",
-# 1. create a RetrieveAssistantAgent instance named ""assistant""
+ ""score"": 4
-assistant = RetrieveAssistantAgent(
+ },
- name=""assistant"",
+ {
- system_message=""You are a helpful assistant."",
+ ""user"": ""bob"",
- llm_config={
+ ""score"": 5
- ""request_timeout"": 600,
+ }
- ""seed"": 42,
+ ]
- ""config_list"": config_list,
+}
- },
+```
-)
+## Payload types
-# 2. create a QdrantRetrieveUserProxyAgent instance named ""qdrantagent""
-# By default, the human_input_mode is ""ALWAYS"", i.e. the agent will ask for human input at every step.
-# `docs_path` is the path to the docs directory.
+In addition to storing payloads, Qdrant also allows you to search based on certain kinds of values.
-# `task` indicates the kind of task we're working on.
+This feature is implemented as additional filters during the search and will enable you to incorporate custom logic on top of semantic similarity.
-# `chunk_token_size` is the chunk token size for the retrieve chat.
-# We use an in-memory QdrantClient instance here. Not recommended for production.
+During the filtering, Qdrant will check the conditions over those values that match the type of the filtering condition. If the stored value type does not fit the filtering condition - it will be considered not satisfied.
-ragproxyagent = QdrantRetrieveUserProxyAgent(
- name=""qdrantagent"",
+For example, you will get an empty output if you apply the [range condition](../filtering/#range) to string data.
- human_input_mode=""NEVER"",
- max_consecutive_auto_reply=10,
- retrieve_config={
+However, arrays (multiple values of the same type) are treated a little bit differently. When we apply a filter to an array, the filter succeeds if at least one of the values inside the array meets the condition.
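+
+For illustration, here is a minimal sketch of such a condition (the payload values are hypothetical): a point with payload `{""colors"": [""red"", ""blue""]}` satisfies the filter below, because at least one array element equals `red`.
+
+```python
+from qdrant_client import models
+
+# matches points whose ""colors"" payload (single value or array) contains ""red""
+color_filter = models.Filter(
+    must=[
+        models.FieldCondition(
+            key=""colors"",
+            match=models.MatchValue(value=""red""),
+        )
+    ]
+)
+```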
- ""task"": ""code"",
- ""docs_path"": ""./path/to/docs"",
- ""chunk_token_size"": 2000,
+The filtering process is discussed in detail in the section [Filtering](../filtering/).
- ""model"": config_list[0][""model""],
- ""client"": QdrantClient("":memory:""),
- ""embedding_model"": ""BAAI/bge-small-en-v1.5"",
+Let's look at the data types that Qdrant supports for searching:
- },
-)
-```
+### Integer
-#### Run the retriever service
+`integer` - 64-bit integer in the range from `-9223372036854775808` to `9223372036854775807`.
-```python
+Example of single and multiple `integer` values:
-# Always reset the assistant before starting a new conversation.
-assistant.reset()
+```json
+{
-# We use the ragproxyagent to generate a prompt to be sent to the assistant as the initial message.
+ ""count"": 10,
-# The assistant receives the message and generates a response. The response will be sent back to the ragproxyagent for processing.
+ ""sizes"": [35, 36, 38]
-# The conversation continues until the termination condition is met, in RetrieveChat, the termination condition when no human-in-loop is no code block detected.
+}
+```
-# The query used below is for demonstration. It should usually be related to the docs made available to the agent
-code_problem = ""How can I use FLAML to perform a classification task?""
+### Float
-ragproxyagent.initiate_chat(assistant, problem=code_problem)
-```
+`float` - 64-bit floating point number.
-## Next steps
+Example of single and multiple `float` values:
-Check out more Autogen [examples](https://microsoft.github.io/autogen/docs/Examples). You can find detailed documentation about AutoGen [here](https://microsoft.github.io/autogen/).
-",documentation/frameworks/autogen.md
-"---
-title: Unstructured
+```json
-weight: 1900
+{
----
+ ""price"": 11.99,
+ ""ratings"": [9.1, 9.2, 9.4]
+}
-# Unstructured
+```
-[Unstructured](https://unstructured.io/) is a library designed to help preprocess, structure unstructured text documents for downstream machine learning tasks.
+### Bool
-Qdrant can be used as an ingestion destination in Unstructured.
+`bool` - binary value, either `true` or `false`.
-## Setup
+Example of single and multiple `bool` values:
-Install Unstructured with the `qdrant` extra.
+```json
+{
+ ""is_delivered"": true,
-```bash
+ ""responses"": [false, false, true, false]
-pip install ""unstructured[qdrant]""
+}
```
-## Usage
+### Keyword
+`keyword` - string value.
-Depending on the use case you can prefer the command line or using it within your application.
+Example of single and multiple `keyword` values:
-### CLI
+```json
+{
-```bash
+ ""name"": ""Alice"",
-EMBEDDING_PROVIDER=${EMBEDDING_PROVIDER:-""langchain-huggingface""}
+ ""friends"": [
+ ""bob"",
+ ""eva"",
-unstructured-ingest \
+ ""jack""
- local \
+ ]
- --input-path example-docs/book-war-and-peace-1225p.txt \
+}
- --output-dir local-output-to-qdrant \
+```
- --strategy fast \
- --chunk-elements \
- --embedding-provider ""$EMBEDDING_PROVIDER"" \
+### Geo
- --num-processes 2 \
- --verbose \
- qdrant \
+`geo` is used to represent geographical coordinates.
- --collection-name ""test"" \
- --location ""http://localhost:6333"" \
- --batch-size 80
+Example of single and multiple `geo` values:
-```
+```json
-For a full list of the options the CLI accepts, run `unstructured-ingest qdrant --help`
+{
+ ""location"": {
+ ""lon"": 52.5200,
-### Programmatic usage
+ ""lat"": 13.4050
+ },
+ ""cities"": [
-```python
+ {
-from unstructured.ingest.connector.local import SimpleLocalConfig
+ ""lon"": 51.5072,
-from unstructured.ingest.connector.qdrant import (
+ ""lat"": 0.1276
- QdrantWriteConfig,
+ },
- SimpleQdrantConfig,
+ {
-)
+ ""lon"": 40.7128,
-from unstructured.ingest.interfaces import (
+ ""lat"": 74.0060
- ChunkingConfig,
+ }
- EmbeddingConfig,
+ ]
- PartitionConfig,
+}
- ProcessorConfig,
+```
- ReadConfig,
-)
-from unstructured.ingest.runner import LocalRunner
+A coordinate should be described as an object containing two fields: `lon` - for longitude, and `lat` - for latitude.
-from unstructured.ingest.runner.writers.base_writer import Writer
-from unstructured.ingest.runner.writers.qdrant import QdrantWriter
+### Datetime
-def get_writer() -> Writer:
- return QdrantWriter(
+*Available as of v1.8.0*
- connector_config=SimpleQdrantConfig(
- location=""http://localhost:6333"",
- collection_name=""test"",
+`datetime` - date and time in [RFC 3339] format.
- ),
- write_config=QdrantWriteConfig(batch_size=80),
- )
+See the following examples of single and multiple `datetime` values:
-if __name__ == ""__main__"":
+```json
- writer = get_writer()
+{
- runner = LocalRunner(
+ ""created_at"": ""2023-02-08T10:49:00Z"",
- processor_config=ProcessorConfig(
+ ""updated_at"": [
- verbose=True,
+ ""2023-02-08T13:52:00Z"",
- output_dir=""local-output-to-qdrant"",
+ ""2023-02-21T21:23:00Z""
- num_processes=2,
+ ]
- ),
+}
- connector_config=SimpleLocalConfig(
+```
- input_path=""example-docs/book-war-and-peace-1225p.txt"",
- ),
- read_config=ReadConfig(),
+The following formats are supported:
- partition_config=PartitionConfig(),
- chunking_config=ChunkingConfig(chunk_elements=True),
- embedding_config=EmbeddingConfig(provider=""langchain-huggingface""),
+- `""2023-02-08T10:49:00Z""` ([RFC 3339], UTC)
- writer=writer,
+- `""2023-02-08T11:49:00+01:00""` ([RFC 3339], with timezone)
- writer_kwargs={},
+- `""2023-02-08T10:49:00""` (without timezone, UTC is assumed)
- )
+- `""2023-02-08T10:49""` (without timezone and seconds)
- runner.run()
+- `""2023-02-08""` (only date, midnight is assumed)
-```
+Notes about the format:
-## Next steps
+- `T` can be replaced with a space.
-- Unstructured API [reference](https://unstructured-io.github.io/unstructured/api.html).
+- The `T` and `Z` symbols are case-insensitive.
-- Qdrant ingestion destination [reference](https://unstructured-io.github.io/unstructured/ingest/destination_connectors/qdrant.html).
-",documentation/frameworks/unstructured.md
-"---
+- UTC is always assumed when the timezone is not specified.
-title: txtai
+- Timezone can have the following formats: `±HH:MM`, `±HHMM`, `±HH`, or `Z`.
-weight: 500
+- Seconds can have up to 6 decimals, so the finest granularity for `datetime` is microseconds.
-aliases: [ ../integrations/txtai/ ]
----
+[RFC 3339]: https://datatracker.ietf.org/doc/html/rfc3339#section-5.6
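+
+As a hedged sketch (assuming a `created_at` field like the one above and the `DatetimeRange` helper of the Python client), such values can later be filtered by range:
+
+```python
+from qdrant_client import models
+
+# matches points created at or after the given RFC 3339 timestamp
+recent_filter = models.Filter(
+    must=[
+        models.FieldCondition(
+            key=""created_at"",
+            range=models.DatetimeRange(gte=""2023-02-08T10:49:00Z""),
+        )
+    ]
+)
+```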
-# txtai
+### UUID
-Qdrant might be also used as an embedding backend in [txtai](https://neuml.github.io/txtai/) semantic applications.
+*Available as of v1.11.0*
-txtai simplifies building AI-powered semantic search applications using Transformers. It leverages the neural embeddings and their
-properties to encode high-dimensional data in a lower-dimensional space and allows to find similar objects based on their embeddings'
+In addition to the basic `keyword` type, Qdrant supports `uuid` type for storing UUID values.
-proximity.
+Functionally, it works the same as `keyword`; internally, it stores parsed UUID values.
-Qdrant is not built-in txtai backend and requires installing an additional dependency:
+```json
+{
+ ""uuid"": ""550e8400-e29b-41d4-a716-446655440000"",
-```bash
+ ""uuids"": [
-pip install qdrant-txtai
+ ""550e8400-e29b-41d4-a716-446655440000"",
-```
+ ""550e8400-e29b-41d4-a716-446655440001""
+ ]
+}
-The examples and some more information might be found in [qdrant-txtai repository](https://github.com/qdrant/qdrant-txtai).
+```
+The string representation of a UUID (e.g. `550e8400-e29b-41d4-a716-446655440000`) occupies 36 bytes.
-",documentation/frameworks/txtai.md
-"---
+But when the numeric representation is used, it takes only 128 bits (16 bytes).
-title: Frameworks
-weight: 33
-# If the index.md file is empty, the link to the section will be hidden from the sidebar
+Usage of `uuid` index type is recommended in payload-heavy collections to save RAM and improve search performance.
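+
+A minimal sketch of creating such an index with the Python client (assuming the `uuid` schema type is exposed as `models.PayloadSchemaType.UUID`):
+
+```python
+client.create_payload_index(
+    collection_name=""{collection_name}"",
+    field_name=""uuid"",
+    field_schema=models.PayloadSchemaType.UUID,
+)
+```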
-is_empty: true
----
-| Frameworks |
+## Create point with payload
-|---|
+REST API ([Schema](https://api.qdrant.tech/api-reference/points/upsert-points))
-| [AirByte](./airbyte/) |
-| [AutoGen](./autogen/) |
-| [Cheshire Cat](./cheshire-cat/) |
+```http
-| [DLT](./dlt/) |
+PUT /collections/{collection_name}/points
-| [DocArray](./docarray/) |
+{
-| [DSPy](./dspy/) |
+ ""points"": [
-| [Fifty One](./fifty-one/) |
+ {
-| [txtai](./txtai/) |
+ ""id"": 1,
-| [Fondant](./fondant/) |
+ ""vector"": [0.05, 0.61, 0.76, 0.74],
-| [Haystack](./haystack/) |
+ ""payload"": {""city"": ""Berlin"", ""price"": 1.99}
-| [Langchain](./langchain/) |
+ },
-| [Llama Index](./llama-index/) |
+ {
-| [Minds DB](./mindsdb/) |
+ ""id"": 2,
-| [PrivateGPT](./privategpt/) |
+ ""vector"": [0.19, 0.81, 0.75, 0.11],
-| [Spark](./spark/) |",documentation/frameworks/_index.md
-"---
+ ""payload"": {""city"": [""Berlin"", ""London""], ""price"": 1.99}
-title: N8N
+ },
-weight: 2000
+ {
----
+ ""id"": 3,
+ ""vector"": [0.36, 0.55, 0.47, 0.94],
+ ""payload"": {""city"": [""Berlin"", ""Moscow""], ""price"": [1.99, 2.99]}
-# N8N
+ }
+ ]
+}
-[N8N](https://n8n.io/) is an automation platform that allows you to build flexible workflows focused on deep data integration.
+```
-Qdrant is available as a vectorstore node in N8N for building AI-powered functionality within your workflows.
+```python
+from qdrant_client import QdrantClient, models
-## Prerequisites
+client = QdrantClient(url=""http://localhost:6333"")
-1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/).
-2. A running N8N instance. You can learn more about using the N8N cloud or self-hosting [here](https://docs.n8n.io/choose-n8n/).
+client.upsert(
+ collection_name=""{collection_name}"",
+ points=[
-## Setting up the vectorstore
+ models.PointStruct(
+ id=1,
+ vector=[0.05, 0.61, 0.76, 0.74],
-Select the Qdrant vectorstore from the list of nodes in your workflow editor.
+ payload={
+ ""city"": ""Berlin"",
+ ""price"": 1.99,
-![Qdrant n8n node](/documentation/frameworks/n8n/node.png)
+ },
+ ),
+ models.PointStruct(
-You can now configure the vectorstore node according to your workflow requirements. The configuration options reference can be found [here](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/#node-parameters).
+ id=2,
+ vector=[0.19, 0.81, 0.75, 0.11],
+ payload={
-![Qdrant Config](/documentation/frameworks/n8n/config.png)
+ ""city"": [""Berlin"", ""London""],
+ ""price"": 1.99,
+ },
-Create a connection to Qdrant using your [instance credentials](https://qdrant.tech/documentation/cloud/authentication/).
+ ),
+ models.PointStruct(
+ id=3,
-![Qdrant Credentials](/documentation/frameworks/n8n/credentials.png)
+ vector=[0.36, 0.55, 0.47, 0.94],
+ payload={
+ ""city"": [""Berlin"", ""Moscow""],
-The vectorstore supports the following operations:
+ ""price"": [1.99, 2.99],
+ },
+ ),
-- Get Many - Get the top-ranked documents for a query.
+ ],
-- Insert documents - Add documents to the vectorstore.
+)
-- Retrieve documents - Retrieve documents for use with AI nodes.
+```
-## Further Reading
+```typescript
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-- N8N vectorstore [reference](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/).
-- N8N AI-based workflows [reference](https://n8n.io/integrations/basic-llm-chain/).
-",documentation/frameworks/n8n.md
-"---
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-title: Haystack
-weight: 400
-aliases: [ ../integrations/haystack/ ]
+client.upsert(""{collection_name}"", {
----
+ points: [
+ {
+ id: 1,
-# Haystack
+ vector: [0.05, 0.61, 0.76, 0.74],
+ payload: {
+ city: ""Berlin"",
-[Haystack](https://haystack.deepset.ai/) serves as a comprehensive NLP framework, offering a modular methodology for constructing
+ price: 1.99,
-cutting-edge generative AI, QA, and semantic knowledge base search systems. A critical element in contemporary NLP systems is an
+ },
-efficient database for storing and retrieving extensive text data. Vector databases excel in this role, as they house vector
+ },
-representations of text and implement effective methods for swift retrieval. Thus, we are happy to announce the integration
+ {
-with Haystack - `QdrantDocumentStore`. This document store is unique, as it is maintained externally by the Qdrant team.
+ id: 2,
+ vector: [0.19, 0.81, 0.75, 0.11],
+ payload: {
-The new document store comes as a separate package and can be updated independently of Haystack:
+ city: [""Berlin"", ""London""],
+ price: 1.99,
+ },
-```bash
+ },
-pip install qdrant-haystack
+ {
-```
+ id: 3,
+ vector: [0.36, 0.55, 0.47, 0.94],
+ payload: {
-`QdrantDocumentStore` supports [all the configuration properties](/documentation/collections/#create-collection) available in
+ city: [""Berlin"", ""Moscow""],
-the Qdrant Python client. If you want to customize the default configuration of the collection used under the hood, you can
+ price: [1.99, 2.99],
-provide that settings when you create an instance of the `QdrantDocumentStore`. For example, if you'd like to enable the
+ },
-Scalar Quantization, you'd make that in the following way:
+ },
+ ],
+});
-```python
+```
-from qdrant_haystack.document_stores import QdrantDocumentStore
-from qdrant_client.http import models
+```rust
+use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder};
-document_store = QdrantDocumentStore(
+use qdrant_client::{Payload, Qdrant, QdrantError};
- "":memory:"",
+use serde_json::json;
- index=""Document"",
- embedding_dim=512,
- recreate_index=True,
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
- quantization_config=models.ScalarQuantization(
- scalar=models.ScalarQuantizationConfig(
- type=models.ScalarType.INT8,
+let points = vec![
- quantile=0.99,
+ PointStruct::new(
- always_ram=True,
+ 1,
- ),
+ vec![0.05, 0.61, 0.76, 0.74],
+
+ Payload::try_from(json!({""city"": ""Berlin"", ""price"": 1.99})).unwrap(),
),
-)
+ PointStruct::new(
-```
-",documentation/frameworks/haystack.md
-"---
+ 2,
-title: Fondant
+ vec![0.19, 0.81, 0.75, 0.11],
-weight: 1700
+ Payload::try_from(json!({""city"": [""Berlin"", ""London""]})).unwrap(),
-aliases: [ ../integrations/fondant/ ]
+ ),
----
+ PointStruct::new(
+ 3,
+ vec![0.36, 0.55, 0.47, 0.94],
-# Fondant
+ Payload::try_from(json!({""city"": [""Berlin"", ""Moscow""], ""price"": [1.99, 2.99]}))
+ .unwrap(),
+ ),
-[Fondant](https://fondant.ai/en/stable/) is an open-source framework that aims to simplify and speed
+];
-up large-scale data processing by making containerized components reusable across pipelines and
-execution environments. Benefit from built-in features such as autoscaling, data lineage, and
-pipeline caching, and deploy to (managed) platforms such as Vertex AI, Sagemaker, and Kubeflow
+client
-Pipelines.
+ .upsert_points(UpsertPointsBuilder::new(""{collection_name}"", points).wait(true))
+ .await?;
+```
-Fondant comes with a library of reusable components that you can leverage to compose your own
-pipeline, including a Qdrant component for writing embeddings to Qdrant.
+```java
+import java.util.List;
-## Usage
+import java.util.Map;
-
+import static io.qdrant.client.VectorsFactory.vectors;
-**A data load pipeline for RAG using Qdrant**.
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Points.PointStruct;
-A simple ingestion pipeline could look like the following:
+QdrantClient client =
-```python
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-import pyarrow as pa
-from fondant.pipeline import Pipeline
+client
+ .upsertAsync(
-indexing_pipeline = Pipeline(
+ ""{collection_name}"",
- name=""ingestion-pipeline"",
+ List.of(
- description=""Pipeline to prepare and process data for building a RAG solution"",
+ PointStruct.newBuilder()
- base_path=""./fondant-artifacts"",
+ .setId(id(1))
-)
+ .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f))
+ .putAllPayload(Map.of(""city"", value(""Berlin""), ""price"", value(1.99)))
+ .build(),
-# An custom implemenation of a read component.
+ PointStruct.newBuilder()
-text = indexing_pipeline.read(
+ .setId(id(2))
- ""path/to/data-source-component"",
+ .setVectors(vectors(0.19f, 0.81f, 0.75f, 0.11f))
- arguments={
+ .putAllPayload(
- # your custom arguments
+ Map.of(""city"", list(List.of(value(""Berlin""), value(""London"")))))
- }
+ .build(),
-)
+ PointStruct.newBuilder()
+ .setId(id(3))
+ .setVectors(vectors(0.36f, 0.55f, 0.47f, 0.94f))
-chunks = text.apply(
+ .putAllPayload(
- ""chunk_text"",
+ Map.of(
- arguments={
+ ""city"",
- ""chunk_size"": 512,
+ list(List.of(value(""Berlin""), value(""London""))),
- ""chunk_overlap"": 32,
+ ""price"",
- },
+ list(List.of(value(1.99), value(2.99)))))
-)
+ .build()))
+ .get();
+```
-embeddings = chunks.apply(
- ""embed_text"",
- arguments={
+```csharp
- ""model_provider"": ""huggingface"",
+using Qdrant.Client;
- ""model"": ""all-MiniLM-L6-v2"",
+using Qdrant.Client.Grpc;
- },
-)
+var client = new QdrantClient(""localhost"", 6334);
-embeddings.write(
- ""index_qdrant"",
+await client.UpsertAsync(
- arguments={
+ collectionName: ""{collection_name}"",
- ""url"": ""http:localhost:6333"",
+ points: new List<PointStruct>
- ""collection_name"": ""some-collection-name"",
+ {
- },
+ new PointStruct
- cache=False,
+ {
-)
+ Id = 1,
-```
+ Vectors = new[] { 0.05f, 0.61f, 0.76f, 0.74f },
+ Payload = { [""city""] = ""Berlin"", [""price""] = 1.99 }
+ },
-Once you have a pipeline, you can easily run it using the built-in CLI. Fondant allows
+ new PointStruct
-you to run the pipeline in production across different clouds.
+ {
+ Id = 2,
+ Vectors = new[] { 0.19f, 0.81f, 0.75f, 0.11f },
-The first component is a custom read module that needs to be implemented and cannot be used off the
+ Payload = { [""city""] = new[] { ""Berlin"", ""London"" } }
-shelf. A detailed tutorial on how to rebuild this
+ },
-pipeline [is provided on GitHub](https://github.com/ml6team/fondant-usecase-RAG/tree/main).
+ new PointStruct
+ {
+ Id = 3,
-## Next steps
+ Vectors = new[] { 0.36f, 0.55f, 0.47f, 0.94f },
+ Payload =
+ {
-More information about creating your own pipelines and components can be found in the [Fondant
+ [""city""] = new[] { ""Berlin"", ""Moscow"" },
-documentation](https://fondant.ai/en/stable/).
-",documentation/frameworks/fondant.md
-"---
+ [""price""] = new Value
-title: Cheshire Cat
+ {
-weight: 600
+ ListValue = new ListValue { Values = { new Value[] { 1.99, 2.99 } } }
-aliases: [ ../integrations/cheshire-cat/ ]
+ }
----
+ }
+ }
+ }
-# Cheshire Cat
+);
+```
-[Cheshire Cat](https://cheshirecat.ai/) is an open-source framework that allows you to develop intelligent agents on top of many Large Language Models (LLM). You can develop your custom AI architecture to assist you in a wide range of tasks.
+```go
+import (
-![Cheshire cat](/documentation/frameworks/cheshire-cat/cat.jpg)
+ ""context""
-## Cheshire Cat and Qdrant
+ ""github.com/qdrant/go-client/qdrant""
+)
-Cheshire Cat uses Qdrant as the default [Vector Memory](https://cheshire-cat-ai.github.io/docs/conceptual/memory/vector_memory/) for ingesting and retrieving documents.
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
-```
+ Port: 6334,
-# Decide host and port for your Cat. Default will be localhost:1865
+})
-CORE_HOST=localhost
-CORE_PORT=1865
+client.Upsert(context.Background(), &qdrant.UpsertPoints{
+ CollectionName: ""{collection_name}"",
-# Qdrant server
+ Points: []*qdrant.PointStruct{
-# QDRANT_HOST=localhost
+ {
-# QDRANT_PORT=6333
+ Id: qdrant.NewIDNum(1),
-```
+ Vectors: qdrant.NewVectors(0.05, 0.61, 0.76, 0.74),
+ Payload: qdrant.NewValueMap(map[string]any{
+ ""city"": ""Berlin"", ""price"": 1.99}),
-Cheshire Cat takes great advantage of the following features of Qdrant:
+ },
-* [Collection Aliases](../../concepts/collections/#collection-aliases) to manage the change from one embedder to another.
+ {
-* [Quantization](../../guides/quantization/) to obtain a good balance between speed, memory usage and quality of the results.
+ Id: qdrant.NewIDNum(2),
-* [Snapshots](../../concepts/snapshots/) to not miss any information.
+ Vectors: qdrant.NewVectors(0.19, 0.81, 0.75, 0.11),
-* [Community](https://discord.com/invite/tdtYvXjC4h)
+ Payload: qdrant.NewValueMap(map[string]any{
+ ""city"": []any{""Berlin"", ""London""}}),
+ },
-![RAG Pipeline](/documentation/frameworks/cheshire-cat/stregatto.jpg)
+ {
+ Id: qdrant.NewIDNum(3),
+ Vectors: qdrant.NewVectors(0.36, 0.55, 0.47, 0.94),
-## How to use the Cheshire Cat
+ Payload: qdrant.NewValueMap(map[string]any{
+ ""city"": []any{""Berlin"", ""London""},
+ ""price"": []any{1.99, 2.99}}),
-### Requirements
+ },
-To run the Cheshire Cat, you need to have [Docker](https://docs.docker.com/engine/install/) and [docker-compose](https://docs.docker.com/compose/install/) already installed on your system.
+ },
+})
+```
-```shell
-docker run --rm -it -p 1865:80 ghcr.io/cheshire-cat-ai/core:latest
-```
+## Update payload
-* Chat with the Cheshire Cat on [localhost:1865/admin](http://localhost:1865/admin).
+### Set payload
-* You can also interact via REST API and try out the endpoints on [localhost:1865/docs](http://localhost:1865/docs)
+Set only the given payload values on a point.
-Check the [instructions on github](https://github.com/cheshire-cat-ai/core/blob/main/README.md) for a more comprehensive quick start.
+REST API ([Schema](https://api.qdrant.tech/api-reference/points/set-payload)):
-### First configuration of the LLM
+```http
-* Open the Admin Portal in your browser at [localhost:1865/admin](http://localhost:1865/admin).
+POST /collections/{collection_name}/points/payload
-* Configure the LLM in the `Settings` tab.
+{
-* If you don't explicitly choose it using `Settings` tab, the Embedder follows the LLM.
+ ""payload"": {
+ ""property1"": ""string"",
+ ""property2"": ""string""
-## Next steps
+ },
+ ""points"": [
+ 0, 3, 100
-For more information, refer to the Cheshire Cat [documentation](https://cheshire-cat-ai.github.io/docs/) and [blog](https://cheshirecat.ai/blog/).
+ ]
+}
+```
-* [Getting started](https://cheshirecat.ai/hello-world/)
-* [How the Cat works](https://cheshirecat.ai/how-the-cat-works/)
-* [Write Your First Plugin](https://cheshirecat.ai/write-your-first-plugin/)
+```python
-* [Cheshire Cat's use of Qdrant - Vector Space](https://cheshirecat.ai/dont-get-lost-in-vector-space/)
+client.set_payload(
-* [Cheshire Cat's use of Qdrant - Aliases](https://cheshirecat.ai/the-drunken-cat-effect/)
+ collection_name=""{collection_name}"",
-* [Discord Community](https://discord.com/invite/bHX5sNFCYU)
-",documentation/frameworks/cheshire-cat.md
-"---
+ payload={
-title: Vector Search Basics
+ ""property1"": ""string"",
-weight: 1
+ ""property2"": ""string"",
-social_preview_image: /docs/gettingstarted/vector-social.png
+ },
----
+ points=[0, 3, 10],
+)
+```
-# Vector Search Basics
+```typescript
-If you are still trying to figure out how vector search works, please read ahead. This document describes how vector search is used, covers Qdrant's place in the larger ecosystem, and outlines how you can use Qdrant to augment your existing projects.
+client.setPayload(""{collection_name}"", {
+ payload: {
+ property1: ""string"",
-For those who want to start writing code right away, visit our [Complete Beginners tutorial](/documentation/tutorials/search-beginners) to build a search engine in 5-15 minutes.
+ property2: ""string"",
+ },
+ points: [0, 3, 10],
-## A Brief History of Search
+});
+```
-Human memory is unreliable. Thus, as long as we have been trying to collect ‘knowledge’ in written form, we had to figure out how to search for relevant content without rereading the same books repeatedly. That’s why some brilliant minds introduced the inverted index. In the simplest form, it’s an appendix to a book, typically put at its end, with a list of the essential terms-and links to pages they occur at. Terms are put in alphabetical order. Back in the day, that was a manually crafted list requiring lots of effort to prepare. Once digitalization started, it became a lot easier, but still, we kept the same general principles. That worked, and still, it does.
+```rust
+use qdrant_client::qdrant::{
-If you are looking for a specific topic in a particular book, you can try to find a related phrase and quickly get to the correct page. Of course, assuming you know the proper term. If you don’t, you must try and fail several times or find somebody else to help you form the correct query.
+ PointsIdsList, SetPayloadPointsBuilder,
+};
+use qdrant_client::Payload;
-{{< figure src=/docs/gettingstarted/inverted-index.png caption=""A simplified version of the inverted index."" >}}
+use serde_json::json;
-Time passed, and we haven’t had much change in that area for quite a long time. But our textual data collection started to grow at a greater pace. So we also started building up many processes around those inverted indexes. For example, we allowed our users to provide many words and started splitting them into pieces. That allowed finding some documents which do not necessarily contain all the query words, but possibly part of them. We also started converting words into their root forms to cover more cases, removing stopwords, etc. Effectively we were becoming more and more user-friendly. Still, the idea behind the whole process is derived from the most straightforward keyword-based search known since the Middle Ages, with some tweaks.
+client
+ .set_payload(
+ SetPayloadPointsBuilder::new(
-{{< figure src=/docs/gettingstarted/tokenization.png caption=""The process of tokenization with an additional stopwords removal and converstion to root form of a word."" >}}
+ ""{collection_name}"",
+ Payload::try_from(json!({
+ ""property1"": ""string"",
-Technically speaking, we encode the documents and queries into so-called sparse vectors where each position has a corresponding word from the whole dictionary. If the input text contains a specific word, it gets a non-zero value at that position. But in reality, none of the texts will contain more than hundreds of different words. So the majority of vectors will have thousands of zeros and a few non-zero values. That’s why we call them sparse. And they might be already used to calculate some word-based similarity by finding the documents which have the biggest overlap.
+ ""property2"": ""string"",
+ }))
+ .unwrap(),
-{{< figure src=/docs/gettingstarted/query.png caption=""An example of a query vectorized to sparse format."" >}}
+ )
+ .points_selector(PointsIdsList {
+ ids: vec![0.into(), 3.into(), 10.into()],
-Sparse vectors have relatively **high dimensionality**; equal to the size of the dictionary. And the dictionary is obtained automatically from the input data. So if we have a vector, we are able to partially reconstruct the words used in the text that created that vector.
+ })
+ .wait(true),
+ )
-## The Tower of Babel
+ .await?;
+```
-Every once in a while, when we discover new problems with inverted indexes, we come up with a new heuristic to tackle it, at least to some extent. Once we realized that people might describe the same concept with different words, we started building lists of synonyms to convert the query to a normalized form. But that won’t work for the cases we didn’t foresee. Still, we need to craft and maintain our dictionaries manually, so they can support the language that changes over time. Another difficult issue comes to light with multilingual scenarios. Old methods require setting up separate pipelines and keeping humans in the loop to maintain the quality.
+```java
+import java.util.List;
-{{< figure src=/docs/gettingstarted/babel.jpg caption=""The Tower of Babel, Pieter Bruegel."" >}}
+import java.util.Map;
-## The Representation Revolution
+import static io.qdrant.client.PointIdFactory.id;
+import static io.qdrant.client.ValueFactory.value;
-The latest research in Machine Learning for NLP is heavily focused on training Deep Language Models. In this process, the neural network takes a large corpus of text as input and creates a mathematical representation of the words in the form of vectors. These vectors are created in such a way that words with similar meanings and occurring in similar contexts are grouped together and represented by similar vectors. And we can also take, for example, an average of all the word vectors to create the vector for a whole text (e.g query, sentence, or paragraph).
+client
+ .setPayloadAsync(
-![deep neural](/docs/gettingstarted/deep-neural.png)
+ ""{collection_name}"",
+ Map.of(""property1"", value(""string""), ""property2"", value(""string"")),
+ List.of(id(0), id(3), id(10)),
-We can take those **dense vectors** produced by the network and use them as a **different data representation**. They are dense because neural networks will rarely produce zeros at any position. In contrary to sparse ones, they have a relatively low dimensionality — hundreds or a few thousand only. Unfortunately, if we want to have a look and understand the content of the document by looking at the vector it’s no longer possible. Dimensions are no longer representing the presence of specific words.
+ true,
+ null,
+ null)
-Dense vectors can capture the meaning, not the words used in a text. That being said, **Large Language Models can automatically handle synonyms**. Moreso, since those neural networks might have been trained with multilingual corpora, they translate the same sentence, written in different languages, to similar vector representations, also called **embeddings**. And we can compare them to find similar pieces of text by calculating the distance to other vectors in our database.
+ .get();
+```
-{{< figure src=/docs/gettingstarted/input.png caption=""Input queries contain different words, but they are still converted into similar vector representations, because the neural encoder can capture the meaning of the sentences. That feature can capture synonyms but also different languages.."" >}}
+```csharp
+using Qdrant.Client;
-**Vector search** is a process of finding similar objects based on their embeddings similarity. The good thing is, you don’t have to design and train your neural network on your own. Many pre-trained models are available, either on **HuggingFace** or by using libraries like [SentenceTransformers](https://www.sbert.net/?ref=hackernoon.com). If you, however, prefer not to get your hands dirty with neural models, you can also create the embeddings with SaaS tools, like [co.embed API](https://docs.cohere.com/reference/embed?ref=hackernoon.com).
+using Qdrant.Client.Grpc;
-## Why Qdrant?
+var client = new QdrantClient(""localhost"", 6334);
-The challenge with vector search arises when we need to find similar documents in a big set of objects. If we want to find the closest examples, the naive approach would require calculating the distance to every document. That might work with dozens or even hundreds of examples but may become a bottleneck if we have more than that. When we work with relational data, we set up database indexes to speed things up and avoid full table scans. And the same is true for vector search. Qdrant is a fully-fledged vector database that speeds up the search process by using a graph-like structure to find the closest objects in sublinear time. So you don’t calculate the distance to every object from the database, but some candidates only.
+await client.SetPayloadAsync(
+ collectionName: ""{collection_name}"",
+ payload: new Dictionary<string, Value> { { ""property1"", ""string"" }, { ""property2"", ""string"" } },
-{{< figure src=/docs/gettingstarted/vector-search.png caption=""Vector search with Qdrant. Thanks to HNSW graph we are able to compare the distance to some of the objects from the database, not to all of them."" >}}
+ ids: new ulong[] { 0, 3, 10 }
+);
+```
-While doing a semantic search at scale, because this is what we sometimes call the vector search done on texts, we need a specialized tool to do it effectively — a tool like Qdrant.
+```go
-## Next Steps
+import (
+ ""context""
-Vector search is an exciting alternative to sparse methods. It solves the issues we had with the keyword-based search without needing to maintain lots of heuristics manually. It requires an additional component, a neural encoder, to convert text into vectors.
+ ""github.com/qdrant/go-client/qdrant""
+)
-[**Tutorial 1 - Qdrant for Complete Beginners**](../../tutorials/search-beginners)
-Despite its complicated background, vectors search is extraordinarily simple to set up. With Qdrant, you can have a search engine up-and-running in five minutes. Our [Complete Beginners tutorial](../../tutorials/search-beginners) will show you how.
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
-[**Tutorial 2 - Question and Answer System**](../../../articles/qa-with-cohere-and-qdrant)
+ Port: 6334,
-However, you can also choose SaaS tools to generate them and avoid building your model. Setting up a vector search project with Qdrant Cloud and Cohere co.embed API is fairly easy if you follow the [Question and Answer system tutorial](../../../articles/qa-with-cohere-and-qdrant).
+})
-There is another exciting thing about vector search. You can search for any kind of data as long as there is a neural network that would vectorize your data type. Do you think about a reverse image search? That’s also possible with vector embeddings.
+client.SetPayload(context.Background(), &qdrant.SetPayloadPoints{
+ CollectionName: ""{collection_name}"",
+ Payload: qdrant.NewValueMap(
+ map[string]any{""property1"": ""string"", ""property2"": ""string""}),
+ PointsSelector: qdrant.NewPointsSelector(
+ qdrant.NewIDNum(0),
+ qdrant.NewIDNum(3)),
+})
-",documentation/overview/vector-search.md
-"---
+```
-title: Qdrant vs. Alternatives
-weight: 2
----
+You don't need to know the ids of the points you want to modify. The alternative
+is to use filters.
-# Comparing Qdrant with alternatives
+```http
+POST /collections/{collection_name}/points/payload
-If you are currently using other vector databases, we recommend you read this short guide. It breaks down the key differences between Qdrant and other similar products. This document should help you decide which product has the features and support you need.
+{
-Unfortunately, since Pinecone is not an open source product, we can't include it in our [benchmarks](/benchmarks/). However, we still recommend you use the [benchmark tool](/benchmarks/) while exploring Qdrant.
+ ""payload"": {
+ ""property1"": ""string"",
+ ""property2"": ""string""
-## Feature comparison
+ },
+ ""filter"": {
+ ""must"": [
-| Feature | Pinecone | Qdrant | Comments |
+ {
-|-------------------------------------|-------------------------------|----------------------------------------------|----------------------------------------------------------|
+ ""key"": ""color"",
-| **Deployment Modes** | SaaS-only | Local, on-premise, Cloud | Qdrant offers more flexibility in deployment modes |
+ ""match"": {
-| **Supported Technologies** | Python, JavaScript/TypeScript | Python, JavaScript/TypeScript, Rust, Go | Qdrant supports a broader range of programming languages |
+ ""value"": ""red""
-| **Performance** (e.g., query speed) | TnC Prohibit Benchmarking | [Benchmark result](/benchmarks/) | Compare performance metrics |
+ }
-| **Pricing** | Starts at $70/mo | Free and Open Source, Cloud starts at $25/mo | Pricing as of May 2023 |
+ }
+ ]
+ }
-## Prototyping options
+}
+```
-Qdrant offers multiple ways of deployment, including local mode, on-premise, and [Qdrant Cloud](https://cloud.qdrant.io/).
-You can [get started with local mode quickly](/documentation/quick-start/) and without signing up for SaaS. With Pinecone you will have to connect your development environment to the cloud service just to test the product.
+```python
+client.set_payload(
+ collection_name=""{collection_name}"",
-When it comes to SaaS, both Pinecone and [Qdrant Cloud](https://cloud.qdrant.io/) offer a free cloud tier to check out the services, and you don't have to give credit card details for either. Qdrant's free tier should be enough to keep around 1M of 768-dimensional vectors, but it may vary depending on the additional attributes stored with vectors. Pinecone's starter plan supports approximately 200k 768-dimensional embeddings and metadata, stored within a single index. With Qdrant Cloud, however, you can experiment with different models as you may create several collections or keep multiple vectors per each point. That means Qdrant Cloud allows you building several small demos, even on a free tier.
+ payload={
+ ""property1"": ""string"",
+ ""property2"": ""string"",
-## Terminology
+ },
+ points=models.Filter(
+ must=[
-Although both tools serve similar purposes, there are some differences in the terms used. This dictionary may come
+ models.FieldCondition(
-in handy during the transition.
+ key=""color"",
+ match=models.MatchValue(value=""red""),
+ ),
-| Pinecone | Qdrant | Comments |
+ ],
-|----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+ ),
-| **Index** | [**Collection**](../../concepts/collections/) | Pinecone's index is an organizational unit for storing and managing vectors of the same size. The index is tightly coupled with hardware (pods). Qdrant uses the collection to describe a similar concept, however, a single instance may handle multiple collections at once. |
+)
-| **Collection** | [**Snapshots**](../../concepts/snapshots/) | A collection in Pinecone is a static copy of an *index* that you cannot query, mostly used as some sort of backup. There is no direct analogy in Qdrant, but if you want to back your collection up, you may always create a more flexible [snapshot](../../concepts/snapshots/). |
+```
-| **Namespace** | [**Payload-based isolation**](../../guides/multiple-partitions/) / [**User-defined sharding**](../../guides/distributed_deployment/#user-defined-sharding) | Namespaces allow the partitioning of the vectors in an index into subsets. Qdrant provides multiple tools to ensure efficient data isolation within a collection. For fine-grained data segreation you can use payload-based approach to multitenancy, and use custom sharding at bigger scale |
-| **Metadata** | [**Payload**](../../concepts/payload/) | Additional attributes describing a particular object, other than the embedding vector. Both engines support various data types, but Pinecone metadata is key-value, while Qdrant supports any JSON-like objects. |
-| **Query** | [**Search**](../../concepts/search/) | Name of the method used to find the nearest neighbors for a given vector, possibly with some additional filters applied on top. |
+```typescript
-| N/A | [**Scroll**](../../concepts/points/#scroll-points) | Pinecone does not offer a way to iterate through all the vectors in a particular index. Qdrant has a `scroll` method to get them all without using search. |
+client.setPayload(""{collection_name}"", {
+ payload: {
+ property1: ""string"",
-## Known limitations
+ property2: ""string"",
+ },
+ filter: {
-1. Pinecone does not support arbitrary JSON metadata, but a flat structure with strings, numbers, booleans, or lists of strings used as values. Qdrant accepts any JSON object as a payload, even nested structures.
+ must: [
-2. NULL values are not supported in Pinecone metadata but are handled properly by Qdrant.
+ {
-3. The maximum size of Pinecone metadata is 40kb per vector.
+ key: ""color"",
-4. Pinecone, unlike Qdrant, does not support geolocation and filtering based on geographical criteria.
+ match: {
-5. Qdrant allows storing multiple vectors per point, and those might be of a different dimensionality. Pinecone doesn't support anything similar.
+ value: ""red"",
-6. Vectors in Pinecone are mandatory for each point. Qdrant supports optional vectors.
+ },
+ },
+ ],
-It is worth mentioning, that **Pinecone will automatically create metadata indexes for all the fields**. Qdrant assumes you know
+ },
-your data and your future queries best, so it's up to you to choose the fields to be indexed. Thus, **you need to explicitly define the payload indexes while using Qdrant**.
+});
+```
-## Supported technologies
+```rust
+use qdrant_client::qdrant::{Condition, Filter, SetPayloadPointsBuilder};
-Both tools support various programming languages providing official SDKs.
+use qdrant_client::Payload;
+use serde_json::json;
-| | Pinecone | Qdrant |
-|---------------------------|----------------------|----------------------|
+client
-| **Python** | ✅ | ✅ |
+ .set_payload(
-| **JavaScript/TypeScript** | ✅ | ✅ |
+ SetPayloadPointsBuilder::new(
-| **Rust** | ❌ | ✅ |
+ ""{collection_name}"",
-| **Go** | ❌ | ✅ |
+ Payload::try_from(json!({
+ ""property1"": ""string"",
+ ""property2"": ""string"",
-There are also various community-driven projects aimed to provide the support for the other languages, but those are not officially
+ }))
-maintained, thus not mentioned here. However, it is still possible to interact with both engines through the HTTP REST or gRPC API.
+ .unwrap(),
-That makes it easy to integrate with any technology of your choice.
+ )
+ .points_selector(Filter::must([Condition::matches(
+ ""color"",
-If you are a Python user, then both tools are well-integrated with the most popular libraries like [LangChain](../integrations/langchain/), [LlamaIndex](../integrations/llama-index/), [Haystack](../integrations/haystack/), and more.
+ ""red"".to_string(),
-Using any of those libraries makes it easier to experiment with different vector databases, as the transition should be seamless.
+ )]))
+ .wait(true),
+ )
-## Planning to migrate?
+ .await?;
+```
-> We strongly recommend you use [Qdrant Tools](https://github.com/NirantK/qdrant_tools) to migrate from Pinecone to Qdrant.
+```java
+import java.util.Map;
+import io.qdrant.client.grpc.Points.Filter;
-Migrating from Pinecone to Qdrant involves a series of well-planned steps to ensure that the transition is smooth and disruption-free. Here is a suggested migration plan:
+import static io.qdrant.client.ConditionFactory.matchKeyword;
-1. Understanding Qdrant: It's important to first get a solid grasp of Qdrant, its functions, and its APIs. Take time to understand how to establish collections, add points, and query these collections.
+import static io.qdrant.client.ValueFactory.value;
-2. Migration strategy: Create a comprehensive migration strategy, incorporating data migration (copying your vectors and associated metadata from Pinecone to Qdrant), feature migration (verifying the availability and setting up of features currently in use with Pinecone in Qdrant), and a contingency plan (should there be any unexpected issues).
+client
+ .setPayloadAsync(
+ ""{collection_name}"",
-3. Establishing a parallel Qdrant system: Set up a Qdrant system to run concurrently with your current Pinecone system. This step will let you begin testing Qdrant without disturbing your ongoing operations on Pinecone.
+ Map.of(""property1"", value(""string""), ""property2"", value(""string"")),
+ Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build(),
+ true,
-4. Data migration: Shift your vectors and metadata from Pinecone to Qdrant. The timeline for this step could vary, depending on the size of your data and Pinecone API's rate limitations.
+ null,
+ null)
+ .get();
-5. Testing and transition: Following the data migration, thoroughly test the Qdrant system. Once you're assured of the Qdrant system's stability and performance, you can make the switch.
+```
-6. Monitoring and fine-tuning: After transitioning to Qdrant, maintain a close watch on its performance. It's key to continue refining the system for optimal results as needed.
+```csharp
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-## Next steps
+using static Qdrant.Client.Grpc.Conditions;
-1. If you aren't ready yet, [try out Qdrant locally](/documentation/quick-start/) or sign up for [Qdrant Cloud](https://cloud.qdrant.io/).
+var client = new QdrantClient(""localhost"", 6334);
-2. For more basic information on Qdrant read our [Overview](overview/) section or learn more about Qdrant Cloud's [Free Tier](documentation/cloud/).
+await client.SetPayloadAsync(
+ collectionName: ""{collection_name}"",
+ payload: new Dictionary<string, Value> { { ""property1"", ""string"" }, { ""property2"", ""string"" } },
-3. If ready to migrate, please consult our [Comprehensive Guide](https://github.com/NirantK/qdrant_tools) for further details on migration steps.
-",documentation/overview/qdrant-alternatives.md
-"---
+ filter: MatchKeyword(""color"", ""red"")
-title: What is Qdrant?
+);
-weight: 9
+```
-aliases:
- - overview
----
+```go
+import (
+ ""context""
-# Introduction
+ ""github.com/qdrant/go-client/qdrant""
-![qdrant](https://qdrant.tech/images/logo_with_text.png)
+)
-Vector databases are a relatively new way for interacting with abstract data representations
+client, err := qdrant.NewClient(&qdrant.Config{
-derived from opaque machine learning models such as deep learning architectures. These
+ Host: ""localhost"",
-representations are often called vectors or embeddings and they are a compressed version of
+ Port: 6334,
-the data used to train a machine learning model to accomplish a task like sentiment analysis,
+})
-speech recognition, object detection, and many others.
+client.SetPayload(context.Background(), &qdrant.SetPayloadPoints{
-These new databases shine in many applications like [semantic search](https://en.wikipedia.org/wiki/Semantic_search)
+ CollectionName: ""{collection_name}"",
-and [recommendation systems](https://en.wikipedia.org/wiki/Recommender_system), and here, we'll
+ Payload: qdrant.NewValueMap(
-learn about one of the most popular and fastest growing vector databases in the market, [Qdrant](https://qdrant.tech).
+ map[string]any{""property1"": ""string"", ""property2"": ""string""}),
+ PointsSelector: qdrant.NewPointsSelectorFilter(&qdrant.Filter{
+ Must: []*qdrant.Condition{
-## What is Qdrant?
+ qdrant.NewMatch(""color"", ""red""),
+ },
+ }),
-[Qdrant](http://qdrant.tech) ""is a vector similarity search engine that provides a production-ready
+})
-service with a convenient API to store, search, and manage points (i.e. vectors) with an additional
+```
-payload."" You can think of the payloads as additional pieces of information that can help you
-hone in on your search and also receive useful information that you can give to your users.
+_Available as of v1.8.0_
-You can get started using Qdrant with the Python `qdrant-client`, by pulling the latest docker
-image of `qdrant` and connecting to it locally, or by trying out [Qdrant's Cloud](https://cloud.qdrant.io/)
+It is possible to modify only a specific key of the payload by using the `key` parameter.
-free tier option until you are ready to make the full switch.
+For instance, given the following payload JSON object on a point:
-With that out of the way, let's talk about what are vector databases.
+```json
-## What Are Vector Databases?
+{
+ ""property1"": {
+ ""nested_property"": ""foo"",
-![dbs](https://raw.githubusercontent.com/ramonpzg/mlops-sydney-2023/main/images/databases.png)
+ },
+ ""property2"": {
+ ""nested_property"": ""bar"",
-Vector databases are a type of database designed to store and query high-dimensional vectors
+ }
-efficiently. In traditional [OLTP](https://www.ibm.com/topics/oltp) and [OLAP](https://www.ibm.com/topics/olap)
+}
-databases (as seen in the image above), data is organized in rows and columns (and these are
+```
-called **Tables**), and queries are performed based on the values in those columns. However,
-in certain applications including image recognition, natural language processing, and recommendation
-systems, data is often represented as vectors in a high-dimensional space, and these vectors, plus
+You can modify the `nested_property` of `property1` with the following request:
-an id and a payload, are the elements we store in something called a **Collection** a vector
-database like Qdrant.
+```http
+POST /collections/{collection_name}/points/payload
-A vector in this context is a mathematical representation of an object or data point, where each
+{
-element of the vector corresponds to a specific feature or attribute of the object. For example,
+ ""payload"": {
-in an image recognition system, a vector could represent an image, with each element of the vector
+ ""nested_property"": ""qux"",
-representing a pixel value or a descriptor/characteristic of that pixel. In a music recommendation
+ },
-system, each vector would represent a song, and each element of the vector would represent a
+ ""key"": ""property1"",
-characteristic song such as tempo, genre, lyrics, and so on.
+ ""points"": [1]
+}
+```
-Vector databases are optimized for **storing** and **querying** these high-dimensional vectors
-efficiently, and they often using specialized data structures and indexing techniques such as
-Hierarchical Navigable Small World (HNSW) -- which is used to implement Approximate Nearest
+Resulting in the following payload:
-Neighbors -- and Product Quantization, among others. These databases enable fast similarity
-and semantic search while allowing users to find vectors that are the closest to a given query
-vector based on some distance metric. The most commonly used distance metrics are Euclidean
+```json
-Distance, Cosine Similarity, and Dot Product, and these three are fully supported Qdrant.
+{
+ ""property1"": {
+ ""nested_property"": ""qux"",
-Here's a quick overview of the three:
+ },
-- [**Cosine Similarity**](https://en.wikipedia.org/wiki/Cosine_similarity) - Cosine similarity
+ ""property2"": {
-is a way to measure how similar two things are. Think of it like a ruler that tells you how far
+ ""nested_property"": ""bar"",
-apart two points are, but instead of measuring distance, it measures how similar two things
+ }
-are. It's often used with text to compare how similar two documents or sentences are to each
+}
-other. The output of the cosine similarity ranges from -1 to 1, where -1 means the two things
+```
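+With the Python client, the same nested update can be sketched as follows (assuming your client version supports the `key` argument, matching the v1.8.0 server feature above):
+
+```python
+client.set_payload(
+ collection_name=""{collection_name}"",
+ payload={""nested_property"": ""qux""},
+ key=""property1"", # merge only into this top-level key
+ points=[1],
+)
+```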
-are completely dissimilar, and 1 means the two things are exactly the same. It's a straightforward
-and effective way to compare two things!
-- [**Dot Product**](https://en.wikipedia.org/wiki/Dot_product) - The dot product similarity
+### Overwrite payload
-metric is another way of measuring how similar two things are, like cosine similarity. It's
-often used in machine learning and data science when working with numbers. The dot product
-similarity is calculated by multiplying the values in two sets of numbers, and then adding
+Fully replace any existing payload with the given one.
-up those products. The higher the sum, the more similar the two sets of numbers are. So, it's
-like a scale that tells you how closely two sets of numbers match each other.
-- [**Euclidean Distance**](https://en.wikipedia.org/wiki/Euclidean_distance) - Euclidean
+REST API ([Schema](https://api.qdrant.tech/api-reference/points/overwrite-payload)):
-distance is a way to measure the distance between two points in space, similar to how we
-measure the distance between two places on a map. It's calculated by finding the square root
-of the sum of the squared differences between the two points' coordinates. This distance metric
+```http
-is commonly used in machine learning to measure how similar or dissimilar two data points are
+PUT /collections/{collection_name}/points/payload
-or, in other words, to understand how far apart they are.
+{
+ ""payload"": {
+ ""property1"": ""string"",
-Now that we know what vector databases are and how they are structurally different than other
+ ""property2"": ""string""
-databases, let's go over why they are important.
+ },
+ ""points"": [
+ 0, 3, 100
-## Why do we need Vector Databases?
+ ]
+}
+```
-Vector databases play a crucial role in various applications that require similarity search, such
-as recommendation systems, content-based image retrieval, and personalized search. By taking
-advantage of their efficient indexing and searching techniques, vector databases enable faster
+```python
-and more accurate retrieval of unstructured data already represented as vectors, which can
+client.overwrite_payload(
-help put in front of users the most relevant results to their queries.
+ collection_name=""{collection_name}"",
+ payload={
+ ""property1"": ""string"",
-In addition, other benefits of using vector databases include:
+ ""property2"": ""string"",
-1. Efficient storage and indexing of high-dimensional data.
+ },
-3. Ability to handle large-scale datasets with billions of data points.
+ points=[0, 3, 10],
-4. Support for real-time analytics and queries.
+)
-5. Ability to handle vectors derived from complex data types such as images, videos, and natural language text.
+```
-6. Improved performance and reduced latency in machine learning and AI applications.
-7. Reduced development and deployment time and cost compared to building a custom solution.
+```typescript
+client.overwritePayload(""{collection_name}"", {
-Keep in mind that the specific benefits of using a vector database may vary depending on the
+ payload: {
-use case of your organization and the features of the database you ultimately choose.
+ property1: ""string"",
+ property2: ""string"",
+ },
-Let's now evaluate, at a high-level, the way Qdrant is architected.
+ points: [0, 3, 10],
+});
+```
-## High-Level Overview of Qdrant's Architecture
+```rust
-![qdrant](https://raw.githubusercontent.com/ramonpzg/mlops-sydney-2023/main/images/qdrant_overview_high_level.png)
+use qdrant_client::qdrant::{PointsIdsList, SetPayloadPointsBuilder};
+use qdrant_client::Payload;
+use serde_json::json;
-The diagram above represents a high-level overview of some of the main components of Qdrant. Here
-are the terminologies you should get familiar with.
+client
+ .overwrite_payload(
-- [Collections](../concepts/collections/): A collection is a named set of points (vectors with a payload) among which you can search. The vector of each point within the same collection must have the same dimensionality and be compared by a single metric. [Named vectors](../concepts/collections/#collection-with-multiple-vectors) can be used to have multiple vectors in a single point, each of which can have their own dimensionality and metric requirements.
+ SetPayloadPointsBuilder::new(
-- [Distance Metrics](https://en.wikipedia.org/wiki/Metric_space): These are used to measure
+ ""{collection_name}"",
-similarities among vectors and they must be selected at the same time you are creating a
+ Payload::try_from(json!({
-collection. The choice of metric depends on the way the vectors were obtained and, in particular,
+ ""property1"": ""string"",
-on the neural network that will be used to encode new queries.
+ ""property2"": ""string"",
-- [Points](../concepts/points/): The points are the central entity that
+ }))
-Qdrant operates with and they consist of a vector and an optional id and payload.
+ .unwrap(),
- - id: a unique identifier for your vectors.
+ )
- - Vector: a high-dimensional representation of data, for example, an image, a sound, a document, a video, etc.
+ .points_selector(PointsIdsList {
- - [Payload](../concepts/payload/): A payload is a JSON object with additional data you can add to a vector.
+ ids: vec![0.into(), 3.into(), 10.into()],
-- [Storage](../concepts/storage/): Qdrant can use one of two options for
+ })
-storage, **In-memory** storage (Stores all vectors in RAM, has the highest speed since disk
+ .wait(true),
-access is required only for persistence), or **Memmap** storage, (creates a virtual address
+ )
-space associated with the file on disk).
+ .await?;
-- Clients: the programming languages you can use to connect to Qdrant.
+```
-## Next Steps
+```java
+import java.util.List;
+import java.util.Map;
-Now that you know more about vector databases and Qdrant, you are ready to get started with one
-of our tutorials. If you've never used a vector database, go ahead and jump straight into
+import static io.qdrant.client.PointIdFactory.id;
-the **Getting Started** section. Conversely, if you are a seasoned developer in these
+import static io.qdrant.client.ValueFactory.value;
-technology, jump to the section most relevant to your use case.
+client
-As you go through the tutorials, please let us know if any questions come up in our
+ .overwritePayloadAsync(
-[Discord channel here](https://qdrant.to/discord). 😎
-",documentation/overview/_index.md
-"---
+ ""{collection_name}"",
-title: ""Qdrant 1.7.0 has just landed!""
+ Map.of(""property1"", value(""string""), ""property2"", value(""string"")),
-short_description: ""Qdrant 1.7.0 brought a bunch of new features. Let's take a closer look at them!""
+ List.of(id(0), id(3), id(10)),
-description: ""Sparse vectors, Discovery API, user-defined sharding, and snapshot-based shard transfer. That's what you can find in the latest Qdrant 1.7.0 release!""
+ true,
-social_preview_image: /articles_data/qdrant-1.7.x/social_preview.png
+ null,
-small_preview_image: /articles_data/qdrant-1.7.x/icon.svg
+ null)
-preview_dir: /articles_data/qdrant-1.7.x/preview
+ .get();
-weight: -90
+```
-author: Kacper Łukawski
-author_link: https://kacperlukawski.com
-date: 2023-12-10T10:00:00Z
+```csharp
-draft: false
+using Qdrant.Client;
-keywords:
+using Qdrant.Client.Grpc;
- - vector search
- - new features
- - sparse vectors
+var client = new QdrantClient(""localhost"", 6334);
- - discovery
- - exploration
- - custom sharding
+await client.OverwritePayloadAsync(
- - snapshot-based shard transfer
+ collectionName: ""{collection_name}"",
- - hybrid search
+ payload: new Dictionary<string, Value> { { ""property1"", ""string"" }, { ""property2"", ""string"" } },
- - bm25
+ ids: new ulong[] { 0, 3, 10 }
- - tfidf
+);
- - splade
+```
----
+```go
-Please welcome the long-awaited [Qdrant 1.7.0 release](https://github.com/qdrant/qdrant/releases/tag/v1.7.0). Except for a handful of minor fixes and improvements, this release brings some cool brand-new features that we are excited to share!
+import (
-The latest version of your favorite vector search engine finally supports **sparse vectors**. That's the feature many of you requested, so why should we ignore it?
+ ""context""
-We also decided to continue our journey with [vector similarity beyond search](/articles/vector-similarity-beyond-search/). The new Discovery API covers some utterly new use cases. We're more than excited to see what you will build with it!
-But there is more to it! Check out what's new in **Qdrant 1.7.0**!
+ ""github.com/qdrant/go-client/qdrant""
+)
-1. Sparse vectors: do you want to use keyword-based search? Support for sparse vectors is finally here!
-2. Discovery API: an entirely new way of using vectors for restricted search and exploration.
-3. User-defined sharding: you can now decide which points should be stored on which shard.
+client, err := qdrant.NewClient(&qdrant.Config{
-4. Snapshot-based shard transfer: a new option for moving shards between nodes.
+ Host: ""localhost"",
+ Port: 6334,
+})
-Do you see something missing? Your feedback drives the development of Qdrant, so do not hesitate to [join our Discord community](https://qdrant.to/discord) and help us build the best vector search engine out there!
+client.OverwritePayload(context.Background(), &qdrant.SetPayloadPoints{
-## New features
+ CollectionName: ""{collection_name}"",
+ Payload: qdrant.NewValueMap(
+ map[string]any{""property1"": ""string"", ""property2"": ""string""}),
-Qdrant 1.7.0 brings a bunch of new features. Let's take a closer look at them!
+ PointsSelector: qdrant.NewPointsSelector(
+ qdrant.NewIDNum(0),
+ qdrant.NewIDNum(3)),
-### Sparse vectors
+})
+```
-Traditional keyword-based search mechanisms often rely on algorithms like TF-IDF, BM25, or comparable methods. While these techniques internally utilize vectors, they typically involve sparse vector representations. In these methods, the **vectors are predominantly filled with zeros, containing a relatively small number of non-zero values**.
-Those sparse vectors are theoretically high dimensional, definitely way higher than the dense vectors used in semantic search. However, since the majority of dimensions are usually zeros, we store them differently and just keep the non-zero dimensions.
+Like [set payload](#set-payload), you don't need to know the ids of the points
+you want to modify. The alternative is to use filters.
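+For example, a filter-based overwrite with the Python client could be sketched like this (assuming `overwrite_payload` accepts the same filter selector as `set_payload`):
+
+```python
+client.overwrite_payload(
+ collection_name=""{collection_name}"",
+ payload={
+ ""property1"": ""string"",
+ ""property2"": ""string"",
+ },
+ # every point matching the filter gets its payload fully replaced
+ points=models.Filter(
+ must=[
+ models.FieldCondition(
+ key=""color"",
+ match=models.MatchValue(value=""red""),
+ ),
+ ],
+ ),
+)
+```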
-Until now, Qdrant has not been able to handle sparse vectors natively. Some were trying to convert them to dense vectors, but that was not the best solution or a suggested way. We even wrote a piece with [our thoughts on building a hybrid search](/articles/hybrid-search/), and we encouraged you to use a different tool for keyword lookup.
+### Clear payload
-Things have changed since then, as so many of you wanted a single tool for sparse and dense vectors. And responding to this [popular](https://github.com/qdrant/qdrant/issues/1678) [demand](https://github.com/qdrant/qdrant/issues/1135), we've now introduced sparse vectors!
+This method removes all payload keys from the specified points.
-If you're coming across the topic of sparse vectors for the first time, our [Brief History of Search](https://qdrant.tech/documentation/overview/vector-search/) explains the difference between sparse and dense vectors.
+REST API ([Schema](https://api.qdrant.tech/api-reference/points/clear-payload)):
-Check out the [sparse vectors article](../sparse-vectors/) and [sparse vectors index docs](/documentation/concepts/indexing/#sparse-vector-index) for more details on what this new index means for Qdrant users.
+```http
+POST /collections/{collection_name}/points/payload/clear
-### Discovery API
+{
+ ""points"": [0, 3, 100]
+}
-The recently launched [Discovery API](/documentation/concepts/explore/#discovery-api) extends the range of scenarios for leveraging vectors. While its interface mirrors the [Recommendation API](/documentation/concepts/explore/#recommendation-api), it focuses on refining the search parameters for greater precision.
+```
-The concept of 'context' refers to a collection of positive-negative pairs that define zones within a space. Each pair effectively divides the space into positive or negative segments. This concept guides the search operation to prioritize points based on their inclusion within positive zones or their avoidance of negative zones. Essentially, the search algorithm favors points that fall within multiple positive zones or steer clear of negative ones.
+```python
-The Discovery API can be used in two ways - either with or without the target point. The first case is called a **discovery search**, while the second is called a **context search**.
+client.clear_payload(
+ collection_name=""{collection_name}"",
+ points_selector=[0, 3, 100],
-#### Discovery search
+)
+```
-*Discovery search* is an operation that uses a target point to find the most relevant points in the collection, while performing the search in the preferred areas only. That is basically a search operation with more control over the search space.
+```typescript
+client.clearPayload(""{collection_name}"", {
-![Discovery search visualization](/articles_data/qdrant-1.7.x/discovery-search.png)
+ points: [0, 3, 100],
+});
+```
-Please refer to the [Discovery API documentation on discovery search](/documentation/concepts/explore/#discovery-search) for more details and the internal mechanics of the operation.
+```rust
-#### Context search
+use qdrant_client::qdrant::{ClearPayloadPointsBuilder, PointsIdsList};
-The mode of *context search* is similar to the discovery search, but it does not use a target point. Instead, the `context` is used to navigate the [HNSW graph](https://arxiv.org/abs/1603.09320) towards preferred zones. It is expected that the results in that mode will be diverse, and not centered around one point.
+client
-*Context Search* could serve as a solution for individuals seeking a more exploratory approach to navigate the vector space.
+ .clear_payload(
+ ClearPayloadPointsBuilder::new(""{collection_name}"")
+ .points(PointsIdsList {
-![Context search visualization](/articles_data/qdrant-1.7.x/context-search.png)
+ ids: vec![0.into(), 3.into(), 10.into()],
+ })
+ .wait(true),
-### User-defined sharding
+ )
+ .await?;
+```
-Qdrant's collections are divided into shards. A single **shard** is a self-contained store of points, which can be moved between nodes. Up till now, the points were distributed among shards by using a consistent hashing algorithm, so that shards were managing non-intersecting subsets of points.
-The latter one remains true, but now you can define your own sharding and decide which points should be stored on which shard. Sounds cool, right? But why would you need that? Well, there are multiple scenarios in which you may want to use custom sharding. For example, you may want to store some points on a dedicated node, or you may want to store points from the same user on the same shard and
+```java
+import java.util.List;
-While the existing behavior is still the default one, you can now define the shards when you create a collection. Then, you can assign each point to a shard by providing a `shard_key` in the `upsert` operation. What's more, you can also search over the selected shards only, by providing the `shard_key` parameter in the search operation.
+import static io.qdrant.client.PointIdFactory.id;
-```http request
-POST /collections/my_collection/points/search
-{
+client
- ""vector"": [0.29, 0.81, 0.75, 0.11],
+ .clearPayloadAsync(""{collection_name}"", List.of(id(0), id(3), id(100)), true, null, null)
- ""shard_key"": [""cats"", ""dogs""],
+ .get();
- ""limit"": 10,
+```
- ""with_payload"": true,
-}
-```
+```csharp
+using Qdrant.Client;
-If you want to know more about the user-defined sharding, please refer to the [sharding documentation](/documentation/guides/distributed_deployment/#sharding).
+var client = new QdrantClient(""localhost"", 6334);
-### Snapshot-based shard transfer
+await client.ClearPayloadAsync(collectionName: ""{collection_name}"", ids: new ulong[] { 0, 3, 100 });
+```
-That's a really more in depth technical improvement for the distributed mode users, that we implemented a new options the shard transfer mechanism. The new approach is based on the snapshot of the shard, which is transferred to the target node.
+```go
-Moving shards is required for dynamical scaling of the cluster. Your data can migrate between nodes, and the way you move it is crucial for the performance of the whole system. The good old `stream_records` method (still the default one) transmits all the records between the machines and indexes them on the target node.
+import (
-In the case of moving the shard, it's necessary to recreate the HNSW index each time. However, with the introduction of the new `snapshot` approach, the snapshot itself, inclusive of all data and potentially quantized content, is transferred to the target node. This comprehensive snapshot includes the entire index, enabling the target node to seamlessly load it and promptly begin handling requests without the need for index recreation.
+ ""context""
-There are multiple scenarios in which you may prefer one over the other. Please check out the docs of the [shard transfer method](/documentation/guides/distributed_deployment/#shard-transfer-method) for more details and head-to-head comparison. As for now, the old `stream_records` method is still the default one, but we may decide to change it in the future.
+ ""github.com/qdrant/go-client/qdrant""
+)
-## Minor improvements
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
-Beyond introducing new features, Qdrant 1.7.0 enhances performance and addresses various minor issues. Here's a rundown of the key improvements:
+ Port: 6334,
+})
-1. Improvement of HNSW Index Building on High CPU Systems ([PR#2869](https://github.com/qdrant/qdrant/pull/2869)).
+client.ClearPayload(context.Background(), &qdrant.ClearPayloadPoints{
+ CollectionName: ""{collection_name}"",
-2. Improving [Search Tail Latencies](https://github.com/qdrant/qdrant/pull/2931): improvement for high CPU systems with many parallel searches, directly impacting the user experience by reducing latency.
+ Points: qdrant.NewPointsSelector(
+ qdrant.NewIDNum(0),
+ qdrant.NewIDNum(3)),
-3. [Adding Index for Geo Map Payloads](https://github.com/qdrant/qdrant/pull/2768): index for geo map payloads can significantly improve search performance, especially for applications involving geographical data.
+})
+```
-4. Stability of Consensus on Big High Load Clusters: enhancing the stability of consensus in large, high-load environments is critical for ensuring the reliability and scalability of the system ([PR#3013](https://github.com/qdrant/qdrant/pull/3013), [PR#3026](https://github.com/qdrant/qdrant/pull/3026), [PR#2942](https://github.com/qdrant/qdrant/pull/2942), [PR#3103](https://github.com/qdrant/qdrant/pull/3103), [PR#3054](https://github.com/qdrant/qdrant/pull/3054)).
+
-## Release notes
+### Delete payload keys
-[Our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.7.0) are a place to go if you are interested in more details. Please remember that Qdrant is an open source project, so feel free to [contribute](https://github.com/qdrant/qdrant/issues)!
-",articles/qdrant-1.7.x.md
-"---
+Delete specific payload keys from points.
-title: Metric Learning Tips & Tricks
-short_description: How to train an object matching model and serve it in production.
-description: Practical recommendations on how to train a matching model and serve it in production. Even with no labeled data.
+REST API ([Schema](https://api.qdrant.tech/api-reference/points/delete-payload)):
-# external_link: https://vasnetsov93.medium.com/metric-learning-tips-n-tricks-2e4cfee6b75b
-social_preview_image: /articles_data/metric-learning-tips/preview/social_preview.jpg
-preview_dir: /articles_data/metric-learning-tips/preview
+```http
-small_preview_image: /articles_data/metric-learning-tips/scatter-graph.svg
+POST /collections/{collection_name}/points/payload/delete
-weight: 20
+{
-author: Andrei Vasnetsov
+ ""keys"": [""color"", ""price""],
-author_link: https://blog.vasnetsov.com/
+ ""points"": [0, 3, 100]
-date: 2021-05-15T10:18:00.000Z
+}
-# aliases: [ /articles/metric-learning-tips/ ]
+```
----
+```python
+client.delete_payload(
+ collection_name=""{collection_name}"",
-## How to train object matching model with no labeled data and use it in production
+ keys=[""color"", ""price""],
+ points=[0, 3, 100],
+)
+```
-Currently, most machine-learning-related business cases are solved as a classification problems.
-Classification algorithms are so well studied in practice that even if the original problem is not directly a classification task, it is usually decomposed or approximately converted into one.
+```typescript
+client.deletePayload(""{collection_name}"", {
+ keys: [""color"", ""price""],
-However, despite its simplicity, the classification task has requirements that could complicate its production integration and scaling.
+ points: [0, 3, 100],
-E.g. it requires a fixed number of classes, where each class should have a sufficient number of training samples.
+});
+```
-In this article, I will describe how we overcome these limitations by switching to metric learning.
-By the example of matching job positions and candidates, I will show how to train metric learning model with no manually labeled data, how to estimate prediction confidence, and how to serve metric learning in production.
+```rust
+use qdrant_client::qdrant::{DeletePayloadPointsBuilder, PointsIdsList};
+client
-## What is metric learning and why using it?
+ .delete_payload(
+ DeletePayloadPointsBuilder::new(
+ ""{collection_name}"",
-According to Wikipedia, metric learning is the task of learning a distance function over objects.
+ vec![""color"".to_string(), ""price"".to_string()],
-In practice, it means that we can train a model that tells a number for any pair of given objects.
+ )
-And this number should represent a degree or score of similarity between those given objects.
+ .points_selector(PointsIdsList {
-For example, objects with a score of 0.9 could be more similar than objects with a score of 0.5
+ ids: vec![0.into(), 3.into(), 10.into()],
-Actual scores and their direction could vary among different implementations.
+ })
+ .wait(true),
+ )
-In practice, there are two main approaches to metric learning and two corresponding types of NN architectures.
+ .await?;
-The first is the interaction-based approach, which first builds local interactions (i.e., local matching signals) between two objects. Deep neural networks learn hierarchical interaction patterns for matching.
+```
-Examples of neural network architectures include MV-LSTM, ARC-II, and MatchPyramid.
+```java
-![MV-LSTM, example of interaction-based model](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/mv_lstm.png)
+import java.util.List;
-> MV-LSTM, example of interaction-based model, [Shengxian Wan et al.
-](https://www.researchgate.net/figure/Illustration-of-MV-LSTM-S-X-and-S-Y-are-the-in_fig1_285271115) via Researchgate
+import static io.qdrant.client.PointIdFactory.id;
-The second is the representation-based approach.
-In this case distance function is composed of 2 components:
+client
-the Encoder transforms an object into embedded representation - usually a large float point vector, and the Comparator takes embeddings of a pair of objects from the Encoder and calculates their similarity.
+ .deletePayloadAsync(
-The most well-known example of this embedding representation is Word2Vec.
+ ""{collection_name}"",
+ List.of(""color"", ""price""),
+ List.of(id(0), id(3), id(100)),
-Examples of neural network architectures also include DSSM, C-DSSM, and ARC-I.
+ true,
+ null,
+ null)
-The Comparator is usually a very simple function that could be calculated very quickly.
+ .get();
-It might be cosine similarity or even a dot production.
+```
-Two-stage schema allows performing complex calculations only once per object.
-Once transformed, the Comparator can calculate object similarity independent of the Encoder much more quickly.
-For more convenience, embeddings can be placed into specialized storages or vector search engines.
+```csharp
-These search engines allow to manage embeddings using API, perform searches and other operations with vectors.
+using Qdrant.Client;
-![C-DSSM, example of representation-based model](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/cdssm.png)
+var client = new QdrantClient(""localhost"", 6334);
-> C-DSSM, example of representation-based model, [Xue Li et al.](https://arxiv.org/abs/1901.10710v2) via arXiv
+await client.DeletePayloadAsync(
-Pre-trained NNs can also be used. The output of the second-to-last layer could work as an embedded representation.
+ collectionName: ""{collection_name}"",
-Further in this article, I would focus on the representation-based approach, as it proved to be more flexible and fast.
+ keys: [""color"", ""price""],
+ ids: new ulong[] { 0, 3, 100 }
+);
-So what are the advantages of using metric learning comparing to classification?
+```
-Object Encoder does not assume the number of classes.
-So if you can't split your object into classes,
-if the number of classes is too high, or you suspect that it could grow in the future - consider using metric learning.
+```go
+import (
+ ""context""
-In our case, business goal was to find suitable vacancies for candidates who specify the title of the desired position.
-To solve this, we used to apply a classifier to determine the job category of the vacancy and the candidate.
-But this solution was limited to only a few hundred categories.
+ ""github.com/qdrant/go-client/qdrant""
-Candidates were complaining that they couldn't find the right category for them.
+)
-Training the classifier for new categories would be too long and require new training data for each new category.
-Switching to metric learning allowed us to overcome these limitations, the resulting solution could compare any pair position descriptions, even if we don't have this category reference yet.
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
-![T-SNE with job samples](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/embeddings.png)
+ Port: 6334,
-> T-SNE with job samples, Image by Author. Play with [Embedding Projector](https://projector.tensorflow.org/?config=https://gist.githubusercontent.com/generall/7e712425e3b340c2c4dbc1a29f515d91/raw/b45b2b6f6c1d5ab3d3363c50805f3834a85c8879/config.json) yourself.
+})
-With metric learning, we learn not a concrete job type but how to match job descriptions from a candidate's CV and a vacancy.
+client.DeletePayload(context.Background(), &qdrant.DeletePayloadPoints{
-Secondly, with metric learning, it is easy to add more reference occupations without model retraining.
+ CollectionName: ""{collection_name}"",
-We can then add the reference to a vector search engine.
+ Keys: []string{""color"", ""price""},
-Next time we will match occupations - this new reference vector will be searchable.
+ PointsSelector: qdrant.NewPointsSelector(
+ qdrant.NewIDNum(0),
+ qdrant.NewIDNum(3)),
+})
+```
-## Data for metric learning
+Alternatively, you can use filters to delete payload keys from the points.
-Unlike classifiers, a metric learning training does not require specific class labels.
-All that is required are examples of similar and dissimilar objects.
-We would call them positive and negative samples.
+```http
+POST /collections/{collection_name}/points/payload/delete
+{
-At the same time, it could be a relative similarity between a pair of objects.
+ ""keys"": [""color"", ""price""],
-For example, twins look more alike to each other than a pair of random people.
+ ""filter"": {
-And random people are more similar to each other than a man and a cat.
+ ""must"": [
-A model can use such relative examples for learning.
+ {
+ ""key"": ""color"",
+ ""match"": {
-The good news is that the division into classes is only a special case of determining similarity.
+ ""value"": ""red""
-To use such datasets, it is enough to declare samples from one class as positive and samples from another class as negative.
+ }
-In this way, it is possible to combine several datasets with mismatched classes into one generalized dataset for metric learning.
+ }
+ ]
+ }
-But not only datasets with division into classes are suitable for extracting positive and negative examples.
+}
-If, for example, there are additional features in the description of the object, the value of these features can also be used as a similarity factor.
+```
-It may not be as explicit as class membership, but the relative similarity is also suitable for learning.
+```python
-In the case of job descriptions, there are many ontologies of occupations, which were able to be combined into a single dataset thanks to this approach.
+client.delete_payload(
-We even went a step further and used identical job titles to find similar descriptions.
+ collection_name=""{collection_name}"",
+ keys=[""color"", ""price""],
+ points=models.Filter(
-As a result, we got a self-supervised universal dataset that did not require any manual labeling.
+ must=[
+ models.FieldCondition(
+ key=""color"",
-Unfortunately, universality does not allow some techniques to be applied in training.
+ match=models.MatchValue(value=""red""),
-Next, I will describe how to overcome this disadvantage.
+ ),
+ ],
+ ),
-## Training the model
+)
+```
-There are several ways to train a metric learning model.
-Among the most popular is the use of Triplet or Contrastive loss functions, but I will not go deep into them in this article.
+```typescript
-However, I will tell you about one interesting trick that helped us work with unified training examples.
+client.deletePayload(""{collection_name}"", {
+ keys: [""color"", ""price""],
+ filter: {
-One of the most important practices to efficiently train the metric learning model is hard negative mining.
+ must: [
-This technique aims to include negative samples on which model gave worse predictions during the last training epoch.
+ {
-Most articles that describe this technique assume that training data consists of many small classes (in most cases it is people's faces).
+ key: ""color"",
-With data like this, it is easy to find bad samples - if two samples from different classes have a high similarity score, we can use it as a negative sample.
+ match: {
-But we had no such classes in our data, the only thing we have is occupation pairs assumed to be similar in some way.
+ value: ""red"",
-We cannot guarantee that there is no better match for each job occupation among this pair.
+ },
-That is why we can't use hard negative mining for our model.
+ },
+ ],
+ },
+});
+```
-![Loss variations](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/losses.png)
-> [Alfonso Medela et al.](https://arxiv.org/abs/1905.10675) via arXiv
+```rust
+use qdrant_client::qdrant::{Condition, DeletePayloadPointsBuilder, Filter};
-To compensate for this limitation we can try to increase the number of random (weak) negative samples.
+client
-One way to achieve this is to train the model longer, so it will see more samples by the end of the training.
+ .delete_payload(
-But we found a better solution in adjusting our loss function.
+ DeletePayloadPointsBuilder::new(
-In a regular implementation of Triplet or Contractive loss, each positive pair is compared with some or a few negative samples.
+ ""{collection_name}"",
-What we did is we allow pair comparison amongst the whole batch.
+ vec![""color"".to_string(), ""price"".to_string()],
-That means that loss-function penalizes all pairs of random objects if its score exceeds any of the positive scores in a batch.
+ )
-This extension gives `~ N * B^2` comparisons where `B` is a size of batch and `N` is a number of batches.
+ .points_selector(Filter::must([Condition::matches(
-Much bigger than `~ N * B` in regular triplet loss.
+ ""color"",
-This means that increasing the size of the batch significantly increases the number of negative comparisons, and therefore should improve the model performance.
+ ""red"".to_string(),
-We were able to observe this dependence in our experiments.
+ )]))
-Similar idea we also found in the article [Supervised Contrastive Learning](https://arxiv.org/abs/2004.11362).
+ .wait(true),
+ )
+ .await?;
+```
-## Model confidence
+```java
+import java.util.List;
+import io.qdrant.client.grpc.Points.Filter;
-In real life it is often needed to know how confident the model was in the prediction.
-Whether manual adjustment or validation of the result is required.
+import static io.qdrant.client.ConditionFactory.matchKeyword;
-With conventional classification, it is easy to understand by scores how confident the model is in the result.
-If the probability values of different classes are close to each other, the model is not confident.
+client
-If, on the contrary, the most probable class differs greatly, then the model is confident.
+ .deletePayloadAsync(
+ ""{collection_name}"",
+ List.of(""color"", ""price""),
-At first glance, this cannot be applied to metric learning.
-
-Even if the predicted object similarity score is small it might only mean that the reference set has no proper objects to compare with.
-
-Conversely, the model can group garbage objects with a large score.
+ Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build(),
+ true,
+ null,
-Fortunately, we found a small modification to the embedding generator, which allows us to define confidence in the same way as it is done in conventional classifiers with a Softmax activation function.
+ null)
-The modification consists in building an embedding as a combination of feature groups.
+ .get();
-Each feature group is presented as a one-hot encoded sub-vector in the embedding.
+```
-If the model can confidently predict the feature value - the corresponding sub-vector will have a high absolute value in some of its elements.
-For a more intuitive understanding, I recommend thinking about embeddings not as points in space, but as a set of binary features.
+```csharp
+using Qdrant.Client;
-To implement this modification and form proper feature groups we would need to change a regular linear output layer to a concatenation of several Softmax layers.
+using static Qdrant.Client.Grpc.Conditions;
-Each softmax component would represent an independent feature and force the neural network to learn them.
+var client = new QdrantClient(""localhost"", 6334);
-Let's take for example that we have 4 softmax components with 128 elements each.
-Every such component could be roughly imagined as a one-hot-encoded number in the range of 0 to 127.
-Thus, the resulting vector will represent one of `128^4` possible combinations.
+await client.DeletePayloadAsync(
-If the trained model is good enough, you can even try to interpret the values of singular features individually.
+ collectionName: ""{collection_name}"",
+ keys: [""color"", ""price""],
+ filter: MatchKeyword(""color"", ""red"")
+);
+```
-![Softmax feature embeddings](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/feature_embedding.png)
-> Softmax feature embeddings, Image by Author.
+```go
+import (
+ ""context""
-## Neural rules
+ ""github.com/qdrant/go-client/qdrant""
+)
-Machine learning models rarely train to 100% accuracy.
-In a conventional classifier, errors can only be eliminated by modifying and repeating the training process.
-Metric training, however, is more flexible in this matter and allows you to introduce additional steps that allow you to correct the errors of an already trained model.
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
+ Port: 6334,
-A common error of the metric learning model is erroneously declaring objects close although in reality they are not.
+})
-To correct this kind of error, we introduce exclusion rules.
+client.DeletePayload(context.Background(), &qdrant.DeletePayloadPoints{
-Rules consist of 2 object anchors encoded into vector space.
+ CollectionName: ""{collection_name}"",
-If the target object falls into one of the anchors' effects area - it triggers the rule. It will exclude all objects in the second anchor area from the prediction result.
+ Keys: []string{""color"", ""price""},
+ PointsSelector: qdrant.NewPointsSelectorFilter(
+ &qdrant.Filter{
-![Exclusion rules](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/exclusion_rule.png)
+ Must: []*qdrant.Condition{qdrant.NewMatch(""color"", ""red"")},
-> Neural exclusion rules, Image by Author.
+ },
+ ),
+})
-The convenience of working with embeddings is that regardless of the number of rules,
+```
-you only need to perform the encoding once per object.
-Then to find a suitable rule, it is enough to compare the target object's embedding and the pre-calculated embeddings of the rule's anchors.
-Which, when implemented, translates into just one additional query to the vector search engine.
+## Payload indexing
-
+To search more efficiently with filters, Qdrant allows you to create indexes for payload fields by specifying the field name and the type of data it stores.
-## Vector search in production
+The indexed fields also affect the vector index. See [Indexing](../indexing/) for details.
-When implementing a metric learning model in production, the question arises about the storage and management of vectors.
-It should be easy to add new vectors if new job descriptions appear in the service.
+In practice, we recommend creating an index on the fields that constrain the results the most.
+For example, an index on an object ID, which is unique for each record, is far more selective than an index on a color field that has only a few possible values.
-In our case, we also needed to apply additional conditions to the search.
-We needed to filter, for example, the location of candidates and the level of language proficiency.
+In compound queries involving multiple fields, Qdrant will attempt to use the most restrictive index first.
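+
+As an illustration, the sketch below combines an indexed `color` keyword field with a hypothetical numeric `price` condition in a single filtered search (the field names and query vector are only examples):
+
+```python
+hits = client.search(
+ collection_name=""{collection_name}"",
+ query_vector=[0.2, 0.1, 0.9, 0.7], # example query embedding
+ query_filter=models.Filter(
+ must=[
+ # with indexes on these fields, Qdrant can evaluate the most restrictive one first
+ models.FieldCondition(key=""color"", match=models.MatchValue(value=""red"")),
+ models.FieldCondition(key=""price"", range=models.Range(lt=100)),
+ ]
+ ),
+ limit=10,
+)
+```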
-We did not find a ready-made tool for such vector management, so we created [Qdrant](https://github.com/qdrant/qdrant) - open-source vector search engine.
+To create an index for a field, use the following:
-It allows you to add and delete vectors with a simple API, independent of a programming language you are using.
-You can also assign the payload to vectors.
+REST API ([Schema](https://api.qdrant.tech/api-reference/indexes/create-field-index)):
-This payload allows additional filtering during the search request.
+```http
-Qdrant has a pre-built docker image and start working with it is just as simple as running
+PUT /collections/{collection_name}/index
+{
+ ""field_name"": ""name_of_the_field_to_index"",
-```bash
+ ""field_schema"": ""keyword""
-docker run -p 6333:6333 qdrant/qdrant
+}
```
-Documentation with examples could be found [here](https://qdrant.github.io/qdrant/redoc/index.html).
-
+```python
+client.create_payload_index(
+ collection_name=""{collection_name}"",
+ field_name=""name_of_the_field_to_index"",
-## Conclusion
+ field_schema=""keyword"",
+)
+```
-In this article, I have shown how metric learning can be more scalable and flexible than the classification models.
-I suggest trying similar approaches in your tasks - it might be matching similar texts, images, or audio data.
-With the existing variety of pre-trained neural networks and a vector search engine, it is easy to build your metric learning-based application.
+```typescript
+client.createPayloadIndex(""{collection_name}"", {
+ field_name: ""name_of_the_field_to_index"",
+ field_schema: ""keyword"",
-",articles/metric-learning-tips.md
-"---
+});
-title: Qdrant 0.10 released
+```
-short_description: A short review of all the features introduced in Qdrant 0.10
-description: Qdrant 0.10 brings a lot of changes. Check out what's new!
-preview_dir: /articles_data/qdrant-0-10-release/preview
+```rust
-small_preview_image: /articles_data/qdrant-0-10-release/new-svgrepo-com.svg
+use qdrant_client::qdrant::{CreateFieldIndexCollectionBuilder, FieldType};
-social_preview_image: /articles_data/qdrant-0-10-release/preview/social_preview.jpg
-weight: 70
-author: Kacper Łukawski
+client
-author_link: https://medium.com/@lukawskikacper
+ .create_field_index(
-date: 2022-09-19T13:30:00+02:00
+ CreateFieldIndexCollectionBuilder::new(
-draft: false
+ ""{collection_name}"",
----
+ ""name_of_the_field_to_index"",
+ FieldType::Keyword,
+ )
-[Qdrant 0.10 is a new version](https://github.com/qdrant/qdrant/releases/tag/v0.10.0) that brings a lot of performance
+ .wait(true),
-improvements, but also some new features which were heavily requested by our users. Here is an overview of what has changed.
+ )
+ .await?;
+```
-## Storing multiple vectors per object
+```java
-Previously, if you wanted to use semantic search with multiple vectors per object, you had to create separate collections
+import io.qdrant.client.grpc.Collections.PayloadSchemaType;
-for each vector type. This was even if the vectors shared some other attributes in the payload. With Qdrant 0.10, you can
-now store all of these vectors together in the same collection, which allows you to share a single copy of the payload.
-This makes it easier to use semantic search with multiple vector types, and reduces the amount of work you need to do to
+client.createPayloadIndexAsync(
-set up your collections.
+ ""{collection_name}"",
+ ""name_of_the_field_to_index"",
+ PayloadSchemaType.Keyword,
-## Batch vector search
+ null,
+ true,
+ null,
-Previously, you had to send multiple requests to the Qdrant API to perform multiple non-related tasks. However, this
+ null);
-can cause significant network overhead and slow down the process, especially if you have a poor connection speed.
+```
-Fortunately, the [new batch search feature](https://blog.qdrant.tech/batch-vector-search-with-qdrant-8c4d598179d5) allows
-you to avoid this issue. With just one API call, Qdrant will handle multiple search requests in the most efficient way
-possible. This means that you can perform multiple tasks simultaneously without having to worry about network overhead
+```csharp
-or slow performance.
+using Qdrant.Client;
-## Built-in ARM support
+var client = new QdrantClient(""localhost"", 6334);
-To make our application accessible to ARM users, we have compiled it specifically for that platform. If it is not
+await client.CreatePayloadIndexAsync(
-compiled for ARM, the device will have to emulate it, which can slow down performance. To ensure the best possible
+ collectionName: ""{collection_name}"",
-experience for ARM users, we have created Docker images specifically for that platform. Keep in mind that using
+ fieldName: ""name_of_the_field_to_index""
-a limited set of processor instructions may affect the performance of your vector search. Therefore, [we have tested
+);
-both ARM and non-ARM architectures using similar setups to understand the potential impact on performance
+```
-](https://blog.qdrant.tech/qdrant-supports-arm-architecture-363e92aa5026).
+```go
-## Full-text filtering
+import (
+ ""context""
-Qdrant is a vector database that allows you to quickly search for the nearest neighbors. However, you may need to apply
-additional filters on top of the semantic search. Up until version 0.10, Qdrant only supported keyword filters. With the
+ ""github.com/qdrant/go-client/qdrant""
-release of Qdrant 0.10, [you can now use full-text filters](https://blog.qdrant.tech/qdrant-introduces-full-text-filters-and-indexes-9a032fcb5fa)
+)
-as well. This new filter type can be used on its own or in combination with other filter types to provide even more
-flexibility in your searches.
-",articles/qdrant-0-10-release.md
-"---
-title: ""Question Answering with LangChain and Qdrant without boilerplate""
+client, err := qdrant.NewClient(&qdrant.Config{
-short_description: ""Large Language Models might be developed fast with modern tool. Here is how!""
+ Host: ""localhost"",
-description: ""We combined LangChain, pretrained LLM from OpenAI, SentenceTransformers and Qdrant to create a Q&A system with just a few lines of code.""
+ Port: 6334,
-social_preview_image: /articles_data/langchain-integration/social_preview.png
+})
-small_preview_image: /articles_data/langchain-integration/chain.svg
-preview_dir: /articles_data/langchain-integration/preview
-weight: 6
+client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{
-author: Kacper Łukawski
+ CollectionName: ""{collection_name}"",
-author_link: https://medium.com/@lukawskikacper
+ FieldName: ""name_of_the_field_to_index"",
-date: 2023-01-31T10:53:20+01:00
+ FieldType: qdrant.FieldType_FieldTypeKeyword.Enum(),
-draft: false
+})
-keywords:
+```
- - vector search
- - langchain
- - llm
+The index usage flag is displayed in the payload schema with the [collection info API](https://api.qdrant.tech/api-reference/collections/get-collection).
- - large language models
- - question answering
- - openai
+Payload schema example:
- - embeddings
----
+```json
+{
-Building applications with Large Language Models don't have to be complicated. A lot has been going on recently to simplify the development,
+ ""payload_schema"": {
-so you can utilize already pre-trained models and support even complex pipelines with a few lines of code. [LangChain](https://langchain.readthedocs.io)
+ ""property1"": {
-provides unified interfaces to different libraries, so you can avoid writing boilerplate code and focus on the value you want to bring.
+ ""data_type"": ""keyword""
+ },
+ ""property2"": {
-## Question Answering with Qdrant in the loop
+ ""data_type"": ""integer""
+ }
+ }
-It has been reported millions of times recently, but let's say that again. ChatGPT-like models struggle with generating factual statements if no context
+}
-is provided. They have some general knowledge but cannot guarantee to produce a valid answer consistently. Thus, it is better to provide some facts we
+```
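+
+As a quick sketch of how to read that schema programmatically (the collection name is a placeholder), the Python client exposes it on the collection info object:
+
+```python
+from qdrant_client import QdrantClient
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Fetch collection info and inspect which payload fields are indexed.
+info = client.get_collection(collection_name=""{collection_name}"")
+print(info.payload_schema)
+```
+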
+",documentation/concepts/payload.md
+"---
-know are actual, so it can just choose the valid parts and extract them from all the provided contextual data to give a comprehensive answer. Vector database,
+title: Collections
-such as Qdrant, is of great help here, as their ability to perform a semantic search over a huge knowledge base is crucial to preselect some possibly valid
+weight: 30
-documents, so they can be provided into the LLM. That's also one of the **chains** implemented in LangChain, which is called `VectorDBQA`. And Qdrant got
+aliases:
-integrated with the library, so it might be used to build it effortlessly.
+ - ../collections
+ - /concepts/collections/
+ - /documentation/frameworks/fondant/documentation/concepts/collections/
-### What do we need?
+---
-Surprisingly enough, there will be two models required to set things up. First of all, we need an embedding model that will convert the set of facts into
+# Collections
-vectors, and store those into Qdrant. That's an identical process to any other semantic search application. We're going to use one of the
-`SentenceTransformers` models, so it can be hosted locally. The embeddings created by that model will be put into Qdrant and used to retrieve the most
-similar documents, given the query.
+A collection is a named set of points (vectors with a payload) among which you can search. The vector of each point within the same collection must have the same dimensionality and be compared by a single metric. [Named vectors](#collection-with-multiple-vectors) can be used to store multiple vectors in a single point, each of which can have its own dimensionality and metric requirements.
-However, when we receive a query, there are two steps involved. First of all, we ask Qdrant to provide the most relevant documents and simply combine all
+Distance metrics are used to measure similarities among vectors.
-of them into a single text. Then, we build a prompt to the LLM (in our case OpenAI), including those documents as a context, of course together with the
+The choice of metric depends on how the vectors were obtained and, in particular, on the method used to train the neural network encoder.
-question asked. So the input to the LLM looks like the following:
+Qdrant supports the most popular metrics:
-```text
-Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
-It's as certain as 2 + 2 = 4
+* Dot product: `Dot` - [[wiki]](https://en.wikipedia.org/wiki/Dot_product)
-...
+* Cosine similarity: `Cosine` - [[wiki]](https://en.wikipedia.org/wiki/Cosine_similarity)
+* Euclidean distance: `Euclid` - [[wiki]](https://en.wikipedia.org/wiki/Euclidean_distance)
+* Manhattan distance: `Manhattan` - [[wiki]](https://en.wikipedia.org/wiki/Taxicab_geometry)
-Question: How much is 2 + 2?
-Helpful Answer:
-```
+
-There might be several context documents combined, and it is solely up to LLM to choose the right piece of content. But our expectation is, the model should
+In addition to metrics and vector size, each collection uses its own set of parameters that control collection optimization, index construction, and vacuum.
-respond with just `4`.
+These settings can be changed at any time by a corresponding request.
-Why do we need two different models? Both solve some different tasks. The first model performs feature extraction, by converting the text into vectors, while
+## Setting up multitenancy
-the second one helps in text generation or summarization. Disclaimer: This is not the only way to solve that task with LangChain. Such a chain is called `stuff`
-in the library nomenclature.
+**How many collections should you create?** In most cases, you should only use a single collection with payload-based partitioning. This approach is called [multitenancy](https://en.wikipedia.org/wiki/Multitenancy). It is efficient for most users, but it requires additional configuration. [Learn how to set it up](../../tutorials/multiple-partitions/).
-![](/articles_data/langchain-integration/flow-diagram.png)
+**When should you create multiple collections?** When you have a limited number of users and you need isolation. This approach is flexible, but it may be more costly, since creating numerous collections may result in resource overhead. Also, you need to ensure that they do not affect each other in any way, including performance-wise.
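+
+For orientation before reading the full tutorial, here is a minimal payload-based partitioning sketch (the `group_id` field, vectors, and collection name are illustrative): every point is tagged with its tenant, and every search is filtered by that tag.
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Tag each point with the tenant it belongs to.
+client.upsert(
+    collection_name=""{collection_name}"",
+    points=[
+        models.PointStruct(id=1, vector=[0.1, 0.9, 0.1], payload={""group_id"": ""tenant_1""}),
+    ],
+)
+
+# Always restrict searches to a single tenant.
+client.search(
+    collection_name=""{collection_name}"",
+    query_vector=[0.1, 0.9, 0.1],
+    query_filter=models.Filter(
+        must=[
+            models.FieldCondition(key=""group_id"", match=models.MatchValue(value=""tenant_1"")),
+        ]
+    ),
+    limit=10,
+)
+```
+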
-Enough theory! This sounds like a pretty complex application, as it involves several systems. But with LangChain, it might be implemented in just a few lines
-of code, thanks to the recent integration with Qdrant. We're not even going to work directly with `QdrantClient`, as everything is already done in the background
+## Create a collection
-by LangChain. If you want to get into the source code right away, all the processing is available as a
-[Google Colab notebook](https://colab.research.google.com/drive/19RxxkZdnq_YqBH5kBV10Rt0Rax-kminD?usp=sharing).
-## Implementing Question Answering with LangChain and Qdrant
+```http
+PUT /collections/{collection_name}
+{
-### Configuration
+ ""vectors"": {
+ ""size"": 300,
+ ""distance"": ""Cosine""
-A journey of a thousand miles begins with a single step, in our case with the configuration of all the services. We'll be using [Qdrant Cloud](https://qdrant.tech),
+ }
-so we need an API key. The same is for OpenAI - the API key has to be obtained from their website.
+}
+```
-![](/articles_data/langchain-integration/code-configuration.png)
+```bash
+curl -X PUT http://localhost:6333/collections/{collection_name} \
-### Building the knowledge base
+ -H 'Content-Type: application/json' \
+ --data-raw '{
+ ""vectors"": {
-We also need some facts from which the answers will be generated. There is plenty of public datasets available, and
+ ""size"": 300,
-[Natural Questions](https://ai.google.com/research/NaturalQuestions/visualization) is one of them. It consists of the whole HTML content of the websites they were
+ ""distance"": ""Cosine""
-scraped from. That means we need some preprocessing to extract plain text content. As a result, we’re going to have two lists of strings - one for questions and
+ }
-the other one for the answers.
+ }'
+```
-The answers have to be vectorized with the first of our models. The `sentence-transformers/all-mpnet-base-v2` is one of the possibilities, but there are some
-other options available. LangChain will handle that part of the process in a single function call.
+```python
+from qdrant_client import QdrantClient, models
-![](/articles_data/langchain-integration/code-qdrant.png)
+client = QdrantClient(url=""http://localhost:6333"")
-### Setting up QA with Qdrant in a loop
+client.create_collection(
+ collection_name=""{collection_name}"",
-`VectorDBQA` is a chain that performs the process described above. So it, first of all, loads some facts from Qdrant and then feeds them into OpenAI LLM which
+ vectors_config=models.VectorParams(size=100, distance=models.Distance.COSINE),
-should analyze them to find the answer to a given question. The only last thing to do before using it is to put things together, also with a single function call.
+)
+```
-![](/articles_data/langchain-integration/code-vectordbqa.png)
+```typescript
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-## Testing out the chain
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-And that's it! We can put some queries, and LangChain will perform all the required processing to find the answer in the provided context.
+client.createCollection(""{collection_name}"", {
-![](/articles_data/langchain-integration/code-answering.png)
+ vectors: { size: 100, distance: ""Cosine"" },
+});
+```
-```text
-> what kind of music is scott joplin most famous for
- Scott Joplin is most famous for composing ragtime music.
+```rust
+use qdrant_client::Qdrant;
+use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder};
-> who died from the band faith no more
- Chuck Mosley
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-> when does maggie come on grey's anatomy
- Maggie first appears in season 10, episode 1, which aired on September 26, 2013.
+client
+ .create_collection(
+ CreateCollectionBuilder::new(""{collection_name}"")
-> can't take my eyes off you lyrics meaning
+ .vectors_config(VectorParamsBuilder::new(100, Distance::Cosine)),
- I don't know.
+ )
+ .await?;
+```
-> who lasted the longest on alone season 2
- David McIntyre lasted the longest on Alone season 2, with a total of 66 days.
-```
+```java
+import io.qdrant.client.grpc.Collections.Distance;
+import io.qdrant.client.grpc.Collections.VectorParams;
-The great thing about such a setup is that the knowledge base might be easily extended with some new facts and those will be included in the prompts
+import io.qdrant.client.QdrantClient;
-sent to LLM later on. Of course, assuming their similarity to the given question will be in the top results returned by Qdrant.
+import io.qdrant.client.QdrantGrpcClient;
-If you want to run the chain on your own, the simplest way to reproduce it is to open the
+QdrantClient client = new QdrantClient(
-[Google Colab notebook](https://colab.research.google.com/drive/19RxxkZdnq_YqBH5kBV10Rt0Rax-kminD?usp=sharing).
-",articles/langchain-integration.md
-"---
+ QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-title: ""Enhance OpenAI Embeddings with Qdrant's Binary Quantization""
-draft: false
-slug: binary-quantization-openai
+client.createCollectionAsync(""{collection_name}"",
-short_description: Use Qdrant's Binary Quantization to enhance OpenAI embeddings
+ VectorParams.newBuilder().setDistance(Distance.Cosine).setSize(100).build()).get();
-description: Use Qdrant's Binary Quantization to enhance the performance and efficiency of OpenAI embeddings
+```
-preview_dir: /articles_data/binary-quantization-openai/preview
-preview_image: /articles-data/binary-quantization-openai/Article-Image.png # Change this
-small_preview_image: /articles_data/binary-quantization-openai/icon.svg
+```csharp
-social_preview_image: /articles_data/binary-quantization-openai/preview/social-preview.png
+using Qdrant.Client;
-title_preview_image: /articles_data/binary-quantization-openai/preview/preview.webp # Optional image used for blog post title
+using Qdrant.Client.Grpc;
-date: 2024-02-21T13:12:08-08:00
+var client = new QdrantClient(""localhost"", 6334);
-author: Nirant Kasliwal
-author_link: https://www.linkedin.com/in/nirant/
+await client.CreateCollectionAsync(
+ collectionName: ""{collection_name}"",
-featured: false
+ vectorsConfig: new VectorParams { Size = 100, Distance = Distance.Cosine }
-tags:
+);
- - OpenAI
+```
- - binary quantization
- - embeddings
-weight: -130
+```go
+import (
+ ""context""
-aliases: [ /blog/binary-quantization-openai/ ]
----
+ ""github.com/qdrant/go-client/qdrant""
+)
-OpenAI Ada-003 embeddings are a powerful tool for natural language processing (NLP). However, the size of the embeddings are a challenge, especially with real-time search and retrieval. In this article, we explore how you can use Qdrant's Binary Quantization to enhance the performance and efficiency of OpenAI embeddings.
+client, err := qdrant.NewClient(&qdrant.Config{
-In this post, we discuss:
+ Host: ""localhost"",
+ Port: 6334,
+})
-- The significance of OpenAI embeddings and real-world challenges.
-- Qdrant's Binary Quantization, and how it can improve the performance of OpenAI embeddings
-- Results of an experiment that highlights improvements in search efficiency and accuracy
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
-- Implications of these findings for real-world applications
+ CollectionName: ""{collection_name}"",
-- Best practices for leveraging Binary Quantization to enhance OpenAI embeddings
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
+ Size: 100,
+ Distance: qdrant.Distance_Cosine,
-You can also try out these techniques as described in [Binary Quantization OpenAI](https://github.com/qdrant/examples/blob/openai-3/binary-quantization-openai/README.md), which includes Jupyter notebooks.
+ }),
+})
+```
-## New OpenAI Embeddings: Performance and Changes
+In addition to the required options, you can also specify custom values for the following collection options:
-As the technology of embedding models has advanced, demand has grown. Users are looking more for powerful and efficient text-embedding models. OpenAI's Ada-003 embeddings offer state-of-the-art performance on a wide range of NLP tasks, including those noted in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) and [MIRACL](https://openai.com/blog/new-embedding-models-and-api-updates).
+* `hnsw_config` - see [indexing](../indexing/#vector-index) for details.
-These models include multilingual support in over 100 languages. The transition from text-embedding-ada-002 to text-embedding-3-large has led to a significant jump in performance scores (from 31.4% to 54.9% on MIRACL).
+* `wal_config` - Write-Ahead-Log related configuration. See more details about [WAL](../storage/#versioning)
+* `optimizers_config` - see [optimizer](../optimizer/) for details.
+* `shard_number` - which defines how many shards the collection should have. See [distributed deployment](../../guides/distributed_deployment/#sharding) section for details.
-#### Matryoshka Representation Learning
+* `on_disk_payload` - defines where to store payload data. If `true`, the payload will be stored on disk only. This can be useful for limiting RAM usage with large payloads.
+* `quantization_config` - see [quantization](../../guides/quantization/#setting-up-quantization-in-qdrant) for details.
-The new OpenAI models have been trained with a novel approach called ""[Matryoshka Representation Learning](https://aniketrege.github.io/blog/2024/mrl/)"". Developers can set up embeddings of different sizes (number of dimensions). In this post, we use small and large variants. Developers can select embeddings which balances accuracy and size.
+Default values for the optional collection parameters are defined in the [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml).
-Here, we show how the accuracy of binary quantization is quite good across different dimensions -- for both the models.
+See [schema definitions](https://api.qdrant.tech/api-reference/collections/create-collection) and a [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml) for more information about collection and vector parameters.
-## Enhanced Performance and Efficiency with Binary Quantization
+*Available as of v1.2.0*
-By reducing storage needs, you can scale applications with lower costs. This addresses a critical challenge posed by the original embedding sizes. Binary Quantization also speeds the search process. It simplifies the complex distance calculations between vectors into more manageable bitwise operations, which supports potentially real-time searches across vast datasets.
+Vectors all live in RAM for very quick access. The `on_disk` parameter can be
+set in the vector configuration. If true, all vectors will live on disk. This
-The accompanying graph illustrates the promising accuracy levels achievable with binary quantization across different model sizes, showcasing its practicality without severely compromising on performance. This dual advantage of storage reduction and accelerated search capabilities underscores the transformative potential of Binary Quantization in deploying OpenAI embeddings more effectively across various real-world applications.
+will enable the use of
+[memmaps](../../concepts/storage/#configuring-memmap-storage),
+which is suitable for ingesting a large amount of data.
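+
+As a hedged example of combining several of these options (the specific values are arbitrary and only for illustration), a collection with on-disk vectors, on-disk payload, custom sharding, and a tuned HNSW index could be created like this:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+client.create_collection(
+    collection_name=""{collection_name}"",
+    vectors_config=models.VectorParams(
+        size=300,
+        distance=models.Distance.COSINE,
+        on_disk=True,  # keep the original vectors on disk (memmap)
+    ),
+    shard_number=2,
+    on_disk_payload=True,
+    hnsw_config=models.HnswConfigDiff(m=16, ef_construct=100),
+)
+```
+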
-![](/blog/openai/Accuracy_Models.png)
+### Create collection from another collection
-The efficiency gains from Binary Quantization are as follows:
+*Available as of v1.0.0*
-- Reduced storage footprint: It helps with large-scale datasets. It also saves on memory, and scales up to 30x at the same cost.
-- Enhanced speed of data retrieval: Smaller data sizes generally leads to faster searches.
-- Accelerated search process: It is based on simplified distance calculations between vectors to bitwise operations. This enables real-time querying even in extensive databases.
+It is possible to initialize a collection from another existing collection.
-### Experiment Setup: OpenAI Embeddings in Focus
+This might be useful for experimenting quickly with different configurations for the same data set.
-To identify Binary Quantization's impact on search efficiency and accuracy, we designed our experiment on OpenAI text-embedding models. These models, which capture nuanced linguistic features and semantic relationships, are the backbone of our analysis. We then delve deep into the potential enhancements offered by Qdrant's Binary Quantization feature.
+Make sure the vectors have the same `size` and `distance` function when setting up the vectors configuration in the new collection. If you used the previous sample
+code, `""size"": 300` and `""distance"": ""Cosine""`.
-This approach not only leverages the high-caliber OpenAI embeddings but also provides a broad basis for evaluating the search mechanism under scrutiny.
-#### Dataset
+```http
+PUT /collections/{collection_name}
+{
- The research employs 100K random samples from the [OpenAI 1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) 1M dataset, focusing on 100 randomly selected records. These records serve as queries in the experiment, aiming to assess how Binary Quantization influences search efficiency and precision within the dataset. We then use the embeddings of the queries to search for the nearest neighbors in the dataset.
+ ""vectors"": {
+ ""size"": 100,
+ ""distance"": ""Cosine""
-#### Parameters: Oversampling, Rescoring, and Search Limits
+ },
+ ""init_from"": {
+ ""collection"": ""{from_collection_name}""
-For each record, we run a parameter sweep over the number of oversampling, rescoring, and search limits. We can then understand the impact of these parameters on search accuracy and efficiency. Our experiment was designed to assess the impact of Binary Quantization under various conditions, based on the following parameters:
+ }
+}
+```
-- **Oversampling**: By oversampling, we can limit the loss of information inherent in quantization. This also helps to preserve the semantic richness of your OpenAI embeddings. We experimented with different oversampling factors, and identified the impact on the accuracy and efficiency of search. Spoiler: higher oversampling factors tend to improve the accuracy of searches. However, they usually require more computational resources.
+```bash
-- **Rescoring**: Rescoring refines the first results of an initial binary search. This process leverages the original high-dimensional vectors to refine the search results, **always** improving accuracy. We toggled rescoring on and off to measure effectiveness, when combined with Binary Quantization. We also measured the impact on search performance.
+curl -X PUT http://localhost:6333/collections/{collection_name} \
+ -H 'Content-Type: application/json' \
+ --data-raw '{
-- **Search Limits**: We specify the number of results from the search process. We experimented with various search limits to measure their impact the accuracy and efficiency. We explored the trade-offs between search depth and performance. The results provide insight for applications with different precision and speed requirements.
+ ""vectors"": {
+ ""size"": 300,
+ ""distance"": ""Cosine""
-Through this detailed setup, our experiment sought to shed light on the nuanced interplay between Binary Quantization and the high-quality embeddings produced by OpenAI's models. By meticulously adjusting and observing the outcomes under different conditions, we aimed to uncover actionable insights that could empower users to harness the full potential of Qdrant in combination with OpenAI's embeddings, regardless of their specific application needs.
+ },
+ ""init_from"": {
+ ""init_from"": {
+ ""collection"": ""{from_collection_name}""
-### Results: Binary Quantization's Impact on OpenAI Embeddings
+ }
+ }'
+```
-To analyze the impact of rescoring (`True` or `False`), we compared results across different model configurations and search limits. Rescoring sets up a more precise search, based on results from an initial query.
+```python
-#### Rescoring
+from qdrant_client import QdrantClient, models
-![Graph that measures the impact of rescoring](/blog/openai/Rescoring_Impact.png)
+client = QdrantClient(url=""http://localhost:6333"")
-Here are some key observations, which analyzes the impact of rescoring (`True` or `False`):
+client.create_collection(
+ collection_name=""{collection_name}"",
+ vectors_config=models.VectorParams(size=100, distance=models.Distance.COSINE),
-1. **Significantly Improved Accuracy**:
+ init_from=models.InitFrom(collection=""{from_collection_name}""),
- - Across all models and dimension configurations, enabling rescoring (`True`) consistently results in higher accuracy scores compared to when rescoring is disabled (`False`).
+)
- - The improvement in accuracy is true across various search limits (10, 20, 50, 100).
+```
-2. **Model and Dimension Specific Observations**:
+```typescript
- - For the `text-embedding-3-large` model with 3072 dimensions, rescoring boosts the accuracy from an average of about 76-77% without rescoring to 97-99% with rescoring, depending on the search limit and oversampling rate.
+import { QdrantClient } from ""@qdrant/js-client-rest"";
- - The accuracy improvement with increased oversampling is more pronounced when rescoring is enabled, indicating a better utilization of the additional binary codes in refining search results.
- - With the `text-embedding-3-small` model at 512 dimensions, accuracy increases from around 53-55% without rescoring to 71-91% with rescoring, highlighting the significant impact of rescoring, especially at lower dimensions.
- - For higher dimension models (such as text-embedding-3-large with 3072 dimensions),
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-In contrast, for lower dimension models (such as text-embedding-3-small with 512 dimensions), the incremental accuracy gains from increased oversampling levels are less significant, even with rescoring enabled. This suggests a diminishing return on accuracy improvement with higher oversampling in lower dimension spaces.
+client.createCollection(""{collection_name}"", {
-3. **Influence of Search Limit**:
+ vectors: { size: 100, distance: ""Cosine"" },
- - The performance gain from rescoring seems to be relatively stable across different search limits, suggesting that rescoring consistently enhances accuracy regardless of the number of top results considered.
+ init_from: { collection: ""{from_collection_name}"" },
+});
+```
-In summary, enabling rescoring dramatically improves search accuracy across all tested configurations. It is crucial feature for applications where precision is paramount. The consistent performance boost provided by rescoring underscores its value in refining search results, particularly when working with complex, high-dimensional data like OpenAI embeddings. This enhancement is critical for applications that demand high accuracy, such as semantic search, content discovery, and recommendation systems, where the quality of search results directly impacts user experience and satisfaction.
+```rust
-### Dataset Combinations
+use qdrant_client::Qdrant;
+use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder};
-For those exploring the integration of text embedding models with Qdrant, it's crucial to consider various model configurations for optimal performance. The dataset combinations defined above illustrate different configurations to test against Qdrant. These combinations vary by two primary attributes:
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-1. **Model Name**: Signifying the specific text embedding model variant, such as ""text-embedding-3-large"" or ""text-embedding-3-small"". This distinction correlates with the model's capacity, with ""large"" models offering more detailed embeddings at the cost of increased computational resources.
+client
+ .create_collection(
-2. **Dimensions**: This refers to the size of the vector embeddings produced by the model. Options range from 512 to 3072 dimensions. Higher dimensions could lead to more precise embeddings but might also increase the search time and memory usage in Qdrant.
+ CreateCollectionBuilder::new(""{collection_name}"")
+ .vectors_config(VectorParamsBuilder::new(100, Distance::Cosine))
+ .init_from_collection(""{from_collection_name}""),
-Optimizing these parameters is a balancing act between search accuracy and resource efficiency. Testing across these combinations allows users to identify the configuration that best meets their specific needs, considering the trade-offs between computational resources and the quality of search results.
+ )
+ .await?;
+```
-```python
+```java
-dataset_combinations = [
+import io.qdrant.client.QdrantClient;
- {
+import io.qdrant.client.QdrantGrpcClient;
- ""model_name"": ""text-embedding-3-large"",
+import io.qdrant.client.grpc.Collections.CreateCollection;
- ""dimensions"": 3072,
+import io.qdrant.client.grpc.Collections.Distance;
- },
+import io.qdrant.client.grpc.Collections.VectorParams;
- {
+import io.qdrant.client.grpc.Collections.VectorsConfig;
- ""model_name"": ""text-embedding-3-large"",
- ""dimensions"": 1024,
- },
+QdrantClient client =
- {
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- ""model_name"": ""text-embedding-3-large"",
- ""dimensions"": 1536,
- },
+client
- {
+ .createCollectionAsync(
- ""model_name"": ""text-embedding-3-small"",
+ CreateCollection.newBuilder()
- ""dimensions"": 512,
+ .setCollectionName(""{collection_name}"")
- },
+ .setVectorsConfig(
- {
+ VectorsConfig.newBuilder()
- ""model_name"": ""text-embedding-3-small"",
+ .setParams(
- ""dimensions"": 1024,
+ VectorParams.newBuilder()
- },
+ .setSize(100)
- {
+ .setDistance(Distance.Cosine)
- ""model_name"": ""text-embedding-3-small"",
+ .build()))
- ""dimensions"": 1536,
+ .setInitFromCollection(""{from_collection_name}"")
- },
+ .build())
-]
+ .get();
```
-#### Exploring Dataset Combinations and Their Impacts on Model Performance
+```csharp
-The code snippet iterates through predefined dataset and model combinations. For each combination, characterized by the model name and its dimensions, the corresponding experiment's results are loaded. These results, which are stored in JSON format, include performance metrics like accuracy under different configurations: with and without oversampling, and with and without a rescore step.
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-Following the extraction of these metrics, the code computes the average accuracy across different settings, excluding extreme cases of very low limits (specifically, limits of 1 and 5). This computation groups the results by oversampling, rescore presence, and limit, before calculating the mean accuracy for each subgroup.
+var client = new QdrantClient(""localhost"", 6334);
-After gathering and processing this data, the average accuracies are organized into a pivot table. This table is indexed by the limit (the number of top results considered), and columns are formed based on combinations of oversampling and rescoring.
+await client.CreateCollectionAsync(
+ collectionName: ""{collection_name}"",
-```python
+ vectorsConfig: new VectorParams { Size = 100, Distance = Distance.Cosine },
-import pandas as pd
+ initFromCollection: ""{from_collection_name}""
+);
+```
-for combination in dataset_combinations:
- model_name = combination[""model_name""]
- dimensions = combination[""dimensions""]
+```go
- print(f""Model: {model_name}, dimensions: {dimensions}"")
+import (
- results = pd.read_json(f""../results/results-{model_name}-{dimensions}.json"", lines=True)
+ ""context""
- average_accuracy = results[results[""limit""] != 1]
- average_accuracy = average_accuracy[average_accuracy[""limit""] != 5]
- average_accuracy = average_accuracy.groupby([""oversampling"", ""rescore"", ""limit""])[
+ ""github.com/qdrant/go-client/qdrant""
- ""accuracy""
+)
- ].mean()
- average_accuracy = average_accuracy.reset_index()
- acc = average_accuracy.pivot(
+client, err := qdrant.NewClient(&qdrant.Config{
- index=""limit"", columns=[""oversampling"", ""rescore""], values=""accuracy""
+ Host: ""localhost"",
- )
+ Port: 6334,
- print(acc)
+})
-```
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
-#### Impact of Oversampling
+ CollectionName: ""{collection_name}"",
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
+ Size: 100,
-You can use oversampling in machine learning to counteract imbalances in datasets.
+ Distance: qdrant.Distance_Cosine,
-It works well when one class significantly outnumbers others. This imbalance
+ }),
-can skew the performance of models, which favors the majority class at the
+ InitFromCollection: qdrant.PtrOf(""{from_collection_name}""),
-expense of others. By creating additional samples from the minority classes,
+})
-oversampling helps equalize the representation of classes in the training dataset, thus enabling more fair and accurate modeling of real-world scenarios.
+```
-The screenshot showcases the effect of oversampling on model performance metrics. While the actual metrics aren't shown, we expect to see improvements in measures such as precision, recall, or F1-score. These improvements illustrate the effectiveness of oversampling in creating a more balanced dataset. It allows the model to learn a better representation of all classes, not just the dominant one.
+### Collection with multiple vectors
-Without an explicit code snippet or output, we focus on the role of oversampling in model fairness and performance. Through graphical representation, you can set up before-and-after comparisons. These comparisons illustrate the contribution to machine learning projects.
+*Available as of v0.10.0*
-![Measuring the impact of oversampling](/blog/openai/Oversampling_Impact.png)
+It is possible to have multiple vectors per record.
+This feature allows for multiple vector storages per collection.
+To distinguish vectors in one record, they should have a unique name defined when creating the collection.
-### Leveraging Binary Quantization: Best Practices
+Each named vector in this mode has its own distance and size:
-We recommend the following best practices for leveraging Binary Quantization to enhance OpenAI embeddings:
+```http
-1. Embedding Model: Use the text-embedding-3-large from MTEB. It is most accurate among those tested.
+PUT /collections/{collection_name}
-2. Dimensions: Use the highest dimension available for the model, to maximize accuracy. The results are true for English and other languages.
+{
-3. Oversampling: Use an oversampling factor of 3 for the best balance between accuracy and efficiency. This factor is suitable for a wide range of applications.
+ ""vectors"": {
-4. Rescoring: Enable rescoring to improve the accuracy of search results.
+ ""image"": {
-5. RAM: Store the full vectors and payload on disk. Limit what you load from memory to the binary quantization index. This helps reduce the memory footprint and improve the overall efficiency of the system. The incremental latency from the disk read is negligible compared to the latency savings from the binary scoring in Qdrant, which uses SIMD instructions where possible.
+ ""size"": 4,
+ ""distance"": ""Dot""
+ },
-Want to discuss these findings and learn more about Binary Quantization? [Join our Discord community.](https://discord.gg/qdrant)
+ ""text"": {
+ ""size"": 8,
+ ""distance"": ""Cosine""
-Learn more about how to boost your vector search speed and accuracy while reducing costs: [Binary Quantization.](https://qdrant.tech/documentation/guides/quantization/?selector=aHRtbCA%2BIGJvZHkgPiBkaXY6bnRoLW9mLXR5cGUoMSkgPiBzZWN0aW9uID4gZGl2ID4gZGl2ID4gZGl2Om50aC1vZi10eXBlKDIpID4gYXJ0aWNsZSA%2BIGgyOm50aC1vZi10eXBlKDIp)
-",articles/binary-quantization-openai.md
-"---
+ }
-title: ""Best Practices for Massive-Scale Deployments: Multitenancy and Custom Sharding""
+ }
-short_description: ""Combining our most popular features to support scalable machine learning solutions.""
+}
-description: ""Combining our most popular features to support scalable machine learning solutions.""
+```
-social_preview_image: /articles_data/multitenancy/social_preview.png
-preview_dir: /articles_data/multitenancy/preview
-small_preview_image: /articles_data/multitenancy/icon.svg
+```bash
-weight: -120
+curl -X PUT http://localhost:6333/collections/{collection_name} \
-author: David Myriel
+ -H 'Content-Type: application/json' \
-date: 2024-02-06T13:21:00.000Z
+ --data-raw '{
-draft: false
+ ""vectors"": {
-keywords:
+ ""image"": {
- - multitenancy
+ ""size"": 4,
- - custom sharding
+ ""distance"": ""Dot""
- - multiple partitions
+ },
- - vector database
+ ""text"": {
----
+ ""size"": 8,
+ ""distance"": ""Cosine""
+ }
-We are seeing the topics of [multitenancy](https://qdrant.tech/documentation/guides/multiple-partitions/) and [distributed deployment](https://qdrant.tech/documentation/guides/distributed_deployment/#sharding) pop-up daily on our [Discord support channel](https://qdrant.to/discord). This tells us that many of you are looking to scale Qdrant along with the rest of your machine learning setup.
+ }
+ }'
+```
-Whether you are building a bank fraud-detection system, RAG for e-commerce, or services for the federal government - you will need to leverage a multitenant architecture to scale your product.
-In the world of SaaS and enterprise apps, this setup is the norm. It will considerably increase your application's performance and lower your hosting costs.
+```python
+from qdrant_client import QdrantClient, models
-We have developed two major features just for this. __You can now scale a single Qdrant cluster and support all of your customers worldwide.__ Under [multitenancy](https://qdrant.tech/documentation/guides/multiple-partitions/), each customer's data is completely isolated and only accessible by them. At times, if this data is location-sensitive, Qdrant also gives you the option to divide your cluster by region or other criteria that further secure your customer's access. This is called [custom sharding](https://qdrant.tech/documentation/guides/distributed_deployment/#user-defined-sharding).
-Combining these two will result in an efficiently-partitioned architecture that further leverages the convenience of a single Qdrant cluster. This article will briefly explain the benefits and show how you can get started using both features.
+client = QdrantClient(url=""http://localhost:6333"")
-## One collection, many tenants
+client.create_collection(
+ collection_name=""{collection_name}"",
-When working with Qdrant, you can upsert all your data to a single collection, and then partition each vector via its payload. This means that all your users are leveraging the power of a single Qdrant cluster, but their data is still isolated within the collection. Let's take a look at a two-tenant collection:
+ vectors_config={
+ ""image"": models.VectorParams(size=4, distance=models.Distance.DOT),
+ ""text"": models.VectorParams(size=8, distance=models.Distance.COSINE),
-**Figure 1:** Each individual vector is assigned a specific payload that denotes which tenant it belongs to. This is how a large number of different tenants can share a single Qdrant collection.
+ },
-![Qdrant Multitenancy](/articles_data/multitenancy/multitenancy-single.png)
+)
+```
-Qdrant is built to excel in a single collection with a vast number of tenants. You should only create multiple collections when your data is not homogenous or if users' vectors are created by different embedding models. Creating too many collections may result in resource overhead and cause dependencies. This can increase costs and affect overall performance.
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-## Sharding your database
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-With Qdrant, you can also specify a shard for each vector individually. This feature is useful if you want to [control where your data is kept in the cluster](https://qdrant.tech/documentation/guides/distributed_deployment/#sharding). For example, one set of vectors can be assigned to one shard on its own node, while another set can be on a completely different node.
+client.createCollection(""{collection_name}"", {
+ vectors: {
-During vector search, your operations will be able to hit only the subset of shards they actually need. In massive-scale deployments, __this can significantly improve the performance of operations that do not require the whole collection to be scanned__.
+ image: { size: 4, distance: ""Dot"" },
+ text: { size: 8, distance: ""Cosine"" },
+ },
-This works in the other direction as well. Whenever you search for something, you can specify a shard or several shards and Qdrant will know where to find them. It will avoid asking all machines in your cluster for results. This will minimize overhead and maximize performance.
+});
+
+```
-### Common use cases
+```rust
+use qdrant_client::Qdrant;
+use qdrant_client::qdrant::{
-A clear use-case for this feature is managing a multitenant collection, where each tenant (let it be a user or organization) is assumed to be segregated, so they can have their data stored in separate shards. Sharding solves the problem of region-based data placement, whereby certain data needs to be kept within specific locations. To do this, however, you will need to [move your shards between nodes](https://qdrant.tech/documentation/guides/distributed_deployment/#moving-shards).
+ CreateCollectionBuilder, Distance, VectorParamsBuilder, VectorsConfigBuilder,
+};
-**Figure 2:** Users can both upsert and query shards that are relevant to them, all within the same collection. Regional sharding can help avoid cross-continental traffic.
-![Qdrant Multitenancy](/articles_data/multitenancy/shards.png)
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-Custom sharding also gives you precise control over other use cases. A time-based data placement means that data streams can index shards that represent latest updates. If you organize your shards by date, you can have great control over the recency of retrieved data. This is relevant for social media platforms, which greatly rely on time-sensitive data.
+let mut vectors_config = VectorsConfigBuilder::default();
+vectors_config
+ .add_named_vector_params(""image"", VectorParamsBuilder::new(4, Distance::Dot).build());
-## Before I go any further.....how secure is my user data?
+vectors_config.add_named_vector_params(
+ ""text"",
+ VectorParamsBuilder::new(8, Distance::Cosine).build(),
-By design, Qdrant offers three levels of isolation. We initially introduced collection-based isolation, but your scaled setup has to move beyond this level. In this scenario, you will leverage payload-based isolation (from multitenancy) and resource-based isolation (from sharding). The ultimate goal is to have a single collection, where you can manipulate and customize placement of shards inside your cluster more precisely and avoid any kind of overhead. The diagram below shows the arrangement of your data within a two-tier isolation arrangement.
+);
-**Figure 3:** Users can query the collection based on two filters: the `group_id` and the individual `shard_key_selector`. This gives your data two additional levels of isolation.
+client
-![Qdrant Multitenancy](/articles_data/multitenancy/multitenancy.png)
+ .create_collection(
+ CreateCollectionBuilder::new(""{collection_name}"").vectors_config(vectors_config),
+ )
-## Create custom shards for a single collection
+ .await?;
+```
-When creating a collection, you will need to configure user-defined sharding. This lets you control the shard placement of your data, so that operations can hit only the subset of shards they actually need. In big clusters, this can significantly improve the performance of operations, since you won't need to go through the entire collection to retrieve data.
+```java
+import java.util.Map;
-```python
-client.create_collection(
- collection_name=""{tenant_data}"",
+import io.qdrant.client.QdrantClient;
- shard_number=2,
+import io.qdrant.client.QdrantGrpcClient;
- sharding_method=models.ShardingMethod.CUSTOM,
+import io.qdrant.client.grpc.Collections.Distance;
- # ... other collection parameters
+import io.qdrant.client.grpc.Collections.VectorParams;
-)
-client.create_shard_key(""{tenant_data}"", ""canada"")
-client.create_shard_key(""{tenant_data}"", ""germany"")
+QdrantClient client =
-```
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
-In this example, your cluster is divided between Germany and Canada. Canadian and German law differ when it comes to international data transfer. Let's say you are creating a RAG application that supports the healthcare industry. Your Canadian customer data will have to be clearly separated for compliance purposes from your German customer.
+client
-Even though it is part of the same collection, data from each shard is isolated from other shards and can be retrieved as such. For additional examples on shards and retrieval, consult [Distributed Deployments](https://qdrant.tech/documentation/guides/distributed_deployment/) documentation and [Qdrant Client specification](https://python-client.qdrant.tech).
+ .createCollectionAsync(
+ ""{collection_name}"",
+ Map.of(
-## Configure a multitenant setup for users
+ ""image"", VectorParams.newBuilder().setSize(4).setDistance(Distance.Dot).build(),
+ ""text"",
+ VectorParams.newBuilder().setSize(8).setDistance(Distance.Cosine).build()))
-Let's continue and start adding data. As you upsert your vectors to your new collection, you can add a `group_id` field to each vector. If you do this, Qdrant will assign each vector to its respective group.
+ .get();
+```
-Additionally, each vector can now be allocated to a shard. You can specify the `shard_key_selector` for each individual vector. In this example, you are upserting data belonging to `tenant_1` to the Canadian region.
+```csharp
+using Qdrant.Client;
-```python
+using Qdrant.Client.Grpc;
-client.upsert(
- collection_name=""{tenant_data}"",
- points=[
+var client = new QdrantClient(""localhost"", 6334);
- models.PointStruct(
- id=1,
- payload={""group_id"": ""tenant_1""},
+await client.CreateCollectionAsync(
- vector=[0.9, 0.1, 0.1],
+ collectionName: ""{collection_name}"",
- ),
+ vectorsConfig: new VectorParamsMap
- models.PointStruct(
+ {
- id=2,
+ Map =
- payload={""group_id"": ""tenant_1""},
+ {
- vector=[0.1, 0.9, 0.1],
+ [""image""] = new VectorParams { Size = 4, Distance = Distance.Dot },
- ),
+ [""text""] = new VectorParams { Size = 8, Distance = Distance.Cosine },
- ],
+ }
- shard_key_selector=""canada"",
+ }
-)
+);
```
-Keep in mind that the data for each `group_id` is isolated. In the example below, `tenant_1` vectors are kept separate from `tenant_2`. The first tenant will be able to access their data in the Canadian portion of the cluster. However, as shown below `tenant_2 `might only be able to retrieve information hosted in Germany.
+```go
-```python
+import (
-client.upsert(
+ ""context""
- collection_name=""{tenant_data}"",
- points=[
- models.PointStruct(
+ ""github.com/qdrant/go-client/qdrant""
- id=3,
+)
- payload={""group_id"": ""tenant_2""},
- vector=[0.1, 0.1, 0.9],
- ),
+client, err := qdrant.NewClient(&qdrant.Config{
- ],
+ Host: ""localhost"",
- shard_key_selector=""germany"",
+ Port: 6334,
-)
+})
-```
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
-## Retrieve data via filters
+ CollectionName: ""{collection_name}"",
+ VectorsConfig: qdrant.NewVectorsConfigMap(
+ map[string]*qdrant.VectorParams{
-The access control setup is completed as you specify the criteria for data retrieval. When searching for vectors, you need to use a `query_filter` along with `group_id` to filter vectors for each user.
+ ""image"": {
+ Size: 4,
+ Distance: qdrant.Distance_Dot,
-```python
+ },
-client.search(
+ ""text"": {
- collection_name=""{tenant_data}"",
+ Size: 8,
- query_filter=models.Filter(
+ Distance: qdrant.Distance_Cosine,
- must=[
+ },
- models.FieldCondition(
+ }),
- key=""group_id"",
+})
- match=models.MatchValue(
+```
- value=""tenant_1"",
- ),
- ),
+For rare use cases, it is possible to create a collection without any vector storage.
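+
+A minimal sketch of this (assuming the Python client and a placeholder collection name) is to pass an empty named-vectors map:
+
+```python
+from qdrant_client import QdrantClient
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# No dense vector storage is configured for this collection.
+client.create_collection(
+    collection_name=""{collection_name}"",
+    vectors_config={},
+)
+```
+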
- ]
- ),
- query_vector=[0.1, 0.1, 0.9],
+*Available as of v1.1.1*
- limit=10,
-)
-```
+For each named vector you can optionally specify
+[`hnsw_config`](../indexing/#vector-index) or
+[`quantization_config`](../../guides/quantization/#setting-up-quantization-in-qdrant) to
-## Performance considerations
+deviate from the collection configuration. This can be useful to fine-tune
+search performance on a vector level.
-The speed of indexation may become a bottleneck if you are adding large amounts of data in this way, as each user's vector will be indexed into the same collection. To avoid this bottleneck, consider _bypassing the construction of a global vector index_ for the entire collection and building it only for individual groups instead.
+*Available as of v1.2.0*
-By adopting this strategy, Qdrant will index vectors for each user independently, significantly accelerating the process.
+Vectors all live in RAM for very quick access. On a per-vector basis you can set
+`on_disk` to true to store all vectors on disk at all times. This will enable
-To implement this approach, you should:
+the use of
+[memmaps](../../concepts/storage/#configuring-memmap-storage),
+which is suitable for ingesting a large amount of data.
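+
+Sketching both ideas together (the names, sizes, and tuning values here are illustrative assumptions), per-vector overrides are passed inside each named vector's parameters:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+client.create_collection(
+    collection_name=""{collection_name}"",
+    vectors_config={
+        ""image"": models.VectorParams(
+            size=4,
+            distance=models.Distance.DOT,
+            hnsw_config=models.HnswConfigDiff(m=32),  # denser HNSW graph for this vector only
+        ),
+        ""text"": models.VectorParams(
+            size=8,
+            distance=models.Distance.COSINE,
+            on_disk=True,  # keep only this vector on disk
+        ),
+    },
+)
+```
+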
-1. Set `payload_m` in the HNSW configuration to a non-zero value, such as 16.
-2. Set `m` in hnsw config to 0. This will disable building global index for the whole collection.
-```python
+### Vector datatypes
-from qdrant_client import QdrantClient, models
+*Available as of v1.9.0*
-client = QdrantClient(""localhost"", port=6333)
+Some embedding providers may provide embeddings in a pre-quantized format.
-client.create_collection(
+One of the most notable examples is the [Cohere int8 & binary embeddings](https://cohere.com/blog/int8-binary-embeddings).
- collection_name=""{tenant_data}"",
+Qdrant has direct support for uint8 embeddings, which you can also use in combination with binary quantization.
- vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
- hnsw_config=models.HnswConfigDiff(
- payload_m=16,
+To create a collection with uint8 embeddings, you can use the following configuration:
- m=0,
- ),
-)
+```http
+PUT /collections/{collection_name}
+{
+ ""vectors"": {
+ ""size"": 1024,
+ ""distance"": ""Cosine"",
+ ""datatype"": ""uint8""
+ }
+}
```
-3. Create keyword payload index for `group_id` field.
+```bash
+curl -X PUT http://localhost:6333/collections/{collection_name} \
+ -H 'Content-Type: application/json' \
-```python
+ --data-raw '{
-client.create_payload_index(
+ ""vectors"": {
- collection_name=""{tenant_data}"",
+ ""size"": 1024,
- field_name=""group_id"",
+ ""distance"": ""Cosine"",
- field_schema=models.PayloadSchemaType.KEYWORD,
+ ""datatype"": ""uint8""
-)
+ }
+
+ }'
```
-> Note: Keep in mind that global requests (without the `group_id` filter) will be slower since they will necessitate scanning all groups to identify the nearest neighbors.
+```python
-## Next steps
+from qdrant_client import QdrantClient, models
-Qdrant is ready to support a massive-scale architecture for your machine learning project. If you want to see whether our vector database is right for you, try the [quickstart tutorial](https://qdrant.tech/documentation/quick-start/) or read our [docs and tutorials](https://qdrant.tech/documentation/).
+client = QdrantClient(url=""http://localhost:6333"")
-To spin up a free instance of Qdrant, sign up for [Qdrant Cloud](https://qdrant.to/cloud) - no strings attached.
+client.create_collection(
+ collection_name=""{collection_name}"",
+ vectors_config=models.VectorParams(
-Get support or share ideas in our [Discord](https://qdrant.to/discord) community. This is where we talk about vector search theory, publish examples and demos and discuss vector database setups.
+ size=1024,
+ distance=models.Distance.COSINE,
+ datatype=models.Datatype.UINT8,
+ ),
+)
+```
+```typescript
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+client.createCollection(""{collection_name}"", {
-",articles/multitenancy.md
-"---
+ vectors: {
-title: Semantic Search As You Type
+ image: { size: 1024, distance: ""Cosine"", datatype: ""uint8"" },
-short_description: ""Instant search using Qdrant""
+ },
-description: To show off Qdrant's performance, we show how to do a quick search-as-you-type that will come back within a few milliseconds.
+});
-social_preview_image: /articles_data/search-as-you-type/preview/social_preview.jpg
+```
-small_preview_image: /articles_data/search-as-you-type/icon.svg
-preview_dir: /articles_data/search-as-you-type/preview
-weight: -2
+```rust
-author: Andre Bogus
+use qdrant_client::Qdrant;
-author_link: https://llogiq.github.io
+use qdrant_client::qdrant::{
-date: 2023-08-14T00:00:00+01:00
+ CreateCollectionBuilder, Datatype, Distance, VectorParamsBuilder,
-draft: false
+};
-keywords: search, semantic, vector, llm, integration, benchmark, recommend, performance, rust
----
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-Qdrant is one of the fastest vector search engines out there, so while looking for a demo to show off, we came upon the idea to do a search-as-you-type box with a fully semantic search backend. Now we already have a semantic/keyword hybrid search on our website. But that one is written in Python, which incurs some overhead for the interpreter. Naturally, I wanted to see how fast I could go using Rust.
+client
+ .create_collection(
-Since Qdrant doesn't embed by itself, I had to decide on an embedding model. The prior version used the [SentenceTransformers](https://www.sbert.net/) package, which in turn employs Bert-based [All-MiniLM-L6-V2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2/tree/main) model. This model is battle-tested and delivers fair results at speed, so not experimenting on this front I took an [ONNX version](https://huggingface.co/optimum/all-MiniLM-L6-v2/tree/main) and ran that within the service.
+ CreateCollectionBuilder::new(""{collection_name}"").vectors_config(
+ VectorParamsBuilder::new(1024, Distance::Cosine).datatype(Datatype::Uint8),
+ ),
-The workflow looks like this:
+ )
+ .await?;
+```
-![Search Qdrant by Embedding](/articles_data/search-as-you-type/Qdrant_Search_by_Embedding.png)
+```java
-This will, after tokenizing and embedding send a `/collections/site/points/search` POST request to Qdrant, sending the following JSON:
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Collections.Datatype;
+import io.qdrant.client.grpc.Collections.Distance;
-```json
+import io.qdrant.client.grpc.Collections.VectorParams;
-POST collections/site/points/search
-{
- ""vector"": [-0.06716014,-0.056464013, ...(382 values omitted)],
+QdrantClient client = new QdrantClient(
- ""limit"": 5,
+ QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
- ""with_payload"": true,
-}
-```
+client
+ .createCollectionAsync(""{collection_name}"",
+ VectorParams.newBuilder()
-Even with avoiding a network round-trip, the embedding still takes some time. As always in optimization, if you cannot do the work faster, a good solution is to avoid work altogether (please don't tell my employer). This can be done by pre-computing common prefixes and calculating embeddings for them, then storing them in a `prefix_cache` collection. Now the [`recommend`](https://docs.rs/qdrant-client/latest/qdrant_client/client/struct.QdrantClient.html#method.recommend) API method can find the best matches without doing any embedding. For now, I use short (up to and including 5 letters) prefixes, but I can also parse the logs to get the most common search terms and add them to the cache later.
+ .setSize(1024)
+ .setDistance(Distance.Cosine)
+ .setDatatype(Datatype.Uint8)
-![Qdrant Recommendation](/articles_data/search-as-you-type/Qdrant_Recommendation.png)
+ .build())
+ .get();
+```
-Making that work requires setting up the `prefix_cache` collection with points that have the prefix as their `point_id` and the embedding as their `vector`, which lets us do the lookup with no search or index. The `prefix_to_id` function currently uses the `u64` variant of `PointId`, which can hold eight bytes, enough for this use. If the need arises, one could instead encode the names as UUID, hashing the input. Since I know all our prefixes are within 8 bytes, I decided against this for now.
+```csharp
-The `recommend` endpoint works roughly the same as `search_points`, but instead of searching for a vector, Qdrant searches for one or more points (you can also give negative example points the search engine will try to avoid in the results). It was built to help drive recommendation engines, saving the round-trip of sending the current point's vector back to Qdrant to find more similar ones. However Qdrant goes a bit further by allowing us to select a different collection to lookup the points, which allows us to keep our `prefix_cache` collection separate from the site data. So in our case, Qdrant first looks up the point from the `prefix_cache`, takes its vector and searches for that in the `site` collection, using the precomputed embeddings from the cache. The API endpoint expects a POST of the following JSON to `/collections/site/points/recommend`:
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-```json
-POST collections/site/points/recommend
+var client = new QdrantClient(""localhost"", 6334);
-{
- ""positive"": [1936024932],
- ""limit"": 5,
+await client.CreateCollectionAsync(
- ""with_payload"": true,
+ collectionName: ""{collection_name}"",
- ""lookup_from"": {
+ vectorsConfig: new VectorParams {
- ""collection"": ""prefix_cache""
+ Size = 1024, Distance = Distance.Cosine, Datatype = Datatype.Uint8
}
-}
+);
```
-Now I have, in the best Rust tradition, a blazingly fast semantic search.
+```go
+import (
+ ""context""
-To demo it, I used our [Qdrant documentation website](https://qdrant.tech/documentation)'s page search, replacing our previous Python implementation. So in order to not just spew empty words, here is a benchmark, showing different queries that exercise different code paths.
+ ""github.com/qdrant/go-client/qdrant""
-Since the operations themselves are far faster than the network whose fickle nature would have swamped most measurable differences, I benchmarked both the Python and Rust services locally. I'm measuring both versions on the same AMD Ryzen 9 5900HX with 16GB RAM running Linux. The table shows the average time and error bound in milliseconds. I only measured up to a thousand concurrent requests. None of the services showed any slowdown with more requests in that range. I do not expect our service to become DDOS'd, so I didn't benchmark with more load.
+)
-Without further ado, here are the results:
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
+ Port: 6334,
+})
-| query length | Short | Long |
-|---------------|-----------|------------|
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
-| Python 🐍 | 16 ± 4 ms | 16 ± 4 ms |
+ CollectionName: ""{collection_name}"",
-| Rust 🦀 | 1½ ± ½ ms | 5 ± 1 ms |
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
+ Size: 1024,
+ Distance: qdrant.Distance_Cosine,
-The Rust version consistently outperforms the Python version and offers a semantic search even on few-character queries. If the prefix cache is hit (as in the short query length), the semantic search can even get more than ten times faster than the Python version. The general speed-up is due to both the relatively lower overhead of Rust + Actix Web compared to Python + FastAPI (even if that already performs admirably), as well as using ONNX Runtime instead of SentenceTransformers for the embedding. The prefix cache gives the Rust version a real boost by doing a semantic search without doing any embedding work.
+ Datatype: qdrant.Datatype_Uint8.Enum(),
+ }),
+})
-As an aside, while the millisecond differences shown here may mean relatively little for our users, whose latency will be dominated by the network in between, when typing, every millisecond more or less can make a difference in user perception. Also search-as-you-type generates between three and five times as much load as a plain search, so the service will experience more traffic. Less time per request means being able to handle more of them.
+```
-Mission accomplished! But wait, there's more!
+Vectors with `uint8` datatype are stored in a more compact format, which can save memory and improve search speed at the cost of some precision.
+If you choose to use the `uint8` datatype, elements of the vector will be stored as unsigned 8-bit integers, which can take values **from 0 to 255**.
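+
+For illustration, here is a hedged sketch of what an upsert into such a collection could look like with the Python client; the collection name is a placeholder, and the random integer vector only demonstrates the 0-255 value range:
+
+```python
+import random
+
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Illustrative only: every component must fit into the uint8 range (0-255)
+# and the vector length must match the collection configuration (1024 here).
+client.upsert(
+    collection_name=""{collection_name}"",
+    points=[
+        models.PointStruct(
+            id=1,
+            vector=[random.randint(0, 255) for _ in range(1024)],
+        )
+    ],
+)
+```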
-### Prioritizing Exact Matches and Headings
-To improve on the quality of the results, Qdrant can do multiple searches in parallel, and then the service puts the results in sequence, taking the first best matches. The extended code searches:
+### Collection with sparse vectors
-1. Text matches in titles
+*Available as of v1.7.0*
-2. Text matches in body (paragraphs or lists)
-3. Semantic matches in titles
-4. Any Semantic matches
+Qdrant supports sparse vectors as first-class citizens.
-Those are put together by taking them in the above order, deduplicating as necessary.
+Sparse vectors are useful for text search, where each word is represented as a separate dimension.
-![merge workflow](/articles_data/search-as-you-type/sayt_merge.png)
+Collections can contain sparse vectors as additional [named vectors](#collection-with-multiple-vectors) alongside regular dense vectors in a single point.
-Instead of sending a `search` or `recommend` request, one can also send a `search/batch` or `recommend/batch` request, respectively. Each of those contain a `""searches""` property with any number of search/recommend JSON requests:
+Unlike dense vectors, sparse vectors must be named.
+Additionally, sparse and dense vectors must have different names within a collection.
-```json
-POST collections/site/points/search/batch
+```http
+PUT /collections/{collection_name}
{
- ""searches"": [
+ ""sparse_vectors"": {
- {
+ ""text"": { },
- ""vector"": [-0.06716014,-0.056464013, ...],
+ }
- ""filter"": {
+}
- ""must"": [
+```
- { ""key"": ""text"", ""match"": { ""text"": }},
- { ""key"": ""tag"", ""match"": { ""any"": [""h1"", ""h2"", ""h3""] }},
- ]
+```bash
- }
+curl -X PUT http://localhost:6333/collections/{collection_name} \
- ...,
+ -H 'Content-Type: application/json' \
- },
+ --data-raw '{
- {
+ ""sparse_vectors"": {
- ""vector"": [-0.06716014,-0.056464013, ...],
+ ""text"": { }
- ""filter"": {
+ }
- ""must"": [ { ""key"": ""body"", ""match"": { ""text"": }} ]
+ }'
- }
+```
- ...,
- },
- {
- ""vector"": [-0.06716014,-0.056464013, ...],
- ""filter"": {
+```python
- ""must"": [ { ""key"": ""tag"", ""match"": { ""any"": [""h1"", ""h2"", ""h3""] }} ]
+from qdrant_client import QdrantClient, models
- }
- ...,
- },
+client = QdrantClient(url=""http://localhost:6333"")
- {
- ""vector"": [-0.06716014,-0.056464013, ...],
- ...,
+client.create_collection(
- },
+ collection_name=""{collection_name}"",
- ]
+ sparse_vectors_config={
-}
+ ""text"": models.SparseVectorParams(),
-```
+ },
+)
+```
-As the queries are done in a batch request, there isn't any additional network overhead and only very modest computation overhead, yet the results will be better in many cases.
+```typescript
-The only additional complexity is to flatten the result lists and take the first 5 results, deduplicating by point ID. Now there is one final problem: The query may be short enough to take the recommend code path, but still not be in the prefix cache. In that case, doing the search *sequentially* would mean two round-trips between the service and the Qdrant instance. The solution is to *concurrently* start both requests and take the first successful non-empty result.
+import { QdrantClient } from ""@qdrant/js-client-rest"";
-![sequential vs. concurrent flow](/articles_data/search-as-you-type/sayt_concurrency.png)
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
-While this means more load for the Qdrant vector search engine, this is not the limiting factor. The relevant data is already in cache in many cases, so the overhead stays within acceptable bounds, and the maximum latency in case of prefix cache misses is measurably reduced.
+client.createCollection(""{collection_name}"", {
+ sparse_vectors: {
+ text: { },
-The code is available on the [Qdrant github](https://github.com/qdrant/page-search)
+ },
+});
+```
-To sum up: Rust is fast, recommend lets us use precomputed embeddings, batch requests are awesome and one can do a semantic search in mere milliseconds.
-",articles/search-as-you-type.md
-"---
-title: Vector Similarity beyond Search
-short_description: Harnessing the full capabilities of vector embeddings
+```rust
-description: We explore some of the promising new techniques that can be used to expand use-cases of unstructured data and unlock new similarities-based data exploration tools.
+use qdrant_client::Qdrant;
-preview_dir: /articles_data/vector-similarity-beyond-search/preview
+use qdrant_client::qdrant::{
-small_preview_image: /articles_data/vector-similarity-beyond-search/icon.svg
+ CreateCollectionBuilder, SparseVectorParamsBuilder, SparseVectorsConfigBuilder,
-social_preview_image: /articles_data/vector-similarity-beyond-search/preview/social_preview.jpg
+};
-weight: -1
-author: Luis Cossío
-author_link: https://coszio.github.io/
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
-date: 2023-08-08T08:00:00+03:00
-draft: false
-keywords:
+let mut sparse_vector_config = SparseVectorsConfigBuilder::default();
- - vector similarity
- - exploration
- - dissimilarity
+sparse_vector_config.add_named_vector_params(""text"", SparseVectorParamsBuilder::default());
- - discovery
- - diversity
- - recommendation
+client
----
+ .create_collection(
+ CreateCollectionBuilder::new(""{collection_name}"")
+ .sparse_vectors_config(sparse_vector_config),
+ )
+ .await?;
-When making use of unstructured data, there are traditional go-to solutions that are well-known for developers:
+```
-- **Full-text search** when you need to find documents that contain a particular word or phrase.
+```java
-- **Vector search** when you need to find documents that are semantically similar to a given query.
+import io.qdrant.client.QdrantClient;
+import io.qdrant.client.QdrantGrpcClient;
+import io.qdrant.client.grpc.Collections.CreateCollection;
-Sometimes people mix those two approaches, so it might look like the vector similarity is just an extension of full-text search. However, in this article, we will explore some promising new techniques that can be used to expand the use-case of unstructured data and demonstrate that vector similarity creates its own stack of data exploration tools.
+import io.qdrant.client.grpc.Collections.SparseVectorConfig;
+import io.qdrant.client.grpc.Collections.SparseVectorParams;
+QdrantClient client =
-{{< figure width=70% src=/articles_data/vector-similarity-beyond-search/venn-diagram.png caption=""Full-text search and Vector Similarity Functionality overlap"" >}}
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+client
+ .createCollectionAsync(
-While there is an intersection in the functionality of these two approaches, there is also a vast area of functions that is unique to each of them.
+ CreateCollection.newBuilder()
-For example, the exact phrase matching and counting of results are native to full-text search, while vector similarity support for this type of operation is limited.
+ .setCollectionName(""{collection_name}"")
-On the other hand, vector similarity easily allows cross-modal retrieval of images by text or vice-versa, which is impossible with full-text search.
+ .setSparseVectorsConfig(
+ SparseVectorConfig.newBuilder()
+ .putMap(""text"", SparseVectorParams.getDefaultInstance()))
-This mismatch in expectations might sometimes lead to confusion.
+ .build())
-Attempting to use a vector similarity as a full-text search can result in a range of frustrations, from slow response times to poor search results, to limited functionality.
+ .get();
-As an outcome, they are getting only a fraction of the benefits of vector similarity.
+```
-Below we will explore why vector similarity stack deserves new interfaces and design patterns that will unlock the full potential of this technology, which can still be used in conjunction with full-text search.
+```csharp
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-## New Ways to Interact with Similarities
+var client = new QdrantClient(""localhost"", 6334);
-Having a vector representation of unstructured data unlocks new ways of interacting with it.
+await client.CreateCollectionAsync(
-For example, it can be used to measure semantic similarity between words, to cluster words or documents based on their meaning, to find related images, or even to generate new text.
+ collectionName: ""{collection_name}"",
-However, these interactions can go beyond finding their nearest neighbors (kNN).
+ sparseVectorsConfig: (""text"", new SparseVectorParams())
+);
+```
-There are several other techniques that can be leveraged by vector representations beyond the traditional kNN search. These include dissimilarity search, diversity search, recommendations and discovery functions.
+```go
+import (
+ ""context""
-## Dissimilarity Search
+ ""github.com/qdrant/go-client/qdrant""
-The Dissimilarity —or farthest— search is the most straightforward concept after the nearest search, which can’t be reproduced in a traditional full-text search.
+)
-It aims to find the most un-similar or distant documents across the collection.
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
+ Port: 6334,
-{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/dissimilarity.png caption=""Dissimilarity Search"" >}}
+})
-Unlike full-text match, Vector similarity can compare any pair of documents (or points) and assign a similarity score.
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
-It doesn’t rely on keywords or other metadata.
+	CollectionName: ""{collection_name}"",
-With vector similarity, we can easily achieve a dissimilarity search by inverting the search objective from maximizing similarity to minimizing it.
+ SparseVectorsConfig: qdrant.NewSparseVectorsConfig(
+ map[string]*qdrant.SparseVectorParams{
+ ""text"": {},
-The dissimilarity search can find items in areas where previously no other search could be used.
+ }),
-Let’s look at a few examples.
+})
+```
-### Case: Mislabeling Detection
+Outside of a unique name, there are no required configuration parameters for sparse vectors.
-For example, we have a dataset of furniture in which we have classified our items into what kind of furniture they are: tables, chairs, lamps, etc.
-To ensure our catalog is accurate, we can use a dissimilarity search to highlight items that are most likely mislabeled.
+The distance function for sparse vectors is always `Dot` and does not need to be specified.
-To do this, we only need to search for the most dissimilar items using the
+However, there are optional parameters to tune the underlying [sparse vector index](../indexing/#sparse-vector-index).
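+
+As a hedged sketch (the vector names and sizes below are placeholders, not taken from the examples above), a collection can combine a named dense vector with a named sparse vector, and the optional sparse index settings can be passed at creation time:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+client.create_collection(
+    collection_name=""{collection_name}"",
+    # Dense and sparse vectors live side by side, under different names
+    vectors_config={
+        ""image"": models.VectorParams(size=1024, distance=models.Distance.COSINE),
+    },
+    sparse_vectors_config={
+        ""text"": models.SparseVectorParams(
+            # Optional tuning of the sparse vector index
+            index=models.SparseIndexParams(on_disk=False),
+        ),
+    },
+)
+```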
-embedding of the category title itself as a query.
-This can be too broad, so, combining it with filters —a [Qdrant superpower](/articles/filtrable-hnsw)—, we can narrow down the search to a specific category.
+### Check collection existence
+*Available as of v1.8.0*
-{{< figure src=/articles_data/vector-similarity-beyond-search/mislabelling.png caption=""Mislabeling Detection"" >}}
+```http
-The output of this search can be further processed with heavier models or human supervision to detect actual mislabeling.
+GET http://localhost:6333/collections/{collection_name}/exists
+```
-### Case: Outlier Detection
+```bash
+curl -X GET http://localhost:6333/collections/{collection_name}/exists
-In some cases, we might not even have labels, but it is still possible to try to detect anomalies in our dataset.
+```
-Dissimilarity search can be used for this purpose as well.
+```python
-{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/anomaly-detection.png caption=""Anomaly Detection"" >}}
+client.collection_exists(collection_name=""{collection_name}"")
+```
-The only thing we need is a bunch of reference points that we consider ""normal"".
-Then we can search for the most dissimilar points to this reference set and use them as candidates for further analysis.
+```typescript
+client.collectionExists(""{collection_name}"");
+```
-## Diversity Search
+```rust
+client.collection_exists(""{collection_name}"").await?;
+```
-Even with no input provided vector, (dis-)similarity can improve an overall selection of items from the dataset.
+```java
-The naive approach is to do random sampling.
+client.collectionExistsAsync(""{collection_name}"").get();
-However, unless our dataset has a uniform distribution, the results of such sampling might be biased toward more frequent types of items.
+```
-{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/diversity-random.png caption=""Example of random sampling"" >}}
+```csharp
+await client.CollectionExistsAsync(""{collection_name}"");
+```
-The similarity information can increase the diversity of those results and make the first overview more interesting.
+```go
-That is especially useful when users do not yet know what they are looking for and want to explore the dataset.
+import ""context""
-{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/diversity-force.png caption=""Example of similarity-based sampling"" >}}
+client.CollectionExists(context.Background(), ""{collection_name}"")
+```
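+
+A common usage pattern is to create a collection only when it does not exist yet. A minimal sketch with the Python client, where the vector configuration is just a placeholder:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Create the collection only if it is missing
+if not client.collection_exists(collection_name=""{collection_name}""):
+    client.create_collection(
+        collection_name=""{collection_name}"",
+        vectors_config=models.VectorParams(size=1024, distance=models.Distance.COSINE),
+    )
+```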
+### Delete collection
-The power of vector similarity, in the context of being able to compare any two points, allows making a diverse selection of the collection possible without any labeling efforts.
-By maximizing the distance between all points in the response, we can have an algorithm that will sequentially output dissimilar results.
+```http
+DELETE http://localhost:6333/collections/{collection_name}
-{{< figure src=/articles_data/vector-similarity-beyond-search/diversity.png caption=""Diversity Search"" >}}
+```
+```bash
+curl -X DELETE http://localhost:6333/collections/{collection_name}
-Some forms of diversity sampling are already used in the industry and are known as [Maximum Margin Relevance](https://python.langchain.com/docs/integrations/vectorstores/qdrant#maximum-marginal-relevance-search-mmr) (MMR). Techniques like this were developed to enhance similarity on a universal search API.
+```
-However, there is still room for new ideas, particularly regarding diversity retrieval.
-By utilizing more advanced vector-native engines, it could be possible to take use cases to the next level and achieve even better results.
+```python
+client.delete_collection(collection_name=""{collection_name}"")
+```
-## Recommendations
+```typescript
+client.deleteCollection(""{collection_name}"");
-Vector similarity can go above a single query vector.
+```
-It can combine multiple positive and negative examples for a more accurate retrieval.
-Building a recommendation API in a vector database can take advantage of using already stored vectors as part of the queries, by specifying the point id.
-Doing this, we can skip query-time neural network inference, and make the recommendation search faster.
+```rust
+client.delete_collection(""{collection_name}"").await?;
+```
-There are multiple ways to implement recommendations with vectors.
+```java
-### Vector-Features Recommendations
+client.deleteCollectionAsync(""{collection_name}"").get();
+```
-The first approach is to take all positive and negative examples and average them to create a single query vector.
-In this technique, the more significant components of positive vectors are canceled out by the negative ones, and the resulting vector is a combination of all the features present in the positive examples, but not in the negative ones.
+```csharp
+await client.DeleteCollectionAsync(""{collection_name}"");
+```
-{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/feature-based-recommendations.png caption=""Vector-Features Based Recommendations"" >}}
+```go
-This approach is already implemented in Qdrant, and while it works great when the vectors are assumed to have each of their dimensions represent some kind of feature of the data, sometimes distances are a better tool to judge negative and positive examples.
+import ""context""
-### Relative Distance Recommendations
+client.DeleteCollection(context.Background(), ""{collection_name}"")
+```
-Another approach is to use the distance between negative examples to the candidates to help them create exclusion areas.
-In this technique, we perform searches near the positive examples while excluding the points that are closer to a negative example than to a positive one.
+### Update collection parameters
-{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/relative-distance-recommendations.png caption=""Relative Distance Recommendations"" >}}
+Dynamic parameter updates can be helpful, for example, for more efficient initial loading of vectors:
+you can disable indexing during the upload process and enable it immediately after the upload is finished.
+As a result, you will not waste extra computation resources on rebuilding the index.
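+
+A rough sketch of that workflow with the Python client is shown below; the threshold of 20000 is a placeholder, and setting `indexing_threshold` to 0 is used here to keep segments unindexed during the bulk upload:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Disable indexing while the initial bulk upload is running
+client.update_collection(
+    collection_name=""{collection_name}"",
+    optimizer_config=models.OptimizersConfigDiff(indexing_threshold=0),
+)
+
+# ... upload the points ...
+
+# Re-enable indexing once the upload has finished
+client.update_collection(
+    collection_name=""{collection_name}"",
+    optimizer_config=models.OptimizersConfigDiff(indexing_threshold=20000),
+)
+```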
-The main use-case of both approaches —of course— is to take some history of user interactions and recommend new items based on it.
+The following command enables indexing for segments that have more than 10000 kB of vectors stored:
-## Discovery
+```http
-In many exploration scenarios, the desired destination is not known in advance.
+PATCH /collections/{collection_name}
-The search process in this case can consist of multiple steps, where each step would provide a little more information to guide the search in the right direction.
+{
+ ""optimizers_config"": {
+ ""indexing_threshold"": 10000
-To get more intuition about the possible ways to implement this approach, let’s take a look at how similarity modes are trained in the first place:
+ }
+}
+```
-The most well-known loss function used to train similarity models is a [triplet-loss](https://en.wikipedia.org/wiki/Triplet_loss).
-In this loss, the model is trained by fitting the information of relative similarity of 3 objects: the Anchor, Positive, and Negative examples.
+```bash
+curl -X PATCH http://localhost:6333/collections/{collection_name} \
-{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/triplet-loss.png caption=""Triplet Loss"" >}}
+ -H 'Content-Type: application/json' \
+ --data-raw '{
+ ""optimizers_config"": {
-Using the same mechanics, we can look at the training process from the other side.
+ ""indexing_threshold"": 10000
-Given a trained model, the user can provide positive and negative examples, and the goal of the discovery process is then to find suitable anchors across the stored collection of vectors.
+ }
+ }'
+```
-
-{{< figure width=60% src=/articles_data/vector-similarity-beyond-search/discovery.png caption=""Reversed triplet loss"" >}}
+```python
+client.update_collection(
-Multiple positive-negative pairs can be provided to make the discovery process more accurate.
+ collection_name=""{collection_name}"",
-Worth mentioning, that as well as in NN training, the dataset may contain noise and some portion of contradictory information, so a discovery process should be tolerant to this kind of data imperfections.
+ optimizer_config=models.OptimizersConfigDiff(indexing_threshold=10000),
+)
+```
-
+```typescript
-{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/discovery-noise.png caption=""Sample pairs"" >}}
+client.updateCollection(""{collection_name}"", {
+ optimizers_config: {
+ indexing_threshold: 10000,
-The important difference between this and recommendation method is that the positive-negative pairs in discovery method doesn’t assume that the final result should be close to positive, it only assumes that it should be closer than the negative one.
+ },
+});
+```
-{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/discovery-vs-recommendations.png caption=""Discovery vs Recommendation"" >}}
+```rust
-In combination with filtering or similarity search, the additional context information provided by the discovery pairs can be used as a re-ranking factor.
+use qdrant_client::qdrant::{OptimizersConfigDiffBuilder, UpdateCollectionBuilder};
-## A New API Stack for Vector Databases
+client
+ .update_collection(
+ UpdateCollectionBuilder::new(""{collection_name}"").optimizers_config(
-When you introduce vector similarity capabilities into your text search engine, you extend its functionality.
+ OptimizersConfigDiffBuilder::default().indexing_threshold(10000),
-However, it doesn't work the other way around, as the vector similarity as a concept is much broader than some task-specific implementations of full-text search.
+ ),
+ )
+ .await?;
-Vector Databases, which introduce built-in full-text functionality, must make several compromises:
+```
-- Choose a specific full-text search variant.
+```java
-- Either sacrifice API consistency or limit vector similarity functionality to only basic kNN search.
+import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
-- Introduce additional complexity to the system.
+import io.qdrant.client.grpc.Collections.UpdateCollection;
+client.updateCollectionAsync(
+ UpdateCollection.newBuilder()
-Qdrant, on the contrary, puts vector similarity in the center of it's API and architecture, such that it allows us to move towards a new stack of vector-native operations.
+ .setCollectionName(""{collection_name}"")
-We believe that this is the future of vector databases, and we are excited to see what new use-cases will be unlocked by these techniques.
+ .setOptimizersConfig(
+ OptimizersConfigDiff.newBuilder().setIndexingThreshold(10000).build())
+ .build());
+```
-## Wrapping up
+```csharp
+using Qdrant.Client;
-Vector similarity offers a range of powerful functions that go far beyond those available in traditional full-text search engines.
+using Qdrant.Client.Grpc;
-From dissimilarity search to diversity and recommendation, these methods can expand the cases in which vectors are useful.
+var client = new QdrantClient(""localhost"", 6334);
-Vector Databases, which are designed to store and process immense amounts of vectors, are the first candidates to implement these new techniques and allow users to exploit their data to its fullest.
-",articles/vector-similarity-beyond-search.md
-"---
-title: Q&A with Similarity Learning
-short_description: A complete guide to building a Q&A system with similarity learning.
+await client.UpdateCollectionAsync(
-description: A complete guide to building a Q&A system using Quaterion and SentenceTransformers.
+ collectionName: ""{collection_name}"",
-social_preview_image: /articles_data/faq-question-answering/preview/social_preview.jpg
+ optimizersConfig: new OptimizersConfigDiff { IndexingThreshold = 10000 }
-preview_dir: /articles_data/faq-question-answering/preview
+);
-small_preview_image: /articles_data/faq-question-answering/icon.svg
+```
-weight: 9
-author: George Panchuk
-author_link: https://medium.com/@george.panchuk
+```go
-date: 2022-06-28T08:57:07.604Z
+import (
-# aliases: [ /articles/faq-question-answering/ ]
+ ""context""
----
+ ""github.com/qdrant/go-client/qdrant""
-# Question-answering system with Similarity Learning and Quaterion
+)
+client, err := qdrant.NewClient(&qdrant.Config{
+ Host: ""localhost"",
-Many problems in modern machine learning are approached as classification tasks.
+ Port: 6334,
-Some are the classification tasks by design, but others are artificially transformed into such.
+})
-And when you try to apply an approach, which does not naturally fit your problem, you risk coming up with over-complicated or bulky solutions.
-In some cases, you would even get worse performance.
+client.UpdateCollection(context.Background(), &qdrant.UpdateCollection{
+ CollectionName: ""{collection_name}"",
-Imagine that you got a new task and decided to solve it with a good old classification approach.
+ OptimizersConfig: &qdrant.OptimizersConfigDiff{
-Firstly, you will need labeled data.
+ IndexingThreshold: qdrant.PtrOf(uint64(10000)),
-If it came on a plate with the task, you're lucky, but if it didn't, you might need to label it manually.
+ },
-And I guess you are already familiar with how painful it might be.
+})
+```
-Assuming you somehow labeled all required data and trained a model.
-It shows good performance - well done!
+The following parameters can be updated:
-But a day later, your manager told you about a bunch of new data with new classes, which your model has to handle.
-You repeat your pipeline.
-Then, two days later, you've been reached out one more time.
+* `optimizers_config` - see [optimizer](../optimizer/) for details.
-You need to update the model again, and again, and again.
+* `hnsw_config` - see [indexing](../indexing/#vector-index) for details.
-Sounds tedious and expensive for me, does not it for you?
+* `quantization_config` - see [quantization](../../guides/quantization/#setting-up-quantization-in-qdrant) for details.
-
+* `vectors` - vector-specific configuration, including individual `hnsw_config`, `quantization_config` and `on_disk` settings.
-## Automating customer support
+* `params` - other collection parameters, including `write_consistency_factor` and `on_disk_payload`.
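+
+For example, the `params` entry above could be changed at runtime roughly as follows. This is only a sketch that assumes the Python client exposes it through a `collection_params` argument and the `CollectionParamsDiff` model; check your client version before relying on it:
+
+```python
+client.update_collection(
+    collection_name=""{collection_name}"",
+    # Assumed argument name; adjust to your client version
+    collection_params=models.CollectionParamsDiff(
+        write_consistency_factor=2,
+        on_disk_payload=True,
+    ),
+)
+```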
-Let's now take a look at the concrete example. There is a pressing problem with automating customer support.
+Full API specification is available in [schema definitions](https://api.qdrant.tech/api-reference/collections/update-collection).
-The service should be capable of answering user questions and retrieving relevant articles from the documentation without any human involvement.
+Calls to this endpoint may block while they wait for existing optimizers to
-With the classification approach, you need to build a hierarchy of classification models to determine the question's topic.
+finish. We recommend against using this in a production database, as it may
-You have to collect and label a whole custom dataset of your private documentation topics to train that.
+introduce huge overhead due to the rebuilding of the index.
-And then, each time you have a new topic in your documentation, you have to re-train the whole pile of classifiers with additionally labeled data.
-Can we make it easier?
-
+#### Update vector parameters
-## Similarity option
+*Available as of v1.4.0*
-One of the possible alternatives is Similarity Learning, which we are going to discuss in this article.
-It suggests getting rid of the classes and making decisions based on the similarity between objects instead.
-To do it quickly, we would need some intermediate representation - embeddings.
+
-Embeddings are high-dimensional vectors with semantic information accumulated in them.
+Qdrant 1.4 adds support for updating more collection parameters at runtime. HNSW
-As embeddings are vectors, one can apply a simple function to calculate the similarity score between them, for example, cosine or euclidean distance.
+index, quantization and disk configurations can now be changed without
-So with similarity learning, all we need to do is provide pairs of correct questions and answers.
+recreating a collection. Segments (with index and quantized data) will
-And then, the model will learn to distinguish proper answers by the similarity of embeddings.
+automatically be rebuilt in the background to match updated parameters.
->If you want to learn more about similarity learning and applications, check out this [article](https://blog.qdrant.tech/neural-search-tutorial-3f034ab13adc) which might be an asset.
+To put vector data on disk for a collection that **does not have** named vectors,
+use `""""` as name:
-## Let's build
-Similarity learning approach seems a lot simpler than classification in this case, and if you have some
+```http
-doubts on your mind, let me dispel them.
+PATCH /collections/{collection_name}
+{
+ ""vectors"": {
-As I have no any resource with exhaustive F.A.Q. which might serve as a dataset, I've scrapped it from sites of popular cloud providers.
+ """": {
-The dataset consists of just 8.5k pairs of question and answers, you can take a closer look at it [here](https://github.com/qdrant/demo-cloud-faq).
+ ""on_disk"": true
+ }
+ }
-Once we have data, we need to obtain embeddings for it.
+}
-It is not a novel technique in NLP to represent texts as embeddings.
+```
-There are plenty of algorithms and models to calculate them.
-You could have heard of Word2Vec, GloVe, ELMo, BERT, all these models can provide text embeddings.
+```bash
+curl -X PATCH http://localhost:6333/collections/{collection_name} \
-However, it is better to produce embeddings with a model trained for semantic similarity tasks.
+ -H 'Content-Type: application/json' \
-For instance, we can find such models at [sentence-transformers](https://www.sbert.net/docs/pretrained_models.html).
+ --data-raw '{
-Authors claim that `all-mpnet-base-v2` provides the best quality, but let's pick `all-MiniLM-L6-v2` for our tutorial
+ ""vectors"": {
-as it is 5x faster and still offers good results.
+ """": {
+ ""on_disk"": true
+ }
-Having all this, we can test our approach. We won't take all our dataset at the moment, but only
+ }
-a part of it. To measure model's performance we will use two metrics -
+ }'
-[mean reciprocal rank](https://en.wikipedia.org/wiki/Mean_reciprocal_rank) and
+```
-[precision@1](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Precision_at_k).
-We have a [ready script](https://github.com/qdrant/demo-cloud-faq/blob/experiments/faq/baseline.py)
-for this experiment, let's just launch it now.
+To put vector data on disk for a collection that **does have** named vectors:
-
+Note: To create a vector name, follow the procedure from our [Points](/documentation/concepts/points/#create-vector-name).
-| precision@1 | reciprocal_rank |
-|-------------|-----------------|
-| 0.564 | 0.663 |
+```http
-
+PATCH /collections/{collection_name}
+{
+ ""vectors"": {
-That's already quite decent quality, but maybe we can do better?
+ ""my_vector"": {
+ ""on_disk"": true
+ }
-## Improving results with fine-tuning
+ }
+}
+```
-Actually, we can! Model we used has a good natural language understanding, but it has never seen
-our data. An approach called `fine-tuning` might be helpful to overcome this issue. With
-fine-tuning you don't need to design a task-specific architecture, but take a model pre-trained on
+```bash
-another task, apply a couple of layers on top and train its parameters.
+curl -X PATCH http://localhost:6333/collections/{collection_name} \
+ -H 'Content-Type: application/json' \
+ --data-raw '{
-Sounds good, but as similarity learning is not as common as classification, it might be a bit inconvenient to fine-tune a model with traditional tools.
+ ""vectors"": {
-For this reason we will use [Quaterion](https://github.com/qdrant/quaterion) - a framework for fine-tuning similarity learning models.
+ ""my_vector"": {
-Let's see how we can train models with it
+ ""on_disk"": true
+ }
+ }
-First, create our project and call it `faq`.
+ }'
+```
-> All project dependencies, utils scripts not covered in the tutorial can be found in the
-> [repository](https://github.com/qdrant/demo-cloud-faq/tree/tutorial).
+In the following example, the HNSW index and quantization parameters are updated,
+both for the whole collection and for `my_vector` specifically:
-### Configure training
-The main entity in Quaterion is [TrainableModel](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html).
+```http
-This class makes model's building process fast and convenient.
+PATCH /collections/{collection_name}
+{
+ ""vectors"": {
-`TrainableModel` is a wrapper around [pytorch_lightning.LightningModule](https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html).
+ ""my_vector"": {
+ ""hnsw_config"": {
+ ""m"": 32,
-[Lightning](https://www.pytorchlightning.ai/) handles all the training process complexities, like training loop, device managing, etc. and saves user from a necessity to implement all this routine manually.
+ ""ef_construct"": 123
-Also Lightning's modularity is worth to be mentioned.
+ },
-It improves separation of responsibilities, makes code more readable, robust and easy to write.
+ ""quantization_config"": {
-All these features make Pytorch Lightning a perfect training backend for Quaterion.
+ ""product"": {
+ ""compression"": ""x32"",
+ ""always_ram"": true
-To use `TrainableModel` you need to inherit your model class from it.
+ }
-The same way you would use `LightningModule` in pure `pytorch_lightning`.
+ },
-Mandatory methods are `configure_loss`, `configure_encoders`, `configure_head`,
+ ""on_disk"": true
-`configure_optimizers`.
+ }
+ },
+ ""hnsw_config"": {
-The majority of mentioned methods are quite easy to implement, you'll probably just need a couple of
+ ""ef_construct"": 123
-imports to do that. But `configure_encoders` requires some code:)
+ },
+ ""quantization_config"": {
+ ""scalar"": {
-Let's create a `model.py` with model's template and a placeholder for `configure_encoders`
+ ""type"": ""int8"",
-for the moment.
+ ""quantile"": 0.8,
+ ""always_ram"": false
+ }
-```python
+ }
-from typing import Union, Dict, Optional
+}
+```
-from torch.optim import Adam
+```bash
+curl -X PATCH http://localhost:6333/collections/{collection_name} \
-from quaterion import TrainableModel
+ -H 'Content-Type: application/json' \
-from quaterion.loss import MultipleNegativesRankingLoss, SimilarityLoss
+ --data-raw '{
-from quaterion_models.encoders import Encoder
+ ""vectors"": {
-from quaterion_models.heads import EncoderHead
+ ""my_vector"": {
-from quaterion_models.heads.skip_connection_head import SkipConnectionHead
+ ""hnsw_config"": {
+ ""m"": 32,
+ ""ef_construct"": 123
+ },
+ ""quantization_config"": {
-class FAQModel(TrainableModel):
+ ""product"": {
- def __init__(self, lr=10e-5, *args, **kwargs):
+ ""compression"": ""x32"",
- self.lr = lr
+ ""always_ram"": true
- super().__init__(*args, **kwargs)
+ }
-
+ },
- def configure_optimizers(self):
+ ""on_disk"": true
- return Adam(self.model.parameters(), lr=self.lr)
+ }
-
+ },
- def configure_loss(self) -> SimilarityLoss:
+ ""hnsw_config"": {
- return MultipleNegativesRankingLoss(symmetric=True)
+ ""ef_construct"": 123
-
+ },
- def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]:
+ ""quantization_config"": {
- ... # ToDo
+ ""scalar"": {
-
+ ""type"": ""int8"",
- def configure_head(self, input_embedding_size: int) -> EncoderHead:
+ ""quantile"": 0.8,
- return SkipConnectionHead(input_embedding_size)
+ ""always_ram"": false
-```
+ }
+ }
+}'
-- `configure_optimizers` is a method provided by Lightning. An eagle-eye of you could notice
+```
-mysterious `self.model`, it is actually a [SimilarityModel](https://quaterion-models.qdrant.tech/quaterion_models.model.html) instance. We will cover it later.
-- `configure_loss` is a loss function to be used during training. You can choose a ready-made implementation from Quaterion.
-However, since Quaterion's purpose is not to cover all possible losses, or other entities and
+```python
-features of similarity learning, but to provide a convenient framework to build and use such models,
+client.update_collection(
-there might not be a desired loss. In this case it is possible to use [PytorchMetricLearningWrapper](https://quaterion.qdrant.tech/quaterion.loss.extras.pytorch_metric_learning_wrapper.html)
+ collection_name=""{collection_name}"",
-to bring required loss from [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/) library, which has a rich collection of losses.
+ vectors_config={
-You can also implement a custom loss yourself.
+ ""my_vector"": models.VectorParamsDiff(
-- `configure_head` - model built via Quaterion is a combination of encoders and a top layer - head.
+ hnsw_config=models.HnswConfigDiff(
-As with losses, some head implementations are provided. They can be found at [quaterion_models.heads](https://quaterion-models.qdrant.tech/quaterion_models.heads.html).
+ m=32,
+ ef_construct=123,
+ ),
-At our example we use [MultipleNegativesRankingLoss](https://quaterion.qdrant.tech/quaterion.loss.multiple_negatives_ranking_loss.html).
+ quantization_config=models.ProductQuantization(
-This loss is especially good for training retrieval tasks.
+ product=models.ProductQuantizationConfig(
-It assumes that we pass only positive pairs (similar objects) and considers all other objects as negative examples.
+ compression=models.CompressionRatio.X32,
+ always_ram=True,
+ ),
-`MultipleNegativesRankingLoss` use cosine to measure distance under the hood, but it is a configurable parameter.
+ ),
-Quaterion provides implementation for other distances as well. You can find available ones at [quaterion.distances](https://quaterion.qdrant.tech/quaterion.distances.html).
+ on_disk=True,
+ ),
+ },
-Now we can come back to `configure_encoders`:)
+ hnsw_config=models.HnswConfigDiff(
+ ef_construct=123,
+ ),
-### Configure Encoder
+ quantization_config=models.ScalarQuantization(
+ scalar=models.ScalarQuantizationConfig(
+ type=models.ScalarType.INT8,
-The encoder task is to convert objects into embeddings.
+ quantile=0.8,
-They usually take advantage of some pre-trained models, in our case `all-MiniLM-L6-v2` from `sentence-transformers`.
+ always_ram=False,
-In order to use it in Quaterion, we need to create a wrapper inherited from the [Encoder](https://quaterion-models.qdrant.tech/quaterion_models.encoders.encoder.html) class.
+ ),
+ ),
+)
-Let's create our encoder in `encoder.py`
+```
-```python
+```typescript
-import os
+client.updateCollection(""{collection_name}"", {
+ vectors: {
+ my_vector: {
-from torch import Tensor, nn
+ hnsw_config: {
-from sentence_transformers.models import Transformer, Pooling
+ m: 32,
+ ef_construct: 123,
+ },
-from quaterion_models.encoders import Encoder
+ quantization_config: {
-from quaterion_models.types import TensorInterchange, CollateFnType
+ product: {
+ compression: ""x32"",
+ always_ram: true,
+ },
+ },
-class FAQEncoder(Encoder):
+ on_disk: true,
- def __init__(self, transformer, pooling):
+ },
- super().__init__()
+ },
- self.transformer = transformer
+ hnsw_config: {
- self.pooling = pooling
+ ef_construct: 123,
- self.encoder = nn.Sequential(self.transformer, self.pooling)
+ },
-
+ quantization_config: {
- @property
+ scalar: {
- def trainable(self) -> bool:
+ type: ""int8"",
- # Defines if we want to train encoder itself, or head layer only
+ quantile: 0.8,
- return False
+ always_ram: true,
-
+ },
- @property
+ },
- def embedding_size(self) -> int:
+});
- return self.transformer.get_word_embedding_dimension()
+```
-
- def forward(self, batch: TensorInterchange) -> Tensor:
- return self.encoder(batch)[""sentence_embedding""]
+```rust
-
+use std::collections::HashMap;
- def get_collate_fn(self) -> CollateFnType:
- return self.transformer.tokenize
-
+use qdrant_client::qdrant::{
- @staticmethod
+ quantization_config_diff::Quantization, vectors_config_diff::Config, HnswConfigDiffBuilder,
- def _transformer_path(path: str):
+ QuantizationType, ScalarQuantizationBuilder, UpdateCollectionBuilder, VectorParamsDiffBuilder,
- return os.path.join(path, ""transformer"")
+ VectorParamsDiffMap,
-
+};
- @staticmethod
- def _pooling_path(path: str):
- return os.path.join(path, ""pooling"")
+client
-
+ .update_collection(
- def save(self, output_path: str):
+ UpdateCollectionBuilder::new(""{collection_name}"")
- transformer_path = self._transformer_path(output_path)
+ .hnsw_config(HnswConfigDiffBuilder::default().ef_construct(123))
- os.makedirs(transformer_path, exist_ok=True)
+ .vectors_config(Config::ParamsMap(VectorParamsDiffMap {
- pooling_path = self._pooling_path(output_path)
+ map: HashMap::from([(
- os.makedirs(pooling_path, exist_ok=True)
+ (""my_vector"".into()),
- self.transformer.save(transformer_path)
+ VectorParamsDiffBuilder::default()
- self.pooling.save(pooling_path)
+ .hnsw_config(HnswConfigDiffBuilder::default().m(32).ef_construct(123))
-
+ .build(),
- @classmethod
+ )]),
- def load(cls, input_path: str) -> Encoder:
+ }))
- transformer = Transformer.load(cls._transformer_path(input_path))
+ .quantization_config(Quantization::Scalar(
- pooling = Pooling.load(cls._pooling_path(input_path))
+ ScalarQuantizationBuilder::default()
- return cls(transformer=transformer, pooling=pooling)
+                .r#type(QuantizationType::Int8.into())
+                .quantile(0.8)
+                .always_ram(true)
+                .build(),
+        )),
+    )
+    .await?;
```
-As you can notice, there are more methods implemented, then we've already discussed. Let's go
+```java
-through them now!
+import io.qdrant.client.grpc.Collections.HnswConfigDiff;
-- In `__init__` we register our pre-trained layers, similar as you do in [torch.nn.Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html) descendant.
+import io.qdrant.client.grpc.Collections.QuantizationConfigDiff;
+import io.qdrant.client.grpc.Collections.QuantizationType;
+import io.qdrant.client.grpc.Collections.ScalarQuantization;
-- `trainable` defines whether current `Encoder` layers should be updated during training or not. If `trainable=False`, then all layers will be frozen.
+import io.qdrant.client.grpc.Collections.UpdateCollection;
+import io.qdrant.client.grpc.Collections.VectorParamsDiff;
+import io.qdrant.client.grpc.Collections.VectorParamsDiffMap;
-- `embedding_size` is a size of encoder's output, it is required for proper `head` configuration.
+import io.qdrant.client.grpc.Collections.VectorsConfigDiff;
-- `get_collate_fn` is a tricky one. Here you should return a method which prepares a batch of raw
+client
-data into the input, suitable for the encoder. If `get_collate_fn` is not overridden, then the [default_collate](https://pytorch.org/docs/stable/data.html#torch.utils.data.default_collate) will be used.
+ .updateCollectionAsync(
+ UpdateCollection.newBuilder()
+ .setCollectionName(""{collection_name}"")
+ .setHnswConfig(HnswConfigDiff.newBuilder().setEfConstruct(123).build())
+ .setVectorsConfig(
-The remaining methods are considered self-describing.
+ VectorsConfigDiff.newBuilder()
+ .setParamsMap(
+ VectorParamsDiffMap.newBuilder()
-As our encoder is ready, we now are able to fill `configure_encoders`.
+ .putMap(
-Just insert the following code into `model.py`:
+ ""my_vector"",
+ VectorParamsDiff.newBuilder()
+ .setHnswConfig(
-```python
+ HnswConfigDiff.newBuilder()
-...
+                                            .setM(32)
-from sentence_transformers import SentenceTransformer
+ .setEfConstruct(123)
-from sentence_transformers.models import Transformer, Pooling
+ .build())
-from faq.encoder import FAQEncoder
+ .build())))
+ .setQuantizationConfig(
+ QuantizationConfigDiff.newBuilder()
-class FAQModel(TrainableModel):
+ .setScalar(
- ...
+ ScalarQuantization.newBuilder()
- def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]:
+ .setType(QuantizationType.Int8)
- pre_trained_model = SentenceTransformer(""all-MiniLM-L6-v2"")
+ .setQuantile(0.8f)
- transformer: Transformer = pre_trained_model[0]
+ .setAlwaysRam(true)
- pooling: Pooling = pre_trained_model[1]
+ .build()))
- encoder = FAQEncoder(transformer, pooling)
+ .build())
- return encoder
+ .get();
```
-### Data preparation
+```csharp
+using Qdrant.Client;
+using Qdrant.Client.Grpc;
-Okay, we have raw data and a trainable model. But we don't know yet how to feed this data to our model.
+var client = new QdrantClient(""localhost"", 6334);
-Currently, Quaterion takes two types of similarity representation - pairs and groups.
+await client.UpdateCollectionAsync(
+ collectionName: ""{collection_name}"",
+ hnswConfig: new HnswConfigDiff { EfConstruct = 123 },
-The groups format assumes that all objects split into groups of similar objects. All objects inside
+ vectorsConfig: new VectorParamsDiffMap
-one group are similar, and all other objects outside this group considered dissimilar to them.
+ {
+ Map =
+ {
-But in the case of pairs, we can only assume similarity between explicitly specified pairs of objects.
+ {
+ ""my_vector"",
+ new VectorParamsDiff
-We can apply any of the approaches with our data, but pairs one seems more intuitive.
+ {
+                    HnswConfig = new HnswConfigDiff { M = 32, EfConstruct = 123 }
+ }
-The format in which Similarity is represented determines which loss can be used.
+ }
-For example, _ContrastiveLoss_ and _MultipleNegativesRankingLoss_ works with pairs format.
+ }
+ },
+ quantizationConfig: new QuantizationConfigDiff
-[SimilarityPairSample](https://quaterion.qdrant.tech/quaterion.dataset.similarity_samples.html#quaterion.dataset.similarity_samples.SimilarityPairSample) could be used to represent pairs.
+ {
-Let's take a look at it:
+ Scalar = new ScalarQuantization
+ {
+ Type = QuantizationType.Int8,
-```python
+ Quantile = 0.8f,
-@dataclass
+ AlwaysRam = true
-class SimilarityPairSample:
+ }
- obj_a: Any
+ }
- obj_b: Any
+);
- score: float = 1.0
+```
- subgroup: int = 0
-```
+```go
+import (
-Here might be some questions: what `score` and `subgroup` are?
+ ""context""
-Well, `score` is a measure of expected samples similarity.
+ ""github.com/qdrant/go-client/qdrant""
-If you only need to specify if two samples are similar or not, you can use `1.0` and `0.0` respectively.
+)
-`subgroups` parameter is required for more granular description of what negative examples could be.
+client, err := qdrant.NewClient(&qdrant.Config{
-By default, all pairs belong the subgroup zero.
+ Host: ""localhost"",
-That means that we would need to specify all negative examples manually.
+ Port: 6334,
-But in most cases, we can avoid this by enabling different subgroups.
+})
-All objects from different subgroups will be considered as negative examples in loss, and thus it
-provides a way to set negative examples implicitly.
+client.UpdateCollection(context.Background(), &qdrant.UpdateCollection{
+ CollectionName: ""{collection_name}"",
+ VectorsConfig: qdrant.NewVectorsConfigDiffMap(
+ map[string]*qdrant.VectorParamsDiff{
-With this knowledge, we now can create our `Dataset` class in `dataset.py` to feed our model:
+ ""my_vector"": {
+ HnswConfig: &qdrant.HnswConfigDiff{
+				M:           qdrant.PtrOf(uint64(32)),
-```python
+ EfConstruct: qdrant.PtrOf(uint64(123)),
-import json
+ },
-from typing import List, Dict
+ },
+ }),
+ QuantizationConfig: qdrant.NewQuantizationDiffScalar(
-from torch.utils.data import Dataset
+ &qdrant.ScalarQuantization{
-from quaterion.dataset.similarity_samples import SimilarityPairSample
+ Type: qdrant.QuantizationType_Int8,
+ Quantile: qdrant.PtrOf(float32(0.8)),
+ AlwaysRam: qdrant.PtrOf(true),
+ }),
+})
-class FAQDataset(Dataset):
+```
- """"""Dataset class to process .jsonl files with FAQ from popular cloud providers.""""""
-
- def __init__(self, dataset_path):
+## Collection info
- self.dataset: List[Dict[str, str]] = self.read_dataset(dataset_path)
-
- def __getitem__(self, index) -> SimilarityPairSample:
+Qdrant allows determining the configuration parameters of an existing collection to better understand how the points are
- line = self.dataset[index]
+distributed and indexed.
- question = line[""question""]
- # All questions have a unique subgroup
- # Meaning that all other answers are considered negative pairs
+```http
- subgroup = hash(question)
+GET /collections/{collection_name}
- return SimilarityPairSample(
+```
- obj_a=question,
- obj_b=line[""answer""],
- score=1,
+```bash
- subgroup=subgroup
+curl -X GET http://localhost:6333/collections/{collection_name}
- )
+```
-
- def __len__(self):
- return len(self.dataset)
+```python
-
+client.get_collection(collection_name=""{collection_name}"")
- @staticmethod
+```
- def read_dataset(dataset_path) -> List[Dict[str, str]]:
- """"""Read jsonl-file into a memory.""""""
- with open(dataset_path, ""r"") as fd:
+```typescript
- return [json.loads(json_line) for json_line in fd]
+client.getCollection(""{collection_name}"");
```
-We assigned a unique subgroup for each question, so all other objects which have different question will be considered as negative examples.
+```rust
+client.collection_info(""{collection_name}"").await?;
+```
-### Evaluation Metric
+```java
-We still haven't added any metrics to the model. For this purpose Quaterion provides `configure_metrics`.
+client.getCollectionInfoAsync(""{collection_name}"").get();
-We just need to override it and attach interested metrics.
+```
-Quaterion has some popular retrieval metrics implemented - such as _precision @ k_ or _mean reciprocal rank_.
+```csharp
-They can be found in [quaterion.eval](https://quaterion.qdrant.tech/quaterion.eval.html) package.
+await client.GetCollectionInfoAsync(""{collection_name}"");
-But there are just a few metrics, it is assumed that desirable ones will be made by user or taken from another libraries.
+```
-You will probably need to inherit from `PairMetric` or `GroupMetric` to implement a new one.
+```go
-In `configure_metrics` we need to return a list of `AttachedMetric`.
+import ""context""
-They are just wrappers around metric instances and helps to log metrics more easily.
-Under the hood `logging` is handled by `pytorch-lightning`.
-You can configure it as you want - pass required parameters as keyword arguments to `AttachedMetric`.
+client.GetCollectionInfo(context.Background(), ""{collection_name}"")
-For additional info visit [logging documentation page](https://pytorch-lightning.readthedocs.io/en/stable/extensions/logging.html)
+```
-Let's add mentioned metrics for our `FAQModel`.
+
-Add this code to `model.py`:
+Expected result
-```python
+```json
-...
+{
-from quaterion.eval.pair import RetrievalPrecision, RetrievalReciprocalRank
+ ""result"": {
-from quaterion.eval.attached_metric import AttachedMetric
+ ""status"": ""green"",
+ ""optimizer_status"": ""ok"",
+ ""vectors_count"": 1068786,
+ ""indexed_vectors_count"": 1024232,
+ ""points_count"": 1068786,
-class FAQModel(TrainableModel):
+ ""segments_count"": 31,
- def __init__(self, lr=10e-5, *args, **kwargs):
+ ""config"": {
- self.lr = lr
+ ""params"": {
- super().__init__(*args, **kwargs)
+ ""vectors"": {
-
+ ""size"": 384,
- ...
+ ""distance"": ""Cosine""
- def configure_metrics(self):
+ },
- return [
+ ""shard_number"": 1,
- AttachedMetric(
+ ""replication_factor"": 1,
- ""RetrievalPrecision"",
+ ""write_consistency_factor"": 1,
- RetrievalPrecision(k=1),
+ ""on_disk_payload"": false
- prog_bar=True,
+ },
- on_epoch=True,
+ ""hnsw_config"": {
- ),
+ ""m"": 16,
- AttachedMetric(
+ ""ef_construct"": 100,
- ""RetrievalReciprocalRank"",
+ ""full_scan_threshold"": 10000,
- RetrievalReciprocalRank(),
+ ""max_indexing_threads"": 0
- prog_bar=True,
+ },
- on_epoch=True
+ ""optimizer_config"": {
- ),
+ ""deleted_threshold"": 0.2,
- ]
+ ""vacuum_min_vector_number"": 1000,
+        ""default_segment_number"": 0,
+        ""max_segment_size"": null,
+        ""memmap_threshold"": null,
+        ""indexing_threshold"": 20000,
+        ""flush_interval_sec"": 5,
+        ""max_optimization_threads"": 1
+      },
+      ""wal_config"": {
+        ""wal_capacity_mb"": 32,
+        ""wal_segments_ahead"": 0
+      }
+    },
+    ""payload_schema"": {}
+  },
+  ""status"": ""ok"",
+  ""time"": 0.00010143
+}
```
-### Fast training with Cache
+
-Quaterion has one more cherry on top of the cake when it comes to non-trainable encoders.
+If you insert the vectors into the collection, the `status` field may become
-If encoders are frozen, they are deterministic and emit the exact embeddings for the same input data on each epoch.
+`yellow` whilst it is optimizing. It will become `green` once all the points are
-It provides a way to avoid repeated calculations and reduce training time.
+successfully processed.
-For this purpose Quaterion has a cache functionality.
+The following color statuses are possible:
-Before training starts, the cache runs one epoch to pre-calculate all embeddings with frozen encoders and then store them on a device you chose (currently CPU or GPU).
+- 🟢 `green`: collection is ready
-Everything you need is to define which encoders are trainable or not and set cache settings.
+- 🟡 `yellow`: collection is optimizing
-And that's it: everything else Quaterion will handle for you.
+- ⚫ `grey`: collection is pending optimization ([help](#grey-collection-status))
+- 🔴 `red`: an error occurred which the engine could not recover from
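+
+For example, a minimal Python sketch (assuming a running local instance and an existing collection) that waits until a collection finishes optimizing and becomes green:
+
+```python
+import time
+
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Poll the collection info until background optimizations finish and the collection is ready
+while client.get_collection(collection_name=""{collection_name}"").status != models.CollectionStatus.GREEN:
+    time.sleep(1)
+```
+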
-To configure cache you need to override `configure_cache` method in `TrainableModel`.
-This method should return an instance of [CacheConfig](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheConfig).
+### Grey collection status
-Let's add cache to our model:
+_Available as of v1.9.0_
-```python
-...
-from quaterion.train.cache import CacheConfig, CacheType
+A collection may have the grey ⚫ status or show ""optimizations pending,
-...
+awaiting update operation"" as optimization status. This state is normally caused
-class FAQModel(TrainableModel):
+by restarting a Qdrant instance while optimizations were ongoing.
- ...
- def configure_caches(self) -> Optional[CacheConfig]:
- return CacheConfig(CacheType.AUTO)
+It means the collection has optimizations pending, but they are paused. You must
+
+send any update operation to trigger the optimizations again.
- ...
-```
+For example:
-[CacheType](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheType) determines how the cache will be stored in memory.
+```http
+
+PATCH /collections/{collection_name}
+{
+ ""optimizers_config"": {}
+}
-### Training
+```
-Now we need to combine all our code together in `train.py` and launch a training process.
+```bash
+
+curl -X PATCH http://localhost:6333/collections/{collection_name} \
+
+ -H 'Content-Type: application/json' \
+
+ --data-raw '{
+
+ ""optimizers_config"": {}
+
+ }'
+
+```
```python
-import torch
+client.update_collection(
-import pytorch_lightning as pl
+ collection_name=""{collection_name}"",
+ optimizer_config=models.OptimizersConfigDiff(),
+)
-from quaterion import Quaterion
+```
-from quaterion.dataset import PairsSimilarityDataLoader
+```typescript
-from faq.dataset import FAQDataset
+client.updateCollection(""{collection_name}"", {
+ optimizers_config: {},
+});
+```
-def train(model, train_dataset_path, val_dataset_path, params):
- use_gpu = params.get(""cuda"", torch.cuda.is_available())
+```rust
-
+use qdrant_client::qdrant::{OptimizersConfigDiffBuilder, UpdateCollectionBuilder};
- trainer = pl.Trainer(
- min_epochs=params.get(""min_epochs"", 1),
- max_epochs=params.get(""max_epochs"", 500),
+client
- auto_select_gpus=use_gpu,
+ .update_collection(
- log_every_n_steps=params.get(""log_every_n_steps"", 1),
+ UpdateCollectionBuilder::new(""{collection_name}"")
- gpus=int(use_gpu),
+ .optimizers_config(OptimizersConfigDiffBuilder::default()),
)
- train_dataset = FAQDataset(train_dataset_path)
+ .await?;
- val_dataset = FAQDataset(val_dataset_path)
+```
- train_dataloader = PairsSimilarityDataLoader(
- train_dataset, batch_size=1024
- )
+```java
- val_dataloader = PairsSimilarityDataLoader(
+import io.qdrant.client.grpc.Collections.OptimizersConfigDiff;
- val_dataset, batch_size=1024
+import io.qdrant.client.grpc.Collections.UpdateCollection;
- )
-
- Quaterion.fit(model, trainer, train_dataloader, val_dataloader)
+client.updateCollectionAsync(
-
+ UpdateCollection.newBuilder()
-if __name__ == ""__main__"":
+ .setCollectionName(""{collection_name}"")
- import os
+ .setOptimizersConfig(
- from pytorch_lightning import seed_everything
+ OptimizersConfigDiff.getDefaultInstance())
- from faq.model import FAQModel
+ .build());
- from faq.config import DATA_DIR, ROOT_DIR
+```
- seed_everything(42, workers=True)
- faq_model = FAQModel()
- train_path = os.path.join(
+```csharp
- DATA_DIR,
+using Qdrant.Client;
- ""train_cloud_faq_dataset.jsonl""
+using Qdrant.Client.Grpc;
- )
- val_path = os.path.join(
- DATA_DIR,
+var client = new QdrantClient(""localhost"", 6334);
- ""val_cloud_faq_dataset.jsonl""
- )
- train(faq_model, train_path, val_path, {})
+await client.UpdateCollectionAsync(
+
+ collectionName: ""{collection_name}"",
+
+ optimizersConfig: new OptimizersConfigDiff { }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.UpdateCollection(context.Background(), &qdrant.UpdateCollection{
+
+ CollectionName: ""{collection_name}"",
+
+ OptimizersConfig: &qdrant.OptimizersConfigDiff{},
+
+})
+
+```
+
+
+
+### Approximate point and vector counts
+
+
+
+You may be interested in the count attributes:
+
+
+
+- `points_count` - total number of objects (vectors and their payloads) stored in the collection
+
+- `vectors_count` - total number of vectors in a collection, useful if you have multiple vectors per point
+
+- `indexed_vectors_count` - total number of vectors stored in the HNSW or sparse index. Qdrant does not store all vectors in the index; vectors are added to the index only once an index segment can be created for the given configuration.
+
+
+
+The above counts are not exact, but should be considered approximate. Depending
+
+on how you use Qdrant these may give very different numbers than what you may
+
+expect. It's therefore important **not** to rely on them.
+
+
+
+More specifically, these numbers represent the count of points and vectors in
+
+Qdrant's internal storage. Internally, Qdrant may temporarily duplicate points
+
+as part of automatic optimizations. It may keep changed or deleted points for a
+
+bit. And it may delay indexing of new points. All of that is for optimization
+
+reasons.
+
+
+
+Updates you do are therefore not directly reflected in these numbers. If you see
+
+a wildly different count of points, it will likely resolve itself once a new
+
+round of automatic optimizations has completed.
+
+
+
+To clarify: these numbers don't represent the exact number of points or vectors
+
+you have inserted, nor do they represent the exact number of distinguishable
+
+points or vectors you can query. If you want to know exact counts, refer to the
+
+[count API](../points/#counting-points).
+
+
+
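+For illustration, a short Python sketch (assuming a running instance and an existing collection) compares the approximate counter from the collection info with the exact value returned by the count API:
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+# Approximate counter reported with the collection info
+
+print(client.get_collection(collection_name=""{collection_name}"").points_count)
+
+
+
+# Exact number of points, computed by the count API
+
+print(client.count(collection_name=""{collection_name}"", exact=True).count)
+
+```
+
+
+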
+_Note: these numbers may be removed in a future version of Qdrant._
+
+
+
+### Indexing vectors in HNSW
+
+
+
+In some cases, you might be surprised that the value of `indexed_vectors_count` is lower than `vectors_count`. This is intended behaviour and
+
+depends on the [optimizer configuration](../optimizer/). A new index segment is built if the size of non-indexed vectors is higher than the
+
+value of `indexing_threshold` (in kB). If your collection is very small or the dimensionality of the vectors is low, there might be no HNSW segment
+
+created and `indexed_vectors_count` might be equal to `0`.
+
+
+
+It is possible to reduce the `indexing_threshold` for an existing collection by [updating collection parameters](#update-collection-parameters).
+
+
+
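+For example, a minimal Python sketch (the threshold value, given in kilobytes, is only an illustration) that lowers the threshold for an existing collection:
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.update_collection(
+
+    collection_name=""{collection_name}"",
+
+    optimizer_config=models.OptimizersConfigDiff(indexing_threshold=5000),
+
+)
+
+```
+
+
+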
+## Collection aliases
+
+
+
+In a production environment, it is sometimes necessary to switch different versions of vectors seamlessly.
+
+For example, when upgrading to a new version of the neural network.
+
+
+
+In these situations, there is no way to stop the service and rebuild the collection with new vectors.
+
+Aliases are additional names for existing collections.
+
+All queries to a collection can be made identically using an alias instead of the collection name.
+
+
+
+Thus, it is possible to build a second collection in the background and then switch the alias from the old to the new collection.
+
+Since all changes of aliases happen atomically, no concurrent requests will be affected during the switch.
+
+
+
+### Create alias
+
+
+
+```http
+
+POST /collections/aliases
+
+{
+
+ ""actions"": [
+
+ {
+
+ ""create_alias"": {
+
+ ""collection_name"": ""example_collection"",
+
+ ""alias_name"": ""production_collection""
+
+ }
+
+ }
+
+ ]
+
+}
+
+```
+
+
+
+```bash
+
+curl -X POST http://localhost:6333/collections/aliases \
+
+ -H 'Content-Type: application/json' \
+
+ --data-raw '{
+
+ ""actions"": [
+
+ {
+
+ ""create_alias"": {
+
+ ""collection_name"": ""example_collection"",
+
+ ""alias_name"": ""production_collection""
+
+ }
+
+ }
+
+ ]
+
+}'
+
+```
+
+
+
+```python
+
+client.update_collection_aliases(
+
+ change_aliases_operations=[
+
+ models.CreateAliasOperation(
+
+ create_alias=models.CreateAlias(
+
+ collection_name=""example_collection"", alias_name=""production_collection""
+
+ )
+
+ )
+
+ ]
+
+)
+
+```
+
+
+
+```typescript
+
+client.updateCollectionAliases({
+
+ actions: [
+
+ {
+
+ create_alias: {
+
+ collection_name: ""example_collection"",
+
+ alias_name: ""production_collection"",
+
+ },
+
+ },
+
+ ],
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::CreateAliasBuilder;
+
+
+
+client
+
+ .create_alias(CreateAliasBuilder::new(
+
+ ""example_collection"",
+
+ ""production_collection"",
+
+ ))
+
+ .await?;
+
+```
+
+
+
+```java
+
+client.createAliasAsync(""production_collection"", ""example_collection"").get();
+
+```
+
+
+
+```csharp
+
+await client.CreateAliasAsync(aliasName: ""production_collection"", collectionName: ""example_collection"");
+
+```
+
+
+
+```go
+
+import ""context""
+
+
+
+client.CreateAlias(context.Background(), ""production_collection"", ""example_collection"")
+
+```
+
+
+
+### Remove alias
+
+
+
+```bash
+
+curl -X POST http://localhost:6333/collections/aliases \
+
+ -H 'Content-Type: application/json' \
+
+ --data-raw '{
+
+ ""actions"": [
+
+ {
+
+ ""delete_alias"": {
+
+ ""alias_name"": ""production_collection""
+
+ }
+
+ }
+
+ ]
+
+}'
+
+```
+
+
+
+```http
+
+POST /collections/aliases
+
+{
+
+ ""actions"": [
+
+ {
+
+ ""delete_alias"": {
+
+ ""alias_name"": ""production_collection""
+
+ }
+
+ }
+
+ ]
+
+}
+
+```
+
+
+
+```python
+
+client.update_collection_aliases(
+
+ change_aliases_operations=[
+
+ models.DeleteAliasOperation(
+
+ delete_alias=models.DeleteAlias(alias_name=""production_collection"")
+
+ ),
+
+ ]
+
+)
+
+```
+
+
+
+```typescript
+
+client.updateCollectionAliases({
+
+ actions: [
+
+ {
+
+ delete_alias: {
+
+ alias_name: ""production_collection"",
+
+ },
+
+ },
+
+ ],
+
+});
+
+```
+
+
+
+```rust
+
+client.delete_alias(""production_collection"").await?;
+
+```
+
+
+
+```java
+
+client.deleteAliasAsync(""production_collection"").get();
+
+```
+
+
+
+```csharp
+
+await client.DeleteAliasAsync(""production_collection"");
+
+```
+
+
+
+```go
+
+import ""context""
+
+
+
+client.DeleteAlias(context.Background(), ""production_collection"")
+
+```
+
+
+
+### Switch collection
+
+
+
+Multiple alias actions are performed atomically.
+
+For example, you can switch the underlying collection with the following command:
+
+
+
+```http
+
+POST /collections/aliases
+
+{
+
+ ""actions"": [
+
+ {
+
+ ""delete_alias"": {
+
+ ""alias_name"": ""production_collection""
+
+ }
+
+ },
+
+ {
+
+ ""create_alias"": {
+
+ ""collection_name"": ""example_collection"",
+
+ ""alias_name"": ""production_collection""
+
+ }
+
+ }
+
+ ]
+
+}
+
+```
+
+
+
+```bash
+
+curl -X POST http://localhost:6333/collections/aliases \
+
+ -H 'Content-Type: application/json' \
+
+ --data-raw '{
+
+ ""actions"": [
+
+ {
+
+ ""delete_alias"": {
+
+ ""alias_name"": ""production_collection""
+
+ }
+
+ },
+
+ {
+
+ ""create_alias"": {
+
+ ""collection_name"": ""example_collection"",
+
+ ""alias_name"": ""production_collection""
+
+ }
+
+ }
+
+ ]
+
+}'
+
+```
+
+
+
+```python
+
+client.update_collection_aliases(
+
+ change_aliases_operations=[
+
+ models.DeleteAliasOperation(
+
+ delete_alias=models.DeleteAlias(alias_name=""production_collection"")
+
+ ),
+
+ models.CreateAliasOperation(
+
+ create_alias=models.CreateAlias(
+
+ collection_name=""example_collection"", alias_name=""production_collection""
+
+ )
+
+ ),
+
+ ]
+
+)
+
+```
+
+
+
+```typescript
+
+client.updateCollectionAliases({
+
+ actions: [
+
+ {
+
+ delete_alias: {
+
+ alias_name: ""production_collection"",
+
+ },
+
+ },
+
+ {
+
+ create_alias: {
+
+ collection_name: ""example_collection"",
+
+ alias_name: ""production_collection"",
+
+ },
+
+ },
+
+ ],
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::CreateAliasBuilder;
+
+
+
+client.delete_alias(""production_collection"").await?;
+
+client
+
+ .create_alias(CreateAliasBuilder::new(
+
+ ""example_collection"",
+
+ ""production_collection"",
+
+ ))
+
+ .await?;
+
+```
+
+
+
+```java
+
+client.deleteAliasAsync(""production_collection"").get();
+
+client.createAliasAsync(""production_collection"", ""example_collection"").get();
+
+```
+
+
+
+```csharp
+
+await client.DeleteAliasAsync(""production_collection"");
+
+await client.CreateAliasAsync(aliasName: ""production_collection"", collectionName: ""example_collection"");
+
+```
+
+
+
+```go
+
+import ""context""
+
+
+
+client.DeleteAlias(context.Background(), ""production_collection"")
+
+client.CreateAlias(context.Background(), ""production_collection"", ""example_collection"")
+
+```
+
+
+
+### List collection aliases
+
+
+
+```http
+
+GET /collections/{collection_name}/aliases
+
+```
+
+
+
+```bash
+
+curl -X GET http://localhost:6333/collections/{collection_name}/aliases
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.get_collection_aliases(collection_name=""{collection_name}"")
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.getCollectionAliases(""{collection_name}"");
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.list_collection_aliases(""{collection_name}"").await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client.listCollectionAliasesAsync(""{collection_name}"").get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.ListCollectionAliasesAsync(""{collection_name}"");
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.ListCollectionAliases(context.Background(), ""{collection_name}"")
+
+```
+
+
+
+### List all aliases
+
+
+
+```http
+
+GET /aliases
+
+```
+
+
+
+```bash
+
+curl -X GET http://localhost:6333/aliases
+
+```
+
+
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.get_aliases()
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.getAliases();
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.list_aliases().await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client.listAliasesAsync().get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.ListAliasesAsync();
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.ListAliases(context.Background())
+
+```
+
+
+
+### List all collections
+
+
+
+```http
+
+GET /collections
+
+```
+
+
+
+```bash
+
+curl -X GET http://localhost:6333/collections
+
+```
+
+
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.get_collections()
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.getCollections();
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.list_collections().await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client.listCollectionsAsync().get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.ListCollectionsAsync();
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.ListCollections(context.Background())
+
+```
+",documentation/concepts/collections.md
+"---
+
+title: Indexing
+
+weight: 90
+
+aliases:
+
+ - ../indexing
+
+---
+
+
+
+# Indexing
+
+
+
+A key feature of Qdrant is the effective combination of vector and traditional indexes. This is essential because a vector index alone is not enough for vector search to work effectively with filters. In simpler terms, a vector index speeds up vector search, and payload indexes speed up filtering.
+
+
+
+The indexes in the segments exist independently, but the parameters of the indexes themselves are configured for the whole collection.
+
+
+
+Not all segments automatically have indexes.
+
+Their necessity is determined by the [optimizer](../optimizer/) settings and depends, as a rule, on the number of stored points.
+
+
+
+## Payload Index
+
+
+
+A payload index in Qdrant is similar to an index in conventional document-oriented databases.
+
+This index is built for a specific field and type, and is used to quickly retrieve points by the corresponding filtering condition.
+
+
+
+The index is also used to accurately estimate the filter cardinality, which helps the [query planning](../search/#query-planning) choose a search strategy.
+
+
+
+Creating an index requires additional computational resources and memory, so choosing the fields to be indexed is essential. Qdrant does not make this choice for you but leaves it to the user.
+
+
+
+To mark a field as indexable, you can use the following:
+
+
+
+```http
+
+PUT /collections/{collection_name}/index
+
+{
+
+ ""field_name"": ""name_of_the_field_to_index"",
+
+ ""field_schema"": ""keyword""
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.create_payload_index(
+
+ collection_name=""{collection_name}"",
+
+ field_name=""name_of_the_field_to_index"",
+
+ field_schema=""keyword"",
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.createPayloadIndex(""{collection_name}"", {
+
+ field_name: ""name_of_the_field_to_index"",
+
+ field_schema: ""keyword"",
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{CreateFieldIndexCollectionBuilder, FieldType};
+
+
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client
+
+ .create_field_index(CreateFieldIndexCollectionBuilder::new(
+
+ ""{collection_name}"",
+
+ ""name_of_the_field_to_index"",
+
+ FieldType::Keyword,
+
+ ))
+
+ .await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Collections.PayloadSchemaType;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .createPayloadIndexAsync(
+
+ ""{collection_name}"",
+
+ ""name_of_the_field_to_index"",
+
+ PayloadSchemaType.Keyword,
+
+ null,
+
+ null,
+
+ null,
+
+ null)
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.CreatePayloadIndexAsync(collectionName: ""{collection_name}"", fieldName: ""name_of_the_field_to_index"");
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{
+
+ CollectionName: ""{collection_name}"",
+
+ FieldName: ""name_of_the_field_to_index"",
+
+ FieldType: qdrant.FieldType_FieldTypeKeyword.Enum(),
+
+})
+
+```
+
+
+
+You can use dot notation to specify a nested field for indexing, similar to specifying [nested filters](../filtering/#nested-key).
+
+
+
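+For example, a hedged Python sketch (the nested field name `country.name` is only a placeholder) that indexes a nested keyword field:
+
+
+
+```python
+
+client.create_payload_index(
+
+    collection_name=""{collection_name}"",
+
+    field_name=""country.name"",  # nested field addressed with dot notation
+
+    field_schema=""keyword"",
+
+)
+
+```
+
+
+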
+Available field types are:
+
+
+
+* `keyword` - for [keyword](../payload/#keyword) payload, affects [Match](../filtering/#match) filtering conditions.
+
+* `integer` - for [integer](../payload/#integer) payload, affects [Match](../filtering/#match) and [Range](../filtering/#range) filtering conditions.
+
+* `float` - for [float](../payload/#float) payload, affects [Range](../filtering/#range) filtering conditions.
+
+* `bool` - for [bool](../payload/#bool) payload, affects [Match](../filtering/#match) filtering conditions (available as of v1.4.0).
+
+* `geo` - for [geo](../payload/#geo) payload, affects [Geo Bounding Box](../filtering/#geo-bounding-box) and [Geo Radius](../filtering/#geo-radius) filtering conditions.
+
+* `datetime` - for [datetime](../payload/#datetime) payload, affects [Range](../filtering/#range) filtering conditions (available as of v1.8.0).
+
+* `text` - a special kind of index, available for [keyword](../payload/#keyword) / string payloads, affects [Full Text search](../filtering/#full-text-match) filtering conditions.
+
+* `uuid` - a special type of index, similar to `keyword`, but optimized for [UUID values](../payload/#uuid).
+
+Affects [Match](../filtering/#match) filtering conditions. (available as of v1.11.0)
+
+
+
+A payload index may occupy additional memory, so it is recommended to create indexes only for the fields that are used in filtering conditions.
+
+If you need to filter by many fields and memory limits do not allow indexing all of them, it is recommended to choose the field that narrows the search results the most.
+
+As a rule, the more distinct values a payload field has, the more efficiently the index will be used.
+
+
+
+### Full-text index
+
+
+
+*Available as of v0.10.0*
+
+
+
+Qdrant supports full-text search for string payload.
+
+A full-text index allows you to filter points by the presence of a word or a phrase in the payload field.
+
+
+
+Full-text index configuration is a bit more complex than that of other indexes, as you can specify the tokenization parameters.
+
+Tokenization is the process of splitting a string into tokens, which are then indexed in the inverted index.
+
+
+
+To create a full-text index, you can use the following:
+
+
+
+```http
+
+PUT /collections/{collection_name}/index
+
+{
+
+ ""field_name"": ""name_of_the_field_to_index"",
+
+ ""field_schema"": {
+
+ ""type"": ""text"",
+
+ ""tokenizer"": ""word"",
+
+ ""min_token_len"": 2,
+
+ ""max_token_len"": 20,
+
+ ""lowercase"": true
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.create_payload_index(
+
+ collection_name=""{collection_name}"",
+
+ field_name=""name_of_the_field_to_index"",
+
+ field_schema=models.TextIndexParams(
+
+ type=""text"",
+
+ tokenizer=models.TokenizerType.WORD,
+
+ min_token_len=2,
+
+ max_token_len=15,
+
+ lowercase=True,
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient, Schemas } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.createPayloadIndex(""{collection_name}"", {
+
+ field_name: ""name_of_the_field_to_index"",
+
+ field_schema: {
+
+ type: ""text"",
+
+ tokenizer: ""word"",
+
+ min_token_len: 2,
+
+ max_token_len: 15,
+
+ lowercase: true,
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{
+
+ payload_index_params::IndexParams, CreateFieldIndexCollectionBuilder, FieldType,
+
+ PayloadIndexParams, TextIndexParams, TokenizerType,
+
+};
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client
+
+ .create_field_index(
+
+ CreateFieldIndexCollectionBuilder::new(
+
+ ""{collection_name}"",
+
+ ""name_of_the_field_to_index"",
+
+ FieldType::Text,
+
+ )
+
+ .field_index_params(PayloadIndexParams {
+
+ index_params: Some(IndexParams::TextIndexParams(TextIndexParams {
+
+ tokenizer: TokenizerType::Word as i32,
+
+ min_token_len: Some(2),
+
+ max_token_len: Some(10),
+
+ lowercase: Some(true),
+
+ })),
+
+ }),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Collections.PayloadIndexParams;
+
+import io.qdrant.client.grpc.Collections.PayloadSchemaType;
+
+import io.qdrant.client.grpc.Collections.TextIndexParams;
+
+import io.qdrant.client.grpc.Collections.TokenizerType;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .createPayloadIndexAsync(
+
+ ""{collection_name}"",
+
+ ""name_of_the_field_to_index"",
+
+ PayloadSchemaType.Text,
+
+ PayloadIndexParams.newBuilder()
+
+ .setTextIndexParams(
+
+ TextIndexParams.newBuilder()
+
+ .setTokenizer(TokenizerType.Word)
+
+ .setMinTokenLen(2)
+
+ .setMaxTokenLen(10)
+
+ .setLowercase(true)
+
+ .build())
+
+ .build(),
+
+ null,
+
+ null,
+
+ null)
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.CreatePayloadIndexAsync(
+
+ collectionName: ""{collection_name}"",
+
+ fieldName: ""name_of_the_field_to_index"",
+
+ schemaType: PayloadSchemaType.Text,
+
+ indexParams: new PayloadIndexParams
+
+ {
+
+ TextIndexParams = new TextIndexParams
+
+ {
+
+ Tokenizer = TokenizerType.Word,
+
+ MinTokenLen = 2,
+
+ MaxTokenLen = 10,
+
+ Lowercase = true
+
+ }
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{
+
+ CollectionName: ""{collection_name}"",
+
+ FieldName: ""name_of_the_field_to_index"",
+
+ FieldType: qdrant.FieldType_FieldTypeText.Enum(),
+
+ FieldIndexParams: qdrant.NewPayloadIndexParamsText(
+
+ &qdrant.TextIndexParams{
+
+ Tokenizer: qdrant.TokenizerType_Whitespace,
+
+ MinTokenLen: qdrant.PtrOf(uint64(2)),
+
+ MaxTokenLen: qdrant.PtrOf(uint64(10)),
+
+ Lowercase: qdrant.PtrOf(true),
+
+ }),
+
+})
+
+```
+
+
+
+Available tokenizers are:
+
+
+
+* `word` - splits the string into words, separated by spaces, punctuation marks, and special characters.
+
+* `whitespace` - splits the string into words, separated by spaces.
+
+* `prefix` - splits the string into words, separated by spaces, punctuation marks, and special characters, and then creates a prefix index for each word. For example: `hello` will be indexed as `h`, `he`, `hel`, `hell`, `hello`.
+
+* `multilingual` - a special type of tokenizer based on the [charabia](https://github.com/meilisearch/charabia) package. It allows proper tokenization and lemmatization for multiple languages, including those with non-Latin alphabets and non-space delimiters. See the [charabia documentation](https://github.com/meilisearch/charabia) for the full list of supported languages and normalization options. In the default build configuration, Qdrant does not include support for all languages, to keep the size of the resulting binary down. Chinese, Japanese and Korean languages are not enabled by default, but can be enabled by building Qdrant from source with the `--features multiling-chinese,multiling-japanese,multiling-korean` flags.
+
+
+
+See [Full Text match](../filtering/#full-text-match) for examples of querying with full-text index.
+
+
+
+### Parameterized index
+
+
+
+*Available as of v1.8.0*
+
+
+
+We've added a parameterized variant to the `integer` index, which allows
+
+you to fine-tune indexing and search performance.
+
+
+
+Both the regular and parameterized `integer` indexes use the following flags:
+
+
+
+- `lookup`: enables support for direct lookup using
+
+ [Match](/documentation/concepts/filtering/#match) filters.
+
+- `range`: enables support for
+
+ [Range](/documentation/concepts/filtering/#range) filters.
+
+
+
+The regular `integer` index assumes both `lookup` and `range` are `true`. In
+
+contrast, to configure a parameterized index, you would set only one of these
+
+flags to `true`:
+
+
+
+| `lookup` | `range` | Result |
+
+|----------|---------|-----------------------------|
+
+| `true` | `true` | Regular integer index |
+
+| `true` | `false` | Parameterized integer index |
+
+| `false` | `true` | Parameterized integer index |
+
+| `false` | `false` | No integer index |
+
+
+
+The parameterized index can enhance performance in collections with millions
+
+of points. We encourage you to try it out. If it does not enhance performance
+
+in your use case, you can always restore the regular `integer` index.
+
+
+
+Note: If you set `""lookup"": true` with a range filter, that may lead to
+
+significant performance issues.
+
+
+
+For example, the following code sets up a parameterized integer index which
+
+supports only range filters:
+
+
+
+```http
+
+PUT /collections/{collection_name}/index
+
+{
+
+ ""field_name"": ""name_of_the_field_to_index"",
+
+ ""field_schema"": {
+
+ ""type"": ""integer"",
+
+ ""lookup"": false,
+
+ ""range"": true
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.create_payload_index(
+
+ collection_name=""{collection_name}"",
+
+ field_name=""name_of_the_field_to_index"",
+
+ field_schema=models.IntegerIndexParams(
+
+ type=models.IntegerIndexType.INTEGER,
+
+ lookup=False,
+
+ range=True,
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient, Schemas } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.createPayloadIndex(""{collection_name}"", {
+
+ field_name: ""name_of_the_field_to_index"",
+
+ field_schema: {
+
+ type: ""integer"",
+
+ lookup: false,
+
+ range: true,
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{
+
+ payload_index_params::IndexParams, CreateFieldIndexCollectionBuilder, FieldType,
+
+ IntegerIndexParams, PayloadIndexParams,
+
+};
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client
+
+ .create_field_index(
+
+ CreateFieldIndexCollectionBuilder::new(
+
+ ""{collection_name}"",
+
+ ""name_of_the_field_to_index"",
+
+ FieldType::Integer,
+
+ )
+
+ .field_index_params(PayloadIndexParams {
+
+ index_params: Some(IndexParams::IntegerIndexParams(IntegerIndexParams {
+
+ lookup: false,
+
+ range: true,
+
+ })),
+
+ }),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Collections.IntegerIndexParams;
+
+import io.qdrant.client.grpc.Collections.PayloadIndexParams;
+
+import io.qdrant.client.grpc.Collections.PayloadSchemaType;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .createPayloadIndexAsync(
+
+ ""{collection_name}"",
+
+ ""name_of_the_field_to_index"",
+
+ PayloadSchemaType.Integer,
+
+ PayloadIndexParams.newBuilder()
+
+ .setIntegerIndexParams(
+
+ IntegerIndexParams.newBuilder().setLookup(false).setRange(true).build())
+
+ .build(),
+
+ null,
+
+ null,
+
+ null)
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.CreatePayloadIndexAsync(
+
+ collectionName: ""{collection_name}"",
+
+ fieldName: ""name_of_the_field_to_index"",
+
+ schemaType: PayloadSchemaType.Integer,
+
+ indexParams: new PayloadIndexParams
+
+ {
+
+ IntegerIndexParams = new()
+
+ {
+
+ Lookup = false,
+
+ Range = true
+
+ }
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{
+
+ CollectionName: ""{collection_name}"",
+
+ FieldName: ""name_of_the_field_to_index"",
+
+ FieldType: qdrant.FieldType_FieldTypeInteger.Enum(),
+
+ FieldIndexParams: qdrant.NewPayloadIndexParamsInt(
+
+ &qdrant.IntegerIndexParams{
+
+ Lookup: false,
+
+ Range: true,
+
+ }),
+
+})
+
+```
+
+
+
+### On-disk payload index
+
+
+
+*Available as of v1.11.0*
+
+
+
+By default, all payload-related structures are stored in memory. In this way, the vector index can quickly access payload values during search.
+
+As latency in this case is critical, it is recommended to keep hot payload indexes in memory.
+
+
+
+There are, however, cases when payload indexes are too large or rarely used. In those cases, it is possible to store payload indexes on disk.
+
+
+
+
+
+
+
+To configure on-disk payload index, you can use the following index parameters:
+
+
+
+```http
+
+PUT /collections/{collection_name}/index
+
+{
+
+ ""field_name"": ""payload_field_name"",
+
+ ""field_schema"": {
+
+ ""type"": ""keyword"",
+
+ ""on_disk"": true
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+client.create_payload_index(
+
+ collection_name=""{collection_name}"",
+
+ field_name=""payload_field_name"",
+
+ field_schema=models.KeywordIndexParams(
+
+ type=""keyword"",
+
+ on_disk=True,
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.createPayloadIndex(""{collection_name}"", {
+
+ field_name: ""payload_field_name"",
+
+ field_schema: {
+
+ type: ""keyword"",
+
+ on_disk: true
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{
+
+ CreateFieldIndexCollectionBuilder,
+
+ KeywordIndexParamsBuilder,
+
+ FieldType
+
+};
+
+use qdrant_client::{Qdrant, QdrantError};
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.create_field_index(
+
+ CreateFieldIndexCollectionBuilder::new(
+
+ ""{collection_name}"",
+
+ ""payload_field_name"",
+
+ FieldType::Keyword,
+
+ )
+
+ .field_index_params(
+
+ KeywordIndexParamsBuilder::default()
+
+ .on_disk(true),
+
+ ),
+
+);
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Collections.PayloadIndexParams;
+
+import io.qdrant.client.grpc.Collections.PayloadSchemaType;
+
+import io.qdrant.client.grpc.Collections.KeywordIndexParams;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .createPayloadIndexAsync(
+
+ ""{collection_name}"",
+
+ ""payload_field_name"",
+
+ PayloadSchemaType.Keyword,
+
+ PayloadIndexParams.newBuilder()
+
+ .setKeywordIndexParams(
+
+ KeywordIndexParams.newBuilder()
+
+ .setOnDisk(true)
+
+ .build())
+
+ .build(),
+
+ null,
+
+ null,
+
+ null)
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.CreatePayloadIndexAsync(
+
+ collectionName: ""{collection_name}"",
+
+ fieldName: ""payload_field_name"",
+
+ schemaType: PayloadSchemaType.Keyword,
+
+ indexParams: new PayloadIndexParams
+
+ {
+
+ KeywordIndexParams = new KeywordIndexParams
+
+ {
+
+ OnDisk = true
+
+ }
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{
+
+ CollectionName: ""{collection_name}"",
+
+ FieldName: ""name_of_the_field_to_index"",
+
+ FieldType: qdrant.FieldType_FieldTypeKeyword.Enum(),
+
+ FieldIndexParams: qdrant.NewPayloadIndexParamsKeyword(
+
+ &qdrant.KeywordIndexParams{
+
+ OnDisk: qdrant.PtrOf(true),
+
+ }),
+
+})
+
+```
+
+
+
+The on-disk payload index is supported for the following types:
+
+
+
+* `keyword`
+
+* `integer`
+
+* `float`
+
+* `datetime`
+
+* `uuid`
+
+
+
+The list will be extended in future versions.
+
+
+
+### Tenant Index
+
+
+
+*Available as of v1.11.0*
+
+
+
+Many vector search use-cases require multitenancy. In a multi-tenant scenario the collection is expected to contain multiple subsets of data, where each subset belongs to a different tenant.
+
+
+
+Qdrant supports efficient multi-tenant search by enabling a [special configuration](../guides/multiple-partitions/) of the vector index, which disables global search and only builds sub-indexes for each tenant.
+
+
+
+
+
+
+
+However, knowing that the collection contains multiple tenants unlocks more opportunities for optimization.
+
+To optimize storage in Qdrant further, you can enable tenant indexing for payload fields.
+
+
+
+This option will tell Qdrant which fields are used for tenant identification and will allow Qdrant to structure storage for faster search of tenant-specific data.
+
+One example of such optimization is localizing tenant-specific data closer on disk, which will reduce the number of disk reads during search.
+
+
+
+To enable tenant index for a field, you can use the following index parameters:
+
+
+
+```http
+
+PUT /collections/{collection_name}/index
+
+{
+
+ ""field_name"": ""payload_field_name"",
+
+ ""field_schema"": {
+
+ ""type"": ""keyword"",
+
+ ""is_tenant"": true
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+client.create_payload_index(
+
+ collection_name=""{collection_name}"",
+
+ field_name=""payload_field_name"",
+
+ field_schema=models.KeywordIndexParams(
+
+ type=""keyword"",
+
+ is_tenant=True,
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.createPayloadIndex(""{collection_name}"", {
+
+ field_name: ""payload_field_name"",
+
+ field_schema: {
+
+ type: ""keyword"",
+
+ is_tenant: true
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{
+
+ CreateFieldIndexCollectionBuilder,
+
+ KeywordIndexParamsBuilder,
+
+ FieldType
+
+};
+
+use qdrant_client::{Qdrant, QdrantError};
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.create_field_index(
+
+ CreateFieldIndexCollectionBuilder::new(
+
+ ""{collection_name}"",
+
+ ""payload_field_name"",
+
+ FieldType::Keyword,
+
+ )
+
+ .field_index_params(
+
+ KeywordIndexParamsBuilder::default()
+
+ .is_tenant(true),
+
+ ),
+
+);
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Collections.PayloadIndexParams;
+
+import io.qdrant.client.grpc.Collections.PayloadSchemaType;
+
+import io.qdrant.client.grpc.Collections.KeywordIndexParams;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .createPayloadIndexAsync(
+
+ ""{collection_name}"",
+
+ ""payload_field_name"",
+
+ PayloadSchemaType.Keyword,
+
+ PayloadIndexParams.newBuilder()
+
+ .setKeywordIndexParams(
+
+ KeywordIndexParams.newBuilder()
+
+ .setIsTenant(true)
+
+ .build())
+
+ .build(),
+
+ null,
+
+ null,
+
+ null)
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.CreatePayloadIndexAsync(
+
+ collectionName: ""{collection_name}"",
+
+ fieldName: ""payload_field_name"",
+
+ schemaType: PayloadSchemaType.Keyword,
+
+ indexParams: new PayloadIndexParams
+
+ {
+
+ KeywordIndexParams = new KeywordIndexParams
+
+ {
+
+ IsTenant = true
+
+ }
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{
+
+ CollectionName: ""{collection_name}"",
+
+ FieldName: ""name_of_the_field_to_index"",
+
+ FieldType: qdrant.FieldType_FieldTypeKeyword.Enum(),
+
+ FieldIndexParams: qdrant.NewPayloadIndexParamsKeyword(
+
+ &qdrant.KeywordIndexParams{
+
+ IsTenant: qdrant.PtrOf(true),
+
+ }),
+
+})
+
+```
+
+
+
+Tenant optimization is supported for the following datatypes:
+
+
+
+* `keyword`
+
+* `uuid`
+
+
+
+### Principal Index
+
+
+
+*Available as of v1.11.0*
+
+
+
+Similar to the tenant index, the principal index is used to optimize storage for faster search, assuming that the search request is primarily filtered by the principal field.
+
+
+
+A good example of a use case for the principal index is time-related data, where each point is associated with a timestamp. In this case, the principal index can be used to optimize storage for faster search with time-based filters.
+
+
+
+```http
+
+PUT /collections/{collection_name}/index
+
+{
+
+ ""field_name"": ""timestamp"",
+
+ ""field_schema"": {
+
+ ""type"": ""integer"",
+
+ ""is_principal"": true
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+client.create_payload_index(
+
+ collection_name=""{collection_name}"",
+
+ field_name=""timestamp"",
+
+    field_schema=models.IntegerIndexParams(
+
+ type=""integer"",
+
+ is_principal=True,
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.createPayloadIndex(""{collection_name}"", {
+
+ field_name: ""timestamp"",
+
+ field_schema: {
+
+ type: ""integer"",
+
+ is_principal: true
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{
+
+ CreateFieldIndexCollectionBuilder,
+
+    IntegerIndexParamsBuilder,
+
+ FieldType
+
+};
+
+use qdrant_client::{Qdrant, QdrantError};
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.create_field_index(
+
+ CreateFieldIndexCollectionBuilder::new(
+
+ ""{collection_name}"",
+
+ ""timestamp"",
+
+ FieldType::Integer,
+
+ )
+
+ .field_index_params(
+
+        IntegerIndexParamsBuilder::default()
+
+ .is_principal(true),
+
+ ),
+
+);
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Collections.PayloadIndexParams;
+
+import io.qdrant.client.grpc.Collections.PayloadSchemaType;
+
+import io.qdrant.client.grpc.Collections.IntegerIndexParams;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .createPayloadIndexAsync(
+
+ ""{collection_name}"",
+
+ ""timestamp"",
+
+ PayloadSchemaType.Integer,
+
+ PayloadIndexParams.newBuilder()
+
+ .setIntegerIndexParams(
+
+                IntegerIndexParams.newBuilder()
+
+                    .setIsPrincipal(true)
+
+ .build())
+
+ .build(),
+
+ null,
+
+ null,
+
+ null)
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.CreatePayloadIndexAsync(
+
+ collectionName: ""{collection_name}"",
+
+ fieldName: ""timestamp"",
+
+ schemaType: PayloadSchemaType.Integer,
+
+ indexParams: new PayloadIndexParams
+
+ {
+
+ IntegerIndexParams = new IntegerIndexParams
+
+ {
+
+ IsPrincipal = true
+
+ }
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{
+
+ CollectionName: ""{collection_name}"",
+
+ FieldName: ""name_of_the_field_to_index"",
+
+ FieldType: qdrant.FieldType_FieldTypeInteger.Enum(),
+
+ FieldIndexParams: qdrant.NewPayloadIndexParamsInt(
+
+ &qdrant.IntegerIndexParams{
+
+ IsPrincipal: qdrant.PtrOf(true),
+
+ }),
+
+})
+
+```
+
+
+
+Principal optimization is supported for the following types:
+
+
+
+* `integer`
+
+* `float`
+
+* `datetime`
+
+
+
+
+
+## Vector Index
+
+
+
+A vector index is a data structure built on vectors through a specific mathematical model.
+
+Through the vector index, we can efficiently query several vectors similar to the target vector.
+
+
+
+Qdrant currently only uses HNSW as a dense vector index.
+
+
+
+[HNSW](https://arxiv.org/abs/1603.09320) (Hierarchical Navigable Small World Graph) is a graph-based indexing algorithm. It builds a multi-layer navigation structure for the stored vectors according to certain rules. In this structure, the upper layers are sparser and the distances between nodes are larger. The lower layers are denser and the distances between nodes are closer. The search starts from the uppermost layer, finds the node closest to the target in this layer, and then enters the next layer to begin another search. After multiple iterations, it can quickly approach the target position.
+
+
+
+To improve performance, HNSW limits the maximum degree of nodes on each layer of the graph to `m`. In addition, you can use `ef_construct` (when building the index) or `ef` (when searching for targets) to specify a search range.
+
+
+
+The corresponding parameters can be configured in the configuration file:
+
+
+
+```yaml
+
+storage:
+
+ # Default parameters of HNSW Index. Could be overridden for each collection or named vector individually
+
+ hnsw_index:
+
+ # Number of edges per node in the index graph.
+
+ # Larger the value - more accurate the search, more space required.
+
+ m: 16
+
+ # Number of neighbours to consider during the index building.
+
+ # Larger the value - more accurate the search, more time required to build index.
+
+ ef_construct: 100
+
+ # Minimal size (in KiloBytes) of vectors for additional payload-based indexing.
+
+ # If payload chunk is smaller than `full_scan_threshold_kb` additional indexing won't be used -
+
+ # in this case full-scan search should be preferred by query planner and additional indexing is not required.
+
+ # Note: 1Kb = 1 vector of size 256
+
+ full_scan_threshold: 10000
+
+
+
+```
+
+
+
+The same parameters can also be set when creating a [collection](../collections/). The `ef` parameter is configured during [the search](../search/) and, by default, is equal to `ef_construct`.
+
+
+
+HNSW is chosen for several reasons.
+
+First, HNSW is well-compatible with the modification that allows Qdrant to use filters during a search.
+
+Second, it is one of the most accurate and fastest algorithms, according to [public benchmarks](https://github.com/erikbern/ann-benchmarks).
+
+
+
+*Available as of v1.1.1*
+
+
+
+The HNSW parameters can also be configured on a collection and named vector
+
+level by setting [`hnsw_config`](../indexing/#vector-index) to fine-tune search
+
+performance.
+
+
+
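+As a rough Python sketch (the vector size and parameter values are placeholders only), per-collection HNSW parameters can be passed when creating a collection:
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.create_collection(
+
+    collection_name=""{collection_name}"",
+
+    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
+
+    hnsw_config=models.HnswConfigDiff(m=16, ef_construct=100),
+
+)
+
+```
+
+
+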
+## Sparse Vector Index
+
+
+
+*Available as of v1.7.0*
+
+
+
+Sparse vectors in Qdrant are indexed with a special data structure, which is optimized for vectors that have a high proportion of zeroes. In some ways, this indexing method is similar to the inverted index, which is used in text search engines.
+
+
+
+- A sparse vector index in Qdrant is exact, meaning it does not use any approximation algorithms.
+
+- All sparse vectors added to the collection are immediately indexed in the mutable version of a sparse index.
+
+
+
+With Qdrant, you can benefit from a more compact and efficient immutable sparse index, which is constructed during the same optimization process as the dense vector index.
+
+
+
+This approach is particularly useful for collections storing both dense and sparse vectors.
+
+
+
+To configure a sparse vector index, create a collection with the following parameters:
+
+
+
+```http
+
+PUT /collections/{collection_name}
+
+{
+
+ ""sparse_vectors"": {
+
+ ""text"": {
+
+ ""index"": {
+
+ ""on_disk"": false
+
+ }
+
+ }
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.create_collection(
+
+ collection_name=""{collection_name}"",
+
+ sparse_vectors={
+
+ ""text"": models.SparseVectorIndexParams(
+
+ index=models.SparseVectorIndexType(
+
+ on_disk=False,
+
+ ),
+
+ ),
+
+ },
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient, Schemas } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.createCollection(""{collection_name}"", {
+
+ sparse_vectors: {
+
+ ""splade-model-name"": {
+
+ index: {
+
+ on_disk: false
+
+ }
+
+ }
+
+ }
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{
+
+ CreateCollectionBuilder, SparseIndexConfigBuilder, SparseVectorParamsBuilder,
+
+ SparseVectorsConfigBuilder,
+
+};
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+let mut sparse_vectors_config = SparseVectorsConfigBuilder::default();
+
+
+
+sparse_vectors_config.add_named_vector_params(
+
+ ""splade-model-name"",
+
+ SparseVectorParamsBuilder::default()
+
+ .index(SparseIndexConfigBuilder::default().on_disk(true)),
+
+);
+
+
+
+client
+
+ .create_collection(
+
+ CreateCollectionBuilder::new(""{collection_name}"")
+
+ .sparse_vectors_config(sparse_vectors_config),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+
+
+import io.qdrant.client.grpc.Collections;
+
+
+
+QdrantClient client = new QdrantClient(
+
+ QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client.createCollectionAsync(
+
+ Collections.CreateCollection.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setSparseVectorsConfig(
+
+ Collections.SparseVectorConfig.newBuilder().putMap(
+
+ ""splade-model-name"",
+
+ Collections.SparseVectorParams.newBuilder()
+
+ .setIndex(
+
+ Collections.SparseIndexConfig
+
+ .newBuilder()
+
+ .setOnDisk(false)
+
+ .build()
+
+ ).build()
+
+ ).build()
+
+ ).build()
+
+).get();
+
+
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.CreateCollectionAsync(
+
+ collectionName: ""{collection_name}"",
+
+ sparseVectorsConfig: (""splade-model-name"", new SparseVectorParams{
+
+ Index = new SparseIndexConfig {
+
+ OnDisk = false,
+
+ }
+
+ })
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
+
+ CollectionName: ""{collection_name}"",
+
+ SparseVectorsConfig: qdrant.NewSparseVectorsConfig(
+
+ map[string]*qdrant.SparseVectorParams{
+
+ ""splade-model-name"": {
+
+ Index: &qdrant.SparseIndexConfig{
+
+ OnDisk: qdrant.PtrOf(false),
+
+ }},
+
+ }),
+
+})
+
+```
+
+
+
+The following parameters may affect performance:
+
+
+
+- `on_disk: true` - The index is stored on disk, which lets you save memory. This may slow down search performance.
+
+- `on_disk: false` - The index is still persisted on disk, but it is also loaded into memory for faster search.
+
+
+
+Unlike a dense vector index, a sparse vector index does not require a pre-defined vector size. It automatically adjusts to the size of the vectors added to the collection.
+
+
+
+**Note:** A sparse vector index only supports dot-product similarity searches. It does not support other distance metrics.
+
+
+
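+For illustration, a minimal Python sketch (the indices and values are placeholders, the sparse vector is assumed to be named `text` as in the example above, and a recent client with the universal query API is assumed) that queries the sparse vector index:
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.query_points(
+
+    collection_name=""{collection_name}"",
+
+    query=models.SparseVector(indices=[6, 7], values=[1.0, 2.0]),
+
+    using=""text"",  # name of the sparse vector
+
+    limit=10,
+
+)
+
+```
+
+
+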
+### IDF Modifier
+
+
+
+*Available as of v1.10.0*
+
+
+
+For many search algorithms, it is important to consider how often an item occurs in a collection.
+
+Intuitively speaking, the less frequently an item appears in a collection, the more important it is in a search.
+
+
+
+This is also known as the Inverse Document Frequency (IDF). It is used in text search engines to rank search results based on the rarity of a word in a collection.
+
+
+
+IDF depends on the documents currently stored in the collection and therefore can't be pre-computed into the sparse vectors in streaming inference mode.
+
+In order to support IDF in the sparse vector index, Qdrant provides an option to modify the sparse vector query with the IDF statistics automatically.
+
+
+
+The only requirement is to enable the IDF modifier in the collection configuration:
+
+
+
+```http
+
+PUT /collections/{collection_name}
+
+{
+
+ ""sparse_vectors"": {
+
+ ""text"": {
+
+ ""modifier"": ""idf""
+
+ }
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.create_collection(
+
+ collection_name=""{collection_name}"",
+
+ sparse_vectors={
+
+ ""text"": models.SparseVectorParams(
+
+ modifier=models.Modifier.IDF,
+
+ ),
+
+ },
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient, Schemas } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.createCollection(""{collection_name}"", {
+
+ sparse_vectors: {
+
+ ""text"": {
+
+ modifier: ""idf""
+
+ }
+
+ }
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{
+
+ CreateCollectionBuilder, Modifier, SparseVectorParamsBuilder, SparseVectorsConfigBuilder,
+
+};
+
+use qdrant_client::{Qdrant, QdrantError};
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+let mut sparse_vectors_config = SparseVectorsConfigBuilder::default();
+
+sparse_vectors_config.add_named_vector_params(
+
+ ""text"",
+
+ SparseVectorParamsBuilder::default().modifier(Modifier::Idf),
+
+);
+
+
+
+client
+
+ .create_collection(
+
+ CreateCollectionBuilder::new(""{collection_name}"")
+
+ .sparse_vectors_config(sparse_vectors_config),
+
+ )
+
+ .await?;
+
+```
+
+
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Collections.CreateCollection;
+
+import io.qdrant.client.grpc.Collections.Modifier;
+
+import io.qdrant.client.grpc.Collections.SparseVectorConfig;
+
+import io.qdrant.client.grpc.Collections.SparseVectorParams;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .createCollectionAsync(
+
+ CreateCollection.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setSparseVectorsConfig(
+
+ SparseVectorConfig.newBuilder()
+
+ .putMap(""text"", SparseVectorParams.newBuilder().setModifier(Modifier.Idf).build()))
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.CreateCollectionAsync(
+
+ collectionName: ""{collection_name}"",
+
+ sparseVectorsConfig: (""text"", new SparseVectorParams {
+
+ Modifier = Modifier.Idf,
+
+ })
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
+
+ CollectionName: ""{collection_name}"",
+
+ SparseVectorsConfig: qdrant.NewSparseVectorsConfig(
+
+ map[string]*qdrant.SparseVectorParams{
+
+ ""text"": {
+
+ Modifier: qdrant.Modifier_Idf.Enum(),
+
+ },
+
+ }),
+
+})
+
+```
+
+
+
+Qdrant uses the following formula to calculate the IDF modifier:
+
+
+
+$$
+
+\text{IDF}(q_i) = \ln \left(\frac{N - n(q_i) + 0.5}{n(q_i) + 0.5}+1\right)
+
+$$
+
+
+
+Where:
+
+
+
+- `N` is the total number of documents in the collection.
+
+- `n(q_i)` is the number of documents containing a non-zero value for the given vector element.
+
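+As a rough illustration (the numbers below are arbitrary, not taken from the documentation), the same formula can be evaluated in a few lines of Python to see that rarer terms receive a larger modifier:
+
+
+
+```python
+
+import math
+
+
+
+def idf(total_docs: int, docs_with_term: int) -> float:
+
+    # Same formula as above: ln((N - n + 0.5) / (n + 0.5) + 1)
+
+    return math.log((total_docs - docs_with_term + 0.5) / (docs_with_term + 0.5) + 1)
+
+
+
+print(idf(1_000, 500))  # common term -> ~0.69
+
+print(idf(1_000, 50))   # rarer term  -> ~3.0
+
+print(idf(1_000, 1))    # very rare   -> ~6.5
+
+```
+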
+
+
+## Filtrable Index
+
+
+
+Used separately, a payload index and a vector index cannot fully solve the problem of filtered search.
+
+
+
+In the case of weak filters, you can use the HNSW index as it is. In the case of stringent filters, you can use the payload index and a complete rescore.
+
+However, for cases in the middle, this approach does not work well.
+
+
+
+On the one hand, we cannot apply a full scan to too many vectors. On the other hand, the HNSW graph starts to fall apart when the filters are too strict.
+
+
+
+![HNSW fail](/docs/precision_by_m.png)
+
+
+
+![hnsw graph](/docs/graph.gif)
+
+
+
+You can find more information on why this happens in our [blog post](https://blog.vasnetsov.com/posts/categorical-hnsw/).
+
+Qdrant solves this problem by extending the HNSW graph with additional edges based on the stored payload values.
+
+
+
+Extra edges allow you to efficiently search for nearby vectors using the HNSW index and apply filters as you search in the graph.
+
+
+
+This approach minimizes the overhead on condition checks since you only need to calculate the conditions for a small fraction of the points involved in the search.
+",documentation/concepts/indexing.md
+"---
+
+title: Points
+
+weight: 40
+
+aliases:
+
+ - ../points
+
+---
+
+
+
+# Points
+
+
+
+The points are the central entity that Qdrant operates with.
+
+A point is a record consisting of a [vector](../vectors/) and an optional [payload](../payload/).
+
+
+
+It looks like this:
+
+
+
+```json
+
+// This is a simple point
+
+{
+
+ ""id"": 129,
+
+ ""vector"": [0.1, 0.2, 0.3, 0.4],
+
+ ""payload"": {""color"": ""red""},
+
+}
+
+```
+
+
+
+You can search among the points grouped in one [collection](../collections/) based on vector similarity.
+
+This procedure is described in more detail in the [search](../search/) and [filtering](../filtering/) sections.
+
+
+
+This section explains how to create and manage vectors.
+
+
+
+Any point modification operation is asynchronous and takes place in 2 steps.
+
+In the first stage, the operation is written to the Write-Ahead Log (WAL).
+
+
+
+From this moment on, the service will not lose the data, even if the machine loses power.
+
+
+
+
+
+## Point IDs
+
+
+
+Qdrant supports using both `64-bit unsigned integers` and `UUID` as identifiers for points.
+
+
+
+Examples of UUID string representations:
+
+
+
+- simple: `936DA01F9ABD4d9d80C702AF85C822A8`
+
+- hyphenated: `550e8400-e29b-41d4-a716-446655440000`
+
+- urn: `urn:uuid:F9168C5E-CEB2-4faa-B6BF-329BF39FA1E4`
+
+
+
+This means that a UUID string can be used instead of a numerical ID in every request.
+
+Example:
+
+
+
+```http
+
+PUT /collections/{collection_name}/points
+
+{
+
+ ""points"": [
+
+ {
+
+ ""id"": ""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"",
+
+ ""payload"": {""color"": ""red""},
+
+ ""vector"": [0.9, 0.1, 0.1]
+
+ }
+
+ ]
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.upsert(
+
+ collection_name=""{collection_name}"",
+
+ points=[
+
+ models.PointStruct(
+
+ id=""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"",
+
+ payload={
+
+ ""color"": ""red"",
+
+ },
+
+ vector=[0.9, 0.1, 0.1],
+
+ ),
+
+ ],
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.upsert(""{collection_name}"", {
+
+ points: [
+
+ {
+
+ id: ""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"",
+
+ payload: {
+
+ color: ""red"",
+
+ },
+
+ vector: [0.9, 0.1, 0.1],
+
+ },
+
+ ],
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder};
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client
+
+ .upsert_points(
+
+ UpsertPointsBuilder::new(
+
+ ""{collection_name}"",
+
+ vec![PointStruct::new(
+
+ ""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"",
+
+ vec![0.9, 0.1, 0.1],
+
+ [(""color"", ""Red"".into())],
+
+ )],
+
+ )
+
+ .wait(true),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+import java.util.Map;
+
+import java.util.UUID;
+
+
+
+import static io.qdrant.client.PointIdFactory.id;
+
+import static io.qdrant.client.ValueFactory.value;
+
+import static io.qdrant.client.VectorsFactory.vectors;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.PointStruct;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .upsertAsync(
+
+ ""{collection_name}"",
+
+ List.of(
+
+ PointStruct.newBuilder()
+
+ .setId(id(UUID.fromString(""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"")))
+
+ .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f))
+
+ .putAllPayload(Map.of(""color"", value(""Red"")))
+
+ .build()))
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.UpsertAsync(
+
+ collectionName: ""{collection_name}"",
+
+    points: new List<PointStruct>
+
+ {
+
+ new()
+
+ {
+
+ Id = Guid.Parse(""5c56c793-69f3-4fbf-87e6-c4bf54c28c26""),
+
+ Vectors = new[] { 0.05f, 0.61f, 0.76f, 0.74f },
+
+ Payload = { [""color""] = ""Red"" }
+
+ }
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Upsert(context.Background(), &qdrant.UpsertPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Points: []*qdrant.PointStruct{
+
+ {
+
+ Id: qdrant.NewID(""5c56c793-69f3-4fbf-87e6-c4bf54c28c26""),
+
+ Vectors: qdrant.NewVectors(0.05, 0.61, 0.76, 0.74),
+
+ Payload: qdrant.NewValueMap(map[string]any{""color"": ""Red""}),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+and
+
+
+
+```http
+
+PUT /collections/{collection_name}/points
+
+{
+
+ ""points"": [
+
+ {
+
+ ""id"": 1,
+
+ ""payload"": {""color"": ""red""},
+
+ ""vector"": [0.9, 0.1, 0.1]
+
+ }
+
+ ]
+
+}
+
+```
+
+
+
+```python
+
+client.upsert(
+
+ collection_name=""{collection_name}"",
+
+ points=[
+
+ models.PointStruct(
+
+ id=1,
+
+ payload={
+
+ ""color"": ""red"",
+
+ },
+
+ vector=[0.9, 0.1, 0.1],
+
+ ),
+
+ ],
+
+)
+
+```
+
+
+
+```typescript
+
+client.upsert(""{collection_name}"", {
+
+ points: [
+
+ {
+
+ id: 1,
+
+ payload: {
+
+ color: ""red"",
+
+ },
+
+ vector: [0.9, 0.1, 0.1],
+
+ },
+
+ ],
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder};
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client
+
+ .upsert_points(
+
+ UpsertPointsBuilder::new(
+
+ ""{collection_name}"",
+
+ vec![PointStruct::new(
+
+ 1,
+
+ vec![0.9, 0.1, 0.1],
+
+ [(""color"", ""Red"".into())],
+
+ )],
+
+ )
+
+ .wait(true),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+import java.util.Map;
+
+
+
+import static io.qdrant.client.PointIdFactory.id;
+
+import static io.qdrant.client.ValueFactory.value;
+
+import static io.qdrant.client.VectorsFactory.vectors;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.PointStruct;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .upsertAsync(
+
+ ""{collection_name}"",
+
+ List.of(
+
+ PointStruct.newBuilder()
+
+ .setId(id(1))
+
+ .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f))
+
+ .putAllPayload(Map.of(""color"", value(""Red"")))
+
+ .build()))
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.UpsertAsync(
+
+ collectionName: ""{collection_name}"",
+
+    points: new List<PointStruct>
+
+ {
+
+ new()
+
+ {
+
+ Id = 1,
+
+ Vectors = new[] { 0.05f, 0.61f, 0.76f, 0.74f },
+
+ Payload = { [""color""] = ""Red"" }
+
+ }
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Upsert(context.Background(), &qdrant.UpsertPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Points: []*qdrant.PointStruct{
+
+ {
+
+ Id: qdrant.NewIDNum(1),
+
+ Vectors: qdrant.NewVectors(0.05, 0.61, 0.76, 0.74),
+
+ Payload: qdrant.NewValueMap(map[string]any{""color"": ""Red""}),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+are both possible.
+
+
+
+## Vectors
+
+
+
+Each point in Qdrant may have one or more vectors.
+
+Vectors are the central component of the Qdrant architecture.
+
+Qdrant relies on different types of vectors to provide different types of data exploration and search.
+
+
+
+Here is a list of supported vector types:
+
+
+
+|||
+
+|-|-|
+
+| Dense Vectors | Regular vectors, generated by the majority of embedding models. |
+
+| Sparse Vectors | Vectors with no fixed length, but only a few non-zero elements. Useful for exact token matching and collaborative filtering recommendations. |
+
+| MultiVectors | Matrices of numbers with fixed length but variable height. Usually obtained from late interaction models like ColBERT. |
+
+
+
+It is possible to attach more than one type of vector to a single point.
+
+In Qdrant we call it Named Vectors.
+
+
+
+Read more about vector types, how they are stored and optimized in the [vectors](../vectors/) section.
+
+
+
+
+
+## Upload points
+
+
+
+To optimize performance, Qdrant supports batch loading of points, i.e. you can load several points into the service in one API call.
+
+Batching allows you to minimize the overhead of creating a network connection.
+
+
+
+The Qdrant API supports two ways of creating batches - record-oriented and column-oriented.
+
+Internally, these options do not differ and exist only for the convenience of interaction.
+
+
+
+Create points with batch:
+
+
+
+```http
+
+PUT /collections/{collection_name}/points
+
+{
+
+ ""batch"": {
+
+ ""ids"": [1, 2, 3],
+
+ ""payloads"": [
+
+ {""color"": ""red""},
+
+ {""color"": ""green""},
+
+ {""color"": ""blue""}
+
+ ],
+
+ ""vectors"": [
+
+ [0.9, 0.1, 0.1],
+
+ [0.1, 0.9, 0.1],
+
+ [0.1, 0.1, 0.9]
+
+ ]
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+client.upsert(
+
+ collection_name=""{collection_name}"",
+
+ points=models.Batch(
+
+ ids=[1, 2, 3],
+
+ payloads=[
+
+ {""color"": ""red""},
+
+ {""color"": ""green""},
+
+ {""color"": ""blue""},
+
+ ],
+
+ vectors=[
+
+ [0.9, 0.1, 0.1],
+
+ [0.1, 0.9, 0.1],
+
+ [0.1, 0.1, 0.9],
+
+ ],
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.upsert(""{collection_name}"", {
+
+ batch: {
+
+ ids: [1, 2, 3],
+
+ payloads: [{ color: ""red"" }, { color: ""green"" }, { color: ""blue"" }],
+
+ vectors: [
+
+ [0.9, 0.1, 0.1],
+
+ [0.1, 0.9, 0.1],
+
+ [0.1, 0.1, 0.9],
+
+ ],
+
+ },
+
+});
+
+```
+
+
+
+or record-oriented equivalent:
+
+
+
+```http
+
+PUT /collections/{collection_name}/points
+
+{
+
+ ""points"": [
+
+ {
+
+ ""id"": 1,
+
+ ""payload"": {""color"": ""red""},
+
+ ""vector"": [0.9, 0.1, 0.1]
+
+ },
+
+ {
+
+ ""id"": 2,
+
+ ""payload"": {""color"": ""green""},
+
+ ""vector"": [0.1, 0.9, 0.1]
+
+ },
+
+ {
+
+ ""id"": 3,
+
+ ""payload"": {""color"": ""blue""},
+
+ ""vector"": [0.1, 0.1, 0.9]
+
+ }
+
+ ]
+
+}
+
+```
+
+
+
+```python
+
+client.upsert(
+
+ collection_name=""{collection_name}"",
+
+ points=[
+
+ models.PointStruct(
+
+ id=1,
+
+ payload={
+
+ ""color"": ""red"",
+
+ },
+
+ vector=[0.9, 0.1, 0.1],
+
+ ),
+
+ models.PointStruct(
+
+ id=2,
+
+ payload={
+
+ ""color"": ""green"",
+
+ },
+
+ vector=[0.1, 0.9, 0.1],
+
+ ),
+
+ models.PointStruct(
+
+ id=3,
+
+ payload={
+
+ ""color"": ""blue"",
+
+ },
+
+ vector=[0.1, 0.1, 0.9],
+
+ ),
+
+ ],
+
+)
+
+```
+
+
+
+```typescript
+
+client.upsert(""{collection_name}"", {
+
+ points: [
+
+ {
+
+ id: 1,
+
+ payload: { color: ""red"" },
+
+ vector: [0.9, 0.1, 0.1],
+
+ },
+
+ {
+
+ id: 2,
+
+ payload: { color: ""green"" },
+
+ vector: [0.1, 0.9, 0.1],
+
+ },
+
+ {
+
+ id: 3,
+
+ payload: { color: ""blue"" },
+
+ vector: [0.1, 0.1, 0.9],
+
+ },
+
+ ],
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder};
+
+
+
+client
+
+ .upsert_points(
+
+ UpsertPointsBuilder::new(
+
+ ""{collection_name}"",
+
+ vec![
+
+ PointStruct::new(1, vec![0.9, 0.1, 0.1], [(""city"", ""red"".into())]),
+
+ PointStruct::new(2, vec![0.1, 0.9, 0.1], [(""city"", ""green"".into())]),
+
+ PointStruct::new(3, vec![0.1, 0.1, 0.9], [(""city"", ""blue"".into())]),
+
+ ],
+
+ )
+
+ .wait(true),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+import java.util.Map;
+
+
+
+import static io.qdrant.client.PointIdFactory.id;
+
+import static io.qdrant.client.ValueFactory.value;
+
+import static io.qdrant.client.VectorsFactory.vectors;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.PointStruct;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .upsertAsync(
+
+ ""{collection_name}"",
+
+ List.of(
+
+ PointStruct.newBuilder()
+
+ .setId(id(1))
+
+ .setVectors(vectors(0.9f, 0.1f, 0.1f))
+
+ .putAllPayload(Map.of(""color"", value(""red"")))
+
+ .build(),
+
+ PointStruct.newBuilder()
+
+ .setId(id(2))
+
+ .setVectors(vectors(0.1f, 0.9f, 0.1f))
+
+ .putAllPayload(Map.of(""color"", value(""green"")))
+
+ .build(),
+
+ PointStruct.newBuilder()
+
+ .setId(id(3))
+
+ .setVectors(vectors(0.1f, 0.1f, 0.9f))
+
+ .putAllPayload(Map.of(""color"", value(""blue"")))
+
+ .build()))
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.UpsertAsync(
+
+ collectionName: ""{collection_name}"",
+
+    points: new List<PointStruct>
+
+ {
+
+ new()
+
+ {
+
+ Id = 1,
+
+ Vectors = new[] { 0.9f, 0.1f, 0.1f },
+
+ Payload = { [""color""] = ""red"" }
+
+ },
+
+ new()
+
+ {
+
+ Id = 2,
+
+ Vectors = new[] { 0.1f, 0.9f, 0.1f },
+
+ Payload = { [""color""] = ""green"" }
+
+ },
+
+ new()
+
+ {
+
+ Id = 3,
+
+ Vectors = new[] { 0.1f, 0.1f, 0.9f },
+
+ Payload = { [""color""] = ""blue"" }
+
+ }
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Upsert(context.Background(), &qdrant.UpsertPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Points: []*qdrant.PointStruct{
+
+ {
+
+ Id: qdrant.NewIDNum(1),
+
+ Vectors: qdrant.NewVectors(0.9, 0.1, 0.1),
+
+ Payload: qdrant.NewValueMap(map[string]any{""color"": ""red""}),
+
+ },
+
+ {
+
+ Id: qdrant.NewIDNum(2),
+
+ Vectors: qdrant.NewVectors(0.1, 0.9, 0.1),
+
+ Payload: qdrant.NewValueMap(map[string]any{""color"": ""green""}),
+
+ },
+
+ {
+
+ Id: qdrant.NewIDNum(3),
+
+ Vectors: qdrant.NewVectors(0.1, 0.1, 0.9),
+
+ Payload: qdrant.NewValueMap(map[string]any{""color"": ""blue""}),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+The Python client has additional features for loading points, which include:
+
+
+
+- Parallelization
+
+- A retry mechanism
+
+- Lazy batching support
+
+
+
+For example, you can read your data directly from hard drives, to avoid storing all data in RAM. You can use these
+
+features with the `upload_collection` and `upload_points` methods.
+
+Similar to the basic upsert API, these methods support both record-oriented and column-oriented formats.
+
+
+
+
+
+
+
+Column-oriented format:
+
+
+
+```python
+
+client.upload_collection(
+
+ collection_name=""{collection_name}"",
+
+ ids=[1, 2],
+
+ payload=[
+
+ {""color"": ""red""},
+
+ {""color"": ""green""},
+
+ ],
+
+ vectors=[
+
+ [0.9, 0.1, 0.1],
+
+ [0.1, 0.9, 0.1],
+
+ ],
+
+ parallel=4,
+
+ max_retries=3,
+
+)
+
+```
+
+
+
+
+
+
+
+Record-oriented format:
+
+
+
+```python
+
+client.upload_points(
+
+ collection_name=""{collection_name}"",
+
+ points=[
+
+ models.PointStruct(
+
+ id=1,
+
+ payload={
+
+ ""color"": ""red"",
+
+ },
+
+ vector=[0.9, 0.1, 0.1],
+
+ ),
+
+ models.PointStruct(
+
+ id=2,
+
+ payload={
+
+ ""color"": ""green"",
+
+ },
+
+ vector=[0.1, 0.9, 0.1],
+
+ ),
+
+ ],
+
+ parallel=4,
+
+ max_retries=3,
+
+)
+
+```
+
+
+
+All APIs in Qdrant, including point loading, are idempotent.
+
+This means that executing the same method several times in a row is equivalent to a single execution.
+
+
+
+In this case, it means that points with the same id will be overwritten when re-uploaded.
+
+
+
+The idempotence property is useful if you use, for example, a message queue that doesn't provide an exactly-once guarantee.
+
+Even with such a system, Qdrant ensures data consistency.
+
+
+
+[_Available as of v0.10.0_](#create-vector-name)
+
+
+
+If the collection was created with multiple vectors, each vector data can be provided using the vector's name:
+
+
+
+```http
+
+PUT /collections/{collection_name}/points
+
+{
+
+ ""points"": [
+
+ {
+
+ ""id"": 1,
+
+ ""vector"": {
+
+ ""image"": [0.9, 0.1, 0.1, 0.2],
+
+ ""text"": [0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2]
+
+ }
+
+ },
+
+ {
+
+ ""id"": 2,
+
+ ""vector"": {
+
+ ""image"": [0.2, 0.1, 0.3, 0.9],
+
+ ""text"": [0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9]
+
+ }
+
+ }
+
+ ]
+
+}
+
+```
+
+
+
+```python
+
+client.upsert(
+
+ collection_name=""{collection_name}"",
+
+ points=[
+
+ models.PointStruct(
+
+ id=1,
+
+ vector={
+
+ ""image"": [0.9, 0.1, 0.1, 0.2],
+
+ ""text"": [0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2],
+
+ },
+
+ ),
+
+ models.PointStruct(
+
+ id=2,
+
+ vector={
+
+ ""image"": [0.2, 0.1, 0.3, 0.9],
+
+ ""text"": [0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9],
+
+ },
+
+ ),
+
+ ],
+
+)
+
+```
+
+
+
+```typescript
+
+client.upsert(""{collection_name}"", {
+
+ points: [
+
+ {
+
+ id: 1,
+
+ vector: {
+
+ image: [0.9, 0.1, 0.1, 0.2],
+
+ text: [0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2],
+
+ },
+
+ },
+
+ {
+
+ id: 2,
+
+ vector: {
+
+ image: [0.2, 0.1, 0.3, 0.9],
+
+ text: [0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9],
+
+ },
+
+ },
+
+ ],
+
+});
+
+```
+
+
+
+```rust
+
+use std::collections::HashMap;
+
+
+
+use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder};
+
+use qdrant_client::Payload;
+
+
+
+client
+
+ .upsert_points(
+
+ UpsertPointsBuilder::new(
+
+ ""{collection_name}"",
+
+ vec![
+
+ PointStruct::new(
+
+ 1,
+
+ HashMap::from([
+
+ (""image"".to_string(), vec![0.9, 0.1, 0.1, 0.2]),
+
+ (
+
+ ""text"".to_string(),
+
+ vec![0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2],
+
+ ),
+
+ ]),
+
+ Payload::default(),
+
+ ),
+
+ PointStruct::new(
+
+ 2,
+
+ HashMap::from([
+
+ (""image"".to_string(), vec![0.2, 0.1, 0.3, 0.9]),
+
+ (
+
+ ""text"".to_string(),
+
+ vec![0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9],
+
+ ),
+
+ ]),
+
+ Payload::default(),
+
+ ),
+
+ ],
+
+ )
+
+ .wait(true),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+import java.util.Map;
+
+
+
+import static io.qdrant.client.PointIdFactory.id;
+
+import static io.qdrant.client.VectorFactory.vector;
+
+import static io.qdrant.client.VectorsFactory.namedVectors;
+
+
+
+import io.qdrant.client.grpc.Points.PointStruct;
+
+
+
+client
+
+ .upsertAsync(
+
+ ""{collection_name}"",
+
+ List.of(
+
+ PointStruct.newBuilder()
+
+ .setId(id(1))
+
+ .setVectors(
+
+ namedVectors(
+
+ Map.of(
+
+ ""image"",
+
+ vector(List.of(0.9f, 0.1f, 0.1f, 0.2f)),
+
+ ""text"",
+
+ vector(List.of(0.4f, 0.7f, 0.1f, 0.8f, 0.1f, 0.1f, 0.9f, 0.2f)))))
+
+ .build(),
+
+ PointStruct.newBuilder()
+
+ .setId(id(2))
+
+ .setVectors(
+
+ namedVectors(
+
+ Map.of(
+
+ ""image"",
+
+                            vector(List.of(0.2f, 0.1f, 0.3f, 0.9f)),
+
+ ""text"",
+
+                            vector(List.of(0.5f, 0.2f, 0.7f, 0.4f, 0.7f, 0.2f, 0.3f, 0.9f)))))
+
+ .build()))
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.UpsertAsync(
+
+ collectionName: ""{collection_name}"",
+
+    points: new List<PointStruct>
+
+ {
+
+ new()
+
+ {
+
+ Id = 1,
+
+            Vectors = new Dictionary<string, float[]>
+
+ {
+
+ [""image""] = [0.9f, 0.1f, 0.1f, 0.2f],
+
+ [""text""] = [0.4f, 0.7f, 0.1f, 0.8f, 0.1f, 0.1f, 0.9f, 0.2f]
+
+ }
+
+ },
+
+ new()
+
+ {
+
+ Id = 2,
+
+            Vectors = new Dictionary<string, float[]>
+
+ {
+
+ [""image""] = [0.2f, 0.1f, 0.3f, 0.9f],
+
+ [""text""] = [0.5f, 0.2f, 0.7f, 0.4f, 0.7f, 0.2f, 0.3f, 0.9f]
+
+ }
+
+ }
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Upsert(context.Background(), &qdrant.UpsertPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Points: []*qdrant.PointStruct{
+
+ {
+
+ Id: qdrant.NewIDNum(1),
+
+ Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{
+
+ ""image"": qdrant.NewVector(0.9, 0.1, 0.1, 0.2),
+
+ ""text"": qdrant.NewVector(0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2),
+
+ }),
+
+ },
+
+ {
+
+ Id: qdrant.NewIDNum(2),
+
+ Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{
+
+ ""image"": qdrant.NewVector(0.2, 0.1, 0.3, 0.9),
+
+ ""text"": qdrant.NewVector(0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9),
+
+ }),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+_Available as of v1.2.0_
+
+
+
+Named vectors are optional. When uploading points, some vectors may be omitted.
+
+For example, you can upload one point with only the `image` vector and a second
+
+one with only the `text` vector.
+
+
+
+When uploading a point with an existing ID, the existing point is deleted first,
+
+then it is inserted with just the specified vectors. In other words, the entire
+
+point is replaced, and any unspecified vectors are set to null. To keep existing
+
+vectors unchanged and only update specified vectors, see [update vectors](#update-vectors).
+
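+For example, with the Python client, a point could be uploaded with only its `image` vector, reusing the shapes from the named-vector examples above (a minimal sketch):
+
+
+
+```python
+
+client.upsert(
+
+    collection_name=""{collection_name}"",
+
+    points=[
+
+        models.PointStruct(
+
+            id=1,
+
+            vector={
+
+                # Only the ""image"" vector is provided; the ""text"" vector is omitted
+
+                ""image"": [0.9, 0.1, 0.1, 0.2],
+
+            },
+
+        ),
+
+    ],
+
+)
+
+```
+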
+
+
+_Available as of v1.7.0_
+
+
+
+Points can contain dense and sparse vectors.
+
+
+
+A sparse vector is an array in which most of the elements have a value of zero.
+
+
+
+It is possible to take advantage of this property to use an optimized representation; for this reason, sparse vectors have a different shape than dense vectors.
+
+
+
+They are represented as a list of `(index, value)` pairs, where `index` is an integer and `value` is a floating point number. The `index` is the position of the non-zero element in the vector, and the `value` is the value of that element.
+
+
+
+For example, the following vector:
+
+
+
+```
+
+[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 0.0, 0.0]
+
+```
+
+
+
+can be represented as a sparse vector:
+
+
+
+```
+
+[(6, 1.0), (7, 2.0)]
+
+```
+
+
+
+Qdrant uses the following JSON representation throughout its APIs.
+
+
+
+```json
+
+{
+
+ ""indices"": [6, 7],
+
+ ""values"": [1.0, 2.0]
+
+}
+
+```
+
+
+
+The `indices` and `values` arrays must have the same length.
+
+And the `indices` must be unique.
+
+
+
+If the `indices` are not sorted, Qdrant will sort them internally, so you should not rely on the order of the elements.
+
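+As a quick illustration (this helper is not part of the Qdrant API, just a sketch), a dense list can be converted into this representation by keeping only its non-zero entries:
+
+
+
+```python
+
+def to_sparse(dense: list[float]) -> dict:
+
+    # Keep only the non-zero entries as parallel ""indices"" / ""values"" arrays
+
+    indices = [i for i, value in enumerate(dense) if value != 0.0]
+
+    values = [dense[i] for i in indices]
+
+    return {""indices"": indices, ""values"": values}
+
+
+
+print(to_sparse([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 0.0, 0.0]))
+
+# {'indices': [6, 7], 'values': [1.0, 2.0]}
+
+```
+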
+
+
+Sparse vectors must be named and can be uploaded in the same way as dense vectors.
+
+
+
+```http
+
+PUT /collections/{collection_name}/points
+
+{
+
+ ""points"": [
+
+ {
+
+ ""id"": 1,
+
+ ""vector"": {
+
+ ""text"": {
+
+ ""indices"": [6, 7],
+
+ ""values"": [1.0, 2.0]
+
+ }
+
+ }
+
+ },
+
+ {
+
+ ""id"": 2,
+
+ ""vector"": {
+
+ ""text"": {
+
+ ""indices"": [1, 1, 2, 3, 4, 5],
+
+ ""values"": [0.1, 0.2, 0.3, 0.4, 0.5]
+
+ }
+
+ }
+
+ }
+
+ ]
+
+}
+
+```
+
+
+
+```python
+
+client.upsert(
+
+ collection_name=""{collection_name}"",
+
+ points=[
+
+ models.PointStruct(
+
+ id=1,
+
+ vector={
+
+ ""text"": models.SparseVector(
+
+ indices=[6, 7],
+
+ values=[1.0, 2.0],
+
+ )
+
+ },
+
+ ),
+
+ models.PointStruct(
+
+ id=2,
+
+ vector={
+
+ ""text"": models.SparseVector(
+
+ indices=[1, 2, 3, 4, 5],
+
+ values=[0.1, 0.2, 0.3, 0.4, 0.5],
+
+ )
+
+ },
+
+ ),
+
+ ],
+
+)
+
+```
+
+
+
+```typescript
+
+client.upsert(""{collection_name}"", {
+
+ points: [
+
+ {
+
+ id: 1,
+
+ vector: {
+
+ text: {
+
+ indices: [6, 7],
+
+ values: [1.0, 2.0],
+
+ },
+
+ },
+
+ },
+
+ {
+
+ id: 2,
+
+ vector: {
+
+ text: {
+
+ indices: [1, 2, 3, 4, 5],
+
+ values: [0.1, 0.2, 0.3, 0.4, 0.5],
+
+ },
+
+ },
+
+ },
+
+ ],
+
+});
+
+```
+
+
+
+```rust
+
+use std::collections::HashMap;
+
+
+
+use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder, Vector};
+
+use qdrant_client::Payload;
+
+
+
+client
+
+ .upsert_points(
+
+ UpsertPointsBuilder::new(
+
+ ""{collection_name}"",
+
+ vec![
+
+ PointStruct::new(
+
+ 1,
+
+ HashMap::from([(""text"".to_string(), vec![(6, 1.0), (7, 2.0)])]),
+
+ Payload::default(),
+
+ ),
+
+ PointStruct::new(
+
+ 2,
+
+ HashMap::from([(
+
+ ""text"".to_string(),
+
+ vec![(1, 0.1), (2, 0.2), (3, 0.3), (4, 0.4), (5, 0.5)],
+
+ )]),
+
+ Payload::default(),
+
+ ),
+
+ ],
+
+ )
+
+ .wait(true),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+import java.util.Map;
+
+
+
+import static io.qdrant.client.PointIdFactory.id;
+
+import static io.qdrant.client.VectorFactory.vector;
+
+
+
+import io.qdrant.client.grpc.Points.NamedVectors;
+
+import io.qdrant.client.grpc.Points.PointStruct;
+
+import io.qdrant.client.grpc.Points.Vectors;
+
+
+
+client
+
+ .upsertAsync(
+
+ ""{collection_name}"",
+
+ List.of(
+
+ PointStruct.newBuilder()
+
+ .setId(id(1))
+
+ .setVectors(
+
+ Vectors.newBuilder()
+
+ .setVectors(
+
+ NamedVectors.newBuilder()
+
+ .putAllVectors(
+
+ Map.of(
+
+ ""text"", vector(List.of(1.0f, 2.0f), List.of(6, 7))))
+
+ .build())
+
+ .build())
+
+ .build(),
+
+ PointStruct.newBuilder()
+
+ .setId(id(2))
+
+ .setVectors(
+
+ Vectors.newBuilder()
+
+ .setVectors(
+
+ NamedVectors.newBuilder()
+
+ .putAllVectors(
+
+ Map.of(
+
+ ""text"",
+
+ vector(
+
+ List.of(0.1f, 0.2f, 0.3f, 0.4f, 0.5f),
+
+ List.of(1, 2, 3, 4, 5))))
+
+ .build())
+
+ .build())
+
+ .build()))
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.UpsertAsync(
+
+ collectionName: ""{collection_name}"",
+
+    points: new List<PointStruct>
+
+ {
+
+ new()
+
+ {
+
+ Id = 1,
+
+            Vectors = new Dictionary<string, Vector> { [""text""] = ([1.0f, 2.0f], [6, 7]) }
+
+ },
+
+ new()
+
+ {
+
+ Id = 2,
+
+            Vectors = new Dictionary<string, Vector>
+
+ {
+
+ [""text""] = ([0.1f, 0.2f, 0.3f, 0.4f, 0.5f], [1, 2, 3, 4, 5])
+
+ }
+
+ }
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Upsert(context.Background(), &qdrant.UpsertPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Points: []*qdrant.PointStruct{
+
+ {
+
+ Id: qdrant.NewIDNum(1),
+
+ Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{
+
+ ""text"": qdrant.NewVectorSparse(
+
+ []uint32{6, 7},
+
+ []float32{1.0, 2.0}),
+
+ }),
+
+ },
+
+ {
+
+ Id: qdrant.NewIDNum(2),
+
+ Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{
+
+ ""text"": qdrant.NewVectorSparse(
+
+ []uint32{1, 2, 3, 4, 5},
+
+ []float32{0.1, 0.2, 0.3, 0.4, 0.5}),
+
+ }),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+## Modify points
+
+
+
+To change a point, you can modify its vectors or its payload. There are several
+
+ways to do this.
+
+
+
+### Update vectors
+
+
+
+_Available as of v1.2.0_
+
+
+
+This method updates the specified vectors on the given points. Unspecified
+
+vectors are kept unchanged. All given points must exist.
+
+
+
+REST API ([Schema](https://api.qdrant.tech/api-reference/points/update-vectors)):
+
+
+
+```http
+
+PUT /collections/{collection_name}/points/vectors
+
+{
+
+ ""points"": [
+
+ {
+
+ ""id"": 1,
+
+ ""vector"": {
+
+ ""image"": [0.1, 0.2, 0.3, 0.4]
+
+ }
+
+ },
+
+ {
+
+ ""id"": 2,
+
+ ""vector"": {
+
+ ""text"": [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
+
+ }
+
+ }
+
+ ]
+
+}
+
+```
+
+
+
+```python
+
+client.update_vectors(
+
+ collection_name=""{collection_name}"",
+
+ points=[
+
+ models.PointVectors(
+
+ id=1,
+
+ vector={
+
+ ""image"": [0.1, 0.2, 0.3, 0.4],
+
+ },
+
+ ),
+
+ models.PointVectors(
+
+ id=2,
+
+ vector={
+
+ ""text"": [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2],
+
+ },
+
+ ),
+
+ ],
+
+)
+
+```
+
+
+
+```typescript
+
+client.updateVectors(""{collection_name}"", {
+
+ points: [
+
+ {
+
+ id: 1,
+
+ vector: {
+
+ image: [0.1, 0.2, 0.3, 0.4],
+
+ },
+
+ },
+
+ {
+
+ id: 2,
+
+ vector: {
+
+ text: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2],
+
+ },
+
+ },
+
+ ],
+
+});
+
+```
+
+
+
+```rust
+
+use std::collections::HashMap;
+
+
+
+use qdrant_client::qdrant::{
+
+ PointVectors, UpdatePointVectorsBuilder,
+
+};
+
+
+
+client
+
+ .update_vectors(
+
+ UpdatePointVectorsBuilder::new(
+
+ ""{collection_name}"",
+
+ vec![
+
+ PointVectors {
+
+ id: Some(1.into()),
+
+ vectors: Some(
+
+ HashMap::from([(""image"".to_string(), vec![0.1, 0.2, 0.3, 0.4])]).into(),
+
+ ),
+
+ },
+
+ PointVectors {
+
+ id: Some(2.into()),
+
+ vectors: Some(
+
+ HashMap::from([(
+
+ ""text"".to_string(),
+
+ vec![0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2],
+
+ )])
+
+ .into(),
+
+ ),
+
+ },
+
+ ],
+
+ )
+
+ .wait(true),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+import java.util.Map;
+
+
+
+import static io.qdrant.client.PointIdFactory.id;
+
+import static io.qdrant.client.VectorFactory.vector;
+
+import static io.qdrant.client.VectorsFactory.namedVectors;
+
+
+
+import io.qdrant.client.grpc.Points.PointVectors;
+
+
+
+client
+
+ .updateVectorsAsync(
+
+ ""{collection_name}"",
+
+ List.of(
+
+ PointVectors.newBuilder()
+
+ .setId(id(1))
+
+ .setVectors(namedVectors(Map.of(""image"", vector(List.of(0.1f, 0.2f, 0.3f, 0.4f)))))
+
+ .build(),
+
+ PointVectors.newBuilder()
+
+ .setId(id(2))
+
+ .setVectors(
+
+ namedVectors(
+
+ Map.of(
+
+ ""text"", vector(List.of(0.9f, 0.8f, 0.7f, 0.6f, 0.5f, 0.4f, 0.3f, 0.2f)))))
+
+ .build()))
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.UpdateVectorsAsync(
+
+ collectionName: ""{collection_name}"",
+
+    points: new List<PointVectors>
+
+ {
+
+ new() { Id = 1, Vectors = (""image"", new float[] { 0.1f, 0.2f, 0.3f, 0.4f }) },
+
+ new()
+
+ {
+
+ Id = 2,
+
+ Vectors = (""text"", new float[] { 0.9f, 0.8f, 0.7f, 0.6f, 0.5f, 0.4f, 0.3f, 0.2f })
+
+ }
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.UpdateVectors(context.Background(), &qdrant.UpdatePointVectors{
+
+ CollectionName: ""{collection_name}"",
+
+ Points: []*qdrant.PointVectors{
+
+ {
+
+ Id: qdrant.NewIDNum(1),
+
+ Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{
+
+ ""image"": qdrant.NewVector(0.1, 0.2, 0.3, 0.4),
+
+ }),
+
+ },
+
+ {
+
+ Id: qdrant.NewIDNum(2),
+
+ Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{
+
+ ""text"": qdrant.NewVector(0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2),
+
+ }),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+To update points and replace all of their vectors, see [uploading
+
+points](#upload-points).
+
+
+
+### Delete vectors
+
+
+
+_Available as of v1.2.0_
+
+
+
+This method deletes just the specified vectors from the given points. Other
+
+vectors are kept unchanged. The points themselves are never deleted.
+
+
+
+REST API ([Schema](https://api.qdrant.tech/api-reference/points/delete-vectors)):
+
+
+
+```http
+
+POST /collections/{collection_name}/points/vectors/delete
+
+{
+
+ ""points"": [0, 3, 100],
+
+ ""vectors"": [""text"", ""image""]
+
+}
+
+```
+
+
+
+```python
+
+client.delete_vectors(
+
+ collection_name=""{collection_name}"",
+
+ points=[0, 3, 100],
+
+ vectors=[""text"", ""image""],
+
+)
+
+```
+
+
+
+```typescript
+
+client.deleteVectors(""{collection_name}"", {
+
+ points: [0, 3, 10],
+
+ vectors: [""text"", ""image""],
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{
+
+ DeletePointVectorsBuilder, PointsIdsList,
+
+};
+
+
+
+client
+
+ .delete_vectors(
+
+ DeletePointVectorsBuilder::new(""{collection_name}"")
+
+ .points_selector(PointsIdsList {
+
+ ids: vec![0.into(), 3.into(), 10.into()],
+
+ })
+
+ .vectors(vec![""text"".into(), ""image"".into()])
+
+ .wait(true),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+
+
+import static io.qdrant.client.PointIdFactory.id;
+
+
+
+client
+
+ .deleteVectorsAsync(
+
+ ""{collection_name}"", List.of(""text"", ""image""), List.of(id(0), id(3), id(10)))
+
+ .get();
+
+```
+
+
+
+To delete entire points, see [deleting points](#delete-points).
+
+
+
+### Update payload
+
+
+
+Learn how to modify the payload of a point in the [Payload](../payload/#update-payload) section.
+
+
+
+## Delete points
+
+
+
+REST API ([Schema](https://api.qdrant.tech/api-reference/points/delete-points)):
+
+
+
+```http
+
+POST /collections/{collection_name}/points/delete
+
+{
+
+ ""points"": [0, 3, 100]
+
+}
+
+```
+
+
+
+```python
+
+client.delete(
+
+ collection_name=""{collection_name}"",
+
+ points_selector=models.PointIdsList(
+
+ points=[0, 3, 100],
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.delete(""{collection_name}"", {
+
+ points: [0, 3, 100],
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{DeletePointsBuilder, PointsIdsList};
+
+
+
+client
+
+ .delete_points(
+
+ DeletePointsBuilder::new(""{collection_name}"")
+
+ .points(PointsIdsList {
+
+ ids: vec![0.into(), 3.into(), 100.into()],
+
+ })
+
+ .wait(true),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+
+
+import static io.qdrant.client.PointIdFactory.id;
+
+
+
+client.deleteAsync(""{collection_name}"", List.of(id(0), id(3), id(100)));
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.DeleteAsync(collectionName: ""{collection_name}"", ids: [0, 3, 100]);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Delete(context.Background(), &qdrant.DeletePoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Points: qdrant.NewPointsSelector(
+
+ qdrant.NewIDNum(0), qdrant.NewIDNum(3), qdrant.NewIDNum(100),
+
+ ),
+
+})
+
+```
+
+
+
+An alternative way to specify which points to remove is to use a filter.
+
+
+
+```http
+
+POST /collections/{collection_name}/points/delete
+
+{
+
+ ""filter"": {
+
+ ""must"": [
+
+ {
+
+ ""key"": ""color"",
+
+ ""match"": {
+
+ ""value"": ""red""
+
+ }
+
+ }
+
+ ]
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+client.delete(
+
+ collection_name=""{collection_name}"",
+
+ points_selector=models.FilterSelector(
+
+ filter=models.Filter(
+
+ must=[
+
+ models.FieldCondition(
+
+ key=""color"",
+
+ match=models.MatchValue(value=""red""),
+
+ ),
+
+ ],
+
+ )
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.delete(""{collection_name}"", {
+
+ filter: {
+
+ must: [
+
+ {
+
+ key: ""color"",
+
+ match: {
+
+ value: ""red"",
+
+ },
+
+ },
+
+ ],
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, DeletePointsBuilder, Filter};
+
+
+
+client
+
+ .delete_points(
+
+ DeletePointsBuilder::new(""{collection_name}"")
+
+ .points(Filter::must([Condition::matches(
+
+ ""color"",
+
+ ""red"".to_string(),
+
+ )]))
+
+ .wait(true),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+
+
+
+import io.qdrant.client.grpc.Points.Filter;
+
+
+
+client
+
+ .deleteAsync(
+
+ ""{collection_name}"",
+
+ Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.DeleteAsync(collectionName: ""{collection_name}"", filter: MatchKeyword(""color"", ""red""));
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Delete(context.Background(), &qdrant.DeletePoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Points: qdrant.NewPointsSelectorFilter(
+
+ &qdrant.Filter{
+
+ Must: []*qdrant.Condition{
+
+ qdrant.NewMatch(""color"", ""red""),
+
+ },
+
+ },
+
+ ),
+
+})
+
+```
+
+
+
+This example removes all points with `{ ""color"": ""red"" }` from the collection.
+
+
+
+## Retrieve points
+
+
+
+There is a method for retrieving points by their IDs.
+
+
+
+REST API ([Schema](https://api.qdrant.tech/api-reference/points/get-points)):
+
+
+
+```http
+
+POST /collections/{collection_name}/points
+
+{
+
+ ""ids"": [0, 3, 100]
+
+}
+
+```
+
+
+
+```python
+
+client.retrieve(
+
+ collection_name=""{collection_name}"",
+
+ ids=[0, 3, 100],
+
+)
+
+```
+
+
+
+```typescript
+
+client.retrieve(""{collection_name}"", {
+
+ ids: [0, 3, 100],
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::GetPointsBuilder;
+
+
+
+client
+
+ .get_points(GetPointsBuilder::new(
+
+ ""{collection_name}"",
+
+ vec![0.into(), 30.into(), 100.into()],
+
+ ))
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+
+
+import static io.qdrant.client.PointIdFactory.id;
+
+
+
+client
+
+ .retrieveAsync(""{collection_name}"", List.of(id(0), id(30), id(100)), false, false, null)
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.RetrieveAsync(
+
+ collectionName: ""{collection_name}"",
+
+ ids: [0, 30, 100],
+
+ withPayload: false,
+
+ withVectors: false
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Get(context.Background(), &qdrant.GetPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Ids: []*qdrant.PointId{
+
+ qdrant.NewIDNum(0), qdrant.NewIDNum(3), qdrant.NewIDNum(100),
+
+ },
+
+})
+
+```
+
+
+
+This method has additional parameters `with_vectors` and `with_payload`.
+
+Using these parameters, you can select parts of the point you want as a result.
+
+Excluding unneeded parts helps you avoid wasting traffic on data you will not use.
+
+
+
+The single point can also be retrieved via the API:
+
+
+
+REST API ([Schema](https://api.qdrant.tech/api-reference/points/get-point)):
+
+
+
+```http
+
+GET /collections/{collection_name}/points/{point_id}
+
+```
+
+
+
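+With the client libraries, the same is typically done by passing a single ID to the batch retrieval call, for example with the Python client (a minimal sketch):
+
+
+
+```python
+
+client.retrieve(
+
+    collection_name=""{collection_name}"",
+
+    ids=[129],
+
+    with_payload=True,
+
+    with_vectors=True,
+
+)
+
+```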
+
+
+
+
+## Scroll points
+
+
+
+Sometimes it might be necessary to get all stored points without knowing their IDs, or to iterate over points that match a filter.
+
+
+
+REST API ([Schema](https://api.qdrant.tech/master/api-reference/search/scroll-points)):
+
+
+
+```http
+
+POST /collections/{collection_name}/points/scroll
+
+{
+
+ ""filter"": {
+
+ ""must"": [
+
+ {
+
+ ""key"": ""color"",
+
+ ""match"": {
+
+ ""value"": ""red""
+
+ }
+
+ }
+
+ ]
+
+ },
+
+ ""limit"": 1,
+
+ ""with_payload"": true,
+
+ ""with_vector"": false
+
+}
+
+```
+
+
+
+```python
+
+client.scroll(
+
+ collection_name=""{collection_name}"",
+
+ scroll_filter=models.Filter(
+
+ must=[
+
+ models.FieldCondition(key=""color"", match=models.MatchValue(value=""red"")),
+
+ ]
+
+ ),
+
+ limit=1,
+
+ with_payload=True,
+
+ with_vectors=False,
+
+)
+
+```
+
+
+
+```typescript
+
+client.scroll(""{collection_name}"", {
+
+ filter: {
+
+ must: [
+
+ {
+
+ key: ""color"",
+
+ match: {
+
+ value: ""red"",
+
+ },
+
+ },
+
+ ],
+
+ },
+
+ limit: 1,
+
+ with_payload: true,
+
+ with_vector: false,
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
+
+
+
+client
+
+ .scroll(
+
+ ScrollPointsBuilder::new(""{collection_name}"")
+
+ .filter(Filter::must([Condition::matches(
+
+ ""color"",
+
+ ""red"".to_string(),
+
+ )]))
+
+ .limit(1)
+
+ .with_payload(true)
+
+ .with_vectors(false),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+
+import static io.qdrant.client.WithPayloadSelectorFactory.enable;
+
+
+
+import io.qdrant.client.grpc.Points.Filter;
+
+import io.qdrant.client.grpc.Points.ScrollPoints;
+
+
+
+client
+
+ .scrollAsync(
+
+ ScrollPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setFilter(Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build())
+
+ .setLimit(1)
+
+ .setWithPayload(enable(true))
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.ScrollAsync(
+
+ collectionName: ""{collection_name}"",
+
+ filter: MatchKeyword(""color"", ""red""),
+
+ limit: 1,
+
+ payloadSelector: true
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+ client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+ })
+
+
+
+ client.Scroll(context.Background(), &qdrant.ScrollPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Filter: &qdrant.Filter{
+
+ Must: []*qdrant.Condition{
+
+ qdrant.NewMatch(""color"", ""red""),
+
+ },
+
+ },
+
+ Limit: qdrant.PtrOf(uint32(1)),
+
+ WithPayload: qdrant.NewWithPayload(true),
+
+ })
+
+```
+
+
+
+Returns all points with `color` = `red`.
+
+
+
+```json
+
+{
+
+ ""result"": {
+
+ ""next_page_offset"": 1,
+
+ ""points"": [
+
+ {
+
+ ""id"": 0,
+
+ ""payload"": {
+
+ ""color"": ""red""
+
+ }
+
+ }
+
+ ]
+
+ },
+
+ ""status"": ""ok"",
+
+ ""time"": 0.0001
+
+}
+
+```
+
+
+
+The Scroll API will return all points that match the filter in a page-by-page manner.
+
+
+
+All resulting points are sorted by ID. To query the next page it is necessary to specify the largest seen ID in the `offset` field.
+
+For convenience, this ID is also returned in the field `next_page_offset`.
+
+If the value of the `next_page_offset` field is `null`, the last page has been reached.
+
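+For example, with the Python client, which returns each page together with the offset of the next one, iterating over all matching points could look roughly like this (a sketch, assuming the client and collection from the examples above):
+
+
+
+```python
+
+offset = None
+
+while True:
+
+    # The client returns the page of points and the offset of the next page
+
+    points, offset = client.scroll(
+
+        collection_name=""{collection_name}"",
+
+        limit=100,
+
+        offset=offset,
+
+        with_payload=True,
+
+        with_vectors=False,
+
+    )
+
+    for point in points:
+
+        ...  # process the current page
+
+    if offset is None:  # the last page has been reached
+
+        break
+
+```
+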
+
+
+### Order points by payload key
+
+
+
+_Available as of v1.8.0_
+
+
+
+When using the [`scroll`](#scroll-points) API, you can sort the results by payload key. For example, you can retrieve points in chronological order if your payloads have a `""timestamp""` field, as shown in the example below:
+
+
+
+
+
+
+
+```http
+
+POST /collections/{collection_name}/points/scroll
+
+{
+
+ ""limit"": 15,
+
+ ""order_by"": ""timestamp"", // <-- this!
+
+}
+
+```
+
+
+
+```python
+
+client.scroll(
+
+ collection_name=""{collection_name}"",
+
+ limit=15,
+
+ order_by=""timestamp"", # <-- this!
+
+)
+
+```
+
+
+
+```typescript
+
+client.scroll(""{collection_name}"", {
+
+ limit: 15,
+
+ order_by: ""timestamp"", // <-- this!
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{OrderByBuilder, ScrollPointsBuilder};
+
+
+
+client
+
+ .scroll(
+
+ ScrollPointsBuilder::new(""{collection_name}"")
+
+ .limit(15)
+
+ .order_by(OrderByBuilder::new(""timestamp"")),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.grpc.Points.OrderBy;
+
+import io.qdrant.client.grpc.Points.ScrollPoints;
+
+
+
+client.scrollAsync(ScrollPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setLimit(15)
+
+ .setOrderBy(OrderBy.newBuilder().setKey(""timestamp"").build())
+
+ .build()).get();
+
+```
+
+
+
+```csharp
+
+await client.ScrollAsync(""{collection_name}"", limit: 15, orderBy: ""timestamp"");
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Scroll(context.Background(), &qdrant.ScrollPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Limit: qdrant.PtrOf(uint32(15)),
+
+ OrderBy: &qdrant.OrderBy{
+
+ Key: ""timestamp"",
+
+ },
+
+})
+
+```
+
+
+
+You need to use the `order_by` `key` parameter to specify the payload key. Then you can add other fields to control the ordering, such as `direction` and `start_from`:
+
+
+
+```http
+
+""order_by"": {
+
+ ""key"": ""timestamp"",
+
+ ""direction"": ""desc"" // default is ""asc""
+
+ ""start_from"": 123, // start from this value
+
+}
+
+```
+
+
+
+```python
+
+order_by=models.OrderBy(
+
+ key=""timestamp"",
+
+ direction=""desc"", # default is ""asc""
+
+ start_from=123, # start from this value
+
+)
+
+```
+
+
+
+```typescript
+
+order_by: {
+
+ key: ""timestamp"",
+
+ direction: ""desc"", // default is ""asc""
+
+ start_from: 123, // start from this value
+
+}
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{start_from::Value, Direction, OrderByBuilder};
+
+
+
+OrderByBuilder::new(""timestamp"")
+
+ .direction(Direction::Desc.into())
+
+ .start_from(Value::Integer(123))
+
+ .build();
+
+```
+
+
+
+```java
+
+import io.qdrant.client.grpc.Points.Direction;
+
+import io.qdrant.client.grpc.Points.OrderBy;
+
+import io.qdrant.client.grpc.Points.StartFrom;
+
+
+
+OrderBy.newBuilder()
+
+ .setKey(""timestamp"")
+
+ .setDirection(Direction.Desc)
+
+ .setStartFrom(StartFrom.newBuilder()
+
+ .setInteger(123)
+
+ .build())
+
+ .build();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client.Grpc;
+
+
+
+new OrderBy
+
+{
+
+ Key = ""timestamp"",
+
+ Direction = Direction.Desc,
+
+ StartFrom = 123
+
+};
+
+```
+
+
+
+```go
+
+import ""github.com/qdrant/go-client/qdrant""
+
+
+
+qdrant.OrderBy{
+
+ Key: ""timestamp"",
+
+ Direction: qdrant.Direction_Desc.Enum(),
+
+ StartFrom: qdrant.NewStartFromInt(123),
+
+}
+
+```
+
+
+
+
+
+
+
+When sorting is based on a non-unique value, it is not possible to rely on an ID offset. Thus, `next_page_offset` is not returned within the response. However, you can still paginate by combining `""order_by"": { ""start_from"": ... }` with a `{ ""must_not"": [{ ""has_id"": [...] }] }` filter, as sketched below.
+
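+A minimal sketch of that combination with the Python client (the timestamp value and the excluded IDs are made up for illustration):
+
+
+
+```python
+
+client.scroll(
+
+    collection_name=""{collection_name}"",
+
+    limit=15,
+
+    order_by=models.OrderBy(
+
+        key=""timestamp"",
+
+        start_from=123,  # continue from the last value seen on the previous page
+
+    ),
+
+    scroll_filter=models.Filter(
+
+        must_not=[
+
+            # Exclude points already returned for this ""timestamp"" value
+
+            models.HasIdCondition(has_id=[14, 15]),
+
+        ],
+
+    ),
+
+)
+
+```
+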
+
+
+## Counting points
+
+
+
+_Available as of v0.8.4_
+
+
+
+Sometimes it can be useful to know how many points fit the filter conditions without doing a real search.
+
+
+
+For example, we can highlight the following scenarios:
+
+
+
+- Evaluating the result set size for faceted search
+
+- Determining the number of pages for pagination
+
+- Debugging the query execution speed
+
+
+
+REST API ([Schema](https://api.qdrant.tech/master/api-reference/points/count-points)):
+
+
+
+```http
+
+POST /collections/{collection_name}/points/count
+
+{
+
+ ""filter"": {
+
+ ""must"": [
+
+ {
+
+ ""key"": ""color"",
+
+ ""match"": {
+
+ ""value"": ""red""
+
+ }
+
+ }
+
+ ]
+
+ },
+
+ ""exact"": true
+
+}
+
+```
+
+
+
+```python
+
+client.count(
+
+ collection_name=""{collection_name}"",
+
+ count_filter=models.Filter(
+
+ must=[
+
+ models.FieldCondition(key=""color"", match=models.MatchValue(value=""red"")),
+
+ ]
+
+ ),
+
+ exact=True,
+
+)
+
+```
+
+
+
+```typescript
+
+client.count(""{collection_name}"", {
+
+ filter: {
+
+ must: [
+
+ {
+
+ key: ""color"",
+
+ match: {
+
+ value: ""red"",
+
+ },
+
+ },
+
+ ],
+
+ },
+
+ exact: true,
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, CountPointsBuilder, Filter};
+
+
+
+client
+
+ .count(
+
+ CountPointsBuilder::new(""{collection_name}"")
+
+ .filter(Filter::must([Condition::matches(
+
+ ""color"",
+
+ ""red"".to_string(),
+
+ )]))
+
+ .exact(true),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+
+
+
+import io.qdrant.client.grpc.Points.Filter;
+
+
+
+client
+
+ .countAsync(
+
+ ""{collection_name}"",
+
+ Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build(),
+
+ true)
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.CountAsync(
+
+ collectionName: ""{collection_name}"",
+
+ filter: MatchKeyword(""color"", ""red""),
+
+ exact: true
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Count(context.Background(), &qdrant.CountPoints{
+
+ CollectionName: ""midlib"",
+
+ Filter: &qdrant.Filter{
+
+ Must: []*qdrant.Condition{
+
+ qdrant.NewMatch(""color"", ""red""),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+Returns the number of points matching the given filtering conditions:
+
+
+
+```json
+
+{
+
+ ""count"": 3811
+
+}
+
+```
+
+
+
+## Batch update
+
+
+
+_Available as of v1.5.0_
+
+
+
+You can batch multiple point update operations. This includes inserting,
+
+updating and deleting points, vectors and payload.
+
+
+
+A batch update request consists of a list of operations. These are executed in
+
+order. These operations can be batched:
+
+
+
+- [Upsert points](#upload-points): `upsert` or `UpsertOperation`
+
+- [Delete points](#delete-points): `delete_points` or `DeleteOperation`
+
+- [Update vectors](#update-vectors): `update_vectors` or `UpdateVectorsOperation`
+
+- [Delete vectors](#delete-vectors): `delete_vectors` or `DeleteVectorsOperation`
+
+- [Set payload](/documentation/concepts/payload/#set-payload): `set_payload` or `SetPayloadOperation`
+
+- [Overwrite payload](/documentation/concepts/payload/#overwrite-payload): `overwrite_payload` or `OverwritePayload`
+
+- [Delete payload](/documentation/concepts/payload/#delete-payload-keys): `delete_payload` or `DeletePayloadOperation`
+
+- [Clear payload](/documentation/concepts/payload/#clear-payload): `clear_payload` or `ClearPayloadOperation`
+
+
+
+The following example snippet makes use of all operations.
+
+
+
+REST API ([Schema](https://api.qdrant.tech/master/api-reference/points/batch-update)):
+
+
+
+```http
+
+POST /collections/{collection_name}/points/batch
+
+{
+
+ ""operations"": [
+
+ {
+
+ ""upsert"": {
+
+ ""points"": [
+
+ {
+
+ ""id"": 1,
+
+ ""vector"": [1.0, 2.0, 3.0, 4.0],
+
+ ""payload"": {}
+
+ }
+
+ ]
+
+ }
+
+ },
+
+ {
+
+ ""update_vectors"": {
+
+ ""points"": [
+
+ {
+
+ ""id"": 1,
+
+ ""vector"": [1.0, 2.0, 3.0, 4.0]
+
+ }
+
+ ]
+
+ }
+
+ },
+
+ {
+
+ ""delete_vectors"": {
+
+ ""points"": [1],
+
+ ""vector"": [""""]
+
+ }
+
+ },
+
+ {
+
+ ""overwrite_payload"": {
+
+ ""payload"": {
+
+ ""test_payload"": ""1""
+
+ },
+
+ ""points"": [1]
+
+ }
+
+ },
+
+ {
+
+ ""set_payload"": {
+
+ ""payload"": {
+
+ ""test_payload_2"": ""2"",
+
+ ""test_payload_3"": ""3""
+
+ },
+
+ ""points"": [1]
+
+ }
+
+ },
+
+ {
+
+ ""delete_payload"": {
+
+ ""keys"": [""test_payload_2""],
+
+ ""points"": [1]
+
+ }
+
+ },
+
+ {
+
+ ""clear_payload"": {
+
+ ""points"": [1]
+
+ }
+
+ },
+
+ {""delete"": {""points"": [1]}}
+
+ ]
+
+}
+
+```
+
+
+
+```python
+
+client.batch_update_points(
+
+ collection_name=""{collection_name}"",
+
+ update_operations=[
+
+ models.UpsertOperation(
+
+ upsert=models.PointsList(
+
+ points=[
+
+ models.PointStruct(
+
+ id=1,
+
+ vector=[1.0, 2.0, 3.0, 4.0],
+
+ payload={},
+
+ ),
+
+ ]
+
+ )
+
+ ),
+
+ models.UpdateVectorsOperation(
+
+ update_vectors=models.UpdateVectors(
+
+ points=[
+
+ models.PointVectors(
+
+ id=1,
+
+ vector=[1.0, 2.0, 3.0, 4.0],
+
+ )
+
+ ]
+
+ )
+
+ ),
+
+ models.DeleteVectorsOperation(
+
+ delete_vectors=models.DeleteVectors(points=[1], vector=[""""])
+
+ ),
+
+ models.OverwritePayloadOperation(
+
+ overwrite_payload=models.SetPayload(
+
+ payload={""test_payload"": 1},
+
+ points=[1],
+
+ )
+
+ ),
+
+ models.SetPayloadOperation(
+
+ set_payload=models.SetPayload(
+
+ payload={
+
+ ""test_payload_2"": 2,
+
+ ""test_payload_3"": 3,
+
+ },
+
+ points=[1],
+
+ )
+
+ ),
+
+ models.DeletePayloadOperation(
+
+ delete_payload=models.DeletePayload(keys=[""test_payload_2""], points=[1])
+
+ ),
+
+ models.ClearPayloadOperation(clear_payload=models.PointIdsList(points=[1])),
+
+ models.DeleteOperation(delete=models.PointIdsList(points=[1])),
+
+ ],
+
+)
+
+```
+
+
+
+```typescript
+
+client.batchUpdate(""{collection_name}"", {
+
+ operations: [
+
+ {
+
+ upsert: {
+
+ points: [
+
+ {
+
+ id: 1,
+
+ vector: [1.0, 2.0, 3.0, 4.0],
+
+ payload: {},
+
+ },
+
+ ],
+
+ },
+
+ },
+
+ {
+
+ update_vectors: {
+
+ points: [
+
+ {
+
+ id: 1,
+
+ vector: [1.0, 2.0, 3.0, 4.0],
+
+ },
+
+ ],
+
+ },
+
+ },
+
+ {
+
+ delete_vectors: {
+
+ points: [1],
+
+ vector: [""""],
+
+ },
+
+ },
+
+ {
+
+ overwrite_payload: {
+
+ payload: {
+
+ test_payload: 1,
+
+ },
+
+ points: [1],
+
+ },
+
+ },
+
+ {
+
+ set_payload: {
+
+ payload: {
+
+ test_payload_2: 2,
+
+ test_payload_3: 3,
+
+ },
+
+ points: [1],
+
+ },
+
+ },
+
+ {
+
+ delete_payload: {
+
+ keys: [""test_payload_2""],
+
+ points: [1],
+
+ },
+
+ },
+
+ {
+
+ clear_payload: {
+
+ points: [1],
+
+ },
+
+ },
+
+ {
+
+ delete: {
+
+ points: [1],
+
+ },
+
+ },
+
+ ],
+
+});
+
+```
+
+
+
+```rust
+
+use std::collections::HashMap;
+
+
+
+use qdrant_client::qdrant::{
+
+ points_update_operation::{
+
+ ClearPayload, DeletePayload, DeletePoints, DeleteVectors, Operation, OverwritePayload,
+
+ PointStructList, SetPayload, UpdateVectors,
+
+ },
+
+ PointStruct, PointVectors, PointsUpdateOperation, UpdateBatchPointsBuilder, VectorsSelector,
+
+};
+
+use qdrant_client::Payload;
+
+
+
+client
+
+ .update_points_batch(
+
+ UpdateBatchPointsBuilder::new(
+
+ ""{collection_name}"",
+
+ vec![
+
+ PointsUpdateOperation {
+
+ operation: Some(Operation::Upsert(PointStructList {
+
+ points: vec![PointStruct::new(
+
+ 1,
+
+ vec![1.0, 2.0, 3.0, 4.0],
+
+ Payload::default(),
+
+ )],
+
+ ..Default::default()
+
+ })),
+
+ },
+
+ PointsUpdateOperation {
+
+ operation: Some(Operation::UpdateVectors(UpdateVectors {
+
+ points: vec![PointVectors {
+
+ id: Some(1.into()),
+
+ vectors: Some(vec![1.0, 2.0, 3.0, 4.0].into()),
+
+ }],
+
+ ..Default::default()
+
+ })),
+
+ },
+
+ PointsUpdateOperation {
+
+ operation: Some(Operation::DeleteVectors(DeleteVectors {
+
+ points_selector: Some(vec![1.into()].into()),
+
+ vectors: Some(VectorsSelector {
+
+ names: vec!["""".into()],
+
+ }),
+
+ ..Default::default()
+
+ })),
+
+ },
+
+ PointsUpdateOperation {
+
+ operation: Some(Operation::OverwritePayload(OverwritePayload {
+
+ points_selector: Some(vec![1.into()].into()),
+
+ payload: HashMap::from([(""test_payload"".to_string(), 1.into())]),
+
+ ..Default::default()
+
+ })),
+
+ },
+
+ PointsUpdateOperation {
+
+ operation: Some(Operation::SetPayload(SetPayload {
+
+ points_selector: Some(vec![1.into()].into()),
+
+ payload: HashMap::from([
+
+ (""test_payload_2"".to_string(), 2.into()),
+
+ (""test_payload_3"".to_string(), 3.into()),
+
+ ]),
+
+ ..Default::default()
+
+ })),
+
+ },
+
+ PointsUpdateOperation {
+
+ operation: Some(Operation::DeletePayload(DeletePayload {
+
+ points_selector: Some(vec![1.into()].into()),
+
+ keys: vec![""test_payload_2"".to_string()],
+
+ ..Default::default()
+
+ })),
+
+ },
+
+ PointsUpdateOperation {
+
+ operation: Some(Operation::ClearPayload(ClearPayload {
+
+ points: Some(vec![1.into()].into()),
+
+ ..Default::default()
+
+ })),
+
+ },
+
+ PointsUpdateOperation {
+
+ operation: Some(Operation::DeletePoints(DeletePoints {
+
+ points: Some(vec![1.into()].into()),
+
+ ..Default::default()
+
+ })),
+
+ },
+
+ ],
+
+ )
+
+ .wait(true),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+import java.util.Map;
+
+
+
+import static io.qdrant.client.PointIdFactory.id;
+
+import static io.qdrant.client.ValueFactory.value;
+
+import static io.qdrant.client.VectorsFactory.vectors;
+
+
+
+import io.qdrant.client.grpc.Points.PointStruct;
+
+import io.qdrant.client.grpc.Points.PointVectors;
+
+import io.qdrant.client.grpc.Points.PointsIdsList;
+
+import io.qdrant.client.grpc.Points.PointsSelector;
+
+import io.qdrant.client.grpc.Points.PointsUpdateOperation;
+
+import io.qdrant.client.grpc.Points.PointsUpdateOperation.ClearPayload;
+
+import io.qdrant.client.grpc.Points.PointsUpdateOperation.DeletePayload;
+
+import io.qdrant.client.grpc.Points.PointsUpdateOperation.DeletePoints;
+
+import io.qdrant.client.grpc.Points.PointsUpdateOperation.DeleteVectors;
+
+import io.qdrant.client.grpc.Points.PointsUpdateOperation.PointStructList;
+
+import io.qdrant.client.grpc.Points.PointsUpdateOperation.SetPayload;
+
+import io.qdrant.client.grpc.Points.PointsUpdateOperation.UpdateVectors;
+
+import io.qdrant.client.grpc.Points.VectorsSelector;
+
+
+
+client
+
+ .batchUpdateAsync(
+
+ ""{collection_name}"",
+
+ List.of(
+
+ PointsUpdateOperation.newBuilder()
+
+ .setUpsert(
+
+ PointStructList.newBuilder()
+
+ .addPoints(
+
+ PointStruct.newBuilder()
+
+ .setId(id(1))
+
+ .setVectors(vectors(1.0f, 2.0f, 3.0f, 4.0f))
+
+ .build())
+
+ .build())
+
+ .build(),
+
+ PointsUpdateOperation.newBuilder()
+
+ .setUpdateVectors(
+
+ UpdateVectors.newBuilder()
+
+ .addPoints(
+
+ PointVectors.newBuilder()
+
+ .setId(id(1))
+
+ .setVectors(vectors(1.0f, 2.0f, 3.0f, 4.0f))
+
+ .build())
+
+ .build())
+
+ .build(),
+
+ PointsUpdateOperation.newBuilder()
+
+ .setDeleteVectors(
+
+ DeleteVectors.newBuilder()
+
+ .setPointsSelector(
+
+ PointsSelector.newBuilder()
+
+ .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
+
+ .build())
+
+ .setVectors(VectorsSelector.newBuilder().addNames("""").build())
+
+ .build())
+
+ .build(),
+
+ PointsUpdateOperation.newBuilder()
+
+ .setOverwritePayload(
+
+ SetPayload.newBuilder()
+
+ .setPointsSelector(
+
+ PointsSelector.newBuilder()
+
+ .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
+
+ .build())
+
+ .putAllPayload(Map.of(""test_payload"", value(1)))
+
+ .build())
+
+ .build(),
+
+ PointsUpdateOperation.newBuilder()
+
+ .setSetPayload(
+
+ SetPayload.newBuilder()
+
+ .setPointsSelector(
+
+ PointsSelector.newBuilder()
+
+ .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
+
+ .build())
+
+ .putAllPayload(
+
+ Map.of(""test_payload_2"", value(2), ""test_payload_3"", value(3)))
+
+ .build())
+
+ .build(),
+
+ PointsUpdateOperation.newBuilder()
+
+ .setDeletePayload(
+
+ DeletePayload.newBuilder()
+
+ .setPointsSelector(
+
+ PointsSelector.newBuilder()
+
+ .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
+
+ .build())
+
+ .addKeys(""test_payload_2"")
+
+ .build())
+
+ .build(),
+
+ PointsUpdateOperation.newBuilder()
+
+ .setClearPayload(
+
+ ClearPayload.newBuilder()
+
+ .setPoints(
+
+ PointsSelector.newBuilder()
+
+ .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
+
+ .build())
+
+ .build())
+
+ .build(),
+
+ PointsUpdateOperation.newBuilder()
+
+ .setDeletePoints(
+
+ DeletePoints.newBuilder()
+
+ .setPoints(
+
+ PointsSelector.newBuilder()
+
+ .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build())
+
+ .build())
+
+ .build())
+
+ .build()))
+
+ .get();
+
+```
+
+
+
+To batch many points with a single operation type, use the batching
+
+functionality of that operation directly.
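+
+
+
+For example, if every operation would be an upsert, a single `upsert` call that carries all points at once is the simpler option. Below is a minimal Python sketch of that approach; the collection name and the 4-dimensional vectors are placeholders rather than values from this guide.
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+# Hypothetical bulk upsert: all points travel in one request instead of
+
+# being wrapped into separate batch-update operations.
+
+client.upsert(
+
+    collection_name=""{collection_name}"",
+
+    points=[
+
+        models.PointStruct(id=i, vector=[0.1, 0.2, 0.3, 0.4])
+
+        for i in range(100)
+
+    ],
+
+)
+
+```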
+
+
+
+
+
+## Awaiting result
+
+
+
+If the API is called with the `&wait=false` parameter, or if it is not explicitly specified, the client will receive an acknowledgment of receiving data:
+
+
+
+```json
+
+{
+
+ ""result"": {
+
+ ""operation_id"": 123,
+
+ ""status"": ""acknowledged""
+
+ },
+
+ ""status"": ""ok"",
+
+ ""time"": 0.000206061
+
+}
+
+```
+
+
+
+This response does not mean that the data is available for retrieval yet. Qdrant
+
+uses a form of eventual consistency here: the collection is updated in the background,
+
+so it may take a short amount of time before the change is actually processed. In
+
+fact, it is possible that such a request eventually fails.
+
+If inserting a lot of vectors, we also recommend using asynchronous requests to take advantage of pipelining.
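+
+
+
+As a rough illustration, the Python client also ships an `AsyncQdrantClient`; the sketch below fires several acknowledged (`wait=False`) upserts concurrently so that requests can be pipelined. The collection name and vectors are placeholders.
+
+
+
+```python
+
+import asyncio
+
+
+
+from qdrant_client import AsyncQdrantClient, models
+
+
+
+async def main():
+
+    client = AsyncQdrantClient(url=""http://localhost:6333"")
+
+    # Issue several upserts at once; each returns as soon as it is acknowledged.
+
+    await asyncio.gather(
+
+        *[
+
+            client.upsert(
+
+                collection_name=""{collection_name}"",
+
+                points=[models.PointStruct(id=i, vector=[0.1, 0.2, 0.3, 0.4])],
+
+                wait=False,
+
+            )
+
+            for i in range(10)
+
+        ]
+
+    )
+
+
+
+asyncio.run(main())
+
+```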
+
+
+
+If the logic of your application requires a guarantee that the vector will be available for searching immediately after the API responds, then use the flag `?wait=true`.
+
+In this case, the API will return the result only after the operation is finished:
+
+
+
+```json
+
+{
+
+ ""result"": {
+
+ ""operation_id"": 0,
+
+ ""status"": ""completed""
+
+ },
+
+ ""status"": ""ok"",
+
+ ""time"": 0.000206061
+
+}
+
+```",documentation/concepts/points.md
+"---
+
+title: Vectors
+
+weight: 41
+
+aliases:
+
+ - /vectors
+
+---
+
+
+
+
+
+# Vectors
+
+
+
+Vectors (or embeddings) are the core concept of the Qdrant Vector Search engine.
+
+Vectors define the similarity between objects in the vector space.
+
+
+
+If a pair of vectors are similar in vector space, it means that the objects they represent are similar in some way.
+
+
+
+For example, if you have a collection of images, you can represent each image as a vector.
+
+If two images are similar, their vectors will be close to each other in the vector space.
+
+
+
+In order to obtain a vector representation of an object, you need to apply a vectorization algorithm to the object.
+
+Usually, this algorithm is a neural network that converts the object into a fixed-size vector.
+
+
+
+The neural network is usually [trained](/articles/metric-learning-tips/) on pairs or [triplets](/articles/triplet-loss/) of similar and dissimilar objects, so it learns to recognize a specific type of similarity.
+
+
+
+By using this property of vectors, you can explore your data in a number of ways; e.g. by searching for similar objects, clustering objects, and more.
+
+
+
+
+
+## Vector Types
+
+
+
+Modern neural networks can output vectors in different shapes and sizes, and Qdrant supports most of them.
+
+Let's take a look at the most common types of vectors supported by Qdrant.
+
+
+
+
+
+### Dense Vectors
+
+
+
+This is the most common type of vector. It is a simple list of numbers: it has a fixed length, and each element of the list is a floating-point number.
+
+
+
+It looks like this:
+
+
+
+```json
+
+
+
+// A piece of a real-world dense vector
+
+[
+
+ -0.013052909,
+
+ 0.020387933,
+
+ -0.007869,
+
+ -0.11111383,
+
+ -0.030188112,
+
+ -0.0053388323,
+
+ 0.0010654867,
+
+ 0.072027855,
+
+ -0.04167721,
+
+ 0.014839341,
+
+ -0.032948174,
+
+ -0.062975034,
+
+ -0.024837125,
+
+ ....
+
+]
+
+```
+
+
+
+The majority of neural networks create dense vectors, so you can use them with Qdrant without any additional processing.
+
+Although compatible with most embedding models out there, Qdrant has been tested with the following [verified embedding providers](/documentation/embeddings/).
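+
+
+
+For instance, a dense vector produced by an off-the-shelf text embedding model can be uploaded as-is. A short Python sketch follows; the `sentence-transformers` model name is an arbitrary example, and the collection is assumed to be created with a matching vector size (384 for this model).
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+from sentence_transformers import SentenceTransformer
+
+
+
+# ""all-MiniLM-L6-v2"" is only an example; any dense embedding model works.
+
+model = SentenceTransformer(""all-MiniLM-L6-v2"")
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+embedding = model.encode(""A photo of a cheeseburger"")  # 384-dimensional dense vector
+
+
+
+client.upsert(
+
+    collection_name=""{collection_name}"",
+
+    points=[models.PointStruct(id=1, vector=embedding.tolist())],
+
+)
+
+```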
+
+
+
+### Sparse Vectors
+
+
+
+Sparse vectors are a special type of vectors.
+
+Mathematically, they are the same as dense vectors, but they contain many zeros, so they are stored in a special format.
+
+
+
+Sparse vectors in Qdrant don't have a fixed length; it is determined dynamically during vector insertion.
+
+
+
+In order to define a sparse vector, you need to provide a list of non-zero elements and their indices.
+
+
+
+```json
+
+// A sparse vector with 4 non-zero elements
+
+{
+
+    ""indices"": [1, 3, 5, 7],
+
+ ""values"": [0.1, 0.2, 0.3, 0.4]
+
+}
+
+```
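+
+
+
+The same vector written densely would simply carry those values at positions 1, 3, 5 and 7, with zeros everywhere else. A small, purely illustrative Python snippet (not part of the Qdrant API) shows how a mostly-zero dense vector maps to the indices/values pair:
+
+
+
+```python
+
+# A mostly-zero dense vector...
+
+dense = [0.0, 0.1, 0.0, 0.2, 0.0, 0.3, 0.0, 0.4]
+
+
+
+# ...keeps only its non-zero positions when written as a sparse vector.
+
+indices = [i for i, value in enumerate(dense) if value != 0.0]
+
+values = [value for value in dense if value != 0.0]
+
+
+
+print(indices)  # [1, 3, 5, 7]
+
+print(values)   # [0.1, 0.2, 0.3, 0.4]
+
+```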
+
+
+
+Sparse vectors in Qdrant are kept in special storage and indexed in a separate index, so their configuration is different from dense vectors.
+
+
+
+To create a collection with sparse vectors:
+
+
+
+
+
+```http
+
+PUT /collections/{collection_name}
+
+{
+
+ ""sparse_vectors"": {
+
+ ""text"": { },
+
+ }
+
+}
+
+```
+
+
+
+```bash
+
+curl -X PUT http://localhost:6333/collections/{collection_name} \
+
+ -H 'Content-Type: application/json' \
+
+ --data-raw '{
+
+ ""sparse_vectors"": {
+
+ ""text"": { }
+
+ }
+
+ }'
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.create_collection(
+
+ collection_name=""{collection_name}"",
+
+ sparse_vectors_config={
+
+ ""text"": models.SparseVectorParams(),
+
+ },
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.createCollection(""{collection_name}"", {
+
+ sparse_vectors: {
+
+ text: { },
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{
+
+ CreateCollectionBuilder, SparseVectorParamsBuilder, SparseVectorsConfigBuilder,
+
+};
+
+
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+let mut sparse_vectors_config = SparseVectorsConfigBuilder::default();
+
+
+
+sparse_vectors_config.add_named_vector_params(""text"", SparseVectorParamsBuilder::default());
+
+
+
+client
+
+ .create_collection(
+
+ CreateCollectionBuilder::new(""{collection_name}"")
+
+ .sparse_vectors_config(sparse_vectors_config),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Collections.CreateCollection;
+
+import io.qdrant.client.grpc.Collections.SparseVectorConfig;
+
+import io.qdrant.client.grpc.Collections.SparseVectorParams;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .createCollectionAsync(
+
+ CreateCollection.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setSparseVectorsConfig(
+
+ SparseVectorConfig.newBuilder()
+
+ .putMap(""text"", SparseVectorParams.getDefaultInstance()))
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.CreateCollectionAsync(
+
+ collectionName: ""{collection_name}"",
+
+ sparseVectorsConfig: (""text"", new SparseVectorParams())
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
+
+ CollectionName: ""{collection_name}"",
+
+ SparseVectorsConfig: qdrant.NewSparseVectorsConfig(
+
+ map[string]*qdrant.SparseVectorParams{
+
+ ""text"": {},
+
+ }),
+
+})
+
+```
+
+
+
+Insert a point with a sparse vector into the created collection:
+
+
+
+```http
+
+PUT /collections/{collection_name}/points
+
+{
+
+ ""points"": [
+
+ {
+
+ ""id"": 1,
+
+ ""vector"": {
+
+ ""text"": {
+
+ ""indices"": [1, 3, 5, 7],
+
+ ""values"": [0.1, 0.2, 0.3, 0.4]
+
+ }
+
+ }
+
+ }
+
+ ]
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.upsert(
+
+ collection_name=""{collection_name}"",
+
+ points=[
+
+ models.PointStruct(
+
+ id=1,
+
+ payload={}, # Add any additional payload if necessary
+
+ vector={
+
+ ""text"": models.SparseVector(
+
+ indices=[1, 3, 5, 7],
+
+ values=[0.1, 0.2, 0.3, 0.4]
+
+ )
+
+ },
+
+ )
+
+ ],
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.upsert(""{collection_name}"", {
+
+ points: [
+
+ {
+
+ id: 1,
+
+ vector: {
+
+ text: {
+
+ indices: [1, 3, 5, 7],
+
+ values: [0.1, 0.2, 0.3, 0.4]
+
+ },
+
+ },
+
+ }
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{NamedVectors, PointStruct, UpsertPointsBuilder, Vector};
+
+
+
+use qdrant_client::{Payload, Qdrant};
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+let points = vec![PointStruct::new(
+
+ 1,
+
+ NamedVectors::default().add_vector(
+
+ ""text"",
+
+ Vector::new_sparse(vec![1, 3, 5, 7], vec![0.1, 0.2, 0.3, 0.4]),
+
+ ),
+
+ Payload::new(),
+
+)];
+
+
+
+client
+
+ .upsert_points(UpsertPointsBuilder::new(""{collection_name}"", points))
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+import java.util.Map;
+
+
+
+import static io.qdrant.client.PointIdFactory.id;
+
+import static io.qdrant.client.VectorFactory.vector;
+
+import static io.qdrant.client.VectorsFactory.namedVectors;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.PointStruct;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .upsertAsync(
+
+ ""{collection_name}"",
+
+ List.of(
+
+ PointStruct.newBuilder()
+
+ .setId(id(1))
+
+ .setVectors(
+
+ namedVectors(Map.of(
+
+                            ""text"", vector(List.of(0.1f, 0.2f, 0.3f, 0.4f), List.of(1, 3, 5, 7))))
+
+ )
+
+ .build()))
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.UpsertAsync(
+
+ collectionName: ""{collection_name}"",
+
+ points: new List < PointStruct > {
+
+ new() {
+
+ Id = 1,
+
+ Vectors = new Dictionary < string, Vector > {
+
+ [""text""] = ([0.1 f, 0.2 f, 0.3 f, 0.4 f], [1, 3, 5, 7])
+
+ }
+
+ }
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Upsert(context.Background(), &qdrant.UpsertPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Points: []*qdrant.PointStruct{
+
+ {
+
+ Id: qdrant.NewIDNum(1),
+
+ Vectors: qdrant.NewVectorsMap(
+
+ map[string]*qdrant.Vector{
+
+ ""text"": qdrant.NewVectorSparse(
+
+ []uint32{1, 3, 5, 7},
+
+ []float32{0.1, 0.2, 0.3, 0.4}),
+
+ }),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+Now you can run a search with sparse vectors:
+
+
+
+```http
+
+POST /collections/{collection_name}/points/query
+
+{
+
+ ""query"": {
+
+ ""indices"": [1, 3, 5, 7],
+
+ ""values"": [0.1, 0.2, 0.3, 0.4]
+
+ },
+
+ ""using"": ""text""
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+
+
+result = client.query_points(
+
+ collection_name=""{collection_name}"",
+
+    query=models.SparseVector(indices=[1, 3, 5, 7], values=[0.1, 0.2, 0.3, 0.4]),
+
+ using=""text"",
+
+).points
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::QueryPointsBuilder;
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client
+
+ .query(
+
+ QueryPointsBuilder::new(""{collection_name}"")
+
+ .query(vec![(1, 0.2), (3, 0.1), (5, 0.9), (7, 0.7)])
+
+ .limit(10)
+
+ .using(""text""),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.query(""{collection_name}"", {
+
+ query: {
+
+ indices: [1, 3, 5, 7],
+
+ values: [0.1, 0.2, 0.3, 0.4]
+
+ },
+
+ using: ""text"",
+
+ limit: 3,
+
+});
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.QueryPoints;
+
+
+
+import static io.qdrant.client.QueryFactory.nearest;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client.queryAsync(
+
+ QueryPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setUsing(""text"")
+
+ .setQuery(nearest(List.of(0.1f, 0.2f, 0.3f, 0.4f), List.of(1, 3, 5, 7)))
+
+ .setLimit(3)
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.QueryAsync(
+
+ collectionName: ""{collection_name}"",
+
+ query: new (float, uint)[] {(0.1f, 1), (0.2f, 3), (0.3f, 5), (0.4f, 7)},
+
+ usingVector: ""text"",
+
+ limit: 3
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Query(context.Background(), &qdrant.QueryPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Query: qdrant.NewQuerySparse(
+
+ []uint32{1, 3, 5, 7},
+
+ []float32{0.1, 0.2, 0.3, 0.4}),
+
+ Using: qdrant.PtrOf(""text""),
+
+})
+
+```
+
+
+
+### Multivectors
+
+
+
+**Available as of v1.10.0**
+
+
+
+Qdrant supports storing a variable number of same-shaped dense vectors in a single point.
+
+This means that instead of a single dense vector, you can upload a matrix of dense vectors.
+
+
+
+The length of each vector in the matrix is fixed, but the number of vectors in the matrix can be different for each point.
+
+
+
+Multivectors look like this:
+
+
+
+```json
+
+// A multivector of size 4
+
+""vector"": [
+
+ [-0.013, 0.020, -0.007, -0.111],
+
+ [-0.030, -0.055, 0.001, 0.072],
+
+ [-0.041, 0.014, -0.032, -0.062],
+
+ ....
+
+]
+
+
+
+```
+
+
+
+There are two scenarios where multivectors are useful:
+
+
+
+* **Multiple representation of the same object** - For example, you can store multiple embeddings for pictures of the same object, taken from different angles. This approach assumes that the payload is the same for all vectors.
+
+* **Late interaction embeddings** - Some text embedding models can output multiple vectors for a single text.
+
+For example, models from the ColBERT family output a relatively small vector for each token in the text.
+
+
+
+In order to use multivectors, we need to specify a function that will be used to compare matrices of vectors.
+
+
+
+Currently, Qdrant supports the `max_sim` function, which is defined as the sum of the maximum similarities between each vector of one matrix and the vectors of the other matrix.
+
+
+
+$$
+
+score = \sum_{i=1}^{N} \max_{j=1}^{M} \text{Sim}(\text{vectorA}_i, \text{vectorB}_j)
+
+$$
+
+
+
+Where $N$ is the number of vectors in the first matrix, $M$ is the number of vectors in the second matrix, and $\text{Sim}$ is a similarity function, for example, cosine similarity.
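+
+
+
+As an illustration only (this is not Qdrant's internal implementation), the formula can be reproduced in a few lines of NumPy, here using cosine similarity as $\text{Sim}$:
+
+
+
+```python
+
+import numpy as np
+
+
+
+def max_sim(query_matrix: np.ndarray, stored_matrix: np.ndarray) -> float:
+
+    # Cosine similarity between every pair of rows: normalize, then dot product.
+
+    a = query_matrix / np.linalg.norm(query_matrix, axis=1, keepdims=True)
+
+    b = stored_matrix / np.linalg.norm(stored_matrix, axis=1, keepdims=True)
+
+    similarities = a @ b.T  # shape (N, M)
+
+    # For each of the N query vectors take its best match, then sum the maxima.
+
+    return float(similarities.max(axis=1).sum())
+
+
+
+query = np.random.rand(3, 4)   # N = 3 vectors of size 4
+
+stored = np.random.rand(5, 4)  # M = 5 vectors of size 4
+
+print(max_sim(query, stored))
+
+```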
+
+
+
+To use multivectors, create a collection with the following configuration:
+
+
+
+```http
+
+PUT collections/{collection_name}
+
+{
+
+ ""vectors"": {
+
+ ""size"": 128,
+
+ ""distance"": ""Cosine"",
+
+ ""multivector_config"": {
+
+ ""comparator"": ""max_sim""
+
+ }
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.create_collection(
+
+ collection_name=""{collection_name}"",
+
+ vectors_config=models.VectorParams(
+
+ size=128,
+
+ distance=models.Distance.Cosine,
+
+ multivector_config=models.MultiVectorConfig(
+
+ comparator=models.MultiVectorComparator.MAX_SIM
+
+ ),
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.createCollection(""{collection_name}"", {
+
+ vectors: {
+
+ size: 128,
+
+ distance: ""Cosine"",
+
+ multivector_config: {
+
+ comparator: ""max_sim""
+
+ }
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{
+
+ CreateCollectionBuilder, Distance, VectorParamsBuilder,
+
+ MultiVectorComparator, MultiVectorConfigBuilder,
+
+};
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client
+
+ .create_collection(
+
+ CreateCollectionBuilder::new(""{collection_name}"")
+
+ .vectors_config(
+
+                VectorParamsBuilder::new(128, Distance::Cosine)
+
+ .multivector_config(
+
+ MultiVectorConfigBuilder::new(MultiVectorComparator::MaxSim)
+
+ ),
+
+ ),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Collections.Distance;
+
+import io.qdrant.client.grpc.Collections.MultiVectorComparator;
+
+import io.qdrant.client.grpc.Collections.MultiVectorConfig;
+
+import io.qdrant.client.grpc.Collections.VectorParams;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client.createCollectionAsync(""{collection_name}"",
+
+ VectorParams.newBuilder().setSize(128)
+
+ .setDistance(Distance.Cosine)
+
+ .setMultivectorConfig(MultiVectorConfig.newBuilder()
+
+ .setComparator(MultiVectorComparator.MaxSim)
+
+ .build())
+
+ .build()).get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.CreateCollectionAsync(
+
+ collectionName: ""{collection_name}"",
+
+ vectorsConfig: new VectorParams {
+
+ Size = 128,
+
+ Distance = Distance.Cosine,
+
+ MultivectorConfig = new() {
+
+ Comparator = MultiVectorComparator.MaxSim
+
+ }
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
+
+ CollectionName: ""{collection_name}"",
+
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
+
+ Size: 128,
+
+ Distance: qdrant.Distance_Cosine,
+
+ MultivectorConfig: &qdrant.MultiVectorConfig{
+
+ Comparator: qdrant.MultiVectorComparator_MaxSim,
+
+ },
+
+ }),
+
+})
+
+```
+
+
+
+To insert a point with multivector:
+
+
+
+```http
+
+PUT collections/{collection_name}/points
+
+{
+
+ ""points"": [
+
+ {
+
+ ""id"": 1,
+
+ ""vector"": [
+
+ [-0.013, 0.020, -0.007, -0.111, ...],
+
+ [-0.030, -0.055, 0.001, 0.072, ...],
+
+ [-0.041, 0.014, -0.032, -0.062, ...]
+
+ ]
+
+ }
+
+ ]
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.upsert(
+
+ collection_name=""{collection_name}"",
+
+ points=[
+
+ models.PointStruct(
+
+ id=1,
+
+ vector=[
+
+ [-0.013, 0.020, -0.007, -0.111, ...],
+
+ [-0.030, -0.055, 0.001, 0.072, ...],
+
+ [-0.041, 0.014, -0.032, -0.062, ...]
+
+ ],
+
+ )
+
+ ],
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.upsert(""{collection_name}"", {
+
+ points: [
+
+ {
+
+ id: 1,
+
+ vector: [
+
+ [-0.013, 0.020, -0.007, -0.111, ...],
+
+ [-0.030, -0.055, 0.001, 0.072, ...],
+
+ [-0.041, 0.014, -0.032, -0.062, ...]
+
+ ],
+
+ }
+
+ ]
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder, Vector};
+
+use qdrant_client::{Payload, Qdrant};
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+let points = vec![
+
+ PointStruct::new(
+
+ 1,
+
+ Vector::new_multi(vec![
+
+ vec![-0.013, 0.020, -0.007, -0.111],
+
+ vec![-0.030, -0.055, 0.001, 0.072],
+
+ vec![-0.041, 0.014, -0.032, -0.062],
+
+ ]),
+
+ Payload::new()
+
+ )
+
+];
+
+
+
+client
+
+ .upsert_points(
+
+ UpsertPointsBuilder::new(""{collection_name}"", points)
+
+ ).await?;
+
+
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+
+
+import static io.qdrant.client.PointIdFactory.id;
+
+import static io.qdrant.client.VectorsFactory.vectors;
+
+import static io.qdrant.client.VectorFactory.multiVector;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.PointStruct;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+.upsertAsync(
+
+ ""{collection_name}"",
+
+ List.of(
+
+ PointStruct.newBuilder()
+
+ .setId(id(1))
+
+ .setVectors(vectors(multiVector(new float[][] {
+
+ {-0.013f, 0.020f, -0.007f, -0.111f},
+
+ {-0.030f, -0.055f, 0.001f, 0.072f},
+
+ {-0.041f, 0.014f, -0.032f, -0.062f}
+
+ })))
+
+ .build()
+
+ ))
+
+.get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.UpsertAsync(
+
+ collectionName: ""{collection_name}"",
+
+    points: new List<PointStruct> {
+
+ new() {
+
+ Id = 1,
+
+ Vectors = new float[][] {
+
+ [-0.013f, 0.020f, -0.007f, -0.111f],
+
+        [-0.030f, -0.055f, 0.001f, 0.072f],
+
+ [-0.041f, 0.014f, -0.032f, -0.062f ],
+
+ },
+
+ },
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Upsert(context.Background(), &qdrant.UpsertPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Points: []*qdrant.PointStruct{
+
+ {
+
+ Id: qdrant.NewIDNum(1),
+
+ Vectors: qdrant.NewVectorsMulti(
+
+ [][]float32{
+
+ {-0.013, 0.020, -0.007, -0.111},
+
+ {-0.030, -0.055, 0.001, 0.072},
+
+ {-0.041, 0.014, -0.032, -0.062}}),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+To search with multivector (available in `query` API):
+
+
+
+```http
+
+POST collections/{collection_name}/points/query
+
+{
+
+ ""query"": [
+
+ [-0.013, 0.020, -0.007, -0.111, ...],
+
+ [-0.030, -0.055, 0.001, 0.072, ...],
+
+ [-0.041, 0.014, -0.032, -0.062, ...]
+
+ ]
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.query_points(
+
+ collection_name=""{collection_name}"",
+
+ query=[
+
+ [-0.013, 0.020, -0.007, -0.111, ...],
+
+ [-0.030, -0.055, 0.001, 0.072, ...],
+
+ [-0.041, 0.014, -0.032, -0.062, ...]
+
+ ],
+
+)
+
+```
+
+
+
+```typescript
+
+
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.query(""{collection_name}"", {
+
+ ""query"": [
+
+ [-0.013, 0.020, -0.007, -0.111, ...],
+
+ [-0.030, -0.055, 0.001, 0.072, ...],
+
+ [-0.041, 0.014, -0.032, -0.062, ...]
+
+ ]
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+use qdrant_client::qdrant::{ QueryPointsBuilder, VectorInput };
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+let res = client.query(
+
+ QueryPointsBuilder::new(""{collection_name}"")
+
+ .query(VectorInput::new_multi(
+
+ vec![
+
+ vec![-0.013, 0.020, -0.007, -0.111, ...],
+
+ vec![-0.030, -0.055, 0.001, 0.072, ...],
+
+ vec![-0.041, 0.014, -0.032, -0.062, ...],
+
+ ]
+
+ ))
+
+).await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.QueryFactory.nearest;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.QueryPoints;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client.queryAsync(QueryPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setQuery(nearest(new float[][] {
+
+ {-0.013f, 0.020f, -0.007f, -0.111f},
+
+ {-0.030f, -0.055f, 0.001f, 0.072f},
+
+ {-0.041f, 0.014f, -0.032f, -0.062f}
+
+ }))
+
+ .build()).get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.QueryAsync(
+
+ collectionName: ""{collection_name}"",
+
+ query: new float[][] {
+
+ [-0.013f, 0.020f, -0.007f, -0.111f],
+
+        [-0.030f, -0.055f, 0.001f, 0.072f],
+
+ [-0.041f, 0.014f, -0.032f, -0.062f],
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Query(context.Background(), &qdrant.QueryPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Query: qdrant.NewQueryMulti(
+
+ [][]float32{
+
+ {-0.013, 0.020, -0.007, -0.111},
+
+ {-0.030, -0.055, 0.001, 0.072},
+
+ {-0.041, 0.014, -0.032, -0.062},
+
+ }),
+
+})
+
+```
+
+
+
+
+
+## Named Vectors
+
+
+
+Aside from storing multiple vectors of the same shape in a single point, Qdrant supports storing multiple different vectors in a single point.
+
+
+
+Each of these vectors has its own configuration and is addressed by a unique name.
+
+Also, each vector can be of a different type and be generated by a different embedding model.
+
+
+
+To create a collection with named vectors, you need to specify a configuration for each vector:
+
+
+
+
+
+```http
+
+PUT /collections/{collection_name}
+
+{
+
+ ""vectors"": {
+
+ ""image"": {
+
+ ""size"": 4,
+
+ ""distance"": ""Dot""
+
+ },
+
+ ""text"": {
+
+ ""size"": 8,
+
+ ""distance"": ""Cosine""
+
+ }
+
+ }
+
+}
+
+```
+
+
+
+```bash
+
+curl -X PUT http://localhost:6333/collections/{collection_name} \
+
+ -H 'Content-Type: application/json' \
+
+ --data-raw '{
+
+ ""vectors"": {
+
+ ""image"": {
+
+ ""size"": 4,
+
+ ""distance"": ""Dot""
+
+ },
+
+ ""text"": {
+
+ ""size"": 8,
+
+ ""distance"": ""Cosine""
+
+ }
+
+ }
+
+ }'
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.create_collection(
+
+ collection_name=""{collection_name}"",
+
+ vectors_config={
+
+ ""image"": models.VectorParams(size=4, distance=models.Distance.DOT),
+
+ ""text"": models.VectorParams(size=8, distance=models.Distance.COSINE),
+
+ },
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.createCollection(""{collection_name}"", {
+
+ vectors: {
+
+ image: { size: 4, distance: ""Dot"" },
+
+ text: { size: 8, distance: ""Cosine"" },
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{
+
+ CreateCollectionBuilder, Distance, VectorParamsBuilder, VectorsConfigBuilder,
+
+};
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+let mut vector_config = VectorsConfigBuilder::default();
+
+vector_config.add_named_vector_params(""image"", VectorParamsBuilder::new(4, Distance::Dot));
+
+vector_config.add_named_vector_params(""text"", VectorParamsBuilder::new(8, Distance::Cosine));
+
+
+
+client
+
+ .create_collection(
+
+ CreateCollectionBuilder::new(""{collection_name}"").vectors_config(vector_config),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.Map;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Collections.Distance;
+
+import io.qdrant.client.grpc.Collections.VectorParams;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .createCollectionAsync(
+
+ ""{collection_name}"",
+
+ Map.of(
+
+ ""image"", VectorParams.newBuilder().setSize(4).setDistance(Distance.Dot).build(),
+
+ ""text"",
+
+ VectorParams.newBuilder().setSize(8).setDistance(Distance.Cosine).build()))
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.CreateCollectionAsync(
+
+ collectionName: ""{collection_name}"",
+
+ vectorsConfig: new VectorParamsMap {
+
+ Map = {
+
+ [""image""] = new VectorParams {
+
+ Size = 4, Distance = Distance.Dot
+
+ },
+
+ [""text""] = new VectorParams {
+
+ Size = 8, Distance = Distance.Cosine
+
+ },
+
+ }
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
+
+ CollectionName: ""{collection_name}"",
+
+ VectorsConfig: qdrant.NewVectorsConfigMap(
+
+ map[string]*qdrant.VectorParams{
+
+ ""image"": {
+
+ Size: 4,
+
+ Distance: qdrant.Distance_Dot,
+
+ },
+
+ ""text"": {
+
+ Size: 8,
+
+ Distance: qdrant.Distance_Cosine,
+
+ },
+
+ }),
+
+})
+
+```
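+
+
+
+Once such a collection exists, a point can carry one vector per name. A minimal Python sketch with placeholder values; each vector has to match the size configured for its name:
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.upsert(
+
+    collection_name=""{collection_name}"",
+
+    points=[
+
+        models.PointStruct(
+
+            id=1,
+
+            vector={
+
+                ""image"": [0.9, 0.1, 0.1, 0.2],  # 4 dimensions, Dot distance
+
+                ""text"": [0.4, 0.7, 0.1, 0.8, 0.1, 0.6, 0.4, 0.3],  # 8 dimensions, Cosine
+
+            },
+
+        )
+
+    ],
+
+)
+
+```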
+
+
+
+
+
+
+
+
+
+## Datatypes
+
+
+
+The newest embedding models generate vectors with very large dimensionalities.
+
+With OpenAI's `text-embedding-3-large` embedding model, the dimensionality can go up to 3072.
+
+
+
+The amount of memory required to store such vectors grows linearly with the dimensionality,
+
+so it is important to choose the right datatype for the vectors.
+
+
+
+The choice between datatypes is a trade-off between memory consumption and precision of vectors.
+
+
+
+Qdrant supports a number of datatypes for both dense and sparse vectors:
+
+
+
+**Float32**
+
+
+
+This is the default datatype for vectors in Qdrant. It is a 32-bit (4 bytes) floating-point number.
+
+A standard OpenAI embedding with 1536 dimensions requires 6 KB of memory when stored in Float32 (1536 × 4 bytes).
+
+
+
+You don't need to specify the datatype for vectors in Qdrant, as it is set to Float32 by default.
+
+
+
+**Float16**
+
+
+
+This is a 16-bit (2 bytes) floating-point number. It is also known as half-precision float.
+
+Intuitively, it looks like this:
+
+
+
+```text
+
+float32 -> float16 delta (float32 - float16).abs
+
+
+
+0.79701585 -> 0.796875 delta 0.00014084578
+
+0.7850789 -> 0.78515625 delta 0.00007736683
+
+0.7775044 -> 0.77734375 delta 0.00016063452
+
+0.85776305 -> 0.85791016 delta 0.00014710426
+
+0.6616839 -> 0.6616211 delta 0.000062823296
+
+```
+
+
+
+The main advantage of Float16 is that it requires half the memory of Float32, while having virtually no impact on the quality of vector search.
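+
+
+
+Deltas like the ones above can be reproduced with a couple of lines of NumPy; this is purely illustrative and independent of Qdrant:
+
+
+
+```python
+
+import numpy as np
+
+
+
+original = np.random.rand(5).astype(np.float32)
+
+half = original.astype(np.float16)  # lossy conversion to half precision
+
+
+
+for f32, f16 in zip(original, half):
+
+    print(f32, ""->"", f16, ""delta"", abs(f32 - np.float32(f16)))
+
+```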
+
+
+
+To use Float16, you need to specify the datatype for vectors in the collection configuration:
+
+
+
+```http
+
+PUT /collections/{collection_name}
+
+{
+
+ ""vectors"": {
+
+ ""size"": 128,
+
+ ""distance"": ""Cosine"",
+
+ ""datatype"": ""float16"" // <-- For dense vectors
+
+ },
+
+ ""sparse_vectors"": {
+
+ ""text"": {
+
+ ""index"": {
+
+ ""datatype"": ""float16"" // <-- And for sparse vectors
+
+ }
+
+ }
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.create_collection(
+
+ collection_name=""{collection_name}"",
+
+ vectors_config=models.VectorParams(
+
+ size=128,
+
+ distance=models.Distance.COSINE,
+
+ datatype=models.Datatype.FLOAT16
+
+ ),
+
+ sparse_vectors_config={
+
+ ""text"": models.SparseVectorParams(
+
+ index=models.SparseIndexConfig(datatype=models.Datatype.FLOAT16)
+
+ ),
+
+ },
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.createCollection(""{collection_name}"", {
+
+ vectors: {
+
+ size: 128,
+
+ distance: ""Cosine"",
+
+ datatype: ""float16""
+
+ },
+
+ sparse_vectors: {
+
+ text: {
+
+ index: {
+
+ datatype: ""float16""
+
+ }
+
+ }
+
+ }
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{
+
+ CreateCollectionBuilder, Datatype, Distance, SparseIndexConfigBuilder, SparseVectorParamsBuilder, SparseVectorsConfigBuilder, VectorParamsBuilder
+
+};
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+let mut sparse_vector_config = SparseVectorsConfigBuilder::default();
+
+sparse_vector_config.add_named_vector_params(
+
+ ""text"",
+
+ SparseVectorParamsBuilder::default()
+
+        .index(SparseIndexConfigBuilder::default().datatype(Datatype::Float16)),
+
+);
+
+
+
+let create_collection = CreateCollectionBuilder::new(""{collection_name}"")
+
+ .sparse_vectors_config(sparse_vector_config)
+
+ .vectors_config(
+
+ VectorParamsBuilder::new(128, Distance::Cosine).datatype(Datatype::Float16),
+
+ );
+
+
+
+client.create_collection(create_collection).await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Collections.CreateCollection;
+
+import io.qdrant.client.grpc.Collections.Datatype;
+
+import io.qdrant.client.grpc.Collections.Distance;
+
+import io.qdrant.client.grpc.Collections.SparseIndexConfig;
+
+import io.qdrant.client.grpc.Collections.SparseVectorConfig;
+
+import io.qdrant.client.grpc.Collections.SparseVectorParams;
+
+import io.qdrant.client.grpc.Collections.VectorParams;
+
+import io.qdrant.client.grpc.Collections.VectorsConfig;
+
+
+
+QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .createCollectionAsync(
+
+ CreateCollection.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setVectorsConfig(VectorsConfig.newBuilder()
+
+ .setParams(VectorParams.newBuilder()
+
+ .setSize(128)
+
+ .setDistance(Distance.Cosine)
+
+ .setDatatype(Datatype.Float16)
+
+ .build())
+
+ .build())
+
+ .setSparseVectorsConfig(
+
+ SparseVectorConfig.newBuilder()
+
+ .putMap(""text"", SparseVectorParams.newBuilder()
+
+ .setIndex(SparseIndexConfig.newBuilder()
+
+ .setDatatype(Datatype.Float16)
+
+ .build())
+
+ .build()))
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.CreateCollectionAsync(
+
+ collectionName: ""{collection_name}"",
+
+ vectorsConfig: new VectorParams {
+
+ Size = 128,
+
+ Distance = Distance.Cosine,
+
+ Datatype = Datatype.Float16
+
+ },
+
+ sparseVectorsConfig: (
+
+ ""text"",
+
+ new SparseVectorParams {
+
+ Index = new SparseIndexConfig {
+
+ Datatype = Datatype.Float16
+
+ }
+
+ }
+
+ )
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
+
+ CollectionName: ""{collection_name}"",
+
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
+
+ Size: 128,
+
+ Distance: qdrant.Distance_Cosine,
+
+ Datatype: qdrant.Datatype_Float16.Enum(),
+
+ }),
+
+ SparseVectorsConfig: qdrant.NewSparseVectorsConfig(
+
+ map[string]*qdrant.SparseVectorParams{
+
+ ""text"": {
+
+ Index: &qdrant.SparseIndexConfig{
+
+ Datatype: qdrant.Datatype_Float16.Enum(),
+
+ },
+
+ },
+
+ }),
+
+})
+
+```
+
+
+
+**Uint8**
+
+
+
+Another step towards memory optimization is to use the Uint8 datatype for vectors.
+
+Unlike Float16, Uint8 is not a floating-point number, but an integer number in the range from 0 to 255.
+
+
+
+Not all embedding models generate vectors in the range from 0 to 255, so you need to be careful when using the Uint8 datatype.
+
+
+
+In order to convert a number from float range to Uint8 range, you need to apply a process called quantization.
+
+
+
+Some embedding providers may provide embeddings in a pre-quantized format.
+
+One of the most notable examples is the [Cohere int8 & binary embeddings](https://cohere.com/blog/int8-binary-embeddings).
+
+
+
+For other embeddings, you will need to apply quantization yourself.
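+
+
+
+As a loose sketch of what applying quantization yourself can look like, the snippet below min-max scales a float vector into the 0..255 range. This is only one possible scheme, not the method used by any particular provider; real pipelines usually calibrate the bounds on a sample of the dataset rather than per vector.
+
+
+
+```python
+
+import numpy as np
+
+
+
+def to_uint8(vector: np.ndarray) -> np.ndarray:
+
+    # Naive min-max scalar quantization into the 0..255 range.
+
+    low, high = vector.min(), vector.max()
+
+    scaled = (vector - low) / (high - low)
+
+    return np.round(scaled * 255).astype(np.uint8)
+
+
+
+vector = np.random.rand(128).astype(np.float32)
+
+print(to_uint8(vector)[:8])
+
+```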
+
+
+
+
+
+
+
+
+
+
+
+```http
+
+PUT /collections/{collection_name}
+
+{
+
+ ""vectors"": {
+
+ ""size"": 128,
+
+ ""distance"": ""Cosine"",
+
+ ""datatype"": ""uint8"" // <-- For dense vectors
+
+ },
+
+ ""sparse_vectors"": {
+
+ ""text"": {
+
+ ""index"": {
+
+ ""datatype"": ""uint8"" // <-- For sparse vectors
+
+ }
+
+ }
+
+ }
+
+}
+
+```
+
+
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.create_collection(
+
+ collection_name=""{collection_name}"",
+
+ vectors_config=models.VectorParams(
+
+ size=128,
+
+ distance=models.Distance.COSINE,
+
+ datatype=models.Datatype.UINT8
+
+ ),
+
+ sparse_vectors_config={
+
+ ""text"": models.SparseVectorParams(
+
+ index=models.SparseIndexConfig(datatype=models.Datatype.UINT8)
+
+ ),
+
+ },
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.createCollection(""{collection_name}"", {
+
+ vectors: {
+
+ size: 128,
+
+ distance: ""Cosine"",
+
+ datatype: ""uint8""
+
+ },
+
+ sparse_vectors: {
+
+ text: {
+
+ index: {
+
+ datatype: ""uint8""
+
+ }
+
+ }
+
+ }
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{
+
+ CreateCollectionBuilder, Datatype, Distance, SparseIndexConfigBuilder,
+
+ SparseVectorParamsBuilder, SparseVectorsConfigBuilder, VectorParamsBuilder,
+
+};
+
+
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+let mut sparse_vector_config = SparseVectorsConfigBuilder::default();
+
+
+
+sparse_vector_config.add_named_vector_params(
+
+ ""text"",
+
+ SparseVectorParamsBuilder::default()
+
+ .index(SparseIndexConfigBuilder::default().datatype(Datatype::Uint8)),
+
+);
+
+let create_collection = CreateCollectionBuilder::new(""{collection_name}"")
+
+ .sparse_vectors_config(sparse_vector_config)
+
+ .vectors_config(
+
+ VectorParamsBuilder::new(128, Distance::Cosine)
+
+ .datatype(Datatype::Uint8)
+
+ );
+
+
+
+client.create_collection(create_collection).await?;
+
+```
+
+
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Collections.CreateCollection;
+
+import io.qdrant.client.grpc.Collections.Datatype;
+
+import io.qdrant.client.grpc.Collections.Distance;
+
+import io.qdrant.client.grpc.Collections.SparseIndexConfig;
+
+import io.qdrant.client.grpc.Collections.SparseVectorConfig;
+
+import io.qdrant.client.grpc.Collections.SparseVectorParams;
+
+import io.qdrant.client.grpc.Collections.VectorParams;
+
+import io.qdrant.client.grpc.Collections.VectorsConfig;
+
+
+
+QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .createCollectionAsync(
+
+ CreateCollection.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setVectorsConfig(VectorsConfig.newBuilder()
+
+ .setParams(VectorParams.newBuilder()
+
+ .setSize(128)
+
+ .setDistance(Distance.Cosine)
+
+ .setDatatype(Datatype.Uint8)
+
+ .build())
+
+ .build())
+
+ .setSparseVectorsConfig(
+
+ SparseVectorConfig.newBuilder()
+
+ .putMap(""text"", SparseVectorParams.newBuilder()
+
+ .setIndex(SparseIndexConfig.newBuilder()
+
+ .setDatatype(Datatype.Uint8)
+
+ .build())
+
+ .build()))
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.CreateCollectionAsync(
+
+ collectionName: ""{collection_name}"",
+
+ vectorsConfig: new VectorParams {
+
+ Size = 128,
+
+ Distance = Distance.Cosine,
+
+ Datatype = Datatype.Uint8
+
+ },
+
+ sparseVectorsConfig: (
+
+ ""text"",
+
+ new SparseVectorParams {
+
+ Index = new SparseIndexConfig {
+
+ Datatype = Datatype.Uint8
+
+ }
+
+ }
+
+ )
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.CreateCollection(context.Background(), &qdrant.CreateCollection{
+
+ CollectionName: ""{collection_name}"",
+
+ VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{
+
+ Size: 128,
+
+ Distance: qdrant.Distance_Cosine,
+
+ Datatype: qdrant.Datatype_Uint8.Enum(),
+
+ }),
+
+ SparseVectorsConfig: qdrant.NewSparseVectorsConfig(
+
+ map[string]*qdrant.SparseVectorParams{
+
+ ""text"": {
+
+ Index: &qdrant.SparseIndexConfig{
+
+ Datatype: qdrant.Datatype_Uint8.Enum(),
+
+ },
+
+ },
+
+ }),
+
+})
+
+```
+
+
+
+## Quantization
+
+
+
+Apart from changing the datatype of the original vectors, Qdrant can create quantized representations of vectors alongside the original ones.
+
+This quantized representation can be used to quickly select candidates for rescoring with the original vectors or even used directly for search.
+
+
+
+Quantization is applied in the background, during the optimization process.
+
+
+
+More information about the quantization process can be found in the [Quantization](/documentation/guides/quantization/) section.
+
+
+
+
+
+## Vector Storage
+
+
+
+Depending on the requirements of the application, Qdrant can use one of the data storage options.
+
+Keep in mind that you will have to trade off search speed against the amount of RAM used.
+
+
+
+More information about the storage options can be found in the [Storage](/documentation/concepts/storage/#vector-storage) section.
+",documentation/concepts/vectors.md
+"---
+
+title: Snapshots
+
+weight: 110
+
+aliases:
+
+ - ../snapshots
+
+---
+
+
+
+# Snapshots
+
+
+
+*Available as of v0.8.4*
+
+
+
+Snapshots are `tar` archive files that contain data and configuration of a specific collection on a specific node at a specific time. In a distributed setup, when you have multiple nodes in your cluster, you must create snapshots for each node separately when dealing with a single collection.
+
+
+
+This feature can be used to archive data or easily replicate an existing deployment. For disaster recovery, Qdrant Cloud users may prefer to use [Backups](/documentation/cloud/backups/) instead, which are physical disk-level copies of your data.
+
+
+
+For a step-by-step guide on how to use snapshots, see our [tutorial](/documentation/tutorials/create-snapshot/).
+
+
+
+## Create snapshot
+
+
+
+
+
+
+
+To create a new snapshot for an existing collection:
+
+
+
+```http
+
+POST /collections/{collection_name}/snapshots
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.create_snapshot(collection_name=""{collection_name}"")
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.createSnapshot(""{collection_name}"");
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.create_snapshot(""{collection_name}"").await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client.createSnapshotAsync(""{collection_name}"").get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.CreateSnapshotAsync(""{collection_name}"");
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.CreateSnapshot(context.Background(), ""{collection_name}"")
+
+```
+
+
+
+This is a synchronous operation for which a `tar` archive file will be generated into the `snapshot_path`.
+
+
+
+### Delete snapshot
+
+
+
+*Available as of v1.0.0*
+
+
+
+```http
+
+DELETE /collections/{collection_name}/snapshots/{snapshot_name}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.delete_snapshot(
+
+ collection_name=""{collection_name}"", snapshot_name=""{snapshot_name}""
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.deleteSnapshot(""{collection_name}"", ""{snapshot_name}"");
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::DeleteSnapshotRequestBuilder;
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client
+
+ .delete_snapshot(DeleteSnapshotRequestBuilder::new(
+
+ ""{collection_name}"",
+
+ ""{snapshot_name}"",
+
+ ))
+
+ .await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client.deleteSnapshotAsync(""{collection_name}"", ""{snapshot_name}"").get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.DeleteSnapshotAsync(collectionName: ""{collection_name}"", snapshotName: ""{snapshot_name}"");
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.DeleteSnapshot(context.Background(), ""{collection_name}"", ""{snapshot_name}"")
+
+```
+
+
+
+## List snapshot
+
+
+
+List of snapshots for a collection:
+
+
+
+```http
+
+GET /collections/{collection_name}/snapshots
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.list_snapshots(collection_name=""{collection_name}"")
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.listSnapshots(""{collection_name}"");
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.list_snapshots(""{collection_name}"").await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client.listSnapshotAsync(""{collection_name}"").get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.ListSnapshotsAsync(""{collection_name}"");
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.ListSnapshots(context.Background(), ""{collection_name}"")
+
+```
+
+
+
+## Retrieve snapshot
+
+
+
+
+
+
+
+To download a specified snapshot from a collection as a file:
+
+
+
+```http
+
+GET /collections/{collection_name}/snapshots/{snapshot_name}
+
+```
+
+
+
+```shell
+
+curl 'http://{qdrant-url}:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot' \
+
+ -H 'api-key: ********' \
+
+ --output 'filename.snapshot'
+
+```
+
+
+
+## Restore snapshot
+
+
+
+
+
+
+
+Snapshots can be restored in three possible ways:
+
+
+
+1. [Recovering from a URL or local file](#recover-from-a-url-or-local-file) (useful for restoring a snapshot file that is on a remote server or already stored on the node)
+
+2. [Recovering from an uploaded file](#recover-from-an-uploaded-file) (useful for migrating data to a new cluster)
+
+3. [Recovering during start-up](#recover-during-start-up) (useful when running a self-hosted single-node Qdrant instance)
+
+
+
+Regardless of the method used, Qdrant will extract the shard data from the snapshot and properly register shards in the cluster.
+
+If there are other active replicas of the recovered shards in the cluster, Qdrant will replicate them to the newly recovered node by default to maintain data consistency.
+
+
+
+### Recover from a URL or local file
+
+
+
+*Available as of v0.11.3*
+
+
+
+This method of recovery requires the snapshot file to be downloadable from a URL or to exist as a local file on the node (for example, if you [created the snapshot](#create-snapshot) on this node previously). If you instead need to upload a snapshot file, see the next section.
+
+
+
+To recover from a URL or local file use the [snapshot recovery endpoint](https://api.qdrant.tech/master/api-reference/snapshots/recover-from-snapshot). This endpoint accepts either a URL like `https://example.com` or a [file URI](https://en.wikipedia.org/wiki/File_URI_scheme) like `file:///tmp/snapshot-2022-10-10.snapshot`. If the target collection does not exist, it will be created.
+
+
+
+```http
+
+PUT /collections/{collection_name}/snapshots/recover
+
+{
+
+    ""location"": ""http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot""
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(url=""http://qdrant-node-2:6333"")
+
+
+
+client.recover_snapshot(
+
+ ""{collection_name}"",
+
+    ""http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot"",
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.recoverSnapshot(""{collection_name}"", {
+
+    location: ""http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot"",
+
+});
+
+```
+
+
+
+
+
+
+
+### Recover from an uploaded file
+
+
+
+The snapshot file can also be uploaded as a file and restored using the [recover from uploaded snapshot](https://api.qdrant.tech/master/api-reference/snapshots/recover-from-uploaded-snapshot). This endpoint accepts the raw snapshot data in the request body. If the target collection does not exist, it will be created.
+
+
+
+```bash
+
+curl -X POST 'http://{qdrant-url}:6333/collections/{collection_name}/snapshots/upload?priority=snapshot' \
+
+ -H 'api-key: ********' \
+
+ -H 'Content-Type:multipart/form-data' \
+
+  -F 'snapshot=@/path/to/snapshot-2022-10-10.snapshot'
+
+```
+
+
+
+This method is typically used to migrate data from one cluster to another, so we recommend setting the [priority](#snapshot-priority) to ""snapshot"" for that use-case.
+
+
+
+### Recover during start-up
+
+
+
+
+
+
+
+If you have a single-node deployment, you can recover any collection at start-up and it will be immediately available.
+
+Restoring snapshots is done through the Qdrant CLI at start-up time via the `--snapshot` argument, which accepts a list of pairs in the form `<snapshot_file_path>:<target_collection_name>`.
+
+
+
+For example:
+
+
+
+```bash
+
+./qdrant --snapshot /snapshots/test-collection-archive.snapshot:test-collection --snapshot /snapshots/test-collection-archive.snapshot:test-copy-collection
+
+```
+
+
+
+The target collection **must** be absent; otherwise, the program will exit with an error.
+
+
+
+If you wish instead to overwrite an existing collection, use the `--force_snapshot` flag with caution.
+
+
+
+### Snapshot priority
+
+
+
+When recovering a snapshot to a non-empty node, there may be conflicts between the snapshot data and the existing data. The ""priority"" setting controls how Qdrant handles these conflicts. The priority setting is important because different priorities can give very
+
+different end results. The default priority may not be best for all situations.
+
+
+
+The available snapshot recovery priorities are:
+
+
+
+- `replica`: _(default)_ prefer existing data over the snapshot.
+
+- `snapshot`: prefer snapshot data over existing data.
+
+- `no_sync`: restore snapshot without any additional synchronization.
+
+
+
+To recover a new collection from a snapshot, you need to set
+
+the priority to `snapshot`. With `snapshot` priority, all data from the snapshot
+
+will be recovered onto the cluster. With `replica` priority _(default)_, you'd
+
+end up with an empty collection because the collection on the cluster did not
+
+contain any points and that source was preferred.
+
+
+
+`no_sync` is for specialized use cases and is not commonly used. It allows
+
+managing shards and transferring shards between clusters manually without any
+
+additional synchronization. Using it incorrectly will leave your cluster in a
+
+broken state.
+
+
+
+To recover from a URL, you specify an additional parameter in the request body:
+
+
+
+```http
+
+PUT /collections/{collection_name}/snapshots/recover
+
+{
+
+    ""location"": ""http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot"",
+
+ ""priority"": ""snapshot""
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://qdrant-node-2:6333"")
+
+
+
+client.recover_snapshot(
+
+ ""{collection_name}"",
+
+    ""http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot"",
+
+ priority=models.SnapshotPriority.SNAPSHOT,
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.recoverSnapshot(""{collection_name}"", {
+
+    location: ""http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot"",
+
+ priority: ""snapshot""
+
+});
+
+```
+
+
+
+```bash
+
+curl -X POST 'http://qdrant-node-1:6333/collections/{collection_name}/snapshots/upload?priority=snapshot' \
+
+ -H 'api-key: ********' \
+
+ -H 'Content-Type:multipart/form-data' \
+
+  -F 'snapshot=@/path/to/snapshot-2022-10-10.snapshot'
+
+```
+
+
+
+## Snapshots for the whole storage
+
+
+
+*Available as of v0.8.5*
+
+
+
+Sometimes it might be handy to create a snapshot not just for a single collection, but for the whole storage, including collection aliases.
+
+Qdrant provides a dedicated API for that as well. It is similar to collection-level snapshots, but does not require `collection_name`.
+
+
+
+
+
+
+
+
+
+
+
+### Create full storage snapshot
+
+
+
+```http
+
+POST /snapshots
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.create_full_snapshot()
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.createFullSnapshot();
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.create_full_snapshot().await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client.createFullSnapshotAsync().get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.CreateFullSnapshotAsync();
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.CreateFullSnapshot(context.Background())
+
+```
+
+
+
+### Delete full storage snapshot
+
+
+
+*Available as of v1.0.0*
+
+
+
+```http
+
+DELETE /snapshots/{snapshot_name}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.delete_full_snapshot(snapshot_name=""{snapshot_name}"")
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.deleteFullSnapshot(""{snapshot_name}"");
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.delete_full_snapshot(""{snapshot_name}"").await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client.deleteFullSnapshotAsync(""{snapshot_name}"").get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.DeleteFullSnapshotAsync(""{snapshot_name}"");
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.DeleteFullSnapshot(context.Background(), ""{snapshot_name}"")
+
+```
+
+
+
+### List full storage snapshots
+
+
+
+```http
+
+GET /snapshots
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(""localhost"", port=6333)
+
+
+
+client.list_full_snapshots()
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.listFullSnapshots();
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.list_full_snapshots().await?;
+
+```
+
+
+
+```java
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client.listFullSnapshotAsync().get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.ListFullSnapshotsAsync();
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.ListFullSnapshots(context.Background())
+
+```
+
+
+
+### Download full storage snapshot
+
+
+
+
+
+
+
+```http
+
+GET /snapshots/{snapshot_name}
+
+```
+
+
+
+## Restore full storage snapshot
+
+
+
+Restoring snapshots can only be done through the Qdrant CLI at startup time.
+
+
+
+For example:
+
+
+
+```bash
+
+./qdrant --storage-snapshot /snapshots/full-snapshot-2022-07-18-11-20-51.snapshot
+
+```
+
+
+
+## Storage
+
+
+
+Created, uploaded and recovered snapshots are stored as `.snapshot` files. By
+
+default, they are stored on the [local file system](#local-file-system). You may
+
+also configure Qdrant to store them in an [S3-compatible storage](#s3) service.
+
+
+
+### Local file system
+
+
+
+By default, snapshots are stored at `./snapshots` or at `/qdrant/snapshots` when
+
+using our Docker image.
+
+
+
+The target directory can be controlled through the [configuration](../../guides/configuration/):
+
+
+
+```yaml
+
+storage:
+
+ # Specify where you want to store snapshots.
+
+ snapshots_path: ./snapshots
+
+```
+
+
+
+Alternatively, you may set the environment variable `QDRANT__STORAGE__SNAPSHOTS_PATH=./snapshots`.
+
+
+
+*Available as of v1.3.0*
+
+
+
+While a snapshot is being created, temporary files are placed in the configured
+
+storage directory by default. In case of limited capacity or a slow
+
+network-attached disk, you can specify a separate location for temporary files:
+
+
+
+```yaml
+
+storage:
+
+ # Where to store temporary files
+
+ temp_path: /tmp
+
+```
+
+
+
+### S3
+
+
+
+*Available as of v1.10.0*
+
+
+
+Rather than storing snapshots on the local file system, you can configure Qdrant
+
+to store snapshots in an S3-compatible storage service. To enable this, set it up
+
+in the [configuration](../../guides/configuration/) file.
+
+
+
+For example, to configure for AWS S3:
+
+
+
+```yaml
+
+storage:
+
+ snapshots_config:
+
+ # Use 's3' to store snapshots on S3
+
+ snapshots_storage: s3
+
+
+
+ s3_config:
+
+ # Bucket name
+
+ bucket: your_bucket_here
+
+
+
+ # Bucket region (e.g. eu-central-1)
+
+ region: your_bucket_region_here
+
+
+
+ # Storage access key
+
+ # Can be specified either here or in the `QDRANT__STORAGE__SNAPSHOTS_CONFIG__S3_CONFIG__ACCESS_KEY` environment variable.
+
+ access_key: your_access_key_here
+
+
+
+ # Storage secret key
+
+ # Can be specified either here or in the `QDRANT__STORAGE__SNAPSHOTS_CONFIG__S3_CONFIG__SECRET_KEY` environment variable.
+
+ secret_key: your_secret_key_here
+
+
+
+ # S3-Compatible Storage URL
+
+ # Can be specified either here or in the `QDRANT__STORAGE__SNAPSHOTS_CONFIG__S3_CONFIG__ENDPOINT_URL` environment variable.
+
+ endpoint_url: your_url_here
+
+```
+",documentation/concepts/snapshots.md
+"---
+
+title: Hybrid Queries
+
+weight: 57
+
+aliases:
+
+  - ../hybrid-queries
+
+hideInSidebar: false
+
+---
+
+
+
+# Hybrid and Multi-Stage Queries
+
+
+
+*Available as of v1.10.0*
+
+
+
+With the introduction of [many named vectors per point](../vectors/#named-vectors), there are use cases in which the best search results are obtained by combining multiple queries,
+
+or by performing the search in more than one stage.
+
+
+
+Qdrant has a flexible and universal interface to make this possible, called `Query API` ([API reference](https://api.qdrant.tech/api-reference/search/query-points)).
+
+
+
+The main component that makes these query combinations possible is the `prefetch` parameter, which enables sub-requests.
+
+
+
+Specifically, whenever a query has at least one prefetch, Qdrant will:
+
+1. Perform the prefetch query (or queries),
+
+2. Apply the main query over the results of its prefetch(es).
+
+
+
+Additionally, prefetches can have prefetches themselves, so you can have nested prefetches.
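+
+As an illustration, here is a minimal Python sketch of a nested prefetch. The vector names `small`, `medium` and `large` are only placeholders; complete, multi-language examples follow below.
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# The innermost prefetch runs first, its results feed the outer prefetch,
+
+# and the top-level query re-scores whatever the outer prefetch returns.
+
+client.query_points(
+
+    collection_name=""{collection_name}"",
+
+    prefetch=models.Prefetch(
+
+        prefetch=models.Prefetch(query=[0.2, 0.8], using=""small"", limit=1000),
+
+        query=[0.01, 0.45, 0.67],
+
+        using=""medium"",
+
+        limit=100,
+
+    ),
+
+    query=[0.01, 0.45, 0.67],
+
+    using=""large"",
+
+    limit=10,
+
+)
+
+```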
+
+
+
+## Hybrid Search
+
+
+
+One of the most common challenges when you have different representations of the same data is combining the points retrieved for each representation into a single result set.
+
+
+
+{{< figure src=""/docs/fusion-idea.png"" caption=""Fusing results from multiple queries"" width=""80%"" >}}
+
+
+
+For example, in text search, it is often useful to combine dense and sparse vectors to get the best of semantic similarity,
+
+plus the best of matching specific words.
+
+
+
+Qdrant currently has two ways of combining the results from different queries:
+
+
+
+- `rrf` - Reciprocal Rank Fusion
+
+  Considers the positions of results within each query, and boosts the ones that appear closer to the top in multiple of them.
+
+- `dbsf` - Distribution-Based Score Fusion *(available as of v1.11.0)*
+
+  Normalizes the scores of the points in each query, using the mean +/- the 3rd standard deviation as limits, and then sums the scores of the same point across different queries. A Python sketch using `dbsf` is shown after the fusion example below.
+
+
+
+
+
+
+
+Here is an example of Reciprocal Rank Fusion for a query containing two prefetches against different named vectors configured to hold sparse and dense vectors, respectively.
+
+
+
+```http
+
+POST /collections/{collection_name}/points/query
+
+{
+
+ ""prefetch"": [
+
+ {
+
+ ""query"": {
+
+ ""indices"": [1, 42], // <┐
+
+ ""values"": [0.22, 0.8] // <┴─sparse vector
+
+ },
+
+ ""using"": ""sparse"",
+
+ ""limit"": 20
+
+ },
+
+ {
+
+ ""query"": [0.01, 0.45, 0.67, ...], // <-- dense vector
+
+ ""using"": ""dense"",
+
+ ""limit"": 20
+
+ }
+
+ ],
+
+ ""query"": { ""fusion"": ""rrf"" }, // <--- reciprocal rank fusion
+
+ ""limit"": 10
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.query_points(
+
+ collection_name=""{collection_name}"",
+
+ prefetch=[
+
+ models.Prefetch(
+
+ query=models.SparseVector(indices=[1, 42], values=[0.22, 0.8]),
+
+ using=""sparse"",
+
+ limit=20,
+
+ ),
+
+ models.Prefetch(
+
+ query=[0.01, 0.45, 0.67, ...], # <-- dense vector
+
+ using=""dense"",
+
+ limit=20,
+
+ ),
+
+ ],
+
+ query=models.FusionQuery(fusion=models.Fusion.RRF),
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.query(""{collection_name}"", {
+
+ prefetch: [
+
+ {
+
+ query: {
+
+ values: [0.22, 0.8],
+
+ indices: [1, 42],
+
+ },
+
+ using: 'sparse',
+
+ limit: 20,
+
+ },
+
+ {
+
+ query: [0.01, 0.45, 0.67],
+
+ using: 'dense',
+
+ limit: 20,
+
+ },
+
+ ],
+
+ query: {
+
+ fusion: 'rrf',
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+use qdrant_client::qdrant::{Fusion, PrefetchQueryBuilder, Query, QueryPointsBuilder};
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.query(
+
+ QueryPointsBuilder::new(""{collection_name}"")
+
+ .add_prefetch(PrefetchQueryBuilder::default()
+
+ .query(Query::new_nearest([(1, 0.22), (42, 0.8)].as_slice()))
+
+ .using(""sparse"")
+
+ .limit(20u64)
+
+ )
+
+ .add_prefetch(PrefetchQueryBuilder::default()
+
+ .query(Query::new_nearest(vec![0.01, 0.45, 0.67]))
+
+ .using(""dense"")
+
+ .limit(20u64)
+
+ )
+
+ .query(Query::new_fusion(Fusion::Rrf))
+
+).await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.QueryFactory.nearest;
+
+
+
+import java.util.List;
+
+
+
+import static io.qdrant.client.QueryFactory.fusion;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.Fusion;
+
+import io.qdrant.client.grpc.Points.PrefetchQuery;
+
+import io.qdrant.client.grpc.Points.QueryPoints;
+
+
+
+QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client.queryAsync(
+
+ QueryPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .addPrefetch(PrefetchQuery.newBuilder()
+
+ .setQuery(nearest(List.of(0.22f, 0.8f), List.of(1, 42)))
+
+ .setUsing(""sparse"")
+
+ .setLimit(20)
+
+ .build())
+
+ .addPrefetch(PrefetchQuery.newBuilder()
+
+ .setQuery(nearest(List.of(0.01f, 0.45f, 0.67f)))
+
+ .setUsing(""dense"")
+
+ .setLimit(20)
+
+ .build())
+
+ .setQuery(fusion(Fusion.RRF))
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.QueryAsync(
+
+ collectionName: ""{collection_name}"",
+
+ prefetch: new List < PrefetchQuery > {
+
+ new() {
+
+ Query = new(float, uint)[] {
+
+ (0.22f, 1), (0.8f, 42),
+
+ },
+
+ Using = ""sparse"",
+
+ Limit = 20
+
+ },
+
+ new() {
+
+ Query = new float[] {
+
+ 0.01f, 0.45f, 0.67f
+
+ },
+
+ Using = ""dense"",
+
+ Limit = 20
+
+ }
+
+ },
+
+ query: Fusion.Rrf
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Query(context.Background(), &qdrant.QueryPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Prefetch: []*qdrant.PrefetchQuery{
+
+ {
+
+ Query: qdrant.NewQuerySparse([]uint32{1, 42}, []float32{0.22, 0.8}),
+
+ Using: qdrant.PtrOf(""sparse""),
+
+ },
+
+ {
+
+ Query: qdrant.NewQueryDense([]float32{0.01, 0.45, 0.67}),
+
+ Using: qdrant.PtrOf(""dense""),
+
+ },
+
+ },
+
+ Query: qdrant.NewQueryFusion(qdrant.Fusion_RRF),
+
+})
+
+```
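+
+For `dbsf`, the request has the same shape; only the fusion value changes. Here is a minimal Python sketch, assuming the same `sparse` and `dense` named vectors as above:
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+client.query_points(
+
+    collection_name=""{collection_name}"",
+
+    prefetch=[
+
+        models.Prefetch(
+
+            query=models.SparseVector(indices=[1, 42], values=[0.22, 0.8]),
+
+            using=""sparse"",
+
+            limit=20,
+
+        ),
+
+        models.Prefetch(
+
+            query=[0.01, 0.45, 0.67],  # <-- dense vector
+
+            using=""dense"",
+
+            limit=20,
+
+        ),
+
+    ],
+
+    query=models.FusionQuery(fusion=models.Fusion.DBSF),  # <-- distribution-based score fusion
+
+)
+
+```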
+
+
+
+## Multi-stage queries
+
+
+
+In many cases, using a larger vector representation gives more accurate search results, but it is also more expensive to compute.
+
+
+
+Splitting the search into two stages is a known technique:
+
+
+
+* First, use a smaller and cheaper representation to get a large list of candidates.
+
+* Then, re-score the candidates using the larger and more accurate representation.
+
+
+
+There are a few ways to build search architectures around this idea:
+
+
+
+* The quantized vectors as a first stage, and the full-precision vectors as a second stage.
+
+* Leverage Matryoshka Representation Learning (MRL) to generate candidate vectors with a shorter vector, and then refine them with a longer one.
+
+* Use regular dense vectors to pre-fetch the candidates, and then re-score them with a multi-vector model like ColBERT.
+
+
+
+To get the best of all worlds, Qdrant has a convenient interface to perform the queries in stages,
+
+such that the coarse results are fetched first, and then they are refined later with larger vectors.
+
+
+
+### Re-scoring examples
+
+
+
+Fetch 1000 results using a shorter MRL byte vector, then re-score them using the full vector and get the top 10.
+
+
+
+```http
+
+POST /collections/{collection_name}/points/query
+
+{
+
+ ""prefetch"": {
+
+ ""query"": [1, 23, 45, 67], // <------------- small byte vector
+
+    ""using"": ""mrl_byte"",
+
+ ""limit"": 1000
+
+ },
+
+ ""query"": [0.01, 0.299, 0.45, 0.67, ...], // <-- full vector
+
+ ""using"": ""full"",
+
+ ""limit"": 10
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.query_points(
+
+ collection_name=""{collection_name}"",
+
+ prefetch=models.Prefetch(
+
+ query=[1, 23, 45, 67], # <------------- small byte vector
+
+ using=""mrl_byte"",
+
+ limit=1000,
+
+ ),
+
+ query=[0.01, 0.299, 0.45, 0.67, ...], # <-- full vector
+
+ using=""full"",
+
+ limit=10,
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.query(""{collection_name}"", {
+
+ prefetch: {
+
+ query: [1, 23, 45, 67], // <------------- small byte vector
+
+ using: 'mrl_byte',
+
+ limit: 1000,
+
+ },
+
+ query: [0.01, 0.299, 0.45, 0.67, ...], // <-- full vector,
+
+ using: 'full',
+
+ limit: 10,
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+use qdrant_client::qdrant::{PrefetchQueryBuilder, Query, QueryPointsBuilder};
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.query(
+
+ QueryPointsBuilder::new(""{collection_name}"")
+
+ .add_prefetch(PrefetchQueryBuilder::default()
+
+ .query(Query::new_nearest(vec![1.0, 23.0, 45.0, 67.0]))
+
+            .using(""mrl_byte"")
+
+ .limit(1000u64)
+
+ )
+
+ .query(Query::new_nearest(vec![0.01, 0.299, 0.45, 0.67]))
+
+ .using(""full"")
+
+ .limit(10u64)
+
+).await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.QueryFactory.nearest;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.PrefetchQuery;
+
+import io.qdrant.client.grpc.Points.QueryPoints;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .queryAsync(
+
+ QueryPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .addPrefetch(
+
+ PrefetchQuery.newBuilder()
+
+ .setQuery(nearest(1, 23, 45, 67)) // <------------- small byte vector
+
+ .setLimit(1000)
+
+ .setUsing(""mrl_byte"")
+
+ .build())
+
+ .setQuery(nearest(0.01f, 0.299f, 0.45f, 0.67f)) // <-- full vector
+
+ .setUsing(""full"")
+
+ .setLimit(10)
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.QueryAsync(
+
+ collectionName: ""{collection_name}"",
+
+  prefetch: new List<PrefetchQuery> {
+
+ new() {
+
+      Query = new float[] { 1, 23, 45, 67 }, // <------------- small byte vector
+
+ Using = ""mrl_byte"",
+
+ Limit = 1000
+
+ }
+
+ },
+
+ query: new float[] { 0.01f, 0.299f, 0.45f, 0.67f }, // <-- full vector
+
+ usingVector: ""full"",
+
+ limit: 10
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Query(context.Background(), &qdrant.QueryPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Prefetch: []*qdrant.PrefetchQuery{
+
+ {
+
+ Query: qdrant.NewQueryDense([]float32{1, 23, 45, 67}),
+
+ Using: qdrant.PtrOf(""mrl_byte""),
+
+ Limit: qdrant.PtrOf(uint64(1000)),
+
+ },
+
+ },
+
+ Query: qdrant.NewQueryDense([]float32{0.01, 0.299, 0.45, 0.67}),
+
+ Using: qdrant.PtrOf(""full""),
+
+})
+
+```
+
+
+
+Fetch 100 results using the default vector, then re-score them using a multi-vector to get the top 10.
+
+
+
+```http
+
+POST /collections/{collection_name}/points/query
+
+{
+
+ ""prefetch"": {
+
+ ""query"": [0.01, 0.45, 0.67, ...], // <-- dense vector
+
+ ""limit"": 100
+
+ },
+
+ ""query"": [ // <─┐
+
+ [0.1, 0.2, ...], // < │
+
+ [0.2, 0.1, ...], // < ├─ multi-vector
+
+ [0.8, 0.9, ...] // < │
+
+ ], // <─┘
+
+ ""using"": ""colbert"",
+
+ ""limit"": 10
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.query_points(
+
+ collection_name=""{collection_name}"",
+
+ prefetch=models.Prefetch(
+
+ query=[0.01, 0.45, 0.67, ...], # <-- dense vector
+
+ limit=100,
+
+ ),
+
+ query=[
+
+ [0.1, 0.2, ...], # <─┐
+
+ [0.2, 0.1, ...], # < ├─ multi-vector
+
+ [0.8, 0.9, ...], # < ┘
+
+ ],
+
+ using=""colbert"",
+
+ limit=10,
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.query(""{collection_name}"", {
+
+ prefetch: {
+
+    query: [0.01, 0.45, 0.67], // <-- dense vector
+
+ limit: 100,
+
+ },
+
+ query: [
+
+ [0.1, 0.2], // <─┐
+
+ [0.2, 0.1], // < ├─ multi-vector
+
+ [0.8, 0.9], // < ┘
+
+ ],
+
+ using: 'colbert',
+
+ limit: 10,
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+use qdrant_client::qdrant::{PrefetchQueryBuilder, Query, QueryPointsBuilder};
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.query(
+
+ QueryPointsBuilder::new(""{collection_name}"")
+
+ .add_prefetch(PrefetchQueryBuilder::default()
+
+ .query(Query::new_nearest(vec![0.01, 0.45, 0.67]))
+
+ .limit(100u64)
+
+ )
+
+ .query(Query::new_nearest(vec![
+
+ vec![0.1, 0.2],
+
+ vec![0.2, 0.1],
+
+ vec![0.8, 0.9],
+
+ ]))
+
+ .using(""colbert"")
+
+ .limit(10u64)
+
+).await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.QueryFactory.nearest;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.PrefetchQuery;
+
+import io.qdrant.client.grpc.Points.QueryPoints;
+
+
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .queryAsync(
+
+ QueryPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .addPrefetch(
+
+ PrefetchQuery.newBuilder()
+
+ .setQuery(nearest(0.01f, 0.45f, 0.67f)) // <-- dense vector
+
+ .setLimit(100)
+
+ .build())
+
+ .setQuery(
+
+ nearest(
+
+ new float[][] {
+
+ {0.1f, 0.2f}, // <─┐
+
+ {0.2f, 0.1f}, // < ├─ multi-vector
+
+ {0.8f, 0.9f} // < ┘
+
+ }))
+
+ .setUsing(""colbert"")
+
+ .setLimit(10)
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.QueryAsync(
+
+ collectionName: ""{collection_name}"",
+
+  prefetch: new List<PrefetchQuery> {
+
+ new() {
+
+      Query = new float[] { 0.01f, 0.45f, 0.67f }, // <-- dense vector
+
+ Limit = 100
+
+ }
+
+ },
+
+ query: new float[][] {
+
+ [0.1f, 0.2f], // <─┐
+
+ [0.2f, 0.1f], // < ├─ multi-vector
+
+ [0.8f, 0.9f] // < ┘
+
+ },
+
+ usingVector: ""colbert"",
+
+ limit: 10
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Query(context.Background(), &qdrant.QueryPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Prefetch: []*qdrant.PrefetchQuery{
+
+ {
+
+ Query: qdrant.NewQueryDense([]float32{0.01, 0.45, 0.67}),
+
+ Limit: qdrant.PtrOf(uint64(100)),
+
+ },
+
+ },
+
+ Query: qdrant.NewQueryMulti([][]float32{
+
+ {0.1, 0.2},
+
+ {0.2, 0.1},
+
+ {0.8, 0.9},
+
+ }),
+
+ Using: qdrant.PtrOf(""colbert""),
+
+})
+
+```
+
+
+
+It is possible to combine all the above techniques in a single query:
+
+
+
+```http
+
+POST /collections/{collection_name}/points/query
+
+{
+
+ ""prefetch"": {
+
+ ""prefetch"": {
+
+ ""query"": [1, 23, 45, 67], // <------ small byte vector
+
+      ""using"": ""mrl_byte"",
+
+ ""limit"": 1000
+
+ },
+
+ ""query"": [0.01, 0.45, 0.67, ...], // <-- full dense vector
+
+    ""using"": ""full"",
+
+ ""limit"": 100
+
+ },
+
+ ""query"": [ // <─┐
+
+ [0.1, 0.2, ...], // < │
+
+ [0.2, 0.1, ...], // < ├─ multi-vector
+
+ [0.8, 0.9, ...] // < │
+
+ ], // <─┘
+
+ ""using"": ""colbert"",
+
+ ""limit"": 10
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.query_points(
+
+ collection_name=""{collection_name}"",
+
+ prefetch=models.Prefetch(
+
+ prefetch=models.Prefetch(
+
+ query=[1, 23, 45, 67], # <------ small byte vector
+
+ using=""mrl_byte"",
+
+ limit=1000,
+
+ ),
+
+ query=[0.01, 0.45, 0.67, ...], # <-- full dense vector
+
+ using=""full"",
+
+ limit=100,
+
+ ),
+
+ query=[
+
+ [0.1, 0.2, ...], # <─┐
+
+ [0.2, 0.1, ...], # < ├─ multi-vector
+
+ [0.8, 0.9, ...], # < ┘
+
+ ],
+
+ using=""colbert"",
+
+ limit=10,
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.query(""{collection_name}"", {
+
+ prefetch: {
+
+ prefetch: {
+
+ query: [1, 23, 45, 67, ...], // <------------- small byte vector
+
+ using: 'mrl_byte',
+
+ limit: 1000,
+
+ },
+
+ query: [0.01, 0.45, 0.67, ...], // <-- full dense vector
+
+ using: 'full',
+
+ limit: 100,
+
+ },
+
+ query: [
+
+ [0.1, 0.2], // <─┐
+
+ [0.2, 0.1], // < ├─ multi-vector
+
+ [0.8, 0.9], // < ┘
+
+ ],
+
+ using: 'colbert',
+
+ limit: 10,
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+use qdrant_client::qdrant::{PrefetchQueryBuilder, Query, QueryPointsBuilder};
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.query(
+
+ QueryPointsBuilder::new(""{collection_name}"")
+
+ .add_prefetch(PrefetchQueryBuilder::default()
+
+ .add_prefetch(PrefetchQueryBuilder::default()
+
+ .query(Query::new_nearest(vec![1.0, 23.0, 45.0, 67.0]))
+
+                .using(""mrl_byte"")
+
+ .limit(1000u64)
+
+ )
+
+ .query(Query::new_nearest(vec![0.01, 0.45, 0.67]))
+
+ .using(""full"")
+
+ .limit(100u64)
+
+ )
+
+ .query(Query::new_nearest(vec![
+
+ vec![0.1, 0.2],
+
+ vec![0.2, 0.1],
+
+ vec![0.8, 0.9],
+
+ ]))
+
+ .using(""colbert"")
+
+ .limit(10u64)
+
+).await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.QueryFactory.nearest;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.PrefetchQuery;
+
+import io.qdrant.client.grpc.Points.QueryPoints;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .queryAsync(
+
+ QueryPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .addPrefetch(
+
+ PrefetchQuery.newBuilder()
+
+ .addPrefetch(
+
+ PrefetchQuery.newBuilder()
+
+ .setQuery(nearest(1, 23, 45, 67)) // <------------- small byte vector
+
+ .setUsing(""mrl_byte"")
+
+ .setLimit(1000)
+
+ .build())
+
+ .setQuery(nearest(0.01f, 0.45f, 0.67f)) // <-- dense vector
+
+ .setUsing(""full"")
+
+ .setLimit(100)
+
+ .build())
+
+ .setQuery(
+
+ nearest(
+
+ new float[][] {
+
+ {0.1f, 0.2f}, // <─┐
+
+ {0.2f, 0.1f}, // < ├─ multi-vector
+
+ {0.8f, 0.9f} // < ┘
+
+ }))
+
+ .setUsing(""colbert"")
+
+ .setLimit(10)
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.QueryAsync(
+
+ collectionName: ""{collection_name}"",
+
+  prefetch: new List<PrefetchQuery> {
+
+ new() {
+
+ Prefetch = {
+
+        new List<PrefetchQuery> {
+
+ new() {
+
+ Query = new float[] { 1, 23, 45, 67 }, // <------------- small byte vector
+
+ Using = ""mrl_byte"",
+
+ Limit = 1000
+
+ },
+
+ }
+
+ },
+
+ Query = new float[] {0.01f, 0.45f, 0.67f}, // <-- dense vector
+
+ Using = ""full"",
+
+ Limit = 100
+
+ }
+
+ },
+
+ query: new float[][] {
+
+ [0.1f, 0.2f], // <─┐
+
+ [0.2f, 0.1f], // < ├─ multi-vector
+
+ [0.8f, 0.9f] // < ┘
+
+ },
+
+ usingVector: ""colbert"",
+
+ limit: 10
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Query(context.Background(), &qdrant.QueryPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Prefetch: []*qdrant.PrefetchQuery{
+
+ {
+
+ Prefetch: []*qdrant.PrefetchQuery{
+
+ {
+
+ Query: qdrant.NewQueryDense([]float32{1, 23, 45, 67}),
+
+ Using: qdrant.PtrOf(""mrl_byte""),
+
+ Limit: qdrant.PtrOf(uint64(1000)),
+
+ },
+
+ },
+
+ Query: qdrant.NewQueryDense([]float32{0.01, 0.45, 0.67}),
+
+ Limit: qdrant.PtrOf(uint64(100)),
+
+ Using: qdrant.PtrOf(""full""),
+
+ },
+
+ },
+
+ Query: qdrant.NewQueryMulti([][]float32{
+
+ {0.1, 0.2},
+
+ {0.2, 0.1},
+
+ {0.8, 0.9},
+
+ }),
+
+ Using: qdrant.PtrOf(""colbert""),
+
+})
+
+```
+
+
+
+## Flexible interface
+
+
+
+Other than the introduction of `prefetch`, the `Query API` has been designed to make querying simpler. Let's look at a few bonus features:
+
+
+
+### Query by ID
+
+
+
+Whenever you need to use a vector as an input, you can always use a [point ID](../points/#point-ids) instead.
+
+
+
+```http
+
+POST /collections/{collection_name}/points/query
+
+{
+
+ ""query"": ""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"" // <--- point id
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.query_points(
+
+ collection_name=""{collection_name}"",
+
+ query=""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"", # <--- point id
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.query(""{collection_name}"", {
+
+ query: '43cf51e2-8777-4f52-bc74-c2cbde0c8b04', // <--- point id
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+use qdrant_client::qdrant::{Condition, Filter, PointId, Query, QueryPointsBuilder};
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client
+
+ .query(
+
+ QueryPointsBuilder::new(""{collection_name}"")
+
+ .query(Query::new_nearest(PointId::new(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"")))
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.QueryFactory.nearest;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.QueryPoints;
+
+import java.util.UUID;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .queryAsync(
+
+ QueryPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setQuery(nearest(UUID.fromString(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"")))
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.QueryAsync(
+
+ collectionName: ""{collection_name}"",
+
+ query: Guid.Parse(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"") // <--- point id
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Query(context.Background(), &qdrant.QueryPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Query: qdrant.NewQueryID(qdrant.NewID(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"")),
+
+})
+
+```
+
+
+
+The above example will fetch the default vector from the point with this id, and use it as the query vector.
+
+
+
+If the `using` parameter is also specified, Qdrant will use the vector with that name.
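+
+For example, a minimal Python sketch that queries by a point ID against a specific named vector (the vector name is illustrative):
+
+```python
+
+from qdrant_client import QdrantClient
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+client.query_points(
+
+    collection_name=""{collection_name}"",
+
+    query=""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"",  # <--- point id
+
+    using=""512d-vector"",  # <--- named vector used both to look up the point and to search
+
+)
+
+```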
+
+
+
+It is also possible to reference an ID from a different collection, by setting the `lookup_from` parameter.
+
+
+
+```http
+
+POST /collections/{collection_name}/points/query
+
+{
+
+ ""query"": ""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"", // <--- point id
+
+  ""using"": ""512d-vector"",
+
+ ""lookup_from"": {
+
+ ""collection"": ""another_collection"", // <--- other collection name
+
+ ""vector"": ""image-512"" // <--- vector name in the other collection
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.query_points(
+
+ collection_name=""{collection_name}"",
+
+ query=""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"", # <--- point id
+
+ using=""512d-vector"",
+
+ lookup_from=models.LookupFrom(
+
+ collection=""another_collection"", # <--- other collection name
+
+ vector=""image-512"", # <--- vector name in the other collection
+
+ )
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.query(""{collection_name}"", {
+
+ query: '43cf51e2-8777-4f52-bc74-c2cbde0c8b04', // <--- point id
+
+ using: '512d-vector',
+
+ lookup_from: {
+
+ collection: 'another_collection', // <--- other collection name
+
+ vector: 'image-512', // <--- vector name in the other collection
+
+ }
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+use qdrant_client::qdrant::{LookupLocationBuilder, PointId, Query, QueryPointsBuilder};
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.query(
+
+ QueryPointsBuilder::new(""{collection_name}"")
+
+ .query(Query::new_nearest(PointId::new(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"")))
+
+ .using(""512d-vector"")
+
+ .lookup_from(
+
+ LookupLocationBuilder::new(""another_collection"")
+
+ .vector_name(""image-512"")
+
+ )
+
+).await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.QueryFactory.nearest;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.LookupLocation;
+
+import io.qdrant.client.grpc.Points.QueryPoints;
+
+import java.util.UUID;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .queryAsync(
+
+ QueryPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setQuery(nearest(UUID.fromString(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"")))
+
+ .setUsing(""512d-vector"")
+
+ .setLookupFrom(
+
+ LookupLocation.newBuilder()
+
+ .setCollectionName(""another_collection"")
+
+ .setVectorName(""image-512"")
+
+ .build())
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.QueryAsync(
+
+ collectionName: ""{collection_name}"",
+
+ query: Guid.Parse(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04""), // <--- point id
+
+ usingVector: ""512d-vector"",
+
+ lookupFrom: new() {
+
+ CollectionName = ""another_collection"", // <--- other collection name
+
+ VectorName = ""image-512"" // <--- vector name in the other collection
+
+ }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Query(context.Background(), &qdrant.QueryPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Query: qdrant.NewQueryID(qdrant.NewID(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"")),
+
+ Using: qdrant.PtrOf(""512d-vector""),
+
+ LookupFrom: &qdrant.LookupLocation{
+
+ CollectionName: ""another_collection"",
+
+ VectorName: qdrant.PtrOf(""image-512""),
+
+ },
+
+})
+
+```
+
+
+
+In the case above, Qdrant will fetch the `""image-512""` vector from the specified point id in the
+
+collection `another_collection`.
+
+
+
+
+
+
+
+
+
+## Re-ranking with payload values
+
+
+
+The Query API can retrieve points not only by vector similarity but also by the content of the payload.
+
+
+
+There are two ways to make use of the payload in the query:
+
+
+
+* Apply filters to the payload fields, to only get the points that match the filter.
+
+* Order the results by the payload field.
+
+
+
+Let's see an example of when this might be useful:
+
+
+
+```http
+
+POST /collections/{collection_name}/points/query
+
+{
+
+ ""prefetch"": [
+
+ {
+
+ ""query"": [0.01, 0.45, 0.67, ...], // <-- dense vector
+
+ ""filter"": {
+
+ ""must"": {
+
+ ""key"": ""color"",
+
+ ""match"": {
+
+ ""value"": ""red""
+
+ }
+
+ }
+
+ },
+
+ ""limit"": 10
+
+ },
+
+ {
+
+ ""query"": [0.01, 0.45, 0.67, ...], // <-- dense vector
+
+ ""filter"": {
+
+ ""must"": {
+
+ ""key"": ""color"",
+
+ ""match"": {
+
+ ""value"": ""green""
+
+ }
+
+ }
+
+ },
+
+ ""limit"": 10
+
+ }
+
+ ],
+
+ ""query"": { ""order_by"": ""price"" }
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.query_points(
+
+ collection_name=""{collection_name}"",
+
+ prefetch=[
+
+ models.Prefetch(
+
+ query=[0.01, 0.45, 0.67, ...], # <-- dense vector
+
+ filter=models.Filter(
+
+ must=models.FieldCondition(
+
+ key=""color"",
+
+ match=models.Match(value=""red""),
+
+ ),
+
+ ),
+
+ limit=10,
+
+ ),
+
+ models.Prefetch(
+
+ query=[0.01, 0.45, 0.67, ...], # <-- dense vector
+
+ filter=models.Filter(
+
+ must=models.FieldCondition(
+
+ key=""color"",
+
+ match=models.Match(value=""green""),
+
+ ),
+
+ ),
+
+ limit=10,
+
+ ),
+
+ ],
+
+ query=models.OrderByQuery(order_by=""price""),
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.query(""{collection_name}"", {
+
+ prefetch: [
+
+ {
+
+ query: [0.01, 0.45, 0.67], // <-- dense vector
+
+ filter: {
+
+ must: {
+
+ key: 'color',
+
+ match: {
+
+ value: 'red',
+
+ },
+
+ }
+
+ },
+
+ limit: 10,
+
+ },
+
+ {
+
+ query: [0.01, 0.45, 0.67], // <-- dense vector
+
+ filter: {
+
+ must: {
+
+ key: 'color',
+
+ match: {
+
+ value: 'green',
+
+ },
+
+ }
+
+ },
+
+ limit: 10,
+
+ },
+
+ ],
+
+ query: {
+
+ order_by: 'price',
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+use qdrant_client::qdrant::{Condition, Filter, PrefetchQueryBuilder, Query, QueryPointsBuilder};
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.query(
+
+ QueryPointsBuilder::new(""{collection_name}"")
+
+ .add_prefetch(PrefetchQueryBuilder::default()
+
+ .query(Query::new_nearest(vec![0.01, 0.45, 0.67]))
+
+ .filter(Filter::must([Condition::matches(
+
+ ""color"",
+
+ ""red"".to_string(),
+
+ )]))
+
+ .limit(10u64)
+
+ )
+
+ .add_prefetch(PrefetchQueryBuilder::default()
+
+ .query(Query::new_nearest(vec![0.01, 0.45, 0.67]))
+
+ .filter(Filter::must([Condition::matches(
+
+ ""color"",
+
+ ""green"".to_string(),
+
+ )]))
+
+ .limit(10u64)
+
+ )
+
+ .query(Query::new_order_by(""price""))
+
+).await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+
+import static io.qdrant.client.QueryFactory.nearest;
+
+import static io.qdrant.client.QueryFactory.orderBy;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.Filter;
+
+import io.qdrant.client.grpc.Points.PrefetchQuery;
+
+import io.qdrant.client.grpc.Points.QueryPoints;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .queryAsync(
+
+ QueryPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .addPrefetch(
+
+ PrefetchQuery.newBuilder()
+
+ .setQuery(nearest(0.01f, 0.45f, 0.67f))
+
+ .setFilter(
+
+ Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build())
+
+ .setLimit(10)
+
+ .build())
+
+ .addPrefetch(
+
+ PrefetchQuery.newBuilder()
+
+ .setQuery(nearest(0.01f, 0.45f, 0.67f))
+
+ .setFilter(
+
+ Filter.newBuilder().addMust(matchKeyword(""color"", ""green"")).build())
+
+ .setLimit(10)
+
+ .build())
+
+ .setQuery(orderBy(""price""))
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.QueryAsync(
+
+ collectionName: ""{collection_name}"",
+
+  prefetch: new List<PrefetchQuery> {
+
+ new() {
+
+ Query = new float[] {
+
+ 0.01f, 0.45f, 0.67f
+
+ },
+
+ Filter = MatchKeyword(""color"", ""red""),
+
+ Limit = 10
+
+ },
+
+ new() {
+
+ Query = new float[] {
+
+ 0.01f, 0.45f, 0.67f
+
+ },
+
+ Filter = MatchKeyword(""color"", ""green""),
+
+ Limit = 10
+
+ }
+
+ },
+
+ query: (OrderBy) ""price"",
+
+ limit: 10
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Query(context.Background(), &qdrant.QueryPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Prefetch: []*qdrant.PrefetchQuery{
+
+ {
+
+ Query: qdrant.NewQuery(0.01, 0.45, 0.67),
+
+ Filter: &qdrant.Filter{
+
+ Must: []*qdrant.Condition{
+
+ qdrant.NewMatch(""color"", ""red""),
+
+ },
+
+ },
+
+ },
+
+ {
+
+ Query: qdrant.NewQuery(0.01, 0.45, 0.67),
+
+ Filter: &qdrant.Filter{
+
+ Must: []*qdrant.Condition{
+
+ qdrant.NewMatch(""color"", ""green""),
+
+ },
+
+ },
+
+ },
+
+ },
+
+ Query: qdrant.NewQueryOrderBy(&qdrant.OrderBy{
+
+ Key: ""price"",
+
+ }),
+
+})
+
+```
+
+
+
+In this example, we first fetch 10 points with the color `""red""` and then 10 points with the color `""green""`.
+
+Then, we order the results by the price field.
+
+
+
+This is how we can guarantee even sampling of both colors in the results and also get the cheapest ones first.
+
+
+
+## Grouping
+
+
+
+*Available as of v1.11.0*
+
+
+
+It is possible to group results by a certain field. This is useful when you have multiple points for the same item, and you want to avoid redundancy of the same item in the results.
+
+
+
+REST API ([Schema](https://api.qdrant.tech/master/api-reference/search/query-points-groups)):
+
+
+
+```http
+
+POST /collections/{collection_name}/points/query/groups
+
+{
+
+ ""query"": [0.01, 0.45, 0.67],
+
+    ""group_by"": ""document_id"", // Path of the field to group by
+
+    ""limit"": 4, // Max amount of groups
+
+    ""group_size"": 2 // Max amount of points per group
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.query_points_groups(
+
+ collection_name=""{collection_name}"",
+
+ query=[0.01, 0.45, 0.67],
+
+ group_by=""document_id"",
+
+ limit=4,
+
+ group_size=2,
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.queryGroups(""{collection_name}"", {
+
+ query: [0.01, 0.45, 0.67],
+
+ group_by: ""document_id"",
+
+ limit: 4,
+
+ group_size: 2,
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::Qdrant;
+
+use qdrant_client::qdrant::{Query, QueryPointGroupsBuilder};
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client.query_groups(
+
+ QueryPointGroupsBuilder::new(""{collection_name}"", ""document_id"")
+
+ .query(Query::from(vec![0.01, 0.45, 0.67]))
+
+ .limit(4u64)
+
+ .group_size(2u64)
+
+).await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.QueryFactory.nearest;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.QueryPointGroups;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .queryGroupsAsync(
+
+ QueryPointGroups.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setGroupBy(""document_id"")
+
+ .setQuery(nearest(0.01f, 0.45f, 0.67f))
+
+ .setLimit(4)
+
+ .setGroupSize(2)
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.QueryGroupsAsync(
+
+ collectionName: ""{collection_name}"",
+
+ groupBy: ""document_id"",
+
+ query: new float[] {
+
+ 0.01f, 0.45f, 0.67f
+
+ },
+
+ limit: 4,
+
+ groupSize: 2
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.QueryGroups(context.Background(), &qdrant.QueryPointGroups{
+
+ CollectionName: ""{collection_name}"",
+
+ Query: qdrant.NewQuery(0.01, 0.45, 0.67),
+
+ GroupBy: ""document_id"",
+
+ GroupSize: qdrant.PtrOf(uint64(2)),
+
+})
+
+```
+
+
+
+For more information on the grouping capabilities, refer to the reference documentation for search with [grouping](./search/#search-groups) and [lookup](./search/#lookup-in-groups).
+",documentation/concepts/hybrid-queries.md
+"---
+
+title: Filtering
+
+weight: 60
+
+aliases:
+
+ - ../filtering
+
+---
+
+
+
+# Filtering
+
+
+
+With Qdrant, you can set conditions when searching or retrieving points.
+
+For example, you can impose conditions on both the [payload](../payload/) and the `id` of the point.
+
+
+
+Setting additional conditions is important when it is impossible to express all the features of the object in the embedding.
+
+Examples include a variety of business requirements: stock availability, user location, or desired price range.
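+
+As a quick illustration, here is a minimal Python sketch that combines a vector query with a payload filter. The field name, value and query vector are illustrative; the filter syntax itself is covered in the rest of this page.
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Only points whose payload matches the filter are considered for the vector search.
+
+client.query_points(
+
+    collection_name=""{collection_name}"",
+
+    query=[0.2, 0.1, 0.9, 0.7],
+
+    query_filter=models.Filter(
+
+        must=[
+
+            models.FieldCondition(
+
+                key=""city"",
+
+                match=models.MatchValue(value=""London""),
+
+            )
+
+        ]
+
+    ),
+
+    limit=3,
+
+)
+
+```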
+
+
+
+## Filtering clauses
+
+
+
+Qdrant allows you to combine conditions in clauses.
+
+Clauses are different logical operations, such as `OR`, `AND`, and `NOT`.
+
+Clauses can be recursively nested into each other so that you can reproduce an arbitrary boolean expression.
+
+
+
+Let's take a look at the clauses implemented in Qdrant.
+
+
+
+Suppose we have a set of points with the following payload:
+
+
+
+```json
+
+[
+
+ { ""id"": 1, ""city"": ""London"", ""color"": ""green"" },
+
+ { ""id"": 2, ""city"": ""London"", ""color"": ""red"" },
+
+ { ""id"": 3, ""city"": ""London"", ""color"": ""blue"" },
+
+ { ""id"": 4, ""city"": ""Berlin"", ""color"": ""red"" },
+
+ { ""id"": 5, ""city"": ""Moscow"", ""color"": ""green"" },
+
+ { ""id"": 6, ""city"": ""Moscow"", ""color"": ""blue"" }
+
+]
+
+```
+
+
+
+### Must
+
+
+
+Example:
+
+
+
+```http
+
+POST /collections/{collection_name}/points/scroll
+
+{
+
+ ""filter"": {
+
+ ""must"": [
+
+ { ""key"": ""city"", ""match"": { ""value"": ""London"" } },
+
+ { ""key"": ""color"", ""match"": { ""value"": ""red"" } }
+
+ ]
+
+ }
+
+ ...
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.scroll(
+
+ collection_name=""{collection_name}"",
+
+ scroll_filter=models.Filter(
+
+ must=[
+
+ models.FieldCondition(
+
+ key=""city"",
+
+ match=models.MatchValue(value=""London""),
+
+ ),
+
+ models.FieldCondition(
+
+ key=""color"",
+
+ match=models.MatchValue(value=""red""),
+
+ ),
+
+ ]
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.scroll(""{collection_name}"", {
+
+ filter: {
+
+ must: [
+
+ {
+
+ key: ""city"",
+
+ match: { value: ""London"" },
+
+ },
+
+ {
+
+ key: ""color"",
+
+ match: { value: ""red"" },
+
+ },
+
+ ],
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client
+
+ .scroll(
+
+ ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::must([
+
+            Condition::matches(""city"", ""London"".to_string()),
+
+ Condition::matches(""color"", ""red"".to_string()),
+
+ ])),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+
+
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.Filter;
+
+import io.qdrant.client.grpc.Points.ScrollPoints;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .scrollAsync(
+
+ ScrollPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setFilter(
+
+ Filter.newBuilder()
+
+ .addAllMust(
+
+ List.of(matchKeyword(""city"", ""London""), matchKeyword(""color"", ""red"")))
+
+ .build())
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+// & operator combines two conditions in an AND conjunction(must)
+
+await client.ScrollAsync(
+
+ collectionName: ""{collection_name}"",
+
+ filter: MatchKeyword(""city"", ""London"") & MatchKeyword(""color"", ""red"")
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Scroll(context.Background(), &qdrant.ScrollPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Filter: &qdrant.Filter{
+
+ Must: []*qdrant.Condition{
+
+ qdrant.NewMatch(""city"", ""London""),
+
+ qdrant.NewMatch(""color"", ""red""),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+Filtered points would be:
+
+
+
+```json
+
+[{ ""id"": 2, ""city"": ""London"", ""color"": ""red"" }]
+
+```
+
+
+
+When using `must`, the clause becomes `true` only if every condition listed inside `must` is satisfied.
+
+In this sense, `must` is equivalent to the operator `AND`.
+
+
+
+### Should
+
+
+
+Example:
+
+
+
+```http
+
+POST /collections/{collection_name}/points/scroll
+
+{
+
+ ""filter"": {
+
+ ""should"": [
+
+ { ""key"": ""city"", ""match"": { ""value"": ""London"" } },
+
+ { ""key"": ""color"", ""match"": { ""value"": ""red"" } }
+
+ ]
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+client.scroll(
+
+ collection_name=""{collection_name}"",
+
+ scroll_filter=models.Filter(
+
+ should=[
+
+ models.FieldCondition(
+
+ key=""city"",
+
+ match=models.MatchValue(value=""London""),
+
+ ),
+
+ models.FieldCondition(
+
+ key=""color"",
+
+ match=models.MatchValue(value=""red""),
+
+ ),
+
+ ]
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.scroll(""{collection_name}"", {
+
+ filter: {
+
+ should: [
+
+ {
+
+ key: ""city"",
+
+ match: { value: ""London"" },
+
+ },
+
+ {
+
+ key: ""color"",
+
+ match: { value: ""red"" },
+
+ },
+
+ ],
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client
+
+ .scroll(
+
+ ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::should([
+
+            Condition::matches(""city"", ""London"".to_string()),
+
+ Condition::matches(""color"", ""red"".to_string()),
+
+ ])),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+
+
+
+import io.qdrant.client.grpc.Points.Filter;
+
+import io.qdrant.client.grpc.Points.ScrollPoints;
+
+import java.util.List;
+
+
+
+client
+
+ .scrollAsync(
+
+ ScrollPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setFilter(
+
+ Filter.newBuilder()
+
+ .addAllShould(
+
+ List.of(matchKeyword(""city"", ""London""), matchKeyword(""color"", ""red"")))
+
+ .build())
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+// | operator combines two conditions in an OR disjunction(should)
+
+await client.ScrollAsync(
+
+ collectionName: ""{collection_name}"",
+
+ filter: MatchKeyword(""city"", ""London"") | MatchKeyword(""color"", ""red"")
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Scroll(context.Background(), &qdrant.ScrollPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Filter: &qdrant.Filter{
+
+ Should: []*qdrant.Condition{
+
+ qdrant.NewMatch(""city"", ""London""),
+
+ qdrant.NewMatch(""color"", ""red""),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+Filtered points would be:
+
+
+
+```json
+
+[
+
+ { ""id"": 1, ""city"": ""London"", ""color"": ""green"" },
+
+ { ""id"": 2, ""city"": ""London"", ""color"": ""red"" },
+
+ { ""id"": 3, ""city"": ""London"", ""color"": ""blue"" },
+
+ { ""id"": 4, ""city"": ""Berlin"", ""color"": ""red"" }
+
+]
+
+```
+
+
+
+When using `should`, the clause becomes `true` if at least one condition listed inside `should` is satisfied.
+
+In this sense, `should` is equivalent to the operator `OR`.
+
+
+
+### Must Not
+
+
+
+Example:
+
+
+
+```http
+
+POST /collections/{collection_name}/points/scroll
+
+{
+
+ ""filter"": {
+
+ ""must_not"": [
+
+ { ""key"": ""city"", ""match"": { ""value"": ""London"" } },
+
+ { ""key"": ""color"", ""match"": { ""value"": ""red"" } }
+
+ ]
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+client.scroll(
+
+ collection_name=""{collection_name}"",
+
+ scroll_filter=models.Filter(
+
+ must_not=[
+
+ models.FieldCondition(key=""city"", match=models.MatchValue(value=""London"")),
+
+ models.FieldCondition(key=""color"", match=models.MatchValue(value=""red"")),
+
+ ]
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.scroll(""{collection_name}"", {
+
+ filter: {
+
+ must_not: [
+
+ {
+
+ key: ""city"",
+
+ match: { value: ""London"" },
+
+ },
+
+ {
+
+ key: ""color"",
+
+ match: { value: ""red"" },
+
+ },
+
+ ],
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client
+
+ .scroll(
+
+ ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::must_not([
+
+            Condition::matches(""city"", ""London"".to_string()),
+
+ Condition::matches(""color"", ""red"".to_string()),
+
+ ])),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+
+
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+
+
+
+import io.qdrant.client.grpc.Points.Filter;
+
+import io.qdrant.client.grpc.Points.ScrollPoints;
+
+
+
+client
+
+ .scrollAsync(
+
+ ScrollPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setFilter(
+
+ Filter.newBuilder()
+
+ .addAllMustNot(
+
+ List.of(matchKeyword(""city"", ""London""), matchKeyword(""color"", ""red"")))
+
+ .build())
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+// The ! operator negates a condition (must_not)
+
+await client.ScrollAsync(
+
+  collectionName: ""{collection_name}"",
+
+  filter: !MatchKeyword(""city"", ""London"") & !MatchKeyword(""color"", ""red"")
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Scroll(context.Background(), &qdrant.ScrollPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Filter: &qdrant.Filter{
+
+ MustNot: []*qdrant.Condition{
+
+ qdrant.NewMatch(""city"", ""London""),
+
+ qdrant.NewMatch(""color"", ""red""),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+Filtered points would be:
+
+
+
+```json
+
+[
+
+ { ""id"": 5, ""city"": ""Moscow"", ""color"": ""green"" },
+
+ { ""id"": 6, ""city"": ""Moscow"", ""color"": ""blue"" }
+
+]
+
+```
+
+
+
+When using `must_not`, the clause becomes `true` if none of the conditions listed inside `must_not` is satisfied.
+
+In this sense, `must_not` is equivalent to the expression `(NOT A) AND (NOT B) AND (NOT C)`.
+
+
+
+### Clauses combination
+
+
+
+It is also possible to use several clauses simultaneously:
+
+
+
+```http
+
+POST /collections/{collection_name}/points/scroll
+
+{
+
+ ""filter"": {
+
+ ""must"": [
+
+ { ""key"": ""city"", ""match"": { ""value"": ""London"" } }
+
+ ],
+
+ ""must_not"": [
+
+ { ""key"": ""color"", ""match"": { ""value"": ""red"" } }
+
+ ]
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+client.scroll(
+
+ collection_name=""{collection_name}"",
+
+ scroll_filter=models.Filter(
+
+ must=[
+
+ models.FieldCondition(key=""city"", match=models.MatchValue(value=""London"")),
+
+ ],
+
+ must_not=[
+
+ models.FieldCondition(key=""color"", match=models.MatchValue(value=""red"")),
+
+ ],
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.scroll(""{collection_name}"", {
+
+ filter: {
+
+ must: [
+
+ {
+
+ key: ""city"",
+
+ match: { value: ""London"" },
+
+ },
+
+ ],
+
+ must_not: [
+
+ {
+
+ key: ""color"",
+
+ match: { value: ""red"" },
+
+ },
+
+ ],
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
+
+
+
+client
+
+ .scroll(
+
+ ScrollPointsBuilder::new(""{collection_name}"").filter(Filter {
+
+ must: vec![Condition::matches(""city"", ""London"".to_string())],
+
+ must_not: vec![Condition::matches(""color"", ""red"".to_string())],
+
+ ..Default::default()
+
+ }),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+
+
+
+import io.qdrant.client.grpc.Points.Filter;
+
+import io.qdrant.client.grpc.Points.ScrollPoints;
+
+
+
+client
+
+ .scrollAsync(
+
+ ScrollPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setFilter(
+
+ Filter.newBuilder()
+
+ .addMust(matchKeyword(""city"", ""London""))
+
+ .addMustNot(matchKeyword(""color"", ""red""))
+
+ .build())
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.ScrollAsync(
+
+ collectionName: ""{collection_name}"",
+
+ filter: MatchKeyword(""city"", ""London"") & !MatchKeyword(""color"", ""red"")
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Scroll(context.Background(), &qdrant.ScrollPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Filter: &qdrant.Filter{
+
+ Must: []*qdrant.Condition{
+
+ qdrant.NewMatch(""city"", ""London""),
+
+ },
+
+ MustNot: []*qdrant.Condition{
+
+ qdrant.NewMatch(""color"", ""red""),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+Filtered points would be:
+
+
+
+```json
+
+[
+
+ { ""id"": 1, ""city"": ""London"", ""color"": ""green"" },
+
+ { ""id"": 3, ""city"": ""London"", ""color"": ""blue"" }
+
+]
+
+```
+
+
+
+In this case, the conditions are combined by `AND`.
+
+
+
+Also, the conditions can be nested recursively. Example:
+
+
+
+```http
+
+POST /collections/{collection_name}/points/scroll
+
+{
+
+ ""filter"": {
+
+ ""must_not"": [
+
+ {
+
+ ""must"": [
+
+ { ""key"": ""city"", ""match"": { ""value"": ""London"" } },
+
+ { ""key"": ""color"", ""match"": { ""value"": ""red"" } }
+
+ ]
+
+ }
+
+ ]
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+client.scroll(
+
+ collection_name=""{collection_name}"",
+
+ scroll_filter=models.Filter(
+
+ must_not=[
+
+ models.Filter(
+
+ must=[
+
+ models.FieldCondition(
+
+ key=""city"", match=models.MatchValue(value=""London"")
+
+ ),
+
+ models.FieldCondition(
+
+ key=""color"", match=models.MatchValue(value=""red"")
+
+ ),
+
+ ],
+
+ ),
+
+ ],
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.scroll(""{collection_name}"", {
+
+ filter: {
+
+ must_not: [
+
+ {
+
+ must: [
+
+ {
+
+ key: ""city"",
+
+ match: { value: ""London"" },
+
+ },
+
+ {
+
+ key: ""color"",
+
+ match: { value: ""red"" },
+
+ },
+
+ ],
+
+ },
+
+ ],
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
+
+
+
+client
+
+ .scroll(
+
+ ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::must_not([Filter::must(
+
+ [
+
+ Condition::matches(""city"", ""London"".to_string()),
+
+ Condition::matches(""color"", ""red"".to_string()),
+
+ ],
+
+ )
+
+ .into()])),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+
+
+import static io.qdrant.client.ConditionFactory.filter;
+
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+
+
+
+import io.qdrant.client.grpc.Points.Filter;
+
+import io.qdrant.client.grpc.Points.ScrollPoints;
+
+
+
+client
+
+ .scrollAsync(
+
+ ScrollPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setFilter(
+
+ Filter.newBuilder()
+
+ .addMustNot(
+
+ filter(
+
+ Filter.newBuilder()
+
+ .addAllMust(
+
+ List.of(
+
+ matchKeyword(""city"", ""London""),
+
+ matchKeyword(""color"", ""red"")))
+
+ .build()))
+
+ .build())
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.ScrollAsync(
+
+ collectionName: ""{collection_name}"",
+
+ filter: new Filter { MustNot = { MatchKeyword(""city"", ""London"") & MatchKeyword(""color"", ""red"") } }
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Scroll(context.Background(), &qdrant.ScrollPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Filter: &qdrant.Filter{
+
+ MustNot: []*qdrant.Condition{
+
+ qdrant.NewFilterAsCondition(&qdrant.Filter{
+
+ Must: []*qdrant.Condition{
+
+ qdrant.NewMatch(""city"", ""London""),
+
+ qdrant.NewMatch(""color"", ""red""),
+
+ },
+
+ }),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+Filtered points would be:
+
+
+
+```json
+
+[
+
+ { ""id"": 1, ""city"": ""London"", ""color"": ""green"" },
+
+ { ""id"": 3, ""city"": ""London"", ""color"": ""blue"" },
+
+ { ""id"": 4, ""city"": ""Berlin"", ""color"": ""red"" },
+
+ { ""id"": 5, ""city"": ""Moscow"", ""color"": ""green"" },
+
+ { ""id"": 6, ""city"": ""Moscow"", ""color"": ""blue"" }
+
+]
+
+```
+
+
+
+## Filtering conditions
+
+
+
+Different types of values in payload correspond to different kinds of queries that we can apply to them.
+
+Let's look at the existing condition variants and what types of data they apply to.
+
+
+
+### Match
+
+
+
+```json
+
+{
+
+ ""key"": ""color"",
+
+ ""match"": {
+
+ ""value"": ""red""
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+models.FieldCondition(
+
+ key=""color"",
+
+ match=models.MatchValue(value=""red""),
+
+)
+
+```
+
+
+
+```typescript
+
+{
+
+ key: 'color',
+
+ match: {value: 'red'}
+
+}
+
+```
+
+
+
+```rust
+
+Condition::matches(""color"", ""red"".to_string())
+
+```
+
+
+
+```java
+
+matchKeyword(""color"", ""red"");
+
+```
+
+
+
+```csharp
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+MatchKeyword(""color"", ""red"");
+
+```
+
+
+
+```go
+
+import ""github.com/qdrant/go-client/qdrant""
+
+
+
+qdrant.NewMatch(""color"", ""red"")
+
+```
+
+
+
+For the other types, the match condition will look exactly the same, except for the type used:
+
+
+
+```json
+
+{
+
+ ""key"": ""count"",
+
+ ""match"": {
+
+ ""value"": 0
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+models.FieldCondition(
+
+ key=""count"",
+
+ match=models.MatchValue(value=0),
+
+)
+
+```
+
+
+
+```typescript
+
+{
+
+ key: 'count',
+
+ match: {value: 0}
+
+}
+
+```
+
+
+
+```rust
+
+Condition::matches(""count"", 0)
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.match;
+
+
+
+match(""count"", 0);
+
+```
+
+
+
+```csharp
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+Match(""count"", 0);
+
+```
+
+
+
+```go
+
+import ""github.com/qdrant/go-client/qdrant""
+
+
+
+qdrant.NewMatchInt(""count"", 0)
+
+```
+
+
+
+The simplest kind of condition is one that checks if the stored value equals the given one.
+
+If several values are stored, at least one of them should match the condition.
+
+You can apply it to [keyword](../payload/#keyword), [integer](../payload/#integer) and [bool](../payload/#bool) payloads.
+
+
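+For instance, a boolean payload can be matched in the same way. A minimal Python sketch (the `in_stock` field name is purely illustrative and not part of the example data above):
+
+
+
+```python
+
+models.FieldCondition(
+
+    key=""in_stock"",
+
+    match=models.MatchValue(value=True),
+
+)
+
+```
+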
+
+### Match Any
+
+
+
+*Available as of v1.1.0*
+
+
+
+In case you want to check if the stored value is one of multiple values, you can use the Match Any condition.
+
+Match Any works as a logical OR for the given values. It can also be described as an `IN` operator.
+
+
+
+You can apply it to [keyword](../payload/#keyword) and [integer](../payload/#integer) payloads.
+
+
+
+Example:
+
+
+
+```json
+
+{
+
+ ""key"": ""color"",
+
+ ""match"": {
+
+ ""any"": [""black"", ""yellow""]
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+models.FieldCondition(
+
+ key=""color"",
+
+ match=models.MatchAny(any=[""black"", ""yellow""]),
+
+)
+
+```
+
+
+
+```typescript
+
+{
+
+ key: 'color',
+
+ match: {any: ['black', 'yellow']}
+
+}
+
+```
+
+
+
+```rust
+
+Condition::matches(""color"", vec![""black"".to_string(), ""yellow"".to_string()])
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.matchKeywords;
+
+
+
+matchKeywords(""color"", List.of(""black"", ""yellow""));
+
+```
+
+
+
+```csharp
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+Match(""color"", [""black"", ""yellow""]);
+
+```
+
+
+
+```go
+
+import ""github.com/qdrant/go-client/qdrant""
+
+
+
+qdrant.NewMatchKeywords(""color"", ""black"", ""yellow"")
+
+```
+
+
+
+In this example, the condition will be satisfied if the stored value is either `black` or `yellow`.
+
+
+
+If the stored value is an array, it should have at least one value matching any of the given values. E.g. if the stored value is `[""black"", ""green""]`, the condition will be satisfied, because `""black""` is in `[""black"", ""yellow""]`.
+
+
+
+
+
+### Match Except
+
+
+
+*Available as of v1.2.0*
+
+
+
+In case you want to check if the stored value is not one of multiple values, you can use the Match Except condition.
+
+Match Except works as a logical NOR for the given values.
+
+It can also be described as a `NOT IN` operator.
+
+
+
+You can apply it to [keyword](../payload/#keyword) and [integer](../payload/#integer) payloads.
+
+
+
+Example:
+
+
+
+```json
+
+{
+
+ ""key"": ""color"",
+
+ ""match"": {
+
+ ""except"": [""black"", ""yellow""]
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+models.FieldCondition(
+
+ key=""color"",
+
+ match=models.MatchExcept(**{""except"": [""black"", ""yellow""]}),
+
+)
+
+```
+
+
+
+```typescript
+
+{
+
+ key: 'color',
+
+ match: {except: ['black', 'yellow']}
+
+}
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::r#match::MatchValue;
+
+
+
+Condition::matches(
+
+ ""color"",
+
+ !MatchValue::from(vec![""black"".to_string(), ""yellow"".to_string()]),
+
+)
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.matchExceptKeywords;
+
+
+
+matchExceptKeywords(""color"", List.of(""black"", ""yellow""));
+
+```
+
+
+
+```csharp
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+Match(""color"", [""black"", ""yellow""]);
+
+```
+
+
+
+```go
+
+import ""github.com/qdrant/go-client/qdrant""
+
+
+
+qdrant.NewMatchExcept(""color"", ""black"", ""yellow"")
+
+```
+
+
+
+In this example, the condition will be satisfied if the stored value is neither `black` nor `yellow`.
+
+
+
+If the stored value is an array, it should have at least one value that does not match any of the given values. For example, if the stored value is `[""black"", ""green""]`, the condition will be satisfied, because `""green""` matches neither `""black""` nor `""yellow""`.
+
+
+
+### Nested key
+
+
+
+*Available as of v1.1.0*
+
+
+
+Since payloads are arbitrary JSON objects, you will likely need to filter on a nested field.
+
+
+
+For convenience, we use a syntax similar to what can be found in the [Jq](https://stedolan.github.io/jq/manual/#Basicfilters) project.
+
+
+
+Suppose we have a set of points with the following payload:
+
+
+
+```json
+
+[
+
+ {
+
+ ""id"": 1,
+
+ ""country"": {
+
+ ""name"": ""Germany"",
+
+ ""cities"": [
+
+ {
+
+ ""name"": ""Berlin"",
+
+ ""population"": 3.7,
+
+ ""sightseeing"": [""Brandenburg Gate"", ""Reichstag""]
+
+ },
+
+ {
+
+ ""name"": ""Munich"",
+
+ ""population"": 1.5,
+
+ ""sightseeing"": [""Marienplatz"", ""Olympiapark""]
+
+ }
+
+ ]
+
+ }
+
+ },
+
+ {
+
+ ""id"": 2,
+
+ ""country"": {
+
+ ""name"": ""Japan"",
+
+ ""cities"": [
+
+ {
+
+ ""name"": ""Tokyo"",
+
+ ""population"": 9.3,
+
+ ""sightseeing"": [""Tokyo Tower"", ""Tokyo Skytree""]
+
+ },
+
+ {
+
+ ""name"": ""Osaka"",
+
+ ""population"": 2.7,
+
+ ""sightseeing"": [""Osaka Castle"", ""Universal Studios Japan""]
+
+ }
+
+ ]
+
+ }
+
+ }
+
+]
+
+```
+
+
+
+You can search on a nested field using a dot notation.
+
+
+
+```http
+
+POST /collections/{collection_name}/points/scroll
+
+{
+
+ ""filter"": {
+
+ ""should"": [
+
+ {
+
+ ""key"": ""country.name"",
+
+ ""match"": {
+
+ ""value"": ""Germany""
+
+ }
+
+ }
+
+ ]
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+client.scroll(
+
+ collection_name=""{collection_name}"",
+
+ scroll_filter=models.Filter(
+
+ should=[
+
+ models.FieldCondition(
+
+ key=""country.name"", match=models.MatchValue(value=""Germany"")
+
+ ),
+
+ ],
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.scroll(""{collection_name}"", {
+
+ filter: {
+
+ should: [
+
+ {
+
+ key: ""country.name"",
+
+ match: { value: ""Germany"" },
+
+ },
+
+ ],
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
+
+
+
+client
+
+ .scroll(
+
+ ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::should([
+
+ Condition::matches(""country.name"", ""Germany"".to_string()),
+
+ ])),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+
+
+
+import io.qdrant.client.grpc.Points.Filter;
+
+import io.qdrant.client.grpc.Points.ScrollPoints;
+
+
+
+client
+
+ .scrollAsync(
+
+ ScrollPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setFilter(
+
+ Filter.newBuilder()
+
+ .addShould(matchKeyword(""country.name"", ""Germany""))
+
+ .build())
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using Qdrant.Client.Grpc;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.ScrollAsync(collectionName: ""{collection_name}"", filter: MatchKeyword(""country.name"", ""Germany""));
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Scroll(context.Background(), &qdrant.ScrollPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Filter: &qdrant.Filter{
+
+ Should: []*qdrant.Condition{
+
+ qdrant.NewMatch(""country.name"", ""Germany""),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+You can also search through arrays by projecting inner values using the `[]` syntax.
+
+
+
+```http
+
+POST /collections/{collection_name}/points/scroll
+
+{
+
+ ""filter"": {
+
+ ""should"": [
+
+ {
+
+ ""key"": ""country.cities[].population"",
+
+ ""range"": {
+
+          ""gte"": 9.0
+
+ }
+
+ }
+
+ ]
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+client.scroll(
+
+ collection_name=""{collection_name}"",
+
+ scroll_filter=models.Filter(
+
+ should=[
+
+ models.FieldCondition(
+
+ key=""country.cities[].population"",
+
+ range=models.Range(
+
+ gt=None,
+
+ gte=9.0,
+
+ lt=None,
+
+ lte=None,
+
+ ),
+
+ ),
+
+ ],
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.scroll(""{collection_name}"", {
+
+ filter: {
+
+ should: [
+
+ {
+
+ key: ""country.cities[].population"",
+
+ range: {
+
+ gt: null,
+
+ gte: 9.0,
+
+ lt: null,
+
+ lte: null,
+
+ },
+
+ },
+
+ ],
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, Filter, Range, ScrollPointsBuilder};
+
+
+
+client
+
+ .scroll(
+
+ ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::should([
+
+ Condition::range(
+
+ ""country.cities[].population"",
+
+ Range {
+
+ gte: Some(9.0),
+
+ ..Default::default()
+
+ },
+
+ ),
+
+ ])),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.range;
+
+
+
+import io.qdrant.client.grpc.Points.Filter;
+
+import io.qdrant.client.grpc.Points.Range;
+
+import io.qdrant.client.grpc.Points.ScrollPoints;
+
+
+
+client
+
+ .scrollAsync(
+
+ ScrollPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setFilter(
+
+ Filter.newBuilder()
+
+ .addShould(
+
+ range(
+
+ ""country.cities[].population"",
+
+ Range.newBuilder().setGte(9.0).build()))
+
+ .build())
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.ScrollAsync(
+
+ collectionName: ""{collection_name}"",
+
+ filter: Range(""country.cities[].population"", new Qdrant.Client.Grpc.Range { Gte = 9.0 })
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Scroll(context.Background(), &qdrant.ScrollPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Filter: &qdrant.Filter{
+
+ Should: []*qdrant.Condition{
+
+ qdrant.NewRange(""country.cities[].population"", &qdrant.Range{
+
+ Gte: qdrant.PtrOf(9.0),
+
+ }),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+This query would output only the point with id 2, as only Japan has a city with a population greater than 9.0.
+
+
+
+And the leaf nested field can also be an array.
+
+
+
+```http
+
+POST /collections/{collection_name}/points/scroll
+
+{
+
+ ""filter"": {
+
+ ""should"": [
+
+ {
+
+ ""key"": ""country.cities[].sightseeing"",
+
+ ""match"": {
+
+ ""value"": ""Osaka Castle""
+
+ }
+
+ }
+
+ ]
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+client.scroll(
+
+ collection_name=""{collection_name}"",
+
+ scroll_filter=models.Filter(
+
+ should=[
+
+ models.FieldCondition(
+
+ key=""country.cities[].sightseeing"",
+
+ match=models.MatchValue(value=""Osaka Castle""),
+
+ ),
+
+ ],
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.scroll(""{collection_name}"", {
+
+ filter: {
+
+ should: [
+
+ {
+
+ key: ""country.cities[].sightseeing"",
+
+ match: { value: ""Osaka Castle"" },
+
+ },
+
+ ],
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
+
+
+
+client
+
+ .scroll(
+
+ ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::should([
+
+ Condition::matches(""country.cities[].sightseeing"", ""Osaka Castle"".to_string()),
+
+ ])),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+
+
+
+import io.qdrant.client.grpc.Points.Filter;
+
+import io.qdrant.client.grpc.Points.ScrollPoints;
+
+
+
+client
+
+ .scrollAsync(
+
+ ScrollPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setFilter(
+
+ Filter.newBuilder()
+
+                    .addShould(matchKeyword(""country.cities[].sightseeing"", ""Osaka Castle""))
+
+ .build())
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.ScrollAsync(
+
+ collectionName: ""{collection_name}"",
+
+    filter: MatchKeyword(""country.cities[].sightseeing"", ""Osaka Castle"")
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Scroll(context.Background(), &qdrant.ScrollPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Filter: &qdrant.Filter{
+
+ Should: []*qdrant.Condition{
+
+            qdrant.NewMatch(""country.cities[].sightseeing"", ""Osaka Castle""),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+This query would output only the point with id 2, as only Japan has a city with ""Osaka Castle"" among its sightseeing attractions.
+
+
+
+### Nested object filter
+
+
+
+*Available as of v1.2.0*
+
+
+
+By default, the conditions take into account the entire payload of a point.
+
+
+
+For instance, given two points with the following payload:
+
+
+
+```json
+
+[
+
+ {
+
+ ""id"": 1,
+
+ ""dinosaur"": ""t-rex"",
+
+ ""diet"": [
+
+ { ""food"": ""leaves"", ""likes"": false},
+
+ { ""food"": ""meat"", ""likes"": true}
+
+ ]
+
+ },
+
+ {
+
+ ""id"": 2,
+
+ ""dinosaur"": ""diplodocus"",
+
+ ""diet"": [
+
+ { ""food"": ""leaves"", ""likes"": true},
+
+ { ""food"": ""meat"", ""likes"": false}
+
+ ]
+
+ }
+
+]
+
+```
+
+
+
+The following query would match both points:
+
+
+
+```http
+
+POST /collections/{collection_name}/points/scroll
+
+{
+
+ ""filter"": {
+
+ ""must"": [
+
+ {
+
+ ""key"": ""diet[].food"",
+
+ ""match"": {
+
+ ""value"": ""meat""
+
+ }
+
+ },
+
+ {
+
+ ""key"": ""diet[].likes"",
+
+ ""match"": {
+
+ ""value"": true
+
+ }
+
+ }
+
+ ]
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+client.scroll(
+
+ collection_name=""{collection_name}"",
+
+ scroll_filter=models.Filter(
+
+ must=[
+
+ models.FieldCondition(
+
+ key=""diet[].food"", match=models.MatchValue(value=""meat"")
+
+ ),
+
+ models.FieldCondition(
+
+ key=""diet[].likes"", match=models.MatchValue(value=True)
+
+ ),
+
+ ],
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.scroll(""{collection_name}"", {
+
+ filter: {
+
+ must: [
+
+ {
+
+ key: ""diet[].food"",
+
+ match: { value: ""meat"" },
+
+ },
+
+ {
+
+ key: ""diet[].likes"",
+
+ match: { value: true },
+
+ },
+
+ ],
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
+
+
+
+client
+
+ .scroll(
+
+ ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::must([
+
+ Condition::matches(""diet[].food"", ""meat"".to_string()),
+
+ Condition::matches(""diet[].likes"", true),
+
+ ])),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+
+
+import static io.qdrant.client.ConditionFactory.match;
+
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+
+
+
+import io.qdrant.client.QdrantClient;
+
+import io.qdrant.client.QdrantGrpcClient;
+
+import io.qdrant.client.grpc.Points.Filter;
+
+import io.qdrant.client.grpc.Points.ScrollPoints;
+
+
+
+QdrantClient client =
+
+ new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build());
+
+
+
+client
+
+ .scrollAsync(
+
+ ScrollPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setFilter(
+
+ Filter.newBuilder()
+
+ .addAllMust(
+
+ List.of(matchKeyword(""diet[].food"", ""meat""), match(""diet[].likes"", true)))
+
+ .build())
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.ScrollAsync(
+
+ collectionName: ""{collection_name}"",
+
+ filter: MatchKeyword(""diet[].food"", ""meat"") & Match(""diet[].likes"", true)
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Scroll(context.Background(), &qdrant.ScrollPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Filter: &qdrant.Filter{
+
+ Must: []*qdrant.Condition{
+
+ qdrant.NewMatch(""diet[].food"", ""meat""),
+
+ qdrant.NewMatchBool(""diet[].likes"", true),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+This happens because both points match the two conditions:
+
+
+
+- the ""t-rex"" matches food=meat on `diet[1].food` and likes=true on `diet[1].likes`
+
+- the ""diplodocus"" matches food=meat on `diet[1].food` and likes=true on `diet[0].likes`
+
+
+
+To retrieve only the points that match the conditions on a per-array-element basis (that is, only the point with id 1 in this example), you need to use a nested object filter.
+
+
+
+Nested object filters allow arrays of objects to be queried independently of each other.
+
+
+
+It is achieved by using the `nested` condition type formed by a payload key to focus on and a filter to apply.
+
+
+
+The key should point to an array of objects and can be used with or without the bracket notation (""data"" or ""data[]"").
+
+
+
+```http
+
+POST /collections/{collection_name}/points/scroll
+
+{
+
+ ""filter"": {
+
+ ""must"": [{
+
+ ""nested"": {
+
+ ""key"": ""diet"",
+
+ ""filter"":{
+
+ ""must"": [
+
+ {
+
+ ""key"": ""food"",
+
+ ""match"": {
+
+ ""value"": ""meat""
+
+ }
+
+ },
+
+ {
+
+ ""key"": ""likes"",
+
+ ""match"": {
+
+ ""value"": true
+
+ }
+
+ }
+
+ ]
+
+ }
+
+ }
+
+ }]
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+client.scroll(
+
+ collection_name=""{collection_name}"",
+
+ scroll_filter=models.Filter(
+
+ must=[
+
+ models.NestedCondition(
+
+ nested=models.Nested(
+
+ key=""diet"",
+
+ filter=models.Filter(
+
+ must=[
+
+ models.FieldCondition(
+
+ key=""food"", match=models.MatchValue(value=""meat"")
+
+ ),
+
+ models.FieldCondition(
+
+ key=""likes"", match=models.MatchValue(value=True)
+
+ ),
+
+ ]
+
+ ),
+
+ )
+
+ )
+
+ ],
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.scroll(""{collection_name}"", {
+
+ filter: {
+
+ must: [
+
+ {
+
+ nested: {
+
+ key: ""diet"",
+
+ filter: {
+
+ must: [
+
+ {
+
+ key: ""food"",
+
+ match: { value: ""meat"" },
+
+ },
+
+ {
+
+ key: ""likes"",
+
+ match: { value: true },
+
+ },
+
+ ],
+
+ },
+
+ },
+
+ },
+
+ ],
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, Filter, NestedCondition, ScrollPointsBuilder};
+
+
+
+client
+
+ .scroll(
+
+ ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::must([NestedCondition {
+
+ key: ""diet"".to_string(),
+
+ filter: Some(Filter::must([
+
+ Condition::matches(""food"", ""meat"".to_string()),
+
+ Condition::matches(""likes"", true),
+
+ ])),
+
+ }
+
+ .into()])),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+
+
+import static io.qdrant.client.ConditionFactory.match;
+
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+
+import static io.qdrant.client.ConditionFactory.nested;
+
+
+
+import io.qdrant.client.grpc.Points.Filter;
+
+import io.qdrant.client.grpc.Points.ScrollPoints;
+
+
+
+client
+
+ .scrollAsync(
+
+ ScrollPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setFilter(
+
+ Filter.newBuilder()
+
+ .addMust(
+
+ nested(
+
+ ""diet"",
+
+ Filter.newBuilder()
+
+ .addAllMust(
+
+ List.of(
+
+ matchKeyword(""food"", ""meat""), match(""likes"", true)))
+
+ .build()))
+
+ .build())
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.ScrollAsync(
+
+ collectionName: ""{collection_name}"",
+
+ filter: Nested(""diet"", MatchKeyword(""food"", ""meat"") & Match(""likes"", true))
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Scroll(context.Background(), &qdrant.ScrollPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Filter: &qdrant.Filter{
+
+ Must: []*qdrant.Condition{
+
+ qdrant.NewNestedFilter(""diet"", &qdrant.Filter{
+
+ Must: []*qdrant.Condition{
+
+ qdrant.NewMatch(""food"", ""meat""),
+
+ qdrant.NewMatchBool(""likes"", true),
+
+ },
+
+ }),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+The matching logic is modified to be applied at the level of an array element within the payload.
+
+
+
+Nested filters work in the same way as if the nested filter was applied to a single element of the array at a time.
+
+The parent point is considered to match the condition if at least one element of the array matches the nested filter.
+
+
+
+**Limitations**
+
+
+
+The `has_id` condition is not supported within the nested object filter. If you need it, place it in an adjacent `must` clause.
+
+
+
+```http
+
+POST /collections/{collection_name}/points/scroll
+
+{
+
+ ""filter"":{
+
+ ""must"":[
+
+ {
+
+ ""nested"":{
+
+ ""key"":""diet"",
+
+ ""filter"":{
+
+ ""must"":[
+
+ {
+
+ ""key"":""food"",
+
+ ""match"":{
+
+ ""value"":""meat""
+
+ }
+
+ },
+
+ {
+
+ ""key"":""likes"",
+
+ ""match"":{
+
+ ""value"":true
+
+ }
+
+ }
+
+ ]
+
+ }
+
+ }
+
+ },
+
+ {
+
+ ""has_id"":[
+
+ 1
+
+ ]
+
+ }
+
+ ]
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+client.scroll(
+
+ collection_name=""{collection_name}"",
+
+ scroll_filter=models.Filter(
+
+ must=[
+
+ models.NestedCondition(
+
+ nested=models.Nested(
+
+ key=""diet"",
+
+ filter=models.Filter(
+
+ must=[
+
+ models.FieldCondition(
+
+ key=""food"", match=models.MatchValue(value=""meat"")
+
+ ),
+
+ models.FieldCondition(
+
+ key=""likes"", match=models.MatchValue(value=True)
+
+ ),
+
+ ]
+
+ ),
+
+ )
+
+ ),
+
+ models.HasIdCondition(has_id=[1]),
+
+ ],
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.scroll(""{collection_name}"", {
+
+ filter: {
+
+ must: [
+
+ {
+
+ nested: {
+
+ key: ""diet"",
+
+ filter: {
+
+ must: [
+
+ {
+
+ key: ""food"",
+
+ match: { value: ""meat"" },
+
+ },
+
+ {
+
+ key: ""likes"",
+
+ match: { value: true },
+
+ },
+
+ ],
+
+ },
+
+ },
+
+ },
+
+ {
+
+ has_id: [1],
+
+ },
+
+ ],
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, Filter, NestedCondition, ScrollPointsBuilder};
+
+
+
+client
+
+ .scroll(
+
+ ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::must([
+
+ NestedCondition {
+
+ key: ""diet"".to_string(),
+
+ filter: Some(Filter::must([
+
+ Condition::matches(""food"", ""meat"".to_string()),
+
+ Condition::matches(""likes"", true),
+
+ ])),
+
+ }
+
+ .into(),
+
+ Condition::has_id([1]),
+
+ ])),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+
+
+import static io.qdrant.client.ConditionFactory.hasId;
+
+import static io.qdrant.client.ConditionFactory.match;
+
+import static io.qdrant.client.ConditionFactory.matchKeyword;
+
+import static io.qdrant.client.ConditionFactory.nested;
+
+import static io.qdrant.client.PointIdFactory.id;
+
+
+
+import io.qdrant.client.grpc.Points.Filter;
+
+import io.qdrant.client.grpc.Points.ScrollPoints;
+
+
+
+client
+
+ .scrollAsync(
+
+ ScrollPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setFilter(
+
+ Filter.newBuilder()
+
+ .addMust(
+
+ nested(
+
+ ""diet"",
+
+ Filter.newBuilder()
+
+ .addAllMust(
+
+ List.of(
+
+ matchKeyword(""food"", ""meat""), match(""likes"", true)))
+
+ .build()))
+
+ .addMust(hasId(id(1)))
+
+ .build())
+
+ .build())
+
+ .get();
+
+```
+
+
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.ScrollAsync(
+
+ collectionName: ""{collection_name}"",
+
+ filter: Nested(""diet"", MatchKeyword(""food"", ""meat"") & Match(""likes"", true)) & HasId(1)
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Scroll(context.Background(), &qdrant.ScrollPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Filter: &qdrant.Filter{
+
+ Must: []*qdrant.Condition{
+
+ qdrant.NewNestedFilter(""diet"", &qdrant.Filter{
+
+ Must: []*qdrant.Condition{
+
+ qdrant.NewMatch(""food"", ""meat""),
+
+ qdrant.NewMatchBool(""likes"", true),
+
+ },
+
+ }),
+
+ qdrant.NewHasID(qdrant.NewIDNum(1)),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+### Full Text Match
+
+
+
+*Available as of v0.10.0*
+
+
+
+A special case of the `match` condition is the `text` match condition.
+
+It allows you to search for a specific substring, token or phrase within the text field.
+
+
+
+The exact texts that match the condition depend on the full-text index configuration.
+
+The configuration is defined during index creation and is described in the [full-text index](../indexing/#full-text-index) section.
+
+
+
+If there is no full-text index for the field, the condition will work as an exact substring match.
+
+
+
+```json
+
+{
+
+ ""key"": ""description"",
+
+ ""match"": {
+
+ ""text"": ""good cheap""
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+models.FieldCondition(
+
+ key=""description"",
+
+ match=models.MatchText(text=""good cheap""),
+
+)
+
+```
+
+
+
+```typescript
+
+{
+
+ key: 'description',
+
+ match: {text: 'good cheap'}
+
+}
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::Condition;
+
+
+
+Condition::matches_text(""description"", ""good cheap"")
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.matchText;
+
+
+
+matchText(""description"", ""good cheap"");
+
+```
+
+
+
+```csharp
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+MatchText(""description"", ""good cheap"");
+
+```
+
+
+
+```go
+
+import ""github.com/qdrant/go-client/qdrant""
+
+
+
+qdrant.NewMatchText(""description"", ""good cheap"")
+
+```
+
+
+
+If the query has several words, then the condition will be satisfied only if all of them are present in the text.
+
+
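+To get token-based matching rather than exact substring matching, a full-text index must exist for the field. A minimal sketch of creating one with the Python client (the tokenizer settings below are illustrative; see the [full-text index](../indexing/#full-text-index) section for the available options):
+
+
+
+```python
+
+client.create_payload_index(
+
+    collection_name=""{collection_name}"",
+
+    field_name=""description"",
+
+    field_schema=models.TextIndexParams(
+
+        type=""text"",
+
+        tokenizer=models.TokenizerType.WORD,
+
+        lowercase=True,
+
+    ),
+
+)
+
+```
+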
+
+### Range
+
+
+
+```json
+
+{
+
+ ""key"": ""price"",
+
+ ""range"": {
+
+ ""gt"": null,
+
+ ""gte"": 100.0,
+
+ ""lt"": null,
+
+ ""lte"": 450.0
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+models.FieldCondition(
+
+ key=""price"",
+
+ range=models.Range(
+
+ gt=None,
+
+ gte=100.0,
+
+ lt=None,
+
+ lte=450.0,
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+{
+
+ key: 'price',
+
+ range: {
+
+ gt: null,
+
+ gte: 100.0,
+
+ lt: null,
+
+ lte: 450.0
+
+ }
+
+}
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, Range};
+
+
+
+Condition::range(
+
+ ""price"",
+
+ Range {
+
+ gt: None,
+
+ gte: Some(100.0),
+
+ lt: None,
+
+ lte: Some(450.0),
+
+ },
+
+)
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.range;
+
+
+
+import io.qdrant.client.grpc.Points.Range;
+
+
+
+range(""price"", Range.newBuilder().setGte(100.0).setLte(450).build());
+
+```
+
+
+
+```csharp
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+Range(""price"", new Qdrant.Client.Grpc.Range { Gte = 100.0, Lte = 450 });
+
+```
+
+
+
+```go
+
+import ""github.com/qdrant/go-client/qdrant""
+
+
+
+qdrant.NewRange(""price"", &qdrant.Range{
+
+ Gte: qdrant.PtrOf(100.0),
+
+ Lte: qdrant.PtrOf(450.0),
+
+})
+
+
+
+```
+
+
+
+The `range` condition sets the range of possible values for stored payload values.
+
+If several values are stored, at least one of them should match the condition.
+
+
+
+Comparisons that can be used:
+
+
+
+- `gt` - greater than
+
+- `gte` - greater than or equal
+
+- `lt` - less than
+
+- `lte` - less than or equal
+
+
+
+Can be applied to [float](../payload/#float) and [integer](../payload/#integer) payloads.
+
+
+
+### Datetime Range
+
+
+
+The datetime range is a unique range condition, used for [datetime](../payload/#datetime) payloads, which supports RFC 3339 formats.
+
+You do not need to convert dates to UNIX timestamps. During comparison, timestamps are parsed and converted to UTC.
+
+
+
+_Available as of v1.8.0_
+
+
+
+```json
+
+{
+
+ ""key"": ""date"",
+
+ ""range"": {
+
+ ""gt"": ""2023-02-08T10:49:00Z"",
+
+ ""gte"": null,
+
+ ""lt"": null,
+
+ ""lte"": ""2024-01-31 10:14:31Z""
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+models.FieldCondition(
+
+ key=""date"",
+
+ range=models.DatetimeRange(
+
+ gt=""2023-02-08T10:49:00Z"",
+
+ gte=None,
+
+ lt=None,
+
+ lte=""2024-01-31T10:14:31Z"",
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+{
+
+ key: 'date',
+
+ range: {
+
+ gt: '2023-02-08T10:49:00Z',
+
+ gte: null,
+
+ lt: null,
+
+ lte: '2024-01-31T10:14:31Z'
+
+ }
+
+}
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, DatetimeRange, Timestamp};
+
+
+
+Condition::datetime_range(
+
+ ""date"",
+
+ DatetimeRange {
+
+ gt: Some(Timestamp::date_time(2023, 2, 8, 10, 49, 0).unwrap()),
+
+ gte: None,
+
+ lt: None,
+
+ lte: Some(Timestamp::date_time(2024, 1, 31, 10, 14, 31).unwrap()),
+
+ },
+
+)
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.datetimeRange;
+
+
+
+import com.google.protobuf.Timestamp;
+
+import io.qdrant.client.grpc.Points.DatetimeRange;
+
+import java.time.Instant;
+
+
+
+long gt = Instant.parse(""2023-02-08T10:49:00Z"").getEpochSecond();
+
+long lte = Instant.parse(""2024-01-31T10:14:31Z"").getEpochSecond();
+
+
+
+datetimeRange(""date"",
+
+ DatetimeRange.newBuilder()
+
+ .setGt(Timestamp.newBuilder().setSeconds(gt))
+
+ .setLte(Timestamp.newBuilder().setSeconds(lte))
+
+ .build());
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client.Grpc;
+
+
+
+Conditions.DatetimeRange(
+
+ field: ""date"",
+
+ gt: new DateTime(2023, 2, 8, 10, 49, 0, DateTimeKind.Utc),
+
+ lte: new DateTime(2024, 1, 31, 10, 14, 31, DateTimeKind.Utc)
+
+);
+
+```
+
+
+
+```go
+
+import (
+
+ ""time""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+ ""google.golang.org/protobuf/types/known/timestamppb""
+
+)
+
+
+
+qdrant.NewDatetimeRange(""date"", &qdrant.DatetimeRange{
+
+ Gt: timestamppb.New(time.Date(2023, 2, 8, 10, 49, 0, 0, time.UTC)),
+
+ Lte: timestamppb.New(time.Date(2024, 1, 31, 10, 14, 31, 0, time.UTC)),
+
+})
+
+```
+
+
+
+### UUID Match
+
+
+
+_Available as of v1.11.0_
+
+
+
+Matching of UUID values works similarly to the regular `match` condition for strings.
+
+Functionally, it works exactly the same with `keyword` and `uuid` indexes, but the `uuid` index is more memory-efficient.
+
+
+
+```json
+
+{
+
+ ""key"": ""uuid"",
+
+ ""match"": {
+
+ ""uuid"": ""f47ac10b-58cc-4372-a567-0e02b2c3d479""
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+models.FieldCondition(
+
+ key=""uuid"",
+
+    match=models.MatchValue(value=""f47ac10b-58cc-4372-a567-0e02b2c3d479""),
+
+)
+
+```
+
+
+
+```typescript
+
+{
+
+ key: 'uuid',
+
+ match: {uuid: 'f47ac10b-58cc-4372-a567-0e02b2c3d479'}
+
+}
+
+```
+
+
+
+```rust
+
+Condition::matches(""uuid"", ""f47ac10b-58cc-4372-a567-0e02b2c3d479"".to_string())
+
+```
+
+
+
+```java
+
+matchKeyword(""uuid"", ""f47ac10b-58cc-4372-a567-0e02b2c3d479"");
+
+```
+
+
+
+```csharp
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+MatchKeyword(""uuid"", ""f47ac10b-58cc-4372-a567-0e02b2c3d479"");
+
+```
+
+
+
+```go
+
+import ""github.com/qdrant/go-client/qdrant""
+
+
+
+qdrant.NewMatch(""uuid"", ""f47ac10b-58cc-4372-a567-0e02b2c3d479"")
+
+```
+
+
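+To benefit from the lower memory usage, you can create a `uuid` payload index for the field. A sketch with the Python client, assuming a client version that supports the `uuid` schema type (v1.11.0 or later):
+
+
+
+```python
+
+client.create_payload_index(
+
+    collection_name=""{collection_name}"",
+
+    field_name=""uuid"",
+
+    field_schema=models.PayloadSchemaType.UUID,
+
+)
+
+```
+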
+
+### Geo
+
+
+
+#### Geo Bounding Box
+
+
+
+```json
+
+{
+
+ ""key"": ""location"",
+
+ ""geo_bounding_box"": {
+
+ ""bottom_right"": {
+
+ ""lon"": 13.455868,
+
+ ""lat"": 52.495862
+
+ },
+
+ ""top_left"": {
+
+ ""lon"": 13.403683,
+
+ ""lat"": 52.520711
+
+ }
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+models.FieldCondition(
+
+ key=""location"",
+
+ geo_bounding_box=models.GeoBoundingBox(
+
+ bottom_right=models.GeoPoint(
+
+ lon=13.455868,
+
+ lat=52.495862,
+
+ ),
+
+ top_left=models.GeoPoint(
+
+ lon=13.403683,
+
+ lat=52.520711,
+
+ ),
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+{
+
+ key: 'location',
+
+ geo_bounding_box: {
+
+ bottom_right: {
+
+ lon: 13.455868,
+
+ lat: 52.495862
+
+ },
+
+ top_left: {
+
+ lon: 13.403683,
+
+ lat: 52.520711
+
+ }
+
+ }
+
+}
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, GeoBoundingBox, GeoPoint};
+
+
+
+Condition::geo_bounding_box(
+
+ ""location"",
+
+ GeoBoundingBox {
+
+ bottom_right: Some(GeoPoint {
+
+ lon: 13.455868,
+
+ lat: 52.495862,
+
+ }),
+
+ top_left: Some(GeoPoint {
+
+ lon: 13.403683,
+
+ lat: 52.520711,
+
+ }),
+
+ },
+
+)
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.geoBoundingBox;
+
+
+
+geoBoundingBox(""location"", 52.520711, 13.403683, 52.495862, 13.455868);
+
+```
+
+
+
+```csharp
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+GeoBoundingBox(""location"", 52.520711, 13.403683, 52.495862, 13.455868);
+
+```
+
+
+
+```go
+
+import ""github.com/qdrant/go-client/qdrant""
+
+
+
+qdrant.NewGeoBoundingBox(""location"", 52.520711, 13.403683, 52.495862, 13.455868)
+
+```
+
+
+
+It matches with `location`s inside a rectangle with the coordinates of the upper left corner in `top_left` and the coordinates of the lower right corner in `bottom_right`.
+
+
+
+#### Geo Radius
+
+
+
+```json
+
+{
+
+ ""key"": ""location"",
+
+ ""geo_radius"": {
+
+ ""center"": {
+
+ ""lon"": 13.403683,
+
+ ""lat"": 52.520711
+
+ },
+
+ ""radius"": 1000.0
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+models.FieldCondition(
+
+ key=""location"",
+
+ geo_radius=models.GeoRadius(
+
+ center=models.GeoPoint(
+
+ lon=13.403683,
+
+ lat=52.520711,
+
+ ),
+
+ radius=1000.0,
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+{
+
+ key: 'location',
+
+ geo_radius: {
+
+ center: {
+
+ lon: 13.403683,
+
+ lat: 52.520711
+
+ },
+
+ radius: 1000.0
+
+ }
+
+}
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, GeoPoint, GeoRadius};
+
+
+
+Condition::geo_radius(
+
+ ""location"",
+
+ GeoRadius {
+
+ center: Some(GeoPoint {
+
+ lon: 13.403683,
+
+ lat: 52.520711,
+
+ }),
+
+ radius: 1000.0,
+
+ },
+
+)
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.geoRadius;
+
+
+
+geoRadius(""location"", 52.520711, 13.403683, 1000.0f);
+
+```
+
+
+
+```csharp
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+GeoRadius(""location"", 52.520711, 13.403683, 1000.0f);
+
+```
+
+
+
+```go
+
+import ""github.com/qdrant/go-client/qdrant""
+
+
+
+qdrant.NewGeoRadius(""location"", 52.520711, 13.403683, 1000.0)
+
+```
+
+
+
+It matches with `location`s inside a circle centered at `center` with a radius of `radius` meters.
+
+
+
+If several values are stored, at least one of them should match the condition.
+
+These conditions can only be applied to payloads that match the [geo-data format](../payload/#geo).
+
+
+
+#### Geo Polygon
+
+Geo Polygon search is useful when you want to find points inside an irregularly shaped area, for example a country boundary or a forest boundary. A polygon always has an exterior ring and may optionally include interior rings. A lake with an island would be an example of an interior ring. If you wanted to find points in the water but not on the island, you would make an interior ring for the island.
+
+
+
+When defining a ring, you must pick either a clockwise or counterclockwise ordering for your points. The first and last point of the polygon must be the same.
+
+
+
+Currently, we only support unprojected global coordinates (decimal degrees longitude and latitude) and we are datum agnostic.
+
+
+
+```json
+
+
+
+{
+
+ ""key"": ""location"",
+
+ ""geo_polygon"": {
+
+ ""exterior"": {
+
+ ""points"": [
+
+ { ""lon"": -70.0, ""lat"": -70.0 },
+
+ { ""lon"": 60.0, ""lat"": -70.0 },
+
+ { ""lon"": 60.0, ""lat"": 60.0 },
+
+ { ""lon"": -70.0, ""lat"": 60.0 },
+
+ { ""lon"": -70.0, ""lat"": -70.0 }
+
+ ]
+
+ },
+
+ ""interiors"": [
+
+ {
+
+ ""points"": [
+
+ { ""lon"": -65.0, ""lat"": -65.0 },
+
+ { ""lon"": 0.0, ""lat"": -65.0 },
+
+ { ""lon"": 0.0, ""lat"": 0.0 },
+
+ { ""lon"": -65.0, ""lat"": 0.0 },
+
+ { ""lon"": -65.0, ""lat"": -65.0 }
+
+ ]
+
+ }
+
+ ]
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+models.FieldCondition(
+
+ key=""location"",
+
+ geo_polygon=models.GeoPolygon(
+
+ exterior=models.GeoLineString(
+
+ points=[
+
+ models.GeoPoint(
+
+ lon=-70.0,
+
+ lat=-70.0,
+
+ ),
+
+ models.GeoPoint(
+
+ lon=60.0,
+
+ lat=-70.0,
+
+ ),
+
+ models.GeoPoint(
+
+ lon=60.0,
+
+ lat=60.0,
+
+ ),
+
+ models.GeoPoint(
+
+ lon=-70.0,
+
+ lat=60.0,
+
+ ),
+
+ models.GeoPoint(
+
+ lon=-70.0,
+
+ lat=-70.0,
+
+ ),
+
+ ]
+
+ ),
+
+ interiors=[
+
+ models.GeoLineString(
+
+ points=[
+
+ models.GeoPoint(
+
+ lon=-65.0,
+
+ lat=-65.0,
+
+ ),
+
+ models.GeoPoint(
+
+ lon=0.0,
+
+ lat=-65.0,
+
+ ),
+
+ models.GeoPoint(
+
+ lon=0.0,
+
+ lat=0.0,
+
+ ),
+
+ models.GeoPoint(
+
+ lon=-65.0,
+
+ lat=0.0,
+
+ ),
+
+ models.GeoPoint(
+
+ lon=-65.0,
+
+ lat=-65.0,
+
+ ),
+
+ ]
+
+ )
+
+ ],
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+{
+
+ key: 'location',
+
+ geo_polygon: {
+
+ exterior: {
+
+ points: [
+
+ {
+
+ lon: -70.0,
+
+ lat: -70.0
+
+ },
+
+ {
+
+ lon: 60.0,
+
+ lat: -70.0
+
+ },
+
+ {
+
+ lon: 60.0,
+
+ lat: 60.0
+
+ },
+
+ {
+
+ lon: -70.0,
+
+ lat: 60.0
+
+ },
+
+ {
+
+ lon: -70.0,
+
+ lat: -70.0
+
+ }
+
+ ]
+
+ },
+
+ interiors: {
+
+ points: [
+
+ {
+
+ lon: -65.0,
+
+ lat: -65.0
+
+ },
+
+ {
+
+ lon: 0.0,
+
+ lat: -65.0
+
+ },
+
+ {
+
+ lon: 0.0,
+
+ lat: 0.0
+
+ },
+
+ {
+
+ lon: -65.0,
+
+ lat: 0.0
+
+ },
+
+ {
+
+ lon: -65.0,
+
+ lat: -65.0
+
+ }
+
+ ]
+
+ }
+
+ }
+
+}
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, GeoLineString, GeoPoint, GeoPolygon};
+
+
+
+Condition::geo_polygon(
+
+ ""location"",
+
+ GeoPolygon {
+
+ exterior: Some(GeoLineString {
+
+ points: vec![
+
+ GeoPoint {
+
+ lon: -70.0,
+
+ lat: -70.0,
+
+ },
+
+ GeoPoint {
+
+ lon: 60.0,
+
+ lat: -70.0,
+
+ },
+
+ GeoPoint {
+
+ lon: 60.0,
+
+ lat: 60.0,
+
+ },
+
+ GeoPoint {
+
+ lon: -70.0,
+
+ lat: 60.0,
+
+ },
+
+ GeoPoint {
+
+ lon: -70.0,
+
+ lat: -70.0,
+
+ },
+
+ ],
+
+ }),
+
+ interiors: vec![GeoLineString {
+
+ points: vec![
+
+ GeoPoint {
+
+ lon: -65.0,
+
+ lat: -65.0,
+
+ },
+
+ GeoPoint {
+
+ lon: 0.0,
+
+ lat: -65.0,
+
+ },
+
+ GeoPoint { lon: 0.0, lat: 0.0 },
+
+ GeoPoint {
+
+ lon: -65.0,
+
+ lat: 0.0,
+
+ },
+
+ GeoPoint {
+
+ lon: -65.0,
+
+ lat: -65.0,
+
+ },
+
+ ],
+
+ }],
+
+ },
+
+)
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.geoPolygon;
+
+
+
+import io.qdrant.client.grpc.Points.GeoLineString;
+
+import io.qdrant.client.grpc.Points.GeoPoint;
+
+
+
+geoPolygon(
+
+ ""location"",
+
+ GeoLineString.newBuilder()
+
+ .addAllPoints(
+
+ List.of(
+
+ GeoPoint.newBuilder().setLon(-70.0).setLat(-70.0).build(),
+
+ GeoPoint.newBuilder().setLon(60.0).setLat(-70.0).build(),
+
+ GeoPoint.newBuilder().setLon(60.0).setLat(60.0).build(),
+
+ GeoPoint.newBuilder().setLon(-70.0).setLat(60.0).build(),
+
+ GeoPoint.newBuilder().setLon(-70.0).setLat(-70.0).build()))
+
+ .build(),
+
+ List.of(
+
+ GeoLineString.newBuilder()
+
+ .addAllPoints(
+
+ List.of(
+
+ GeoPoint.newBuilder().setLon(-65.0).setLat(-65.0).build(),
+
+ GeoPoint.newBuilder().setLon(0.0).setLat(-65.0).build(),
+
+ GeoPoint.newBuilder().setLon(0.0).setLat(0.0).build(),
+
+ GeoPoint.newBuilder().setLon(-65.0).setLat(0.0).build(),
+
+ GeoPoint.newBuilder().setLon(-65.0).setLat(-65.0).build()))
+
+ .build()));
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client.Grpc;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+GeoPolygon(
+
+ field: ""location"",
+
+ exterior: new GeoLineString
+
+ {
+
+ Points =
+
+ {
+
+ new GeoPoint { Lat = -70.0, Lon = -70.0 },
+
+ new GeoPoint { Lat = 60.0, Lon = -70.0 },
+
+ new GeoPoint { Lat = 60.0, Lon = 60.0 },
+
+ new GeoPoint { Lat = -70.0, Lon = 60.0 },
+
+ new GeoPoint { Lat = -70.0, Lon = -70.0 }
+
+ }
+
+ },
+
+ interiors: [
+
+ new()
+
+ {
+
+ Points =
+
+ {
+
+ new GeoPoint { Lat = -65.0, Lon = -65.0 },
+
+ new GeoPoint { Lat = 0.0, Lon = -65.0 },
+
+ new GeoPoint { Lat = 0.0, Lon = 0.0 },
+
+ new GeoPoint { Lat = -65.0, Lon = 0.0 },
+
+ new GeoPoint { Lat = -65.0, Lon = -65.0 }
+
+ }
+
+ }
+
+ ]
+
+);
+
+```
+
+
+
+```go
+
+import ""github.com/qdrant/go-client/qdrant""
+
+
+
+qdrant.NewGeoPolygon(""location"",
+
+ &qdrant.GeoLineString{
+
+ Points: []*qdrant.GeoPoint{
+
+ {Lat: -70, Lon: -70},
+
+ {Lat: 60, Lon: -70},
+
+ {Lat: 60, Lon: 60},
+
+ {Lat: -70, Lon: 60},
+
+ {Lat: -70, Lon: -70},
+
+ },
+
+ }, &qdrant.GeoLineString{
+
+ Points: []*qdrant.GeoPoint{
+
+ {Lat: -65, Lon: -65},
+
+ {Lat: 0, Lon: -65},
+
+ {Lat: 0, Lon: 0},
+
+ {Lat: -65, Lon: 0},
+
+ {Lat: -65, Lon: -65},
+
+ },
+
+ })
+
+```
+
+
+
+A location is considered a match if it lies inside or on the boundary of the given polygon's exterior ring, but not inside any of its interior rings.
+
+
+
+If several location values are stored for a point, the point is included as a candidate in the result set if any of them matches.
+
+These conditions can only be applied to payloads that match the [geo-data format](../payload/#geo).
+
+
+
+### Values count
+
+
+
+In addition to the direct value comparison, it is also possible to filter by the number of values.
+
+
+
+For example, given the data:
+
+
+
+```json
+
+[
+
+ { ""id"": 1, ""name"": ""product A"", ""comments"": [""Very good!"", ""Excellent""] },
+
+ { ""id"": 2, ""name"": ""product B"", ""comments"": [""meh"", ""expected more"", ""ok""] }
+
+]
+
+```
+
+
+
+We can perform the search only among the items with more than two comments:
+
+
+
+```json
+
+{
+
+ ""key"": ""comments"",
+
+ ""values_count"": {
+
+ ""gt"": 2
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+models.FieldCondition(
+
+ key=""comments"",
+
+ values_count=models.ValuesCount(gt=2),
+
+)
+
+```
+
+
+
+```typescript
+
+{
+
+ key: 'comments',
+
+ values_count: {gt: 2}
+
+}
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, ValuesCount};
+
+
+
+Condition::values_count(
+
+ ""comments"",
+
+ ValuesCount {
+
+ gt: Some(2),
+
+ ..Default::default()
+
+ },
+
+)
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.valuesCount;
+
+
+
+import io.qdrant.client.grpc.Points.ValuesCount;
+
+
+
+valuesCount(""comments"", ValuesCount.newBuilder().setGt(2).build());
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client.Grpc;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+ValuesCount(""comments"", new ValuesCount { Gt = 2 });
+
+```
+
+
+
+```go
+
+import ""github.com/qdrant/go-client/qdrant""
+
+
+
+qdrant.NewValuesCount(""comments"", &qdrant.ValuesCount{
+
+ Gt: qdrant.PtrOf(uint64(2)),
+
+})
+
+```
+
+
+
+The result would be:
+
+
+
+```json
+
+[{ ""id"": 2, ""name"": ""product B"", ""comments"": [""meh"", ""expected more"", ""ok""] }]
+
+```
+
+
+
+If the stored value is not an array, it is assumed that the number of values equals 1.
+
+
+
+### Is Empty
+
+
+
+Sometimes it is also useful to filter out records that are missing some value.
+
+The `IsEmpty` condition may help you with that:
+
+
+
+```json
+
+{
+
+ ""is_empty"": {
+
+ ""key"": ""reports""
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+models.IsEmptyCondition(
+
+ is_empty=models.PayloadField(key=""reports""),
+
+)
+
+```
+
+
+
+```typescript
+
+{
+
+ is_empty: {
+
+ key: ""reports"";
+
+ }
+
+}
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::Condition;
+
+
+
+Condition::is_empty(""reports"")
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.isEmpty;
+
+
+
+isEmpty(""reports"");
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client.Grpc;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+IsEmpty(""reports"");
+
+```
+
+
+
+```go
+
+import ""github.com/qdrant/go-client/qdrant""
+
+
+
+qdrant.NewIsEmpty(""reports"")
+
+```
+
+
+
+This condition will match all records where the field `reports` either does not exist, or has a `null` or `[]` value.
+
+
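+The condition can also be negated to keep only the points where the field is present and non-empty. A sketch using the Python client:
+
+
+
+```python
+
+client.scroll(
+
+    collection_name=""{collection_name}"",
+
+    scroll_filter=models.Filter(
+
+        must_not=[
+
+            models.IsEmptyCondition(is_empty=models.PayloadField(key=""reports"")),
+
+        ],
+
+    ),
+
+)
+
+```
+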
+
+
+
+
+
+### Is Null
+
+
+
+It is not possible to test for `NULL` values with the match condition.
+
+We have to use the `IsNull` condition instead:
+
+
+
+```json
+
+{
+
+ ""is_null"": {
+
+ ""key"": ""reports""
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+models.IsNullCondition(
+
+ is_null=models.PayloadField(key=""reports""),
+
+)
+
+```
+
+
+
+```typescript
+
+{
+
+ is_null: {
+
+ key: ""reports"";
+
+ }
+
+}
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::Condition;
+
+
+
+Condition::is_null(""reports"")
+
+```
+
+
+
+```java
+
+import static io.qdrant.client.ConditionFactory.isNull;
+
+
+
+isNull(""reports"");
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client.Grpc;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+IsNull(""reports"");
+
+```
+
+
+
+```go
+
+import ""github.com/qdrant/go-client/qdrant""
+
+
+
+qdrant.NewIsNull(""reports"")
+
+```
+
+
+
+This condition will match all records where the field `reports` exists and has a `NULL` value.
+
+
+
+
+
+### Has id
+
+
+
+This type of query is not related to payload, but can be very useful in some situations.
+
+For example, the user could mark some specific search results as irrelevant, or you may want to search only among a specified set of points.
+
+
+
+```http
+
+POST /collections/{collection_name}/points/scroll
+
+{
+
+ ""filter"": {
+
+ ""must"": [
+
+ { ""has_id"": [1,3,5,7,9,11] }
+
+ ]
+
+ }
+
+ ...
+
+}
+
+```
+
+
+
+```python
+
+client.scroll(
+
+ collection_name=""{collection_name}"",
+
+ scroll_filter=models.Filter(
+
+ must=[
+
+ models.HasIdCondition(has_id=[1, 3, 5, 7, 9, 11]),
+
+ ],
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+client.scroll(""{collection_name}"", {
+
+ filter: {
+
+ must: [
+
+ {
+
+ has_id: [1, 3, 5, 7, 9, 11],
+
+ },
+
+ ],
+
+ },
+
+});
+
+```
+
+
+
+```rust
+
+use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder};
+
+use qdrant_client::Qdrant;
+
+
+
+let client = Qdrant::from_url(""http://localhost:6334"").build()?;
+
+
+
+client
+
+ .scroll(
+
+ ScrollPointsBuilder::new(""{collection_name}"")
+
+ .filter(Filter::must([Condition::has_id([1, 3, 5, 7, 9, 11])])),
+
+ )
+
+ .await?;
+
+```
+
+
+
+```java
+
+import java.util.List;
+
+
+
+import static io.qdrant.client.ConditionFactory.hasId;
+
+import static io.qdrant.client.PointIdFactory.id;
+
+
+
+import io.qdrant.client.grpc.Points.Filter;
+
+import io.qdrant.client.grpc.Points.ScrollPoints;
+
+
+
+client
+
+ .scrollAsync(
+
+ ScrollPoints.newBuilder()
+
+ .setCollectionName(""{collection_name}"")
+
+ .setFilter(
+
+ Filter.newBuilder()
+
+ .addMust(hasId(List.of(id(1), id(3), id(5), id(7), id(9), id(11))))
+
+ .build())
+
+ .build())
+
+ .get();
+
+```
+
+
+
+```csharp
+
+using Qdrant.Client;
+
+using static Qdrant.Client.Grpc.Conditions;
+
+
+
+var client = new QdrantClient(""localhost"", 6334);
+
+
+
+await client.ScrollAsync(collectionName: ""{collection_name}"", filter: HasId([1, 3, 5, 7, 9, 11]));
+
+```
+
+
+
+```go
+
+import (
+
+ ""context""
+
+
+
+ ""github.com/qdrant/go-client/qdrant""
+
+)
+
+
+
+client, err := qdrant.NewClient(&qdrant.Config{
+
+ Host: ""localhost"",
+
+ Port: 6334,
+
+})
+
+
+
+client.Scroll(context.Background(), &qdrant.ScrollPoints{
+
+ CollectionName: ""{collection_name}"",
+
+ Filter: &qdrant.Filter{
+
+ Must: []*qdrant.Condition{
+
+ qdrant.NewHasID(
+
+ qdrant.NewIDNum(1),
+
+ qdrant.NewIDNum(3),
+
+ qdrant.NewIDNum(5),
+
+ qdrant.NewIDNum(7),
+
+ qdrant.NewIDNum(9),
+
+ qdrant.NewIDNum(11),
+
+ ),
+
+ },
+
+ },
+
+})
+
+```
+
+
+
+Filtered points would be:
+
+
+
+```json
+
+[
+
+ { ""id"": 1, ""city"": ""London"", ""color"": ""green"" },
+
+ { ""id"": 3, ""city"": ""London"", ""color"": ""blue"" },
+
+ { ""id"": 5, ""city"": ""Moscow"", ""color"": ""green"" }
+
+]
+
+```
+",documentation/concepts/filtering.md
+"---
+
+title: Concepts
+
+weight: 11
+
+# If the index.md file is empty, the link to the section will be hidden from the sidebar
+
+---
+
+
+
+# Concepts
+
+
+
+Think of these concepts as a glossary. Each of these concepts include a link to
+
+detailed information, usually with examples. If you're new to AI, these concepts
+
+can help you learn more about AI and the Qdrant approach.
+
+
+
+## Collections
+
+
+
+[Collections](/documentation/concepts/collections/) define a named set of points that you can use for your search.
+
+
+
+## Payload
+
+
+
+A [Payload](/documentation/concepts/payload/) describes information that you can store with vectors.
+
+
+
+## Points
+
+
+
+[Points](/documentation/concepts/points/) are records that consist of a vector and an optional payload.
+
+
+
+## Search
+
+
+
+[Search](/documentation/concepts/search/) describes _similarity search_, which places related objects close to each other in vector space.
+
+
+
+## Explore
+
+
+
+[Explore](/documentation/concepts/explore/) includes several APIs for exploring data in your collections.
+
+
+
+## Hybrid Queries
+
+
+
+[Hybrid Queries](/documentation/concepts/hybrid-queries/) combine multiple queries or perform them in more than one stage.
+
+
+
+## Filtering
+
+
+
+[Filtering](/documentation/concepts/filtering/) defines various database-style clauses, conditions, and more.
+
+
+
+## Optimizer
+
+
+
+[Optimizer](/documentation/concepts/optimizer/) describes options to rebuild
+
+database structures for faster search. They include a vacuum, a merge, and an
+
+indexing optimizer.
+
+
+
+## Storage
+
+
+
+[Storage](/documentation/concepts/storage/) describes the configuration of storage in segments, which include indexes and an ID mapper.
+
+
+
+## Indexing
+
+
+
+[Indexing](/documentation/concepts/indexing/) lists and describes available indexes. They include payload, vector, sparse vector, and a filterable index.
+
+
+
+## Snapshots
+
+
+
+[Snapshots](/documentation/concepts/snapshots/) describe the backup/restore process (and more) for each node at specific times.
+",documentation/concepts/_index.md
+"---
+
+title: Bulk Upload Vectors
+
+weight: 13
+
+---
+
+
+
+# Bulk upload a large number of vectors
+
+
+
+Uploading a large-scale dataset fast might be a challenge, but Qdrant has a few tricks to help you with that.
+
+
+
+The first important detail about data uploading is that the bottleneck is usually located on the client side, not on the server side.
+
+This means that if you are uploading a large dataset, you should prefer a high-performance client library.
+
+
+
+We recommend using our [Rust client library](https://github.com/qdrant/rust-client) for this purpose, as it is the fastest client library available for Qdrant.
+
+
+
+If you are not using Rust, you might want to consider parallelizing your upload process.
+
+
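+For example, recent versions of the Python client can batch and parallelize the upload for you. A minimal sketch (here `vectors` and `payloads` are placeholders for your own data, and the `batch_size` and `parallel` values are only illustrative):
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+# `vectors` and `payloads` are assumed to be iterables provided by your own pipeline.
+
+points = (
+
+    models.PointStruct(id=idx, vector=vector, payload=payload)
+
+    for idx, (vector, payload) in enumerate(zip(vectors, payloads))
+
+)
+
+
+
+client.upload_points(
+
+    collection_name=""{collection_name}"",
+
+    points=points,
+
+    batch_size=256,  # points per request
+
+    parallel=4,  # number of parallel upload processes
+
+)
+
+```
+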
+
+## Disable indexing during upload
+
+
+
+In case you are doing an initial upload of a large dataset, you might want to disable indexing during upload.
+
+This allows you to avoid unnecessary indexing of vectors that would be overwritten by the next batch.
+
+
+
+To disable indexing during upload, set `indexing_threshold` to `0`:
+
+
+
+```http
+
+PUT /collections/{collection_name}
+
+{
+
+ ""vectors"": {
+
+ ""size"": 768,
+
+ ""distance"": ""Cosine""
+
+ },
+
+ ""optimizers_config"": {
+
+ ""indexing_threshold"": 0
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.create_collection(
+
+ collection_name=""{collection_name}"",
+
+ vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+
+ optimizers_config=models.OptimizersConfigDiff(
+
+ indexing_threshold=0,
+
+ ),
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.createCollection(""{collection_name}"", {
+
+ vectors: {
+
+ size: 768,
+
+ distance: ""Cosine"",
+
+ },
+
+ optimizers_config: {
+
+ indexing_threshold: 0,
+
+ },
+
+});
+
+```
+
+
+
+After upload is done, you can enable indexing by setting `indexing_threshold` to a desired value (default is 20000):
+
+
+
+```http
+
+PATCH /collections/{collection_name}
+
+{
+
+ ""optimizers_config"": {
+
+ ""indexing_threshold"": 20000
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.update_collection(
+
+ collection_name=""{collection_name}"",
+
+ optimizer_config=models.OptimizersConfigDiff(indexing_threshold=20000),
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.updateCollection(""{collection_name}"", {
+
+ optimizers_config: {
+
+ indexing_threshold: 20000,
+
+ },
+
+});
+
+```
+
+
+
+## Upload directly to disk
+
+
+
+When the vectors you upload do not all fit in RAM, you likely want to use
+
+[memmap](../../concepts/storage/#configuring-memmap-storage)
+
+support.
+
+
+
+During collection
+
+[creation](../../concepts/collections/#create-collection),
+
+memmaps may be enabled on a per-vector basis using the `on_disk` parameter. This
+
+will store vector data directly on disk at all times. It is suitable for
+
+ingesting a large amount of data, essential for the billion scale benchmark.
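+
+
+
+As a minimal sketch (reusing the placeholder collection name and vector size from the previous section), a collection that keeps vectors on disk from the start can be created like this:
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.create_collection(
+
+    collection_name=""{collection_name}"",
+
+    vectors_config=models.VectorParams(
+
+        size=768,
+
+        distance=models.Distance.COSINE,
+
+        on_disk=True, # store vector data on disk instead of keeping it in RAM
+
+    ),
+
+)
+
+```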
+
+
+
+Using `memmap_threshold_kb` is not recommended in this case. It would require
+
+the [optimizer](../../concepts/optimizer/) to constantly
+
+transform in-memory segments into memmap segments on disk. This process is
+
+slower, and the optimizer can be a bottleneck when ingesting a large amount of
+
+data.
+
+
+
+Read more about this in
+
+[Configuring Memmap Storage](../../concepts/storage/#configuring-memmap-storage).
+
+
+
+## Parallel upload into multiple shards
+
+
+
+In Qdrant, each collection is split into shards. Each shard has a separate Write-Ahead-Log (WAL), which is responsible for ordering operations.
+
+By creating multiple shards, you can parallelize the upload of a large dataset. Two to four shards per machine is a reasonable number.
+
+
+
+```http
+
+PUT /collections/{collection_name}
+
+{
+
+ ""vectors"": {
+
+ ""size"": 768,
+
+ ""distance"": ""Cosine""
+
+ },
+
+ ""shard_number"": 2
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+
+
+client.create_collection(
+
+ collection_name=""{collection_name}"",
+
+ vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+
+ shard_number=2,
+
+)
+
+```
+
+
+
+```typescript
+
+import { QdrantClient } from ""@qdrant/js-client-rest"";
+
+
+
+const client = new QdrantClient({ host: ""localhost"", port: 6333 });
+
+
+
+client.createCollection(""{collection_name}"", {
+
+ vectors: {
+
+ size: 768,
+
+ distance: ""Cosine"",
+
+ },
+
+ shard_number: 2,
+
+});
+
+```
+",documentation/tutorials/bulk-upload.md
+"---
+
+title: Semantic code search
+
+weight: 22
+
+---
+
+
+
+# Use semantic search to navigate your codebase
+
+
+
+| Time: 45 min | Level: Intermediate | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/qdrant/examples/blob/master/code-search/code-search.ipynb) | |
+
+|--------------|---------------------|--|----|
+
+
+
+You too can enrich your applications with Qdrant semantic search. In this
+
+tutorial, we describe how you can use Qdrant to navigate a codebase, to help
+
+you find relevant code snippets. As an example, we will use the [Qdrant](https://github.com/qdrant/qdrant)
+
+source code itself, which is mostly written in Rust.
+
+
+
+
+
+
+
+## The approach
+
+
+
+We want to search codebases using natural semantic queries, and to search for
+
+code based on similar logic. You can set up both tasks with embeddings:
+
+
+
+1. A general usage neural encoder for Natural Language Processing (NLP), in our case
+
+ `all-MiniLM-L6-v2` from the
+
+ [sentence-transformers](https://www.sbert.net/docs/pretrained_models.html) library.
+
+2. Specialized embeddings for code-to-code similarity search. We use the
+
+ `jina-embeddings-v2-base-code` model.
+
+
+
+To prepare our code for `all-MiniLM-L6-v2`, we preprocess the code to text that
+
+more closely resembles natural language. The Jina embeddings model supports a
+
+variety of standard programming languages, so there is no need to preprocess the
+
+snippets. We can use the code as is.
+
+
+
+NLP-based search is based on function signatures, but code search may return
+
+smaller pieces, such as loops. So, if we receive a particular function signature
+
+from the NLP model and part of its implementation from the code model, we merge
+
+the results and highlight the overlap.
+
+
+
+## Data preparation
+
+
+
+Chunking the application sources into smaller parts is a non-trivial task. In
+
+general, functions, class methods, structs, enums, and all the other language-specific
+
+constructs are good candidates for chunks. They are big enough to
+
+contain some meaningful information, but small enough to be processed by
+
+embedding models with a limited context window. You can also use docstrings,
+
+comments, and other metadata to enrich the chunks with additional
+
+information.
+
+
+
+![Code chunking strategy](/documentation/tutorials/code-search/data-chunking.png)
+
+
+
+### Parsing the codebase
+
+
+
+While our example uses Rust, you can use our approach with any other language.
+
+You can parse code with a [Language Server Protocol](https://microsoft.github.io/language-server-protocol/) (**LSP**)
+
+compatible tool. You can use an LSP to build a graph of the codebase, and then extract chunks.
+
+We did our work with the [rust-analyzer](https://rust-analyzer.github.io/).
+
+We exported the parsed codebase into the [LSIF](https://microsoft.github.io/language-server-protocol/specifications/lsif/0.4.0/specification/)
+
+format, a standard for code intelligence data. Next, we used the LSIF data to
+
+navigate the codebase and extract the chunks. For details, see our [code search
+
+demo](https://github.com/qdrant/demo-code-search).
+
+
+
+
+
+
+
+We then exported the chunks into JSON documents with not only the code itself,
+
+but also context with the location of the code in the project. For example, see
+
+the description of the `await_ready_for_timeout` function from the `IsReady`
+
+struct in the `common` module:
+
+
+
+```json
+
+{
+
+ ""name"":""await_ready_for_timeout"",
+
+ ""signature"":""fn await_ready_for_timeout (& self , timeout : Duration) -> bool"",
+
+ ""code_type"":""Function"",
+
+ ""docstring"":""= \"" Return `true` if ready, `false` if timed out.\"""",
+
+ ""line"":44,
+
+ ""line_from"":43,
+
+ ""line_to"":51,
+
+ ""context"":{
+
+ ""module"":""common"",
+
+ ""file_path"":""lib/collection/src/common/is_ready.rs"",
+
+ ""file_name"":""is_ready.rs"",
+
+ ""struct_name"":""IsReady"",
+
+ ""snippet"":"" /// Return `true` if ready, `false` if timed out.\n pub fn await_ready_for_timeout(&self, timeout: Duration) -> bool {\n let mut is_ready = self.value.lock();\n if !*is_ready {\n !self.condvar.wait_for(&mut is_ready, timeout).timed_out()\n } else {\n true\n }\n }\n""
+
+ }
+
+}
+
+```
+
+
+
+You can examine the Qdrant structures, parsed in JSON, in the [`structures.jsonl`
+
+file](https://storage.googleapis.com/tutorial-attachments/code-search/structures.jsonl)
+
+in our Google Cloud Storage bucket. Download it and use it as a source of data for our code search.
+
+
+
+```shell
+
+wget https://storage.googleapis.com/tutorial-attachments/code-search/structures.jsonl
+
+```
+
+
+
+Next, load the file and parse the lines into a list of dictionaries:
+
+
+
+```python
+
+import json
+
+
+
+structures = []
+
+with open(""structures.jsonl"", ""r"") as fp:
+
+ for i, row in enumerate(fp):
+
+ entry = json.loads(row)
+
+ structures.append(entry)
+
+```
+
+
+
+### Code to *natural language* conversion
+
+
+
+Each programming language has its own syntax which is not a part of the natural
+
+language. Thus, a general-purpose model probably does not understand the code
+
+as is. We can, however, normalize the data by removing code specifics and
+
+including additional context, such as module, class, function, and file name.
+
+We took the following steps:
+
+
+
+1. Extract the signature of the function, method, or other code construct.
+
+2. Divide camel case and snake case names into separate words.
+
+3. Take the docstring, comments, and other important metadata.
+
+4. Build a sentence from the extracted data using a predefined template.
+
+5. Remove the special characters and replace them with spaces.
+
+
+
+The conversion expects dictionaries with the structure shown above. Define a `textify`
+
+function to do the conversion. We'll use the `inflection` library to handle
+
+the different naming conventions.
+
+
+
+```shell
+
+pip install inflection
+
+```
+
+
+
+Once all dependencies are installed, we define the `textify` function:
+
+
+
+```python
+
+import inflection
+
+import re
+
+
+
+from typing import Dict, Any
+
+
+
+def textify(chunk: Dict[str, Any]) -> str:
+
+ # Get rid of all the camel case / snake case
+
+ # - inflection.underscore changes the camel case to snake case
+
+ # - inflection.humanize converts the snake case to human readable form
+
+ name = inflection.humanize(inflection.underscore(chunk[""name""]))
+
+ signature = inflection.humanize(inflection.underscore(chunk[""signature""]))
+
+
+
+ # Check if docstring is provided
+
+ docstring = """"
+
+ if chunk[""docstring""]:
+
+ docstring = f""that does {chunk['docstring']} ""
+
+
+
+ # Extract the location of that snippet of code
+
+ context = (
+
+ f""module {chunk['context']['module']} ""
+
+ f""file {chunk['context']['file_name']}""
+
+ )
+
+ if chunk[""context""][""struct_name""]:
+
+ struct_name = inflection.humanize(
+
+ inflection.underscore(chunk[""context""][""struct_name""])
+
+ )
+
+ context = f""defined in struct {struct_name} {context}""
+
+
+
+ # Combine all the bits and pieces together
+
+ text_representation = (
+
+ f""{chunk['code_type']} {name} ""
+
+ f""{docstring}""
+
+ f""defined as {signature} ""
+
+ f""{context}""
+
+ )
+
+
+
+ # Remove any special characters and concatenate the tokens
+
+ tokens = re.split(r""\W"", text_representation)
+
+ tokens = filter(lambda x: x, tokens)
+
+ return "" "".join(tokens)
+
+```
+
+
+
+Now we can use `textify` to convert all chunks into text representations:
+
+
+
+```python
+
+text_representations = list(map(textify, structures))
+
+```
+
+
+
+This is how the `await_ready_for_timeout` function description appears:
+
+
+
+```text
+
+Function Await ready for timeout that does Return true if ready false if timed out defined as Fn await ready for timeout self timeout duration bool defined in struct Is ready module common file is_ready rs
+
+```
+
+
+
+## Ingestion pipeline
+
+
+
+Next, we build the code search engine by vectorizing the data and setting up a semantic
+
+search mechanism for both embedding models.
+
+
+
+### Natural language embeddings
+
+
+
+We can encode text representations through the `all-MiniLM-L6-v2` model from
+
+`sentence-transformers`. With the following command, we install `sentence-transformers`
+
+with dependencies:
+
+
+
+```shell
+
+pip install sentence-transformers optimum onnx
+
+```
+
+
+
+Then we can use the model to encode the text representations:
+
+
+
+```python
+
+from sentence_transformers import SentenceTransformer
+
+
+
+nlp_model = SentenceTransformer(""all-MiniLM-L6-v2"")
+
+nlp_embeddings = nlp_model.encode(
+
+ text_representations, show_progress_bar=True,
+
+)
+
+```
+
+
+
+### Code embeddings
+
+
+
+The `jina-embeddings-v2-base-code` model is a good candidate for this task.
+
+You can also get it from the `sentence-transformers` library, with a few extra steps.
+
+Visit [the model page](https://huggingface.co/jinaai/jina-embeddings-v2-base-code),
+
+accept the rules, and generate the access token in your [account settings](https://huggingface.co/settings/tokens).
+
+Once you have the token, you can use the model as follows:
+
+
+
+```python
+
+HF_TOKEN = ""THIS_IS_YOUR_TOKEN""
+
+
+
+# Extract the code snippets from the structures to a separate list
+
+code_snippets = [
+
+ structure[""context""][""snippet""] for structure in structures
+
+]
+
+
+
+code_model = SentenceTransformer(
+
+ ""jinaai/jina-embeddings-v2-base-code"",
+
+ token=HF_TOKEN,
+
+ trust_remote_code=True
+
+)
+
+code_model.max_seq_length = 8192 # increase the context length window
+
+code_embeddings = code_model.encode(
+
+ code_snippets, batch_size=4, show_progress_bar=True,
+
+)
+
+```
+
+
+
+Remember to set the `trust_remote_code` parameter to `True`. Otherwise, the
+
+model does not produce meaningful vectors. Setting this parameter allows the
+
+library to download and possibly launch some code on your machine, so be sure
+
+to trust the source.
+
+
+
+With both the natural language and code embeddings, we can store them in the
+
+Qdrant collection.
+
+
+
+### Building Qdrant collection
+
+
+
+We use the `qdrant-client` library to interact with the Qdrant server. Let's
+
+install that client:
+
+
+
+```shell
+
+pip install qdrant-client
+
+```
+
+
+
+Of course, we need a running Qdrant server for vector search. If you need one,
+
+you can [use a local Docker container](/documentation/quick-start/)
+
+or deploy it using the [Qdrant Cloud](https://cloud.qdrant.io/).
+
+You can use either to follow this tutorial. Configure the connection parameters:
+
+
+
+```python
+
+QDRANT_URL = ""https://my-cluster.cloud.qdrant.io:6333"" # http://localhost:6333 for local instance
+
+QDRANT_API_KEY = ""THIS_IS_YOUR_API_KEY"" # None for local instance
+
+```
+
+
+
+Then use the library to create a collection:
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(QDRANT_URL, api_key=QDRANT_API_KEY)
+
+client.create_collection(
+
+ ""qdrant-sources"",
+
+ vectors_config={
+
+ ""text"": models.VectorParams(
+
+ size=nlp_embeddings.shape[1],
+
+ distance=models.Distance.COSINE,
+
+ ),
+
+ ""code"": models.VectorParams(
+
+ size=code_embeddings.shape[1],
+
+ distance=models.Distance.COSINE,
+
+ ),
+
+ }
+
+)
+
+```
+
+
+
+Our newly created collection is ready to accept the data. Let's upload the embeddings:
+
+
+
+```python
+
+import uuid
+
+
+
+points = [
+
+ models.PointStruct(
+
+ id=uuid.uuid4().hex,
+
+ vector={
+
+ ""text"": text_embedding,
+
+ ""code"": code_embedding,
+
+ },
+
+ payload=structure,
+
+ )
+
+ for text_embedding, code_embedding, structure in zip(nlp_embeddings, code_embeddings, structures)
+
+]
+
+
+
+client.upload_points(""qdrant-sources"", points=points, batch_size=64)
+
+```
+
+
+
+The uploaded points are immediately available for search. Next, query the
+
+collection to find relevant code snippets.
+
+
+
+## Querying the codebase
+
+
+
+We use one of the models to search the collection. Start with text embeddings.
+
+Run the following query ""*How do I count points in a collection?*"". Review the
+
+results.
+
+
+
+
+
+
+
+```python
+
+query = ""How do I count points in a collection?""
+
+
+
+hits = client.query_points(
+
+ ""qdrant-sources"",
+
+ query=nlp_model.encode(query).tolist(),
+
+ using=""text"",
+
+ limit=5,
+
+).points
+
+```
+
+
+
+Now, review the results. The following table lists the module, the file name
+
+and score. Each line includes a link to the signature, as a code block from
+
+the file.
+
+
+
+| module | file_name | score | signature |
+
+|--------------------|---------------------|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+| toc | point_ops.rs | 0.59448624 | [ `pub async fn count`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/storage/src/content_manager/toc/point_ops.rs#L120) |
+
+| operations | types.rs | 0.5493385 | [ `pub struct CountRequestInternal`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/operations/types.rs#L831) |
+
+| collection_manager | segments_updater.rs | 0.5121002 | [ `pub(crate) fn upsert_points<'a, T>`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/collection_manager/segments_updater.rs#L339) |
+
+| collection | point_ops.rs | 0.5063539 | [ `pub async fn count`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/collection/point_ops.rs#L213) |
+
+| map_index | mod.rs | 0.49973983 | [ `fn get_points_with_value_count`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/segment/src/index/field_index/map_index/mod.rs#L88) |
+
+
+
+It seems we were able to find some relevant code structures. Let's try the same with the code embeddings:
+
+
+
+```python
+
+hits = client.query_points(
+
+ ""qdrant-sources"",
+
+ query=code_model.encode(query).tolist(),
+
+ using=""code"",
+
+ limit=5,
+
+).points
+
+```
+
+
+
+Output:
+
+
+
+| module | file_name | score | signature |
+
+|---------------|----------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+| field_index | geo_index.rs | 0.73278356 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/segment/src/index/field_index/geo_index.rs#L612) |
+
+| numeric_index | mod.rs | 0.7254976 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/numeric_index/mod.rs#L322) |
+
+| map_index | mod.rs | 0.7124739 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/map_index/mod.rs#L315) |
+
+| map_index | mod.rs | 0.7124739 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/map_index/mod.rs#L429) |
+
+| fixtures | payload_context_fixture.rs | 0.706204 | [ `fn total_point_count`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/fixtures/payload_context_fixture.rs#L122) |
+
+
+
+The scores returned by the two models are not directly comparable, but we can
+
+see that the results are different. Code and text embeddings capture
+
+different aspects of the codebase. We can use both models to query the collection
+
+in a single batch request and then combine the results to get the most relevant code snippets.
+
+
+
+```python
+
+responses = client.query_batch_points(
+
+ ""qdrant-sources"",
+
+ requests=[
+
+ models.QueryRequest(
+
+ query=nlp_model.encode(query).tolist(),
+
+ using=""text"",
+
+ with_payload=True,
+
+ limit=5,
+
+ ),
+
+ models.QueryRequest(
+
+ query=code_model.encode(query).tolist(),
+
+ using=""code"",
+
+ with_payload=True,
+
+ limit=5,
+
+ ),
+
+ ]
+
+)
+
+
+
+results = [response.points for response in responses]
+
+```
+
+
+
+Output:
+
+
+
+| module | file_name | score | signature |
+
+|--------------------|----------------------------|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+| toc | point_ops.rs | 0.59448624 | [ `pub async fn count`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/storage/src/content_manager/toc/point_ops.rs#L120) |
+
+| operations | types.rs | 0.5493385 | [ `pub struct CountRequestInternal`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/operations/types.rs#L831) |
+
+| collection_manager | segments_updater.rs | 0.5121002 | [ `pub(crate) fn upsert_points<'a, T>`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/collection_manager/segments_updater.rs#L339) |
+
+| collection | point_ops.rs | 0.5063539 | [ `pub async fn count`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/collection/point_ops.rs#L213) |
+
+| map_index | mod.rs | 0.49973983 | [ `fn get_points_with_value_count`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/segment/src/index/field_index/map_index/mod.rs#L88) |
+
+| field_index | geo_index.rs | 0.73278356 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/segment/src/index/field_index/geo_index.rs#L612) |
+
+| numeric_index | mod.rs | 0.7254976 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/numeric_index/mod.rs#L322) |
+
+| map_index | mod.rs | 0.7124739 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/map_index/mod.rs#L315) |
+
+| map_index | mod.rs | 0.7124739 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/map_index/mod.rs#L429) |
+
+| fixtures | payload_context_fixture.rs | 0.706204 | [ `fn total_point_count`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/fixtures/payload_context_fixture.rs#L122) |
+
+
+
+This is one example of how you can use different models and combine the results.
+
+In a real-world scenario, you might run some reranking and deduplication, as
+
+well as additional processing of the results.
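+
+
+
+As a minimal sketch of such post-processing, you can deduplicate the two result lists from the batch query above by the location of each snippet; snippets returned by both models are likely the strongest candidates:
+
+
+
+```python
+
+# Group the combined results by code location; a snippet returned by
+
+# both models (a group of size 2) is likely the most relevant one
+
+groups = {}
+
+for point in results[0] + results[1]:
+
+    key = (
+
+        point.payload[""context""][""file_path""],
+
+        point.payload[""line_from""],
+
+        point.payload[""line_to""],
+
+    )
+
+    groups.setdefault(key, []).append(point)
+
+
+
+# Show overlapping snippets first, then the remaining ones
+
+for group in sorted(groups.values(), key=len, reverse=True):
+
+    payload = group[0].payload
+
+    print(len(group), payload[""context""][""file_path""], payload[""name""])
+
+```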
+
+
+
+### Code search demo
+
+
+
+Our [Code search demo](https://code-search.qdrant.tech/) uses the following process:
+
+
+
+1. The user sends a query.
+
+1. Both models vectorize that query simultaneously. We get two different
+
+ vectors.
+
+1. Both vectors are used in parallel to find relevant snippets. We expect
+
+ 5 examples from the NLP search and 20 examples from the code search.
+
+1. Once we retrieve results for both vectors, we merge them in one of the
+
+ following scenarios:
+
+ 1. If both methods return different results, we prefer the results from
+
+ the general usage model (NLP).
+
+ 1. If there is an overlap between the search results, we merge overlapping
+
+ snippets.
+
+
+
+In the screenshot, we search for `flush of wal`. The result
+
+shows relevant code, merged from both models. Note the highlighted
+
+code in lines 621-629. It's where both models agree.
+
+
+
+![Results from both models, with overlap](/documentation/tutorials/code-search/code-search-demo-example.png)
+
+
+
+Now you have seen semantic code intelligence in action.
+
+
+
+### Grouping the results
+
+
+
+You can improve the search results by grouping them by payload properties.
+
+In our case, we can group the results by module. With the code embeddings,
+
+we saw multiple results from the `map_index` module. Let's group the
+
+results and keep a single result per module:
+
+
+
+```python
+
+results = client.search_groups(
+
+ ""qdrant-sources"",
+
+ query_vector=(
+
+ ""code"", code_model.encode(query).tolist()
+
+ ),
+
+ group_by=""context.module"",
+
+ limit=5,
+
+ group_size=1,
+
+)
+
+```
+
+
+
+Output:
+
+
+
+| module | file_name | score | signature |
+
+|---------------|----------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+| field_index | geo_index.rs | 0.73278356 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/segment/src/index/field_index/geo_index.rs#L612) |
+
+| numeric_index | mod.rs | 0.7254976 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/numeric_index/mod.rs#L322) |
+
+| map_index | mod.rs | 0.7124739 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/map_index/mod.rs#L315) |
+
+| fixtures | payload_context_fixture.rs | 0.706204 | [ `fn total_point_count`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/fixtures/payload_context_fixture.rs#L122) |
+
+| hnsw_index | graph_links.rs | 0.6998417 | [ `fn num_points `](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/hnsw_index/graph_links.rs#L477) |
+
+
+
+With the grouping feature, we get more diverse results.
+
+
+
+## Summary
+
+
+
+This tutorial demonstrates how to use Qdrant to navigate a codebase. For an
+
+end-to-end implementation, review the [code search
+
+notebook](https://colab.research.google.com/github/qdrant/examples/blob/master/code-search/code-search.ipynb) and the
+
+[code-search-demo](https://github.com/qdrant/demo-code-search). You can also check out [a running version of the code
+
+search demo](https://code-search.qdrant.tech/) which exposes Qdrant codebase for search with a web interface.
+",documentation/tutorials/code-search.md
+"---
+
+title: Measure retrieval quality
+
+weight: 21
+
+---
+
+
+
+# Measure retrieval quality
+
+
+
+| Time: 30 min | Level: Intermediate | | |
+
+|--------------|---------------------|--|----|
+
+
+
+Semantic search pipelines are as good as the embeddings they use. If your model cannot properly represent input data, similar objects might
+
+be far away from each other in the vector space. Unsurprisingly, the search results will be poor in this case. There is, however, another
+
+component of the process which can also degrade the quality of the search results. It is the ANN algorithm itself.
+
+
+
+In this tutorial, we will show how to measure the quality of the semantic retrieval and how to tune the parameters of the HNSW, the ANN
+
+algorithm used in Qdrant, to obtain the best results.
+
+
+
+## Embeddings quality
+
+
+
+The quality of the embeddings is a topic for a separate tutorial. In a nutshell, it is usually measured and compared by benchmarks, such as
+
+[Massive Text Embedding Benchmark (MTEB)](https://huggingface.co/spaces/mteb/leaderboard). The evaluation process itself is pretty
+
+straightforward and is based on a ground truth dataset built by humans. We have a set of queries and a set of the documents we would expect
+
+to receive for each of them. In the evaluation process, we take a query, find the most similar documents in the vector space and compare
+
+them with the ground truth. In that setup, **finding the most similar documents is implemented as full kNN search, without any approximation**.
+
+As a result, we can measure the quality of the embeddings themselves, without the influence of the ANN algorithm.
+
+
+
+## Retrieval quality
+
+
+
+Embeddings quality is indeed the most important factor in the semantic search quality. However, vector search engines, such as Qdrant, do not
+
+perform pure kNN search. Instead, they use **Approximate Nearest Neighbors** (ANN) algorithms, which are much faster than the exact search,
+
+but can return suboptimal results. We can also **measure the retrieval quality of that approximation** which also contributes to the overall
+
+search quality.
+
+
+
+### Quality metrics
+
+
+
+There are various ways to quantify the quality of semantic search. Some of them, such as [Precision@k](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Precision_at_k),
+
+are based on the number of relevant documents in the top-k search results. Others, such as [Mean Reciprocal Rank (MRR)](https://en.wikipedia.org/wiki/Mean_reciprocal_rank),
+
+take into account the position of the first relevant document in the search results. [DCG and NDCG](https://en.wikipedia.org/wiki/Discounted_cumulative_gain)
+
+metrics are, in turn, based on the relevance score of the documents.
+
+
+
+If we treat the search pipeline as a whole, we could use them all. The same is true for the embeddings quality evaluation. However, for the
+
+ANN algorithm itself, anything based on the relevance score or ranking is not applicable. Ranking in vector search relies on the distance
+
+between the query and the document in the vector space, and that distance does not change due to the approximation, as the distance function is
+
+still the same.
+
+
+
+Therefore, it only makes sense to measure the quality of the ANN algorithm by the number of relevant documents in the top-k search results,
+
+such as `precision@k`. It is calculated as the number of relevant documents in the top-k search results divided by `k`. In case of testing
+
+just the ANN algorithm, we can use the exact kNN search as a ground truth, with `k` being fixed. It will be a measure on **how well the ANN
+
+algorithm approximates the exact search**.
+
+
+
+## Measure the quality of the search results
+
+
+
+Let's build a quality evaluation of the ANN algorithm in Qdrant. We will, first, call the search endpoint in a standard way to obtain
+
+the approximate search results. Then, we will call the exact search endpoint to obtain the exact matches, and finally compare both results
+
+in terms of precision.
+
+
+
+Before we start, let's create a collection, fill it with some data and then start our evaluation. We will use the same dataset as in the
+
+[Loading a dataset from Hugging Face hub](/documentation/tutorials/huggingface-datasets/) tutorial, `Qdrant/arxiv-titles-instructorxl-embeddings`
+
+from the [Hugging Face hub](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings). Let's download it in a streaming
+
+mode, as we are only going to use part of it.
+
+
+
+```python
+
+from datasets import load_dataset
+
+
+
+dataset = load_dataset(
+
+ ""Qdrant/arxiv-titles-instructorxl-embeddings"", split=""train"", streaming=True
+
+)
+
+```
+
+
+
+We need some data to be indexed and another set for testing purposes. Let's take the first 60000 items for the training set and the next 1000
+
+for testing.
+
+
+
+```python
+
+dataset_iterator = iter(dataset)
+
+train_dataset = [next(dataset_iterator) for _ in range(60000)]
+
+test_dataset = [next(dataset_iterator) for _ in range(1000)]
+
+```
+
+
+
+Now, let's create a collection and index the training data. This collection will be created with the default configuration. Please be aware that
+
+it might be different from your collection settings, and it's always important to test exactly the same configuration you are going to use later
+
+in production.
+
+
+
+
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""http://localhost:6333"")
+
+client.create_collection(
+
+ collection_name=""arxiv-titles-instructorxl-embeddings"",
+
+ vectors_config=models.VectorParams(
+
+ size=768, # Size of the embeddings generated by InstructorXL model
+
+ distance=models.Distance.COSINE,
+
+ ),
+
+)
+
+```
+
+
+
+We are now ready to index the training data. Uploading the records is going to trigger the indexing process, which will build the HNSW graph.
+
+The indexing process may take some time, depending on the size of the dataset, but your data is going to be available for search immediately
+
+after receiving the response from the `upsert` endpoint. **As long as the indexing is not finished, and HNSW not built, Qdrant will perform
+
+the exact search**. We have to wait until the indexing is finished to be sure that the approximate search is performed.
+
+
+
+```python
+
+client.upload_points( # upload_points is available as of qdrant-client v1.7.1
+
+ collection_name=""arxiv-titles-instructorxl-embeddings"",
+
+ points=[
+
+ models.PointStruct(
+
+ id=item[""id""],
+
+ vector=item[""vector""],
+
+ payload=item,
+
+ )
+
+ for item in train_dataset
+
+ ]
+
+)
+
+
+
+while True:
+
+ collection_info = client.get_collection(collection_name=""arxiv-titles-instructorxl-embeddings"")
+
+ if collection_info.status == models.CollectionStatus.GREEN:
+
+ # Collection status is green, which means the indexing is finished
+
+ break
+
+```
+
+
+
+## Standard mode vs exact search
+
+
+
+Qdrant has a built-in exact search mode, which can be used to measure the quality of the search results. In this mode, Qdrant performs a
+
+full kNN search for each query, without any approximation. It is not suitable for production use with high load, but it is perfect for the
+
+evaluation of the ANN algorithm and its parameters. It might be triggered by setting the `exact` parameter to `True` in the search request.
+
+We are simply going to use all the examples from the test dataset as queries and compare the results of the approximate search with the
+
+results of the exact search. Let's create a helper function with `k` being a parameter, so we can calculate the `precision@k` for different
+
+values of `k`.
+
+
+
+```python
+
+def avg_precision_at_k(k: int):
+
+ precisions = []
+
+ for item in test_dataset:
+
+ ann_result = client.query_points(
+
+ collection_name=""arxiv-titles-instructorxl-embeddings"",
+
+ query=item[""vector""],
+
+ limit=k,
+
+ ).points
+
+
+
+ knn_result = client.query_points(
+
+ collection_name=""arxiv-titles-instructorxl-embeddings"",
+
+ query=item[""vector""],
+
+ limit=k,
+
+ search_params=models.SearchParams(
+
+ exact=True, # Turns on the exact search mode
+
+ ),
+
+ ).points
+
+
+
+ # We can calculate the precision@k by comparing the ids of the search results
+
+ ann_ids = set(item.id for item in ann_result)
+
+ knn_ids = set(item.id for item in knn_result)
+
+ precision = len(ann_ids.intersection(knn_ids)) / k
+
+ precisions.append(precision)
+
+
+
+ return sum(precisions) / len(precisions)
+
+```
+
+
+
+Calculating the `precision@5` is as simple as calling the function with the corresponding parameter:
+
+
+
+```python
+
+print(f""avg(precision@5) = {avg_precision_at_k(k=5)}"")
+
+```
+
+
+
+Response:
+
+
+
+```text
+
+avg(precision@5) = 0.9935999999999995
+
+```
+
+
+
+As we can see, the precision of the approximate search vs exact search is pretty high. There are, however, some scenarios when we
+
+need higher precision and can accept higher latency. HNSW is pretty tunable, and we can increase the precision by changing its parameters.
+
+
+
+## Tweaking the HNSW parameters
+
+
+
+HNSW is a hierarchical graph, where each node has a set of links to other nodes. The number of edges per node is called the `m` parameter.
+
+The larger its value, the higher the precision of the search, but the more space is required. The `ef_construct` parameter is the number of
+
+neighbours to consider during the index building. Again, the larger the value, the higher the precision, but the longer the indexing time.
+
+The default values of these parameters are `m=16` and `ef_construct=100`. Let's try to increase them to `m=32` and `ef_construct=200` and
+
+see how it affects the precision. Of course, we need to wait until the indexing is finished before we can perform the search.
+
+
+
+```python
+
+client.update_collection(
+
+ collection_name=""arxiv-titles-instructorxl-embeddings"",
+
+ hnsw_config=models.HnswConfigDiff(
+
+ m=32, # Increase the number of edges per node from the default 16 to 32
+
+ ef_construct=200, # Increase the number of neighbours from the default 100 to 200
+
+ )
+
+)
+
+
+
+while True:
+
+ collection_info = client.get_collection(collection_name=""arxiv-titles-instructorxl-embeddings"")
+
+ if collection_info.status == models.CollectionStatus.GREEN:
+
+ # Collection status is green, which means the indexing is finished
+
+ break
+
+```
+
+
+
+The same function can be used to calculate the average `precision@5`:
+
+
+
+```python
+
+print(f""avg(precision@5) = {avg_precision_at_k(k=5)}"")
+
+```
+
+
+
+Response:
+
+
+
+```text
+
+avg(precision@5) = 0.9969999999999998
+
+```
+
+
+
+The precision has obviously increased, and we know how to control it. However, there is a trade-off between the precision and the search
+
+latency and memory requirements. In some specific cases, we may want to increase the precision as much as possible, so now we know how
+
+to do it.
+
+
+
+## Wrapping up
+
+
+
+Assessing the quality of retrieval is a critical aspect of evaluating semantic search performance. It is imperative to measure retrieval quality when aiming for the best possible quality of
+
+your search results. Qdrant provides a built-in exact search mode, which can be used to measure the quality of the ANN algorithm itself,
+
+even in an automated way, as part of your CI/CD pipeline.
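+
+
+
+For instance, the `avg_precision_at_k` helper defined earlier can be turned into a simple automated check; the threshold below is only an example value that you would tune for your own collection:
+
+
+
+```python
+
+# Fail the pipeline if the ANN precision drops below an acceptable level
+
+MIN_PRECISION_AT_5 = 0.95 # example threshold, adjust for your use case
+
+
+
+precision = avg_precision_at_k(k=5)
+
+assert precision >= MIN_PRECISION_AT_5, f""precision@5 dropped to {precision:.4f}""
+
+```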
+
+
+
+Again, **the quality of the embeddings is the most important factor**. HNSW does a pretty good job in terms of precision, and it is
+
+parameterizable and tunable, when required. There are some other ANN algorithms available out there, such as [IVF*](https://github.com/facebookresearch/faiss/wiki/Faiss-indexes#cell-probe-methods-indexivf-indexes),
+
+but they usually [perform worse than HNSW in terms of quality and performance](https://nirantk.com/writing/pgvector-vs-qdrant/#correctness).
+",documentation/tutorials/retrieval-quality.md
+"---
+
+title: Neural Search Service
+
+weight: 1
+
+---
+
+
+
+# Create a Simple Neural Search Service
+
+
+
+| Time: 30 min | Level: Beginner | Output: [GitHub](https://github.com/qdrant/qdrant_demo/tree/sentense-transformers) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) |
+
+| --- | ----------- | ----------- |----------- |
+
+
+
+This tutorial shows you how to build and deploy your own neural search service to look through descriptions of companies from [startups-list.com](https://www.startups-list.com/) and pick the most similar ones to your query. The website contains the company names, descriptions, locations, and a picture for each entry.
+
+
+
+A neural search service uses artificial neural networks to improve the accuracy and relevance of search results. Besides offering simple keyword results, this system can retrieve results by meaning. It can understand and interpret complex search queries and provide more contextually relevant output, effectively enhancing the user's search experience.
+
+
+
+
+
+
+
+
+
+## Workflow
+
+
+
+To create a neural search service, you will need to transform your raw data and then create a search function to manipulate it. First, you will 1) download and prepare a sample dataset using a modified version of the BERT ML model. Then, you will 2) load the data into Qdrant, 3) create a neural search API and 4) serve it using FastAPI.
+
+
+
+![Neural Search Workflow](/docs/workflow-neural-search.png)
+
+
+
+> **Note**: The code for this tutorial can be found here: | [Step 1: Data Preparation Process](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) | [Step 2: Full Code for Neural Search](https://github.com/qdrant/qdrant_demo/tree/sentense-transformers). |
+
+
+
+## Prerequisites
+
+
+
+To complete this tutorial, you will need:
+
+
+
+- Docker - The easiest way to use Qdrant is to run a pre-built Docker image.
+
+- [Raw parsed data](https://storage.googleapis.com/generall-shared-data/startups_demo.json) from startups-list.com.
+
+- Python version >=3.8
+
+
+
+## Prepare sample dataset
+
+
+
+To conduct a neural search on startup descriptions, you must first encode the description data into vectors. To process text, you can use pre-trained models like [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)) or sentence transformers. The [sentence-transformers](https://github.com/UKPLab/sentence-transformers) library lets you conveniently download and use many pre-trained models, such as DistilBERT, MPNet, etc.
+
+
+
+1. First you need to download the dataset.
+
+
+
+```bash
+
+wget https://storage.googleapis.com/generall-shared-data/startups_demo.json
+
+```
+
+
+
+2. Install the SentenceTransformer library as well as other relevant packages.
+
+
+
+```bash
+
+pip install sentence-transformers numpy pandas tqdm
+
+```
+
+
+
+3. Import the required modules.
+
+
+
+```python
+
+from sentence_transformers import SentenceTransformer
+
+import numpy as np
+
+import json
+
+import pandas as pd
+
+from tqdm.notebook import tqdm
+
+```
+
+
+
+You will be using a pre-trained model called `all-MiniLM-L6-v2`.
+
+This is a performance-optimized sentence embedding model and you can read more about it and other available models [here](https://www.sbert.net/docs/pretrained_models.html).
+
+
+
+
+
+4. Download and create a pre-trained sentence encoder.
+
+
+
+```python
+
+model = SentenceTransformer(
+
+ ""all-MiniLM-L6-v2"", device=""cuda""
+
+) # or device=""cpu"" if you don't have a GPU
+
+```
+
+5. Read the raw data file.
+
+
+
+```python
+
+df = pd.read_json(""./startups_demo.json"", lines=True)
+
+```
+
+6. Encode all startup descriptions to create an embedding vector for each. Internally, the `encode` function will split the input into batches, which will significantly speed up the process.
+
+
+
+```python
+
+vectors = model.encode(
+
+ [row.alt + "". "" + row.description for row in df.itertuples()],
+
+ show_progress_bar=True,
+
+)
+
+```
+
+All of the descriptions are now converted into vectors. There are 40474 vectors of 384 dimensions, which is the dimensionality of the model's output layer.
+
+
+
+```python
+
+vectors.shape
+
+# > (40474, 384)
+
+```
+
+
+
+7. Save the vectors to a new file named `startup_vectors.npy`.
+
+
+
+```python
+
+np.save(""startup_vectors.npy"", vectors, allow_pickle=False)
+
+```
+
+
+
+## Run Qdrant in Docker
+
+
+
+Next, you need to manage all of your data using a vector engine. Qdrant lets you store, update or delete created vectors. Most importantly, it lets you search for the nearest vectors via a convenient API.
+
+
+
+> **Note:** Before you begin, create a project directory and a virtual python environment in it.
+
+
+
+1. Download the Qdrant image from DockerHub.
+
+
+
+```bash
+
+docker pull qdrant/qdrant
+
+```
+
+2. Start Qdrant inside of Docker.
+
+
+
+```bash
+
+docker run -p 6333:6333 \
+
+ -v $(pwd)/qdrant_storage:/qdrant/storage \
+
+ qdrant/qdrant
+
+```
+
+You should see output like this:
+
+
+
+```text
+
+...
+
+[2021-02-05T00:08:51Z INFO actix_server::builder] Starting 12 workers
+
+[2021-02-05T00:08:51Z INFO actix_server::builder] Starting ""actix-web-service-0.0.0.0:6333"" service on 0.0.0.0:6333
+
+```
+
+
+
+Test the service by going to [http://localhost:6333/](http://localhost:6333/). You should see the Qdrant version info in your browser.
+
+
+
+All data uploaded to Qdrant is saved inside the `./qdrant_storage` directory and will be persisted even if you recreate the container.
+
+
+
+## Upload data to Qdrant
+
+
+
+1. Install the official Python client to best interact with Qdrant.
+
+
+
+```bash
+
+pip install qdrant-client
+
+```
+
+
+
+At this point, you should have startup records in the `startups_demo.json` file, encoded vectors in `startup_vectors.npy` and Qdrant running on a local machine.
+
+
+
+Now you need to write a script to upload all startup data and vectors into the search engine.
+
+
+
+2. Create a client object for Qdrant.
+
+
+
+```python
+
+# Import client library
+
+from qdrant_client import QdrantClient
+
+from qdrant_client.models import VectorParams, Distance
+
+
+
+client = QdrantClient(""http://localhost:6333"")
+
+```
+
+
+
+3. Related vectors need to be added to a collection. Create a new collection for your startup vectors.
+
+
+
+```python
+
+if not client.collection_exists(""startups""):
+
+ client.create_collection(
+
+ collection_name=""startups"",
+
+ vectors_config=VectorParams(size=384, distance=Distance.COSINE),
+
+ )
+
+```
+
+
+
+
+
+4. Create an iterator over the startup data and vectors.
+
+
+
+The Qdrant client library defines a special function that allows you to load datasets into the service.
+
+However, since there may be too much data to fit into a single computer's memory, the function takes an iterator over the data as input.
+
+
+
+```python
+
+fd = open(""./startups_demo.json"")
+
+
+
+# payload is now an iterator over startup data
+
+payload = map(json.loads, fd)
+
+
+
+# Load all vectors into memory, numpy array works as iterable for itself.
+
+# Other option would be to use Mmap, if you don't want to load all data into RAM
+
+vectors = np.load(""./startup_vectors.npy"")
+
+```
+
+
+
+5. Upload the data
+
+
+
+```python
+
+client.upload_collection(
+
+ collection_name=""startups"",
+
+ vectors=vectors,
+
+ payload=payload,
+
+ ids=None, # Vector ids will be assigned automatically
+
+ batch_size=256, # How many vectors will be uploaded in a single request?
+
+)
+
+```
+
+
+
+Vectors are now uploaded to Qdrant.
+
+
+
+## Build the search API
+
+
+
+Now that all the preparations are complete, let's start building a neural search class.
+
+
+
+In order to process incoming requests, neural search will need 2 things: 1) a model to convert the query into a vector and 2) the Qdrant client to perform search queries.
+
+
+
+1. Create a file named `neural_searcher.py` and specify the following.
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+from sentence_transformers import SentenceTransformer
+
+
+
+
+
+class NeuralSearcher:
+
+ def __init__(self, collection_name):
+
+ self.collection_name = collection_name
+
+ # Initialize encoder model
+
+ self.model = SentenceTransformer(""all-MiniLM-L6-v2"", device=""cpu"")
+
+ # initialize Qdrant client
+
+ self.qdrant_client = QdrantClient(""http://localhost:6333"")
+
+```
+
+
+
+2. Write the search function.
+
+
+
+```python
+
+def search(self, text: str):
+
+ # Convert text query into vector
+
+ vector = self.model.encode(text).tolist()
+
+
+
+ # Use `vector` for search for closest vectors in the collection
+
+ search_result = self.qdrant_client.query_points(
+
+ collection_name=self.collection_name,
+
+ query=vector,
+
+ query_filter=None, # If you don't want any filters for now
+
+        limit=5, # the 5 closest results are enough
+
+ ).points
+
+ # `search_result` contains found vector ids with similarity scores along with the stored payload
+
+ # In this function you are interested in payload only
+
+ payloads = [hit.payload for hit in search_result]
+
+ return payloads
+
+```
+
+
+
+3. Add search filters.
+
+
+
+With Qdrant it is also feasible to add some conditions to the search.
+
+For example, if you wanted to search for startups in a certain city, the search query could look like this:
+
+
+
+```python
+
+from qdrant_client.models import Filter
+
+
+
+ ...
+
+
+
+ city_of_interest = ""Berlin""
+
+
+
+ # Define a filter for cities
+
+ city_filter = Filter(**{
+
+ ""must"": [{
+
+ ""key"": ""city"", # Store city information in a field of the same name
+
+ ""match"": { # This condition checks if payload field has the requested value
+
+ ""value"": city_of_interest
+
+ }
+
+ }]
+
+ })
+
+
+
+ search_result = self.qdrant_client.query_points(
+
+ collection_name=self.collection_name,
+
+ query=vector,
+
+ query_filter=city_filter,
+
+ limit=5
+
+ ).points
+
+ ...
+
+```
+
+
+
+You have now created a class for neural search queries. Now wrap it up into a service.
+
+
+
+## Deploy the search with FastAPI
+
+
+
+To build the service you will use the FastAPI framework.
+
+
+
+1. Install FastAPI.
+
+
+
+To install it, use the command
+
+
+
+```bash
+
+pip install fastapi uvicorn
+
+```
+
+
+
+2. Implement the service.
+
+
+
+Create a file named `service.py` and specify the following.
+
+
+
+The service will have only one API endpoint and will look like this:
+
+
+
+```python
+
+from fastapi import FastAPI
+
+
+
+# The file where NeuralSearcher is stored
+
+from neural_searcher import NeuralSearcher
+
+
+
+app = FastAPI()
+
+
+
+# Create a neural searcher instance
+
+neural_searcher = NeuralSearcher(collection_name=""startups"")
+
+
+
+
+
+@app.get(""/api/search"")
+
+def search_startup(q: str):
+
+ return {""result"": neural_searcher.search(text=q)}
+
+
+
+
+
+if __name__ == ""__main__"":
+
+ import uvicorn
+
+
+
+ uvicorn.run(app, host=""0.0.0.0"", port=8000)
+
+```
+
+
+
+3. Run the service.
+
+
+
+```bash
+
+python service.py
+
+```
+
+
+
+4. Open your browser at [http://localhost:8000/docs](http://localhost:8000/docs).
+
+
+
+You should be able to see a debug interface for your service.
+
+
+
+![FastAPI Swagger interface](/docs/fastapi_neural_search.png)
+
+
+
+Feel free to play around with it, make queries regarding the companies in our corpus, and check out the results.
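+
+
+
+You can also query the endpoint programmatically. Here is a minimal sketch that uses only the Python standard library; the query text is just an example, and it assumes the service is running on `localhost:8000`:
+
+
+
+```python
+
+import json
+
+from urllib.parse import urlencode
+
+from urllib.request import urlopen
+
+
+
+# Build the request to the /api/search endpoint
+
+params = urlencode({""q"": ""machine learning platform""})
+
+with urlopen(f""http://localhost:8000/api/search?{params}"") as response:
+
+    results = json.load(response)
+
+
+
+# Print the names of the matched startups
+
+for startup in results[""result""]:
+
+    print(startup[""name""])
+
+```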
+
+
+
+## Next steps
+
+
+
+The code from this tutorial has been used to develop a [live online demo](https://qdrant.to/semantic-search-demo).
+
+You can try it to get an intuition for cases when the neural search is useful.
+
+The demo contains a switch that selects between neural and full-text searches.
+
+You can turn the neural search on and off to compare your result with a regular full-text search.
+
+
+
+> **Note**: The code for this tutorial can be found here: | [Step 1: Data Preparation Process](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) | [Step 2: Full Code for Neural Search](https://github.com/qdrant/qdrant_demo/tree/sentense-transformers). |
+
+
+
+Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, publish other examples of neural networks and neural search applications.
+",documentation/tutorials/neural-search.md
+"---
+
+title: Semantic Search 101
+
+weight: -100
+
+aliases:
+
+ - /documentation/tutorials/mighty.md/
+
+---
+
+
+
+# Semantic Search for Beginners
+
+
+
+| Time: 5 - 15 min | Level: Beginner | | |
+
+| --- | ----------- | ----------- |----------- |
+
+
+
+
+
+
+
+## Overview
+
+
+
+If you are new to vector databases, this tutorial is for you. In 5 minutes you will build a semantic search engine for science fiction books. After you set it up, you will ask the engine about an impending alien threat. Your creation will recommend books as preparation for a potential space attack.
+
+
+
+Before you begin, you need to have a [recent version of Python](https://www.python.org/downloads/) installed. If you don't know how to run this code in a virtual environment, follow Python documentation for [Creating Virtual Environments](https://docs.python.org/3/tutorial/venv.html#creating-virtual-environments) first.
+
+
+
+This tutorial assumes you're in the bash shell. Use the Python documentation to activate a virtual environment, with commands such as:
+
+
+
+```bash
+
+source tutorial-env/bin/activate
+
+```
+
+
+
+## 1. Installation
+
+
+
+You need to process your data so that the search engine can work with it. The [Sentence Transformers](https://www.sbert.net/) framework gives you access to common Large Language Models that turn raw data into embeddings.
+
+
+
+```bash
+
+pip install -U sentence-transformers
+
+```
+
+
+
+Once encoded, this data needs to be kept somewhere. Qdrant lets you store data as embeddings. You can also use Qdrant to run search queries against this data. This means that you can ask the engine to give you relevant answers that go way beyond keyword matching.
+
+
+
+```bash
+
+pip install -U qdrant-client
+
+```
+
+
+
+
+
+
+
+### Import the models
+
+
+
+Once the two main frameworks are installed, you need to specify the exact models this engine will use. Before you do, activate the Python prompt (`>>>`) with the `python` command.
+
+
+
+```python
+
+from qdrant_client import models, QdrantClient
+
+from sentence_transformers import SentenceTransformer
+
+```
+
+
+
+The [Sentence Transformers](https://www.sbert.net/index.html) framework contains many embedding models. However, [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) is the fastest encoder for this tutorial.
+
+
+
+```python
+
+encoder = SentenceTransformer(""all-MiniLM-L6-v2"")
+
+```
+
+
+
+## 2. Add the dataset
+
+
+
+[all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) will encode the data you provide. Here you will list all the science fiction books in your library. Each book has metadata, a name, author, publication year and a short description.
+
+
+
+```python
+
+documents = [
+
+ {
+
+ ""name"": ""The Time Machine"",
+
+ ""description"": ""A man travels through time and witnesses the evolution of humanity."",
+
+ ""author"": ""H.G. Wells"",
+
+ ""year"": 1895,
+
+ },
+
+ {
+
+ ""name"": ""Ender's Game"",
+
+ ""description"": ""A young boy is trained to become a military leader in a war against an alien race."",
+
+ ""author"": ""Orson Scott Card"",
+
+ ""year"": 1985,
+
+ },
+
+ {
+
+ ""name"": ""Brave New World"",
+
+ ""description"": ""A dystopian society where people are genetically engineered and conditioned to conform to a strict social hierarchy."",
+
+ ""author"": ""Aldous Huxley"",
+
+ ""year"": 1932,
+
+ },
+
+ {
+
+ ""name"": ""The Hitchhiker's Guide to the Galaxy"",
+
+ ""description"": ""A comedic science fiction series following the misadventures of an unwitting human and his alien friend."",
+
+ ""author"": ""Douglas Adams"",
+
+ ""year"": 1979,
+
+ },
+
+ {
+
+ ""name"": ""Dune"",
+
+ ""description"": ""A desert planet is the site of political intrigue and power struggles."",
+
+ ""author"": ""Frank Herbert"",
+
+ ""year"": 1965,
+
+ },
+
+ {
+
+ ""name"": ""Foundation"",
+
+ ""description"": ""A mathematician develops a science to predict the future of humanity and works to save civilization from collapse."",
+
+ ""author"": ""Isaac Asimov"",
+
+ ""year"": 1951,
+
+ },
+
+ {
+
+ ""name"": ""Snow Crash"",
+
+ ""description"": ""A futuristic world where the internet has evolved into a virtual reality metaverse."",
+
+ ""author"": ""Neal Stephenson"",
+
+ ""year"": 1992,
+
+ },
+
+ {
+
+ ""name"": ""Neuromancer"",
+
+ ""description"": ""A hacker is hired to pull off a near-impossible hack and gets pulled into a web of intrigue."",
+
+ ""author"": ""William Gibson"",
+
+ ""year"": 1984,
+
+ },
+
+ {
+
+ ""name"": ""The War of the Worlds"",
+
+ ""description"": ""A Martian invasion of Earth throws humanity into chaos."",
+
+ ""author"": ""H.G. Wells"",
+
+ ""year"": 1898,
+
+ },
+
+ {
+
+ ""name"": ""The Hunger Games"",
+
+ ""description"": ""A dystopian society where teenagers are forced to fight to the death in a televised spectacle."",
+
+ ""author"": ""Suzanne Collins"",
+
+ ""year"": 2008,
+
+ },
+
+ {
+
+ ""name"": ""The Andromeda Strain"",
+
+ ""description"": ""A deadly virus from outer space threatens to wipe out humanity."",
+
+ ""author"": ""Michael Crichton"",
+
+ ""year"": 1969,
+
+ },
+
+ {
+
+ ""name"": ""The Left Hand of Darkness"",
+
+ ""description"": ""A human ambassador is sent to a planet where the inhabitants are genderless and can change gender at will."",
+
+ ""author"": ""Ursula K. Le Guin"",
+
+ ""year"": 1969,
+
+ },
+
+ {
+
+ ""name"": ""The Three-Body Problem"",
+
+ ""description"": ""Humans encounter an alien civilization that lives in a dying system."",
+
+ ""author"": ""Liu Cixin"",
+
+ ""year"": 2008,
+
+ },
+
+]
+
+```
+
+
+
+## 3. Define storage location
+
+
+
+You need to tell Qdrant where to store embeddings. This is a basic demo, so your local computer will use its memory as temporary storage.
+
+
+
+```python
+
+client = QdrantClient("":memory:"")
+
+```
+
+
+
+## 4. Create a collection
+
+
+
+All data in Qdrant is organized by collections. In this case, you are storing books, so we are calling it `my_books`.
+
+
+
+```python
+
+client.create_collection(
+
+ collection_name=""my_books"",
+
+ vectors_config=models.VectorParams(
+
+ size=encoder.get_sentence_embedding_dimension(), # Vector size is defined by used model
+
+ distance=models.Distance.COSINE,
+
+ ),
+
+)
+
+```
+
+
+
+- The `size` parameter defines the dimensionality of the vectors for a specific collection. If their sizes differ, it is impossible to calculate the distance between them. 384 is the encoder output dimensionality. You can also use `encoder.get_sentence_embedding_dimension()` to get the dimensionality of the model you are using.
+
+
+
+- The `distance` parameter lets you specify the function used to measure the distance between two points.
+
+
+
+
+
+## 5. Upload data to collection
+
+
+
+Tell the database to upload `documents` to the `my_books` collection. This will give each record an id and a payload. The payload is just the metadata from the dataset.
+
+
+
+```python
+
+client.upload_points(
+
+ collection_name=""my_books"",
+
+ points=[
+
+ models.PointStruct(
+
+ id=idx, vector=encoder.encode(doc[""description""]).tolist(), payload=doc
+
+ )
+
+ for idx, doc in enumerate(documents)
+
+ ],
+
+)
+
+```
+
+
+
+## 6. Ask the engine a question
+
+
+
+Now that the data is stored in Qdrant, you can ask it questions and receive semantically relevant results.
+
+
+
+```python
+
+hits = client.query_points(
+
+ collection_name=""my_books"",
+
+ query=encoder.encode(""alien invasion"").tolist(),
+
+ limit=3,
+
+).points
+
+
+
+for hit in hits:
+
+ print(hit.payload, ""score:"", hit.score)
+
+```
+
+
+
+**Response:**
+
+
+
+The search engine shows three of the most likely responses that have to do with the alien invasion. Each of the responses is assigned a score to show how close the response is to the original inquiry.
+
+
+
+```text
+
+{'name': 'The War of the Worlds', 'description': 'A Martian invasion of Earth throws humanity into chaos.', 'author': 'H.G. Wells', 'year': 1898} score: 0.570093257022374
+
+{'name': ""The Hitchhiker's Guide to the Galaxy"", 'description': 'A comedic science fiction series following the misadventures of an unwitting human and his alien friend.', 'author': 'Douglas Adams', 'year': 1979} score: 0.5040468703143637
+
+{'name': 'The Three-Body Problem', 'description': 'Humans encounter an alien civilization that lives in a dying system.', 'author': 'Liu Cixin', 'year': 2008} score: 0.45902943411768216
+
+```
+
+
+
+### Narrow down the query
+
+
+
+How about the most recent book from the early 2000s?
+
+
+
+```python
+
+hits = client.query_points(
+
+ collection_name=""my_books"",
+
+ query=encoder.encode(""alien invasion"").tolist(),
+
+ query_filter=models.Filter(
+
+ must=[models.FieldCondition(key=""year"", range=models.Range(gte=2000))]
+
+ ),
+
+ limit=1,
+
+).points
+
+
+
+for hit in hits:
+
+ print(hit.payload, ""score:"", hit.score)
+
+```
+
+
+
+**Response:**
+
+
+
+The query has been narrowed down to one result from 2008.
+
+
+
+```text
+
+{'name': 'The Three-Body Problem', 'description': 'Humans encounter an alien civilization that lives in a dying system.', 'author': 'Liu Cixin', 'year': 2008} score: 0.45902943411768216
+
+```
+
+
+
+## Next Steps
+
+
+
+Congratulations, you have just created your very first search engine! Trust us, the rest of Qdrant is not that complicated, either. For your next tutorial you should try building an actual [Neural Search Service with a complete API and a dataset](../../tutorials/neural-search/).
+
+
+
+## Return to the bash shell
+
+
+
+To return to the bash prompt:
+
+
+
+1. Press Ctrl+D to exit the Python prompt (`>>>`).
+
+1. Enter the `deactivate` command to deactivate the virtual environment.
+",documentation/tutorials/search-beginners.md
+"---
+
+title: Multimodal Search
+
+weight: 4
+
+---
+
+
+
+# Multimodal Search with Qdrant and FastEmbed
+
+
+
+| Time: 15 min | Level: Beginner |Output: [GitHub](https://github.com/qdrant/examples/blob/master/multimodal-search/Multimodal_Search_with_FastEmbed.ipynb)|[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/qdrant/examples/blob/master/multimodal-search/Multimodal_Search_with_FastEmbed.ipynb) |
+
+| --- | ----------- | ----------- | ----------- |
+
+
+
+In this tutorial, you will set up a simple Multimodal Image & Text Search with Qdrant & FastEmbed.
+
+
+
+## Overview
+
+
+
+We often understand and share information more effectively when combining different types of data. For example, the taste of comfort food can trigger childhood memories. We might describe a song with just “pam pam clap” sounds instead of writing paragraphs. Sometimes, we may use emojis and stickers to express how we feel or to share complex ideas.
+
+
+
+Modalities of data such as **text**, **images**, **video** and **audio** in various combinations form valuable use cases for Semantic Search applications.
+
+
+
+Vector databases, being **modality-agnostic**, are perfect for building these applications.
+
+
+
+In this simple tutorial, we are working with two simple modalities: **image** and **text** data. However, you can create a Semantic Search application with any combination of modalities if you choose the right embedding model to bridge the **semantic gap**.
+
+
+
+> The **semantic gap** refers to the difference between low-level features (e.g., brightness) and high-level concepts (e.g., cuteness).
+
+
+
+For example, the [ImageBind model](https://github.com/facebookresearch/ImageBind) from Meta AI is said to bind all four mentioned modalities in one shared space.
+
+
+
+## Prerequisites
+
+
+
+> **Note**: The code for this tutorial can be found [here](https://github.com/qdrant/examples/multimodal-search)
+
+
+
+To complete this tutorial, you will need either Docker (to run a pre-built Qdrant image) and Python ≥ 3.8, or a Google Colab notebook if you don't want to install anything locally.
+
+
+
+We showed how to run Qdrant in Docker in the [""Create a Simple Neural Search Service""](https://qdrant.tech/documentation/tutorials/neural-search/) Tutorial.
+
+
+
+## Setup
+
+
+
+First, install the required libraries `qdrant-client`, `fastembed` and `Pillow`.
+
+For example, with the `pip` package manager, it can be done in the following way.
+
+
+
+```bash
+
+python3 -m pip install --upgrade qdrant-client fastembed Pillow
+
+```
+
+
+
+
+
+
+
+## Dataset
+
+To make the demonstration simple, we created a tiny dataset of images and their captions for you.
+
+
+
+Images can be downloaded from [here](https://github.com/qdrant/examples/multimodal-search/images).
+
+It's **important** to place them in a folder named `images`, next to your code/notebook.
+
+
+
+You can check what the images look like as follows:
+
+```python
+
+from PIL import Image
+
+
+
+Image.open('images/lizard.jpg')
+
+```
+
+## Vectorize data
+
+
+
+`FastEmbed` supports the **Contrastive Language–Image Pre-training** ([CLIP](https://openai.com/index/clip/)) model, an old (2021) but gold classic of multimodal image-text machine learning.
+
+**CLIP** was one of the first models of its kind with zero-shot capabilities.
+
+
+
+When using it for semantic search, it's important to remember that the textual encoder of CLIP is trained to process no more than **77 tokens**,
+
+so CLIP is good for short texts.
+
+
+
+Let's embed a very short selection of images and their captions in the **shared embedding space** with CLIP.
+
+
+
+```python
+
+from fastembed import TextEmbedding, ImageEmbedding
+
+
+
+documents = [{""caption"": ""A photo of a cute pig"",
+
+ ""image"": ""images/piggy.jpg""},
+
+ {""caption"": ""A picture with a coffee cup"",
+
+ ""image"": ""images/coffee.jpg""},
+
+ {""caption"": ""A photo of a colourful lizard"",
+
+ ""image"": ""images/lizard.jpg""}
+
+]
+
+
+
+text_model_name = ""Qdrant/clip-ViT-B-32-text"" #CLIP text encoder
+
+text_model = TextEmbedding(model_name=text_model_name)
+
+text_embeddings_size = text_model._get_model_description(text_model_name)[""dim""] #dimension of text embeddings, produced by CLIP text encoder (512)
+
+texts_embeded = list(text_model.embed([document[""caption""] for document in documents])) #embedding captions with CLIP text encoder
+
+
+
+image_model_name = ""Qdrant/clip-ViT-B-32-vision"" #CLIP image encoder
+
+image_model = ImageEmbedding(model_name=image_model_name)
+
+image_embeddings_size = image_model._get_model_description(image_model_name)[""dim""] #dimension of image embeddings, produced by CLIP image encoder (512)
+
+images_embeded = list(image_model.embed([document[""image""] for document in documents])) #embedding images with CLIP image encoder
+
+```
+
+
+
+## Upload data to Qdrant
+
+
+
+1. **Create a client object for Qdrant**.
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""http://localhost:6333"")  # or QdrantClient("":memory:"") if you're using Google Colab; this option is suitable only for simple prototypes/demos with the Python client
+
+```
+
+
+
+2. **Create a new collection for your images with captions**.
+
+
+
+CLIP’s weights were trained to maximize the scaled **Cosine Similarity** of truly corresponding image/caption pairs,
+
+so that's the **Distance Metric** we will choose for our [Collection](https://qdrant.tech/documentation/concepts/collections/) of [Named Vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors).
+
+
+
+Using **Named Vectors**, we can easily showcase both Text-to-Image and Image-to-Text (as well as Image-to-Image and Text-to-Text) search.
+
+
+
+```python
+
+if not client.collection_exists(""text_image""): #creating a Collection
+
+ client.create_collection(
+
+ collection_name =""text_image"",
+
+ vectors_config={ #Named Vectors
+
+ ""image"": models.VectorParams(size=image_embeddings_size, distance=models.Distance.COSINE),
+
+ ""text"": models.VectorParams(size=text_embeddings_size, distance=models.Distance.COSINE),
+
+ }
+
+ )
+
+```
+
+3. **Upload our images with captions to the Collection**.
+
+
+
+Each image with its caption will create a [Point](https://qdrant.tech/documentation/concepts/points/) in Qdrant.
+
+
+
+```python
+
+client.upload_points(
+
+ collection_name=""text_image"",
+
+ points=[
+
+ models.PointStruct(
+
+ id=idx, #unique id of a point, pre-defined by the user
+
+ vector={
+
+ ""text"": texts_embeded[idx], #embeded caption
+
+ ""image"": images_embeded[idx] #embeded image
+
+ },
+
+ payload=doc #original image and its caption
+
+ )
+
+ for idx, doc in enumerate(documents)
+
+ ]
+
+)
+
+```
+
+
+
+## Search
+
+
+
+
+### Text-to-Image
+
+
+
+Let's see which image we get for the query ""*What would make me energetic in the morning?*""
+
+
+
+```python
+
+from PIL import Image
+
+
+
+find_image = text_model.embed([""What would make me energetic in the morning?""]) #query, we embed it, so it also becomes a vector
+
+
+
+Image.open(client.search(
+
+ collection_name=""text_image"", #searching in our collection
+
+ query_vector=(""image"", list(find_image)[0]), #searching only among image vectors with our textual query
+
+ with_payload=[""image""], #user-readable information about search results, we are interested to see which image we will find
+
+ limit=1 #top-1 similar to the query result
+
+)[0].payload['image'])
+
+```
+
+**Response:**
+
+
+
+![Coffee Image](/docs/coffee.jpg)
+
+
+
+
+### Image-to-Text
+
+Now, let's do a reverse search with an image:
+
+
+
+
+
+```python
+
+from PIL import Image
+
+
+
+Image.open('images/piglet.jpg')
+
+```
+
+![Piglet Image](/docs/piglet.jpg)
+
+
+
+Let's see what caption we will get, searching by this piglet image, which, as you can check, is not in our **Collection**.
+
+
+
+```python
+
+find_image = image_model.embed(['images/piglet.jpg']) #embedding our image query
+
+
+
+client.search(
+
+ collection_name=""text_image"",
+
+ query_vector=(""text"", list(find_image)[0]), #now we are searching only among text vectors with our image query
+
+ with_payload=[""caption""], #user-readable information about search results, we are interested to see which caption we will get
+
+ limit=1
+
+)[0].payload['caption']
+
+```
+
+**Response:**
+
+```text
+
+'A photo of a cute pig'
+
+```
+
+
+
+## Next steps
+
+
+
+Use cases of even just Image & Text Multimodal Search are countless: E-Commerce, Media Management, Content Recommendation, Emotion Recognition Systems, Biomedical Image Retrieval, Spoken Sign Language Transcription, etc.
+
+
+
+Imagine a scenario: a user wants to find a product similar to a picture they have, but they also have specific textual requirements, like ""*in beige colour*"".
+
+You can search using just texts or images and combine their embeddings in a **late fusion manner** (summing and weighting might work surprisingly well).
+
+
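+
+As a rough illustration of such late fusion (the query text, reference image, and weight below are made-up examples, not part of the tutorial requirements):
+
+
+
+```python
+
+# Embed a textual requirement and a reference image, then fuse them into one query vector
+
+text_query = list(text_model.embed(['in beige colour']))[0]
+
+image_query = list(image_model.embed(['images/piglet.jpg']))[0]
+
+alpha = 0.5  # relative weight of the text modality; tune for your use case
+
+fused_query = (alpha * text_query + (1 - alpha) * image_query).tolist()
+
+client.search(
+
+    collection_name='text_image',
+
+    query_vector=('image', fused_query),  # search among image vectors with the fused query
+
+    with_payload=['caption'],
+
+    limit=1,
+
+)
+
+```
+
+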
+
+Moreover, using [Discovery Search](https://qdrant.tech/articles/discovery-search/) with both modalities, you can provide users with information that is impossible to retrieve unimodally!
+
+
+
+Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, experiment, and have fun!",documentation/tutorials/multimodal-search-fastembed.md
+"---
+
+title: Load Hugging Face dataset
+
+weight: 19
+
+---
+
+
+
+# Loading a dataset from Hugging Face hub
+
+
+
+[Hugging Face](https://huggingface.co/) provides a platform for sharing and using ML models and
+
+datasets. [Qdrant](https://huggingface.co/Qdrant) also publishes datasets along with the
+
+embeddings that you can use to practice with Qdrant and build your applications based on semantic
+
+search. **Please [let us know](https://qdrant.to/discord) if you'd like to see a specific dataset!**
+
+
+
+## arxiv-titles-instructorxl-embeddings
+
+
+
+[This dataset](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) contains
+
+embeddings generated from the paper titles only. Each vector has a payload with the title used to
+
+create it, along with the DOI (Digital Object Identifier).
+
+
+
+```json
+
+{
+
+ ""title"": ""Nash Social Welfare for Indivisible Items under Separable, Piecewise-Linear Concave Utilities"",
+
+ ""DOI"": ""1612.05191""
+
+}
+
+```
+
+
+
+You can find a detailed description of the dataset in the [Practice Datasets](/documentation/datasets/#journal-article-titles)
+
+section. If you prefer loading the dataset from a Qdrant snapshot, it is also linked there.
+
+
+
+Loading the dataset is as simple as using the `load_dataset` function from the `datasets` library:
+
+
+
+```python
+
+from datasets import load_dataset
+
+
+
+dataset = load_dataset(""Qdrant/arxiv-titles-instructorxl-embeddings"")
+
+```
+
+
+
+
+
+
+
+The dataset contains 2,250,000 vectors. This is how you can check the list of the features in the dataset:
+
+
+
+```python
+
+dataset.features
+
+```
+
+
+
+### Streaming the dataset
+
+
+
+Dataset streaming lets you work with a dataset without downloading it. The data is streamed as
+
+you iterate over the dataset. You can read more about it in the [Hugging Face
+
+documentation](https://huggingface.co/docs/datasets/stream).
+
+
+
+```python
+
+from datasets import load_dataset
+
+
+
+dataset = load_dataset(
+
+ ""Qdrant/arxiv-titles-instructorxl-embeddings"", split=""train"", streaming=True
+
+)
+
+```
+
+
+
+### Loading the dataset into Qdrant
+
+
+
+You can load the dataset into Qdrant using the [Python SDK](https://github.com/qdrant/qdrant-client).
+
+The embeddings are already precomputed, so you can store them in a collection that we're going
+
+to create in a second:
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""http://localhost:6333"")
+
+
+
+client.create_collection(
+
+ collection_name=""arxiv-titles-instructorxl-embeddings"",
+
+ vectors_config=models.VectorParams(
+
+ size=768,
+
+ distance=models.Distance.COSINE,
+
+ ),
+
+)
+
+```
+
+
+
+It is always a good idea to use batching while loading a large dataset, so let's do that.
+
+We are going to need a helper function to split the dataset into batches:
+
+
+
+```python
+
+from itertools import islice
+
+
+
+def batched(iterable, n):
+
+ iterator = iter(iterable)
+
+ while batch := list(islice(iterator, n)):
+
+ yield batch
+
+```
+
+
+
+If you are a happy user of Python 3.12+, you can use the [`batched` function from the `itertools`
+
+](https://docs.python.org/3/library/itertools.html#itertools.batched) module instead, as shown below.
+
+
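+
+For example (note that `itertools.batched` yields tuples rather than lists, which works equally well for the upload loop below):
+
+
+
+```python
+
+from itertools import batched  # Python 3.12+ standard library
+
+for batch in batched(dataset, 100):
+
+    ...  # same upsert logic as shown below
+
+```
+
+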
+
+No matter what Python version you are using, you can use the `upsert` method to load the dataset,
+
+batch by batch, into Qdrant:
+
+
+
+```python
+
+batch_size = 100
+
+
+
+for batch in batched(dataset, batch_size):
+
+ ids = [point.pop(""id"") for point in batch]
+
+ vectors = [point.pop(""vector"") for point in batch]
+
+
+
+ client.upsert(
+
+ collection_name=""arxiv-titles-instructorxl-embeddings"",
+
+ points=models.Batch(
+
+ ids=ids,
+
+ vectors=vectors,
+
+ payloads=batch,
+
+ ),
+
+ )
+
+```
+
+
+
+Your collection is ready to be used for search! Please [let us know using Discord](https://qdrant.to/discord)
+
+if you would like to see more datasets published on Hugging Face hub.
+",documentation/tutorials/huggingface-datasets.md
+"---
+
+title: Hybrid Search with Fastembed
+
+weight: 2
+
+
+
+aliases:
+
+ - /documentation/tutorials/neural-search-fastembed/
+
+---
+
+
+
+# Create a Hybrid Search Service with Fastembed
+
+
+
+| Time: 20 min | Level: Beginner | Output: [GitHub](https://github.com/qdrant/qdrant_demo/) |
+
+| --- | ----------- | ----------- |----------- |
+
+
+
+This tutorial shows you how to build and deploy your own hybrid search service to look through descriptions of companies from [startups-list.com](https://www.startups-list.com/) and pick the most similar ones to your query.
+
+The website contains the company names, descriptions, locations, and a picture for each entry.
+
+
+
+As we have already written on our [blog](/articles/hybrid-search/), there is no single definition of hybrid search.
+
+In this tutorial we are covering the case with a combination of dense and [sparse embeddings](/articles/sparse-vectors/).
+
+The former ones refer to the embeddings generated by such well-known neural networks as BERT, while the latter ones are more related to a traditional full-text search approach.
+
+
+
+Our hybrid search service will use [Fastembed](https://github.com/qdrant/fastembed) package to generate embeddings of text descriptions and [FastAPI](https://fastapi.tiangolo.com/) to serve the search API.
+
+Fastembed natively integrates with Qdrant client, so you can easily upload the data into Qdrant and perform search queries.
+
+
+
+![Hybrid Search Schema](/documentation/tutorials/hybrid-search-with-fastembed/hybrid-search-schema.png)
+
+
+
+
+
+## Workflow
+
+
+
+To create a hybrid search service, you will need to transform your raw data and then create a search function to manipulate it.
+
+First, you will 1) download and prepare a sample dataset using a modified version of the BERT ML model. Then, you will 2) load the data into Qdrant, 3) create a hybrid search API and 4) serve it using FastAPI.
+
+
+
+![Hybrid Search Workflow](/docs/workflow-neural-search.png)
+
+
+
+## Prerequisites
+
+
+
+To complete this tutorial, you will need:
+
+
+
+- Docker - The easiest way to use Qdrant is to run a pre-built Docker image.
+
+- [Raw parsed data](https://storage.googleapis.com/generall-shared-data/startups_demo.json) from startups-list.com.
+
+- Python version >=3.8
+
+
+
+## Prepare sample dataset
+
+
+
+To conduct a hybrid search on startup descriptions, you must first encode the description data into vectors.
+
+The Fastembed integration in the Qdrant client combines encoding and uploading into a single step.
+
+
+
+It also takes care of batching and parallelization, so you don't have to worry about it.
+
+
+
+Let's start by downloading the data and installing the necessary packages.
+
+
+
+
+
+1. First you need to download the dataset.
+
+
+
+```bash
+
+wget https://storage.googleapis.com/generall-shared-data/startups_demo.json
+
+```
+
+
+
+## Run Qdrant in Docker
+
+
+
+Next, you need to manage all of your data using a vector engine. Qdrant lets you store, update or delete created vectors. Most importantly, it lets you search for the nearest vectors via a convenient API.
+
+
+
+> **Note:** Before you begin, create a project directory and a virtual python environment in it.
+
+
+
+1. Download the Qdrant image from DockerHub.
+
+
+
+```bash
+
+docker pull qdrant/qdrant
+
+```
+
+2. Start Qdrant inside of Docker.
+
+
+
+```bash
+
+docker run -p 6333:6333 \
+
+ -v $(pwd)/qdrant_storage:/qdrant/storage \
+
+ qdrant/qdrant
+
+```
+
+You should see output like this
+
+
+
+```text
+
+...
+
+[2021-02-05T00:08:51Z INFO actix_server::builder] Starting 12 workers
+
+[2021-02-05T00:08:51Z INFO actix_server::builder] Starting ""actix-web-service-0.0.0.0:6333"" service on 0.0.0.0:6333
+
+```
+
+
+
+Test the service by going to [http://localhost:6333/](http://localhost:6333/). You should see the Qdrant version info in your browser.
+
+
+
+All data uploaded to Qdrant is saved inside the `./qdrant_storage` directory and will be persisted even if you recreate the container.
+
+
+
+
+
+## Upload data to Qdrant
+
+
+
+1. Install the official Python client to best interact with Qdrant.
+
+
+
+```bash
+
+pip install ""qdrant-client[fastembed]>=1.8.2""
+
+```
+
+> **Note:** This tutorial requires fastembed version >=0.2.6.
+
+
+
+At this point, you should have startup records in the `startups_demo.json` file and Qdrant running on a local machine.
+
+
+
+Now you need to write a script to upload all startup data and vectors into the search engine.
+
+
+
+2. Create a client object for Qdrant.
+
+
+
+```python
+
+# Import client library
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+```
+
+
+
+3. Select model to encode your data.
+
+
+
+You will be using two pre-trained models to compute dense and sparse vectors, respectively: `sentence-transformers/all-MiniLM-L6-v2` and `prithivida/Splade_PP_en_v1`.
+
+
+
+
+
+
+
+```python
+
+client.set_model(""sentence-transformers/all-MiniLM-L6-v2"")
+
+# comment this line to use dense vectors only
+
+client.set_sparse_model(""prithivida/Splade_PP_en_v1"")
+
+```
+
+
+
+4. Related vectors need to be added to a collection. Create a new collection for your startup vectors.
+
+
+
+```python
+
+if not client.collection_exists(""startups""):
+
+ client.create_collection(
+
+ collection_name=""startups"",
+
+ vectors_config=client.get_fastembed_vector_params(),
+
+ # comment this line to use dense vectors only
+
+ sparse_vectors_config=client.get_fastembed_sparse_vector_params(),
+
+ )
+
+```
+
+
+
+Qdrant requires vectors to have their own names and configurations.
+
+
+
+Methods `get_fastembed_vector_params` and `get_fastembed_sparse_vector_params` help you get the corresponding parameters for the models you are using.
+
+These parameters include vector size, distance function, etc.
+
+
+
+Without fastembed integration, you would need to specify the vector size and distance function manually. Read more about it [here](/documentation/tutorials/neural-search/).
+
+
+
+Additionally, you can specify an extended configuration for your vectors, such as `quantization_config` or `hnsw_config`, as sketched below.
+
+
+
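+For illustration, here is a hedged sketch of what the collection creation could look like with such extended settings (the specific HNSW and quantization values are arbitrary examples, not recommendations):
+
+
+
+```python
+
+from qdrant_client import models
+
+if not client.collection_exists(""startups""):
+
+    client.create_collection(
+
+        collection_name=""startups"",
+
+        vectors_config=client.get_fastembed_vector_params(),
+
+        sparse_vectors_config=client.get_fastembed_sparse_vector_params(),
+
+        hnsw_config=models.HnswConfigDiff(m=16, ef_construct=100),
+
+        quantization_config=models.ScalarQuantization(
+
+            scalar=models.ScalarQuantizationConfig(type=models.ScalarType.INT8)
+
+        ),
+
+    )
+
+```
+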
+
+
+5. Read data from the file.
+
+
+
+```python
+
+import json
+
+
+
+payload_path = ""startups_demo.json""
+
+metadata = []
+
+documents = []
+
+
+
+with open(payload_path) as fd:
+
+ for line in fd:
+
+ obj = json.loads(line)
+
+ documents.append(obj.pop(""description""))
+
+ metadata.append(obj)
+
+```
+
+
+
+In this block of code, we read data from the `startups_demo.json` file and split it into two lists: `documents` and `metadata`.
+
+Documents are the raw text descriptions of startups. Metadata is the payload associated with each startup, such as the name, location, and picture.
+
+We will use `documents` to encode the data into vectors.
+
+
+
+
+
+6. Encode and upload data.
+
+
+
+```python
+
+client.add(
+
+ collection_name=""startups"",
+
+ documents=documents,
+
+ metadata=metadata,
+
+ parallel=0, # Use all available CPU cores to encode data.
+
+ # Requires wrapping code into if __name__ == '__main__' block
+
+)
+
+```
+
+
+
+
+
+
+
+
+
+**Upload processed data**
+
+
+
+Download and unpack the processed data from [here](https://storage.googleapis.com/dataset-startup-search/startup-list-com/startups_hybrid_search_processed_40k.tar.gz) or use the following script:
+
+
+
+```bash
+
+wget https://storage.googleapis.com/dataset-startup-search/startup-list-com/startups_hybrid_search_processed_40k.tar.gz
+
+tar -xvf startups_hybrid_search_processed_40k.tar.gz
+
+```
+
+
+
+Then you can upload the data to Qdrant.
+
+
+
+```python
+
+from typing import List
+
+import json
+
+import numpy as np
+
+from qdrant_client import models
+
+
+
+
+
+def named_vectors(vectors: List[float], sparse_vectors: List[models.SparseVector]) -> dict:
+
+ # make sure to use the same client object as previously
+
+ # or `set_model_name` and `set_sparse_model_name` manually
+
+ dense_vector_name = client.get_vector_field_name()
+
+ sparse_vector_name = client.get_sparse_vector_field_name()
+
+ for vector, sparse_vector in zip(vectors, sparse_vectors):
+
+ yield {
+
+ dense_vector_name: vector,
+
+ sparse_vector_name: models.SparseVector(**sparse_vector),
+
+ }
+
+
+
+with open(""dense_vectors.npy"", ""rb"") as f:
+
+ vectors = np.load(f)
+
+
+
+with open(""sparse_vectors.json"", ""r"") as f:
+
+ sparse_vectors = json.load(f)
+
+
+
+with open(""payload.json"", ""r"",) as f:
+
+ payload = json.load(f)
+
+
+
+client.upload_collection(
+
+ ""startups"", vectors=named_vectors(vectors, sparse_vectors), payload=payload
+
+)
+
+```
+
+
+
+
+
+The `add` method will encode all documents and upload them to Qdrant.
+
+This is one of the two fastembed-specific methods that combine encoding and uploading into a single step.
+
+
+
+The `parallel` parameter enables data-parallelism instead of built-in ONNX parallelism.
+
+
+
+Additionally, you can specify ids for each document, if you want to use them later to update or delete documents.
+
+If you don't specify ids, they will be generated automatically and returned as a result of the `add` method.
+
+
+
+You can monitor the progress of the encoding by passing a tqdm progress bar to the `add` method.
+
+
+
+```python
+
+from tqdm import tqdm
+
+
+
+client.add(
+
+ collection_name=""startups"",
+
+ documents=documents,
+
+ metadata=metadata,
+
+ ids=tqdm(range(len(documents))),
+
+)
+
+```
+
+
+
+## Build the search API
+
+
+
+Now that all the preparations are complete, let's start building a hybrid search class.
+
+
+
+In order to process incoming requests, the hybrid search class will need 3 things: 1) models to convert the query into a vector, 2) the Qdrant client to perform search queries, 3) a fusion function to re-rank dense and sparse search results.
+
+
+
+Fastembed integration encapsulates query encoding, search and fusion into a single method call.
+
+Fastembed leverages [reciprocal rank fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) in order to combine the results, as illustrated below.
+
+
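+
+For intuition, here is a minimal, illustrative sketch of how reciprocal rank fusion combines ranked result lists (not the actual Fastembed implementation):
+
+
+
+```python
+
+def reciprocal_rank_fusion(rankings, k=60):
+
+    # rankings: ranked lists of point ids, e.g. one from dense and one from sparse search
+
+    scores = {}
+
+    for ranking in rankings:
+
+        for rank, point_id in enumerate(ranking):
+
+            scores[point_id] = scores.get(point_id, 0) + 1.0 / (k + rank + 1)
+
+    return sorted(scores, key=scores.get, reverse=True)
+
+reciprocal_rank_fusion([['a', 'b', 'c'], ['b', 'c', 'd']])  # -> ['b', 'c', 'a', 'd']
+
+```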
+
+
+
+1. Create a file named `hybrid_searcher.py` and specify the following.
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+
+
+class HybridSearcher:
+
+ DENSE_MODEL = ""sentence-transformers/all-MiniLM-L6-v2""
+
+ SPARSE_MODEL = ""prithivida/Splade_PP_en_v1""
+
+ def __init__(self, collection_name):
+
+ self.collection_name = collection_name
+
+ # initialize Qdrant client
+
+ self.qdrant_client = QdrantClient(""http://localhost:6333"")
+
+ self.qdrant_client.set_model(self.DENSE_MODEL)
+
+ # comment this line to use dense vectors only
+
+ self.qdrant_client.set_sparse_model(self.SPARSE_MODEL)
+
+```
+
+
+
+2. Write the search function.
+
+
+
+```python
+
+def search(self, text: str):
+
+ search_result = self.qdrant_client.query(
+
+ collection_name=self.collection_name,
+
+ query_text=text,
+
+ query_filter=None, # If you don't want any filters for now
+
+ limit=5, # 5 the closest results
+
+ )
+
+ # `search_result` contains found vector ids with similarity scores
+
+ # along with the stored payload
+
+
+
+ # Select and return metadata
+
+ metadata = [hit.metadata for hit in search_result]
+
+ return metadata
+
+```
+
+
+
+3. Add search filters.
+
+
+
+With Qdrant it is also feasible to add some conditions to the search.
+
+For example, if you wanted to search for startups in a certain city, the search query could look like this:
+
+
+
+```python
+
+from qdrant_client import models
+
+
+
+ ...
+
+
+
+ city_of_interest = ""Berlin""
+
+
+
+ # Define a filter for cities
+
+ city_filter = models.Filter(
+
+ must=[
+
+ models.FieldCondition(
+
+ key=""city"",
+
+ match=models.MatchValue(value=city_of_interest)
+
+ )
+
+ ]
+
+ )
+
+
+
+ search_result = self.qdrant_client.query(
+
+ collection_name=self.collection_name,
+
+ query_text=text,
+
+ query_filter=city_filter,
+
+ limit=5
+
+ )
+
+ ...
+
+```
+
+
+
+You have now created a class for hybrid search queries. Now wrap it up into a service.
+
+
+
+## Deploy the search with FastAPI
+
+
+
+To build the service you will use the FastAPI framework.
+
+
+
+1. Install FastAPI.
+
+
+
+To install it, use the command
+
+
+
+```bash
+
+pip install fastapi uvicorn
+
+```
+
+
+
+2. Implement the service.
+
+
+
+Create a file named `service.py` and specify the following.
+
+
+
+The service will have only one API endpoint and will look like this:
+
+
+
+```python
+
+from fastapi import FastAPI
+
+
+
+# The file where HybridSearcher is stored
+
+from hybrid_searcher import HybridSearcher
+
+
+
+app = FastAPI()
+
+
+
+# Create a hybrid searcher instance
+
+hybrid_searcher = HybridSearcher(collection_name=""startups"")
+
+
+
+
+
+@app.get(""/api/search"")
+
+def search_startup(q: str):
+
+ return {""result"": hybrid_searcher.search(text=q)}
+
+
+
+
+
+if __name__ == ""__main__"":
+
+ import uvicorn
+
+
+
+ uvicorn.run(app, host=""0.0.0.0"", port=8000)
+
+```
+
+
+
+3. Run the service.
+
+
+
+```bash
+
+python service.py
+
+```
+
+
+
+4. Open your browser at [http://localhost:8000/docs](http://localhost:8000/docs).
+
+
+
+You should be able to see a debug interface for your service.
+
+
+
+![FastAPI Swagger interface](/docs/fastapi_neural_search.png)
+
+
+
+Feel free to play around with it, make queries regarding the companies in our corpus, and check out the results.
+
+
+
+Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, publish other examples of neural networks and neural search applications.
+",documentation/tutorials/hybrid-search-fastembed.md
+"---
+
+title: Asynchronous API
+
+weight: 14
+
+---
+
+
+
+# Using Qdrant asynchronously
+
+
+
+Asynchronous programming is being broadly adopted in the Python ecosystem. Tools such as FastAPI [have embraced this new
+
+paradigm](https://fastapi.tiangolo.com/async/), but it is also becoming a standard for ML models served as SaaS. For example, the Cohere SDK
+
+[provides an async client](https://github.com/cohere-ai/cohere-python/blob/856a4c3bd29e7a75fa66154b8ac9fcdf1e0745e0/src/cohere/client.py#L189) next to its synchronous counterpart.
+
+
+
+Databases are often launched as separate services and are accessed via a network. All the interactions with them are IO-bound and can
+
+be performed asynchronously so as not to waste time actively waiting for a server response. In Python, this is achieved by
+
+using [`async/await`](https://docs.python.org/3/library/asyncio-task.html) syntax. That lets the interpreter switch to another task
+
+while waiting for a response from the server.
+
+
+
+## When to use async API
+
+
+
+There is no need to use the async API if the application you are writing will never support multiple users at once (e.g. a script that runs once per day). However, if you are writing a web service that multiple users will use simultaneously, you shouldn't be
+
+blocking the threads of the web server as it limits the number of concurrent requests it can handle. In this case, you should use
+
+the async API.
+
+
+
+Modern web frameworks like [FastAPI](https://fastapi.tiangolo.com/) and [Quart](https://quart.palletsprojects.com/en/latest/) support
+
+async API out of the box. Mixing asynchronous code with an existing synchronous codebase might be a challenge. The `async/await` syntax
+
+cannot be used in synchronous functions. On the other hand, calling an IO-bound operation synchronously in async code is considered
+
+an antipattern. Therefore, if you build an async web service, exposed through an [ASGI](https://asgi.readthedocs.io/en/latest/) server,
+
+you should use the async API for all the interactions with Qdrant.
+
+
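+
+For example, a minimal sketch of an async FastAPI endpoint that queries Qdrant (the collection name and query vector are illustrative placeholders):
+
+
+
+```python
+
+from fastapi import FastAPI
+
+from qdrant_client import AsyncQdrantClient
+
+app = FastAPI()
+
+client = AsyncQdrantClient(""localhost"")
+
+@app.get(""/api/search"")
+
+async def search():
+
+    response = await client.query_points(
+
+        collection_name=""my_collection"",
+
+        query=[0.9, 0.1, 0.1, 0.5],
+
+        limit=2,
+
+    )
+
+    return {""result"": [point.payload for point in response.points]}
+
+```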
+
+
+
+
+
+### Using Qdrant asynchronously
+
+
+
+The simplest way of running asynchronous code is to define an `async` function and run it with `asyncio.run`:
+
+
+
+```python
+
+from qdrant_client import models
+
+
+
+import qdrant_client
+
+import asyncio
+
+
+
+
+
+async def main():
+
+ client = qdrant_client.AsyncQdrantClient(""localhost"")
+
+
+
+ # Create a collection
+
+ await client.create_collection(
+
+ collection_name=""my_collection"",
+
+ vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
+
+ )
+
+
+
+ # Insert a vector
+
+ await client.upsert(
+
+ collection_name=""my_collection"",
+
+ points=[
+
+ models.PointStruct(
+
+ id=""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"",
+
+ payload={
+
+ ""color"": ""red"",
+
+ },
+
+ vector=[0.9, 0.1, 0.1, 0.5],
+
+ ),
+
+ ],
+
+ )
+
+
+
+ # Search for nearest neighbors
+
+ points = (await client.query_points(
+
+ collection_name=""my_collection"",
+
+ query=[0.9, 0.1, 0.1, 0.5],
+
+ limit=2,
+
+ )).points
+
+
+
+ # Your async code using AsyncQdrantClient might be put here
+
+ # ...
+
+
+
+
+
+asyncio.run(main())
+
+```
+
+
+
+The `AsyncQdrantClient` provides the same methods as the synchronous counterpart `QdrantClient`. If you already have a synchronous
+
+codebase, switching to async API is as simple as replacing `QdrantClient` with `AsyncQdrantClient` and adding `await` before each
+
+method call.
+
+
+
+
+
+
+
+## Supported Python libraries
+
+
+
+Qdrant integrates with numerous Python libraries. Until recently, only [Langchain](https://python.langchain.com) provided async Python API support.
+
+Qdrant is the only vector database with full coverage of async API in Langchain. Their documentation [describes how to use
+
+it](https://python.langchain.com/docs/modules/data_connection/vectorstores/#asynchronous-operations).
+",documentation/tutorials/async-api.md
+"---
+
+title: Create and restore from snapshot
+
+weight: 14
+
+---
+
+
+
+# Create and restore collections from snapshot
+
+
+
+| Time: 20 min | Level: Beginner | | |
+
+|--------------|-----------------|--|----|
+
+
+
+A collection is a basic unit of data storage in Qdrant. It contains vectors, their IDs, and payloads. However, keeping the search efficient requires additional data structures to be built on top of the data. Building these data structures may take a while, especially for large collections.
+
+That's why using snapshots is the best way to export and import Qdrant collections, as they contain all the bits and pieces required to restore the entire collection efficiently.
+
+
+
+This tutorial will show you how to create a snapshot of a collection and restore it. Since working with snapshots in a distributed environment might seem a bit more complex, we will use a 3-node Qdrant cluster. However, the same approach applies to a single-node setup.
+
+
+
+
+
+
+
+You can use the techniques described on this page to migrate a cluster. Follow the instructions
+
+in this tutorial to create and download snapshots. When you [Restore from snapshot](#restore-from-snapshot), restore your data to the new cluster.
+
+
+
+## Prerequisites
+
+
+
+Let's assume you already have a running Qdrant instance or a cluster. If not, you can follow the [installation guide](/documentation/guides/installation/) to set up a local Qdrant instance or use [Qdrant Cloud](https://cloud.qdrant.io/) to create a cluster in a few clicks.
+
+
+
+Once the cluster is running, let's install the required dependencies:
+
+
+
+```shell
+
+pip install qdrant-client datasets
+
+```
+
+
+
+### Establish a connection to Qdrant
+
+
+
+We are going to use the Python SDK and raw HTTP calls to interact with Qdrant. Since we are going to use a 3-node cluster, we need to know the URLs of all the nodes. For simplicity, let's keep them all in constants, along with the API key, so we can refer to them later:
+
+
+
+```python
+
+QDRANT_MAIN_URL = ""https://my-cluster.com:6333""
+
+QDRANT_NODES = (
+
+ ""https://node-0.my-cluster.com:6333"",
+
+ ""https://node-1.my-cluster.com:6333"",
+
+ ""https://node-2.my-cluster.com:6333"",
+
+)
+
+QDRANT_API_KEY = ""my-api-key""
+
+```
+
+
+
+
+
+
+
+We can now create a client instance:
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(QDRANT_MAIN_URL, api_key=QDRANT_API_KEY)
+
+```
+
+
+
+First of all, we are going to create a collection from a precomputed dataset. If you already have a collection, you can skip this step and start by [creating a snapshot](#create-and-download-snapshots).
+
+
+
+
+
+**(Optional) Create collection and import data**
+
+
+
+### Load the dataset
+
+
+
+We are going to use a dataset with precomputed embeddings, available on Hugging Face Hub. The dataset is called [Qdrant/arxiv-titles-instructorxl-embeddings](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) and was created using the [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) model. It contains 2.25M embeddings for the titles of the papers from the [arXiv](https://arxiv.org/) dataset.
+
+
+
+Loading the dataset is as simple as:
+
+
+
+```python
+
+from datasets import load_dataset
+
+
+
+dataset = load_dataset(
+
+ ""Qdrant/arxiv-titles-instructorxl-embeddings"", split=""train"", streaming=True
+
+)
+
+```
+
+
+
+We used the streaming mode, so the dataset is not loaded into memory. Instead, we can iterate through it and extract the id and vector embedding:
+
+
+
+```python
+
+for payload in dataset:
+
+ id_ = payload.pop(""id"")
+
+ vector = payload.pop(""vector"")
+
+ print(id_, vector, payload)
+
+```
+
+
+
+A single payload looks like this:
+
+
+
+```json
+
+{
+
+ 'title': 'Dynamics of partially localized brane systems',
+
+ 'DOI': '1109.1415'
+
+}
+
+```
+
+
+
+
+
+### Create a collection
+
+
+
+First things first, we need to create our collection. We're not going to play with its configuration, but it makes sense to do it right now.
+
+The configuration is also a part of the collection snapshot.
+
+
+
+```python
+
+from qdrant_client import models
+
+
+
+if not client.collection_exists(""test_collection""):
+
+ client.create_collection(
+
+ collection_name=""test_collection"",
+
+ vectors_config=models.VectorParams(
+
+ size=768, # Size of the embedding vector generated by the InstructorXL model
+
+ distance=models.Distance.COSINE
+
+ ),
+
+ )
+
+```
+
+
+
+### Upload the dataset
+
+
+
+Calculating the embeddings is usually a bottleneck of the vector search pipelines, but we are happy to have them in place already. Since the goal of this tutorial is to show how to create a snapshot, **we are going to upload only a small part of the dataset**.
+
+
+
+```python
+
+ids, vectors, payloads = [], [], []
+
+for payload in dataset:
+
+ id_ = payload.pop(""id"")
+
+ vector = payload.pop(""vector"")
+
+
+
+ ids.append(id_)
+
+ vectors.append(vector)
+
+ payloads.append(payload)
+
+
+
+ # We are going to upload only 1000 vectors
+
+ if len(ids) == 1000:
+
+ break
+
+
+
+client.upsert(
+
+ collection_name=""test_collection"",
+
+ points=models.Batch(
+
+ ids=ids,
+
+ vectors=vectors,
+
+ payloads=payloads,
+
+ ),
+
+)
+
+```
+
+
+
+Our collection is now ready to be used for search. Let's create a snapshot of it.
+
+
+
+
+
+
+
+If you already have a collection, you can skip the previous step and start by [creating a snapshot](#create-and-download-snapshots).
+
+
+
+## Create and download snapshots
+
+
+
+Qdrant exposes an HTTP endpoint to request creating a snapshot, but we can also call it with the Python SDK.
+
+Our setup consists of 3 nodes, so we need to call the endpoint **on each of them** and create a snapshot on each node. While using Python SDK, that means creating a separate client instance for each node.
+
+
+
+
+
+
+
+
+
+
+
+```python
+
+snapshot_urls = []
+
+for node_url in QDRANT_NODES:
+
+ node_client = QdrantClient(node_url, api_key=QDRANT_API_KEY)
+
+ snapshot_info = node_client.create_snapshot(collection_name=""test_collection"")
+
+
+
+ snapshot_url = f""{node_url}/collections/test_collection/snapshots/{snapshot_info.name}""
+
+ snapshot_urls.append(snapshot_url)
+
+```
+
+
+
+```http
+
+// for `https://node-0.my-cluster.com:6333`
+
+POST /collections/test_collection/snapshots
+
+
+
+// for `https://node-1.my-cluster.com:6333`
+
+POST /collections/test_collection/snapshots
+
+
+
+// for `https://node-2.my-cluster.com:6333`
+
+POST /collections/test_collection/snapshots
+
+```
+
+
+
+
+
+**Response:**
+
+
+
+```json
+
+{
+
+ ""result"": {
+
+ ""name"": ""test_collection-559032209313046-2024-01-03-13-20-11.snapshot"",
+
+ ""creation_time"": ""2024-01-03T13:20:11"",
+
+ ""size"": 18956800
+
+ },
+
+ ""status"": ""ok"",
+
+ ""time"": 0.307644965
+
+}
+
+```
+
+
+
+
+
+
+
+
+
+Once we have the snapshot URLs, we can download them. Please make sure to include the API key in the request headers.
+
+Downloading the snapshot **can be done only through the HTTP API**, so we are going to use the `requests` library.
+
+
+
+```python
+
+import requests
+
+import os
+
+
+
+# Create a directory to store snapshots
+
+os.makedirs(""snapshots"", exist_ok=True)
+
+
+
+local_snapshot_paths = []
+
+for snapshot_url in snapshot_urls:
+
+ snapshot_name = os.path.basename(snapshot_url)
+
+ local_snapshot_path = os.path.join(""snapshots"", snapshot_name)
+
+
+
+ response = requests.get(
+
+ snapshot_url, headers={""api-key"": QDRANT_API_KEY}
+
+ )
+
+ with open(local_snapshot_path, ""wb"") as f:
+
+ response.raise_for_status()
+
+ f.write(response.content)
+
+
+
+ local_snapshot_paths.append(local_snapshot_path)
+
+```
+
+
+
+Alternatively, you can use the `wget` command:
+
+
+
+```bash
+
+wget https://node-0.my-cluster.com:6333/collections/test_collection/snapshots/test_collection-559032209313046-2024-01-03-13-20-11.snapshot \
+
+ --header=""api-key: ${QDRANT_API_KEY}"" \
+
+ -O node-0-snapshot.snapshot
+
+
+
+wget https://node-1.my-cluster.com:6333/collections/test_collection/snapshots/test_collection-559032209313047-2024-01-03-13-20-12.snapshot \
+
+ --header=""api-key: ${QDRANT_API_KEY}"" \
+
+ -O node-1-snapshot.snapshot
+
+
+
+wget https://node-2.my-cluster.com:6333/collections/test_collection/snapshots/test_collection-559032209313048-2024-01-03-13-20-13.snapshot \
+
+ --header=""api-key: ${QDRANT_API_KEY}"" \
+
+ -O node-2-snapshot.snapshot
+
+```
+
+
+
+The snapshots are now stored locally. We can use them to restore the collection to a different Qdrant instance, or treat them as a backup. We will create another collection using the same data on the same cluster.
+
+
+
+## Restore from snapshot
+
+
+
+Our brand-new snapshot is ready to be restored. Typically, it is used to move a collection to a different Qdrant instance, but we are going to use it to create a new collection on the same cluster.
+
+It is just going to have a different name, `test_collection_import`. We do not need to create a collection first, as it is going to be created automatically.
+
+
+
+Restoring a collection is also done separately on each node, but our Python SDK does not support it yet. We are going to use the HTTP API instead,
+
+and send a request to each node using the `requests` library.
+
+
+
+```python
+
+for node_url, snapshot_path in zip(QDRANT_NODES, local_snapshot_paths):
+
+ snapshot_name = os.path.basename(snapshot_path)
+
+ requests.post(
+
+ f""{node_url}/collections/test_collection_import/snapshots/upload?priority=snapshot"",
+
+ headers={
+
+ ""api-key"": QDRANT_API_KEY,
+
+ },
+
+ files={""snapshot"": (snapshot_name, open(snapshot_path, ""rb""))},
+
+ )
+
+```
+
+
+
+Alternatively, you can use the `curl` command:
+
+
+
+```bash
+
+curl -X POST 'https://node-0.my-cluster.com:6333/collections/test_collection_import/snapshots/upload?priority=snapshot' \
+
+ -H 'api-key: ${QDRANT_API_KEY}' \
+
+ -H 'Content-Type:multipart/form-data' \
+
+ -F 'snapshot=@node-0-snapshot.snapshot'
+
+
+
+curl -X POST 'https://node-1.my-cluster.com:6333/collections/test_collection_import/snapshots/upload?priority=snapshot' \
+
+ -H 'api-key: ${QDRANT_API_KEY}' \
+
+ -H 'Content-Type:multipart/form-data' \
+
+ -F 'snapshot=@node-1-snapshot.snapshot'
+
+
+
+curl -X POST 'https://node-2.my-cluster.com:6333/collections/test_collection_import/snapshots/upload?priority=snapshot' \
+
+ -H 'api-key: ${QDRANT_API_KEY}' \
+
+ -H 'Content-Type:multipart/form-data' \
+
+ -F 'snapshot=@node-2-snapshot.snapshot'
+
+```
+
+
+
+
+
+**Important:** We selected `priority=snapshot` to make sure that the snapshot is preferred over the data stored on the node. You can read more about snapshot priority in the [documentation](/documentation/concepts/snapshots/#snapshot-priority).
+",documentation/tutorials/create-snapshot.md
+"---
+
+title: Collaborative filtering
+
+short_description: ""Build an effective movie recommendation system using collaborative filtering and Qdrant's similarity search.""
+
+description: ""Build an effective movie recommendation system using collaborative filtering and Qdrant's similarity search.""
+
+preview_image: /blog/collaborative-filtering/social_preview.png
+
+social_preview_image: /blog/collaborative-filtering/social_preview.png
+
+weight: 23
+
+---
+
+
+
+# Create a collaborative filtering system
+
+
+
+| Time: 45 min | Level: Intermediate | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/qdrant/examples/blob/master/collaborative-filtering/collaborative-filtering.ipynb) | |
+
+|--------------|---------------------|--|----|
+
+
+
+Every time Spotify recommends the next song from a band you've never heard of, it uses a recommendation algorithm based on other users' interactions with that song. This type of algorithm is known as **collaborative filtering**.
+
+
+
+Unlike content-based recommendations, collaborative filtering excels when the objects' semantics are only loosely related, or even unrelated, to users' preferences. This adaptability is what makes it so fascinating. Movie, music, or book recommendations are good examples of such use cases. After all, we rarely choose which book to read purely based on the plot twists.
+
+
+
+The traditional way to build a collaborative filtering engine involves training a model that converts the sparse matrix of user-to-item relations into a compressed, dense representation of user and item vectors. Some of the most commonly referenced algorithms for this purpose include [SVD (Singular Value Decomposition)](https://en.wikipedia.org/wiki/Singular_value_decomposition) and [Factorization Machines](https://en.wikipedia.org/wiki/Matrix_factorization_(recommender_systems)). However, the model training approach requires significant resource investments. Model training necessitates data, regular re-training, and a mature infrastructure.
+
+
+
+## Methodology
+
+
+
+Fortunately, there is a way to build collaborative filtering systems without any model training. You can obtain interpretable recommendations and have a scalable system using a technique based on similarity search. Let’s explore how this works with an example of building a movie recommendation system.
+
+
+
+
+
+
+
+## Implementation
+
+
+
+To implement this, you will use a simple yet powerful resource: [Qdrant with Sparse Vectors](https://qdrant.tech/articles/sparse-vectors/).
+
+
+
+Notebook: [You can try this code here](https://githubtocolab.com/qdrant/examples/blob/master/collaborative-filtering/collaborative-filtering.ipynb)
+
+
+
+
+
+### Setup
+
+
+
+You have to first import the necessary libraries and define the environment.
+
+
+
+```python
+
+import os
+
+import pandas as pd
+
+import requests
+
+from qdrant_client import QdrantClient, models
+
+from qdrant_client.models import PointStruct, SparseVector, NamedSparseVector
+
+from collections import defaultdict
+
+
+
+# OMDB API Key - for movie posters
+
+omdb_api_key = os.getenv(""OMDB_API_KEY"")
+
+
+
+# Collection name
+
+collection_name = ""movies""
+
+
+
+# Set Qdrant Client
+
+qdrant_client = QdrantClient(
+
+ os.getenv(""QDRANT_HOST""),
+
+ api_key=os.getenv(""QDRANT_API_KEY"")
+
+)
+
+```
+
+
+
+### Define output
+
+
+
+Here, you will configure the recommendation engine to retrieve movie posters as output.
+
+
+
+```python
+
+# Function to get movie poster using OMDB API
+
+def get_movie_poster(imdb_id, api_key):
+
+ url = f""https://www.omdbapi.com/?i={imdb_id}&apikey={api_key}""
+
+ data = requests.get(url).json()
+
+ return data.get('Poster'), data
+
+```
+
+
+
+### Prepare the data
+
+
+
+Load the movie datasets. These include three main CSV files: user ratings, movie titles, and OMDB IDs.
+
+
+
+```python
+
+# Load CSV files
+
+ratings_df = pd.read_csv('data/ratings.csv', low_memory=False)
+
+movies_df = pd.read_csv('data/movies.csv', low_memory=False)
+
+
+
+# Convert movieId in ratings_df and movies_df to string
+
+ratings_df['movieId'] = ratings_df['movieId'].astype(str)
+
+movies_df['movieId'] = movies_df['movieId'].astype(str)
+
+
+
+rating = ratings_df['rating']
+
+
+
+# Normalize ratings
+
+ratings_df['rating'] = (rating - rating.mean()) / rating.std()
+
+
+
+# Merge ratings with movie metadata to get movie titles
+
+merged_df = ratings_df.merge(
+
+ movies_df[['movieId', 'title']],
+
+ left_on='movieId', right_on='movieId', how='inner'
+
+)
+
+
+
+# Aggregate ratings to handle duplicate (userId, title) pairs
+
+ratings_agg_df = merged_df.groupby(['userId', 'movieId']).rating.mean().reset_index()
+
+
+
+ratings_agg_df.head()
+
+```
+
+
+
+| |userId |movieId |rating |
+
+|---|-----------|---------|---------|
+
+|0 |1 |1 |0.429960 |
+
+|1 |1 |1036 |1.369846 |
+
+|2 |1 |1049 |-0.509926|
+
+|3 |1 |1066 |0.429960 |
+
+|4 |1 |110 |0.429960 |
+
+
+
+### Convert to sparse
+
+
+
+If you want to search across numerous reviews from different users, you can represent these reviews in a sparse matrix.
+
+
+
+```python
+
+# Convert ratings to sparse vectors
+
+user_sparse_vectors = defaultdict(lambda: {""values"": [], ""indices"": []})
+
+for row in ratings_agg_df.itertuples():
+
+ user_sparse_vectors[row.userId][""values""].append(row.rating)
+
+ user_sparse_vectors[row.userId][""indices""].append(int(row.movieId))
+
+```
+
+
+
+![collaborative-filtering](/blog/collaborative-filtering/collaborative-filtering.png)
+
+
+
+
+
+### Upload the data
+
+
+
+Here, you will create a new collection in Qdrant and upload the data to it (the client was already initialized in the setup step).
+
+Convert the user ratings to sparse vectors and include the `movieId` in the payload.
+
+
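+
+The collection itself is not created in the snippet below, so here is a hedged sketch of how it could be created with a sparse vector named `ratings` (matching the upload and query code used in this tutorial):
+
+
+
+```python
+
+if not qdrant_client.collection_exists(collection_name):
+
+    qdrant_client.create_collection(
+
+        collection_name=collection_name,
+
+        vectors_config={},  # no dense vectors, only the sparse 'ratings' vector
+
+        sparse_vectors_config={""ratings"": models.SparseVectorParams()},
+
+    )
+
+```
+
+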
+
+```python
+
+# Define a data generator
+
+def data_generator():
+
+ for user_id, sparse_vector in user_sparse_vectors.items():
+
+ yield PointStruct(
+
+ id=user_id,
+
+ vector={""ratings"": SparseVector(
+
+ indices=sparse_vector[""indices""],
+
+ values=sparse_vector[""values""]
+
+ )},
+
+ payload={""user_id"": user_id, ""movie_id"": sparse_vector[""indices""]}
+
+ )
+
+
+
+# Upload points using the data generator
+
+qdrant_client.upload_points(
+
+ collection_name=collection_name,
+
+ points=data_generator()
+
+)
+
+```
+
+
+
+### Define query
+
+
+
+In order to get recommendations, we need to find users with similar tastes to ours.
+
+Let's describe our preferences by providing ratings for some of our favorite movies.
+
+
+
+`1` indicates that we like the movie, `-1` indicates that we dislike it.
+
+
+
+```python
+
+my_ratings = {
+
+ 603: 1, # Matrix
+
+ 13475: 1, # Star Trek
+
+ 11: 1, # Star Wars
+
+ 1091: -1, # The Thing
+
+ 862: 1, # Toy Story
+
+ 597: -1, # Titanic
+
+ 680: -1, # Pulp Fiction
+
+ 13: 1, # Forrest Gump
+
+ 120: 1, # Lord of the Rings
+
+ 87: -1, # Indiana Jones
+
+ 562: -1 # Die Hard
+
+}
+
+
+
+```
+
+
+
+
+
+Here is the code for the `to_vector` helper, which converts these ratings into a sparse vector:
+
+
+
+```python
+
+# Create sparse vector from my_ratings
+
+def to_vector(ratings):
+
+ vector = SparseVector(
+
+ values=[],
+
+ indices=[]
+
+ )
+
+ for movie_id, rating in ratings.items():
+
+ vector.values.append(rating)
+
+ vector.indices.append(movie_id)
+
+ return vector
+
+```
+
+
+
+
+
+
+
+
+
+### Run the query
+
+
+
+From the uploaded list of movies with ratings, we can perform a search in Qdrant to find the users most similar to us.
+
+
+
+```python
+
+# Perform the search
+
+results = qdrant_client.query_points(
+
+ collection_name=collection_name,
+
+ query=to_vector(my_ratings),
+
+ using=""ratings"",
+
+ limit=20
+
+).points
+
+```
+
+
+
+Now we can find movies that similar users liked but that we haven't seen yet.
+
+Let's combine the results from the found users, filter out movies we have already seen, and sort by score.
+
+
+
+```python
+
+# Aggregate scores per movie across the most similar users
+
+def results_to_scores(results):
+
+ movie_scores = defaultdict(lambda: 0)
+
+ for result in results:
+
+ for movie_id in result.payload[""movie_id""]:
+
+ movie_scores[movie_id] += result.score
+
+ return movie_scores
+
+
+
+# Convert results to scores and sort by score
+
+movie_scores = results_to_scores(results)
+
+top_movies = sorted(movie_scores.items(), key=lambda x: x[1], reverse=True)
+
+```
+
+
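+
+Note that the scores above still include movies we rated ourselves. If you want to filter those out, a minimal extra step (not part of the original notebook) could be:
+
+
+
+```python
+
+# Exclude movies that already appear in my_ratings
+
+top_movies = [(movie_id, score) for movie_id, score in top_movies if movie_id not in my_ratings]
+
+```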
+
+
+
+
+
+**Visualize results in Jupyter Notebook**
+
+
+
+Finally, we display the top 5 recommended movies along with their posters and titles (this assumes a movieId → IMDb id mapping, such as the dataset's links file, so the OMDB API can be queried for posters).
+
+
+
+```python
+
+from IPython.display import display, HTML
+
+# Reconstructed sketch: the HTML markup was lost from the original snippet.
+
+# It assumes a movieId -> imdbId mapping; 'data/links.csv' is an assumed file name.
+
+links_df = pd.read_csv('data/links.csv', low_memory=False)
+
+html_content = ''
+
+for movie_id, score in top_movies[:5]:
+
+    row = links_df[links_df['movieId'] == movie_id]
+
+    if not row.empty:
+
+        imdb_id = 'tt' + str(int(row['imdbId'].values[0])).zfill(7)
+
+        poster_url, movie_info = get_movie_poster(imdb_id, omdb_api_key)
+
+        title = movie_info.get('Title', 'Unknown title')
+
+        html_content += f""<div><img src='{poster_url}' height='200'><br>{title}</div>""
+
+    else:
+
+        continue  # Skip if imdb_id is not found
+
+display(HTML(html_content))
+
+```
+
+
+
+
+
+
+
+## Recommendations
+
+
+
+For a complete display of movie posters, check the [notebook output](https://github.com/qdrant/examples/blob/master/collaborative-filtering/collaborative-filtering.ipynb). Here are the results without the HTML formatting.
+
+
+
+```text
+
+Toy Story, Score: 131.2033799
+
+Monty Python and the Holy Grail, Score: 131.2033799
+
+Star Wars: Episode V - The Empire Strikes Back, Score: 131.2033799
+
+Star Wars: Episode VI - Return of the Jedi, Score: 131.2033799
+
+Men in Black, Score: 131.2033799
+
+```
+
+
+
+On top of collaborative filtering, we can further enhance the recommendation system by incorporating other features like user demographics, movie genres, or movie tags.
+
+
+
+Or, for example, only consider recent ratings via a time-based filter. This way, we can recommend movies that are currently popular among users.
+
+
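+
+As a hedged sketch, such a time-based filter could look like the following, assuming each point's payload also stored a hypothetical `last_rated` timestamp (this field is not part of the dataset used above):
+
+
+
+```python
+
+recent_users_only = models.Filter(
+
+    must=[
+
+        models.FieldCondition(
+
+            key='last_rated',  # hypothetical payload field with a unix timestamp
+
+            range=models.Range(gte=1700000000),
+
+        )
+
+    ]
+
+)
+
+results = qdrant_client.query_points(
+
+    collection_name=collection_name,
+
+    query=to_vector(my_ratings),
+
+    using='ratings',
+
+    query_filter=recent_users_only,
+
+    limit=20,
+
+).points
+
+```
+
+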
+
+## Conclusion
+
+
+
+As demonstrated, it is possible to build an interesting movie recommendation system without intensive model training using Qdrant and Sparse Vectors. This approach not only simplifies the recommendation process but also makes it scalable and interpretable. In future tutorials, we can experiment more with this combination to further enhance our recommendation systems.
+",documentation/tutorials/collaborative-filtering.md
+"---
+
+title: Tutorials
+
+weight: 13
+
+# If the index.md file is empty, the link to the section will be hidden from the sidebar
+
+is_empty: false
+
+aliases:
+
+ - how-to
+
+ - tutorials
+
+---
+
+
+
+# Tutorials
+
+
+
+These tutorials demonstrate different ways you can build vector search into your applications.
+
+
+
+| Essential How-Tos | Description | Stack |
+
+|---------------------------------------------------------------------------------|-------------------------------------------------------------------|---------------------------------------------|
+
+| [Semantic Search for Beginners](../tutorials/search-beginners/) | Create a simple search engine locally in minutes. | Qdrant |
+
+| [Simple Neural Search](../tutorials/neural-search/) | Build and deploy a neural search that browses startup data. | Qdrant, BERT, FastAPI |
+
+| [Neural Search with FastEmbed](../tutorials/neural-search-fastembed/) | Build and deploy a neural search with our FastEmbed library. | Qdrant |
+
+| [Multimodal Search](../tutorials/multimodal-search-fastembed/) | Create a simple multimodal search. | Qdrant |
+
+| [Bulk Upload Vectors](../tutorials/bulk-upload/) | Upload a large scale dataset. | Qdrant |
+
+| [Asynchronous API](../tutorials/async-api/) | Communicate with Qdrant server asynchronously with Python SDK. | Qdrant, Python |
+
+| [Create Dataset Snapshots](../tutorials/create-snapshot/) | Turn a dataset into a snapshot by exporting it from a collection. | Qdrant |
+
+| [Load HuggingFace Dataset](../tutorials/huggingface-datasets/) | Load a Hugging Face dataset to Qdrant | Qdrant, Python, datasets |
+
+| [Measure Retrieval Quality](../tutorials/retrieval-quality/) | Measure and fine-tune the retrieval quality | Qdrant, Python, datasets |
+
+| [Search Through Code](../tutorials/code-search/) | Implement semantic search application for code search tasks | Qdrant, Python, sentence-transformers, Jina |
+
+| [Setup Collaborative Filtering](../tutorials/collaborative-filtering/) | Implement a collaborative filtering system for recommendation engines | Qdrant|
+",documentation/tutorials/_index.md
+"---
+
+title: Semantic-Router
+
+---
+
+
+
+# Semantic-Router
+
+
+
+[Semantic-Router](https://www.aurelio.ai/semantic-router/) is a library to build decision-making layers for your LLMs and agents. It uses vector embeddings to make tool-use decisions rather than LLM generations, routing our requests using semantic meaning.
+
+
+
+Qdrant is available as a supported index in Semantic-Router for you to ingest route data and perform retrievals.
+
+
+
+## Installation
+
+
+
+To use Semantic-Router with Qdrant, install the `qdrant` extra:
+
+
+
+```console
+
+pip install semantic-router[qdrant]
+
+```
+
+
+
+## Usage
+
+
+
+Set up `QdrantIndex` with the appropriate configurations:
+
+
+
+```python
+
+from semantic_router.index import QdrantIndex
+
+
+
+qdrant_index = QdrantIndex(
+
+ url=""https://xyz-example.eu-central.aws.cloud.qdrant.io"", api_key=""""
+
+)
+
+```
+
+
+
+Once the Qdrant index is set up with the appropriate configurations, we can pass it to the `RouteLayer`.
+
+
+
+```python
+
+from semantic_router.layer import RouteLayer
+
+
+
+RouteLayer(encoder=some_encoder, routes=some_routes, index=qdrant_index)
+
+```
+
+
+
+## Complete Example
+
+
+
+
+
+
+
+
+
+
+```python
+
+import os
+
+
+
+from semantic_router import Route
+
+from semantic_router.encoders import OpenAIEncoder
+
+from semantic_router.index import QdrantIndex
+
+from semantic_router.layer import RouteLayer
+
+
+
+# we could use this as a guide for our chatbot to avoid political conversations
+
+politics = Route(
+
+ name=""politics value"",
+
+ utterances=[
+
+ ""isn't politics the best thing ever"",
+
+ ""why don't you tell me about your political opinions"",
+
+ ""don't you just love the president"",
+
+ ""they're going to destroy this country!"",
+
+ ""they will save the country!"",
+
+ ],
+
+)
+
+
+
+# this could be used as an indicator to our chatbot to switch to a more
+
+# conversational prompt
+
+chitchat = Route(
+
+ name=""chitchat"",
+
+ utterances=[
+
+ ""how's the weather today?"",
+
+ ""how are things going?"",
+
+ ""lovely weather today"",
+
+ ""the weather is horrendous"",
+
+ ""let's go to the chippy"",
+
+ ],
+
+)
+
+
+
+# we place both of our decisions together into a single list
+
+routes = [politics, chitchat]
+
+
+
+os.environ[""OPENAI_API_KEY""] = """"
+
+encoder = OpenAIEncoder()
+
+
+
+rl = RouteLayer(
+
+ encoder=encoder,
+
+ routes=routes,
+
+ index=QdrantIndex(location="":memory:""),
+
+)
+
+
+
+print(rl(""What have you been upto?"").name)
+
+```
+
+
+
+This returns:
+
+
+
+```console
+
+[Out]: 'chitchat'
+
+```
+
+
+
+
+
+
+
+## 📚 Further Reading
+
+
+
+- Semantic-Router [Documentation](https://github.com/aurelio-labs/semantic-router/tree/main/docs)
+
+- Semantic-Router [Video Course](https://www.aurelio.ai/course/semantic-router)
+
+- [Source Code](https://github.com/aurelio-labs/semantic-router/blob/main/semantic_router/index/qdrant.py)
+",documentation/frameworks/semantic-router.md
+"---
+
+title: Testcontainers
+
+---
+
+
+
+# Testcontainers
+
+
+
+Qdrant is available as a [Testcontainers module](https://testcontainers.com/modules/qdrant/) in multiple languages. It facilitates the spawning of a Qdrant instance for end-to-end testing.
+
+
+
+As noted by [Testcontainers](https://testcontainers.com/), it ""is an open source framework for providing throwaway, lightweight instances of databases, message brokers, web browsers, or just about anything that can run in a Docker container.""
+
+
+
+## Usage
+
+
+
+```java
+
+import org.testcontainers.qdrant.QdrantContainer;
+
+
+
+QdrantContainer qdrantContainer = new QdrantContainer(""qdrant/qdrant"");
+
+```
+
+
+
+```go
+
+import (
+
+ ""github.com/testcontainers/testcontainers-go""
+
+ ""github.com/testcontainers/testcontainers-go/modules/qdrant""
+
+)
+
+
+
+qdrantContainer, err := qdrant.RunContainer(ctx, testcontainers.WithImage(""qdrant/qdrant""))
+
+```
+
+
+
+```typescript
+
+import { QdrantContainer } from ""@testcontainers/qdrant"";
+
+
+
+const qdrantContainer = await new QdrantContainer(""qdrant/qdrant"").start();
+
+```
+
+
+
+```python
+
+from testcontainers.qdrant import QdrantContainer
+
+
+
+qdrant_container = QdrantContainer(""qdrant/qdrant"").start()
+
+```
+
+
+
+Testcontainers modules provide options/methods to configure environment variables, volumes, and virtually everything else you can configure in a Docker container.
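+
+
+
+For example, with the Python module you can pass extra configuration through environment variables and point a client at the mapped REST port. The snippet below is a minimal sketch: the `QDRANT__SERVICE__API_KEY` variable follows Qdrant's environment configuration convention, and `with_env`/`get_exposed_port` are generic Testcontainers helpers rather than Qdrant-specific options, so adapt them to your setup.
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+from testcontainers.qdrant import QdrantContainer
+
+
+
+# Pass configuration via environment variables (illustrative values)
+
+container = QdrantContainer(""qdrant/qdrant"").with_env(
+
+    ""QDRANT__SERVICE__API_KEY"", ""test-api-key""
+
+)
+
+
+
+with container as qdrant:
+
+    # Build the REST URL from the mapped host and port, then query the instance
+
+    url = f""http://{qdrant.get_container_host_ip()}:{qdrant.get_exposed_port(6333)}""
+
+    client = QdrantClient(url=url, api_key=""test-api-key"")
+
+    print(client.get_collections())
+
+```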
+
+
+
+## Further reading
+
+
+
+- [Testcontainers Guides](https://testcontainers.com/guides/)
+
+- [Testcontainers Qdrant Module](https://testcontainers.com/modules/qdrant/)
+",documentation/frameworks/testcontainers.md
+"---
+
+title: Stanford DSPy
+
+aliases: [ ../integrations/dspy/ ]
+
+---
+
+
+
+# Stanford DSPy
+
+
+
+[DSPy](https://github.com/stanfordnlp/dspy) is a framework for solving advanced tasks with language models (LMs) and retrieval models (RMs). It unifies techniques for prompting and fine-tuning LMs with approaches for reasoning, self-improvement, and augmentation with retrieval and tools.
+
+
+
+- Provides composable and declarative modules for instructing LMs in a familiar Pythonic syntax.
+
+
+
+- Introduces an automatic compiler that teaches LMs how to conduct the declarative steps in your program.
+
+
+
+Qdrant can be used as a retrieval mechanism in the DSPy flow.
+
+
+
+## Installation
+
+
+
+For the Qdrant retrieval integration, include `dspy-ai` with the `qdrant` extra:
+
+```bash
+
+pip install dspy-ai[qdrant]
+
+```
+
+
+
+## Usage
+
+
+
+We can configure `DSPy` settings to use the Qdrant retriever model like so:
+
+```python
+
+import dspy
+
+from dspy.retrieve.qdrant_rm import QdrantRM
+
+
+
+from qdrant_client import QdrantClient
+
+
+
+turbo = dspy.OpenAI(model=""gpt-3.5-turbo"")
+
+qdrant_client = QdrantClient() # Defaults to a local instance at http://localhost:6333/
+
+qdrant_retriever_model = QdrantRM(""collection-name"", qdrant_client, k=3)
+
+
+
+dspy.settings.configure(lm=turbo, rm=qdrant_retriever_model)
+
+```
+
+Using the retriever is pretty simple. The `dspy.Retrieve(k)` module will search for the top-k passages that match a given query.
+
+
+
+```python
+
+retrieve = dspy.Retrieve(k=3)
+
+question = ""Some question about my data""
+
+topK_passages = retrieve(question).passages
+
+
+
+print(f""Top {retrieve.k} passages for question: {question} \n"", ""\n"")
+
+
+
+for idx, passage in enumerate(topK_passages):
+
+ print(f""{idx+1}]"", passage, ""\n"")
+
+```
+
+
+
+With Qdrant configured as the retriever for contexts, you can set up a DSPy module like so:
+
+```python
+
+class RAG(dspy.Module):
+
+ def __init__(self, num_passages=3):
+
+ super().__init__()
+
+
+
+ self.retrieve = dspy.Retrieve(k=num_passages)
+
+ ...
+
+
+
+ def forward(self, question):
+
+ context = self.retrieve(question).passages
+
+ ...
+
+
+
+```
+
+
+
+With the generic RAG blueprint now in place, you can add the many interactions offered by DSPy with context retrieval powered by Qdrant.
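+
+
+
+For illustration, a fleshed-out version of the module could look like the sketch below. The `ChainOfThought` signature and the `Prediction` return value follow the common DSPy RAG pattern; the `generate_answer` field name and the signature string are placeholders to adapt to your task.
+
+
+
+```python
+
+class RAG(dspy.Module):
+
+    def __init__(self, num_passages=3):
+
+        super().__init__()
+
+        self.retrieve = dspy.Retrieve(k=num_passages)
+
+        # Generate an answer from the retrieved context and the question
+
+        self.generate_answer = dspy.ChainOfThought(""context, question -> answer"")
+
+
+
+    def forward(self, question):
+
+        context = self.retrieve(question).passages
+
+        prediction = self.generate_answer(context=context, question=question)
+
+        return dspy.Prediction(context=context, answer=prediction.answer)
+
+
+
+# Assumes dspy.settings has been configured with Qdrant as shown above
+
+rag = RAG()
+
+print(rag(""Some question about my data"").answer)
+
+```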
+
+
+
+## Next steps
+
+
+
+- Find DSPy usage docs and examples [here](https://github.com/stanfordnlp/dspy#4-documentation--tutorials).
+
+
+
+- [Source Code](https://github.com/stanfordnlp/dspy/blob/main/dspy/retrieve/qdrant_rm.py)
+",documentation/frameworks/dspy.md
+"---
+
+title: FiftyOne
+
+aliases: [ ../integrations/fifty-one ]
+
+---
+
+
+
+# FiftyOne
+
+
+
+[FiftyOne](https://voxel51.com/) is an open-source toolkit designed to enhance computer vision workflows by optimizing dataset quality
+
+and providing valuable insights about your models. FiftyOne 0.20 introduced a native integration with Qdrant, supporting workflows
+
+like [image similarity search](https://docs.voxel51.com/user_guide/brain.html#image-similarity) and
+
+[text search](https://docs.voxel51.com/user_guide/brain.html#text-similarity).
+
+
+
+Qdrant helps FiftyOne to find the most similar images in the dataset using vector embeddings.
+
+
+
+FiftyOne is available as a Python package and can be installed with pip:
+
+
+
+```bash
+
+pip install fiftyone
+
+```
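+
+
+
+Once installed, the Qdrant backend is used through the FiftyOne Brain. The sketch below indexes a small zoo dataset in Qdrant and sorts it by similarity to one of its samples; the dataset and model names are examples only, and a running Qdrant instance (plus the `qdrant-client` package) is assumed.
+
+
+
+```python
+
+import fiftyone.brain as fob
+
+import fiftyone.zoo as foz
+
+
+
+# Load a small sample dataset from the FiftyOne zoo
+
+dataset = foz.load_zoo_dataset(""quickstart"")
+
+
+
+# Compute embeddings and index them in Qdrant
+
+fob.compute_similarity(
+
+    dataset,
+
+    model=""clip-vit-base32-torch"",
+
+    backend=""qdrant"",
+
+    brain_key=""qdrant_index"",
+
+)
+
+
+
+# Sort the dataset by similarity to one of its samples
+
+query_id = dataset.first().id
+
+view = dataset.sort_by_similarity(query_id, brain_key=""qdrant_index"", k=10)
+
+```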
+
+
+
+For more details, please check out the FiftyOne documentation on the [Qdrant integration](https://docs.voxel51.com/integrations/qdrant.html).
+
+
+",documentation/frameworks/fifty-one.md
+"---
+
+title: Pinecone Canopy
+
+---
+
+
+
+# Pinecone Canopy
+
+
+
+[Canopy](https://github.com/pinecone-io/canopy) is an open-source framework and context engine to build chat assistants at scale.
+
+
+
+Qdrant is supported as a knowledge base within Canopy for context retrieval and augmented generation.
+
+
+
+## Usage
+
+
+
+Install the SDK with the Qdrant extra as described in the [Canopy README](https://github.com/pinecone-io/canopy?tab=readme-ov-file#extras).
+
+
+
+```bash
+
+pip install canopy-sdk[qdrant]
+
+```
+
+
+
+### Creating a knowledge base
+
+
+
+```python
+
+from canopy.knowledge_base import QdrantKnowledgeBase
+
+
+
+kb = QdrantKnowledgeBase(collection_name="""")
+
+```
+
+
+
+
+
+
+
+To create a new Qdrant collection and connect it to the knowledge base, use the `create_canopy_collection` method:
+
+
+
+```python
+
+kb.create_canopy_collection()
+
+```
+
+
+
+You can always verify the connection to the collection with the `verify_index_connection` method:
+
+
+
+```python
+
+kb.verify_index_connection()
+
+```
+
+
+
+Learn more about customizing the knowledge base and its inner components [in the Canopy library](https://github.com/pinecone-io/canopy/blob/main/docs/library.md#understanding-knowledgebase-workings).
+
+
+
+### Adding data to the knowledge base
+
+
+
+To insert data into the knowledge base, you can create a list of documents and use the `upsert` method:
+
+
+
+```python
+
+from canopy.models.data_models import Document
+
+
+
+documents = [
+
+ Document(
+
+ id=""1"",
+
+ text=""U2 are an Irish rock band from Dublin, formed in 1976."",
+
+ source=""https://en.wikipedia.org/wiki/U2"",
+
+ ),
+
+ Document(
+
+ id=""2"",
+
+ text=""Arctic Monkeys are an English rock band formed in Sheffield in 2002."",
+
+ source=""https://en.wikipedia.org/wiki/Arctic_Monkeys"",
+
+ metadata={""my-key"": ""my-value""},
+
+ ),
+
+]
+
+
+
+kb.upsert(documents)
+
+```
+
+
+
+### Querying the knowledge base
+
+
+
+You can query the knowledge base with the `query` method to find the most similar documents to a given text:
+
+
+
+```python
+
+from canopy.models.data_models import Query
+
+
+
+kb.query(
+
+ [
+
+ Query(text=""Arctic Monkeys music genre""),
+
+ Query(
+
+ text=""U2 music genre"",
+
+ top_k=10,
+
+ metadata_filter={""key"": ""my-key"", ""match"": {""value"": ""my-value""}},
+
+ ),
+
+ ]
+
+)
+
+```
+
+
+
+## Further Reading
+
+
+
+- [Introduction to Canopy](https://www.pinecone.io/blog/canopy-rag-framework/)
+
+- [Canopy library reference](https://github.com/pinecone-io/canopy/blob/main/docs/library.md)
+
+- [Source Code](https://github.com/pinecone-io/canopy/tree/main/src/canopy/knowledge_base/qdrant)
+",documentation/frameworks/canopy.md
+"---
+
+title: Langchain Go
+
+---
+
+
+
+# Langchain Go
+
+
+
+[Langchain Go](https://tmc.github.io/langchaingo/docs/) is a framework for developing data-aware applications powered by language models in Go.
+
+
+
+You can use Qdrant as a vector store in Langchain Go.
+
+
+
+## Setup
+
+
+
+Install the `langchaingo` project dependency:
+
+
+
+```bash
+
+go get -u github.com/tmc/langchaingo
+
+```
+
+
+
+## Usage
+
+
+
+Before you use the following code sample, customize the following values for your configuration:
+
+
+
+- `YOUR_QDRANT_REST_URL`: If you've set up Qdrant using the [Quick Start](/documentation/quick-start/) guide,
+
+ set this value to `http://localhost:6333`.
+
+- `YOUR_COLLECTION_NAME`: Use our [Collections](/documentation/concepts/collections/) guide to create or
+
+ list collections.
+
+
+
+```go
+
+package main
+
+
+
+import (
+
+ ""log""
+
+ ""net/url""
+
+
+
+ ""github.com/tmc/langchaingo/embeddings""
+
+ ""github.com/tmc/langchaingo/llms/openai""
+
+ ""github.com/tmc/langchaingo/vectorstores/qdrant""
+
+)
+
+
+
+func main() {
+
+ llm, err := openai.New()
+
+ if err != nil {
+
+ log.Fatal(err)
+
+ }
+
+
+
+ e, err := embeddings.NewEmbedder(llm)
+
+ if err != nil {
+
+ log.Fatal(err)
+
+ }
+
+
+
+ url, err := url.Parse(""YOUR_QDRANT_REST_URL"")
+
+ if err != nil {
+
+ log.Fatal(err)
+
+ }
+
+
+
+ store, err := qdrant.New(
+
+ qdrant.WithURL(*url),
+
+ qdrant.WithCollectionName(""YOUR_COLLECTION_NAME""),
+
+ qdrant.WithEmbedder(e),
+
+ )
+
+ if err != nil {
+
+ log.Fatal(err)
+
+ }
+
+ // The store is now ready to use, e.g. via AddDocuments or SimilaritySearch.
+
+ _ = store
+
+}
+
+```
+
+
+
+## Further Reading
+
+
+
+- You can find usage examples of Langchain Go [here](https://github.com/tmc/langchaingo/tree/main/examples).
+
+
+
+- [Source Code](https://github.com/tmc/langchaingo/tree/main/vectorstores/qdrant)
+",documentation/frameworks/langchain-go.md
+"---
+
+title: Firebase Genkit
+
+---
+
+
+
+# Firebase Genkit
+
+
+
+[Genkit](https://firebase.google.com/products/genkit) is a framework to build, deploy, and monitor production-ready AI-powered apps.
+
+
+
+You can build apps that generate custom content, use semantic search, handle unstructured inputs, answer questions with your business data, autonomously make decisions, orchestrate tool calls, and more.
+
+
+
+You can use Qdrant for indexing/semantic retrieval of data in your Genkit applications via the [Qdrant-Genkit plugin](https://github.com/qdrant/qdrant-genkit).
+
+
+
+Genkit currently supports server-side development in JavaScript/TypeScript (Node.js) with Go support in active development.
+
+
+
+## Installation
+
+
+
+```bash
+
+npm i genkitx-qdrant
+
+```
+
+
+
+## Configuration
+
+
+
+To use this plugin, specify it when you call `configureGenkit()`:
+
+
+
+```js
+
+import { qdrant } from 'genkitx-qdrant';
+
+import { textEmbeddingGecko } from '@genkit-ai/vertexai';
+
+
+
+export default configureGenkit({
+
+ plugins: [
+
+ qdrant([
+
+ {
+
+ clientParams: {
+
+ host: 'localhost',
+
+ port: 6333,
+
+ },
+
+ collectionName: 'some-collection',
+
+ embedder: textEmbeddingGecko,
+
+ },
+
+ ]),
+
+ ],
+
+ // ...
+
+});
+
+```
+
+
+
+You'll need to specify a collection name, the embedding model you want to use, and the Qdrant client parameters. In
+
+addition, there are a few optional parameters:
+
+
+
+- `embedderOptions`: Additional options to pass to the embedder:
+
+
+
+ ```js
+
+ embedderOptions: { taskType: 'RETRIEVAL_DOCUMENT' },
+
+ ```
+
+
+
+- `contentPayloadKey`: Name of the payload field with the document content. Defaults to ""content"".
+
+
+
+ ```js
+
+ contentPayloadKey: 'content';
+
+ ```
+
+
+
+- `metadataPayloadKey`: Name of the payload field with the document metadata. Defaults to ""metadata"".
+
+
+
+ ```js
+
+ metadataPayloadKey: 'metadata';
+
+ ```
+
+
+
+- `collectionCreateOptions`: [Additional options](https://qdrant.tech/documentation/concepts/collections/#create-a-collection) when creating the Qdrant collection.
+
+
+
+## Usage
+
+
+
+Import retriever and indexer references like so:
+
+
+
+```js
+
+import { qdrantIndexerRef, qdrantRetrieverRef } from 'genkitx-qdrant';
+
+import { Document, index, retrieve } from '@genkit-ai/ai/retriever';
+
+```
+
+
+
+Then, pass the references to `retrieve()` and `index()`:
+
+
+
+```js
+
+// To specify an indexer:
+
+export const qdrantIndexer = qdrantIndexerRef({
+
+ collectionName: 'some-collection',
+
+ displayName: 'Some Collection indexer',
+
+});
+
+
+
+await index({ indexer: qdrantIndexer, documents });
+
+```
+
+
+
+```js
+
+// To specify a retriever:
+
+export const qdrantRetriever = qdrantRetrieverRef({
+
+ collectionName: 'some-collection',
+
+ displayName: 'Some Collection Retriever',
+
+});
+
+
+
+let docs = await retrieve({ retriever: qdrantRetriever, query });
+
+```
+
+
+
+You can refer to [Retrieval-augmented generation](https://firebase.google.com/docs/genkit/rag) for a general
+
+discussion on indexers and retrievers.
+
+
+
+## Further Reading
+
+
+
+- [Introduction to Genkit](https://firebase.google.com/docs/genkit)
+
+- [Genkit Documentation](https://firebase.google.com/docs/genkit/get-started)
+
+- [Source Code](https://github.com/qdrant/qdrant-genkit)
+",documentation/frameworks/genkit.md
+"---
+
+title: Langchain4J
+
+---
+
+
+
+# LangChain for Java
+
+
+
+LangChain for Java, also known as [Langchain4J](https://github.com/langchain4j/langchain4j), is a community port of [Langchain](https://www.langchain.com/) for building context-aware AI applications in Java.
+
+
+
+You can use Qdrant as a vector store in Langchain4J through the [`langchain4j-qdrant`](https://central.sonatype.com/artifact/dev.langchain4j/langchain4j-qdrant) module.
+
+
+
+## Setup
+
+
+
+Add the `langchain4j-qdrant` to your project dependencies.
+
+
+
+```xml
+
+<dependency>
+
+    <groupId>dev.langchain4j</groupId>
+
+    <artifactId>langchain4j-qdrant</artifactId>
+
+    <version>VERSION</version>
+
+</dependency>
+
+```
+
+
+
+## Usage
+
+
+
+Before you use the following code sample, customize the following values for your configuration:
+
+
+
+- `YOUR_COLLECTION_NAME`: Use our [Collections](/documentation/concepts/collections/) guide to create or
+
+ list collections.
+
+- `YOUR_HOST_URL`: Use the GRPC URL for your system. If you used the [Quick Start](/documentation/quick-start/) guide,
+
+ it may be http://localhost:6334. If you've deployed in the [Qdrant Cloud](/documentation/cloud/), you may have a
+
+ longer URL such as `https://example.location.cloud.qdrant.io:6334`.
+
+- `YOUR_API_KEY`: Substitute the API key associated with your configuration.
+
+```java
+
+import dev.langchain4j.store.embedding.EmbeddingStore;
+
+import dev.langchain4j.store.embedding.qdrant.QdrantEmbeddingStore;
+
+
+
+EmbeddingStore embeddingStore =
+
+ QdrantEmbeddingStore.builder()
+
+ // Ensure the collection is configured with the appropriate dimensions
+
+ // of the embedding model.
+
+ // Reference https://qdrant.tech/documentation/concepts/collections/
+
+ .collectionName(""YOUR_COLLECTION_NAME"")
+
+ .host(""YOUR_HOST_URL"")
+
+ // GRPC port of the Qdrant server
+
+ .port(6334)
+
+ .apiKey(""YOUR_API_KEY"")
+
+ .build();
+
+```
+
+
+
+`QdrantEmbeddingStore` supports all the semantic features of Langchain4J.
+
+
+
+## Further Reading
+
+
+
+- You can refer to the [Langchain4J examples](https://github.com/langchain4j/langchain4j-examples/) to get started.
+
+- [Source Code](https://github.com/langchain4j/langchain4j/tree/main/langchain4j-qdrant)
+",documentation/frameworks/langchain4j.md
+"---
+
+title: Langchain
+
+aliases:
+
+ - ../integrations/langchain/
+
+ - /documentation/overview/integrations/langchain/
+
+---
+
+
+
+# Langchain
+
+
+
+Langchain is a library that makes developing Large Language Model-based applications much easier. It unifies the interfaces
+
+to different libraries, including major embedding providers and Qdrant. Using Langchain, you can focus on the business value instead of writing the boilerplate.
+
+
+
+Langchain distributes the Qdrant integration as a partner package.
+
+
+
+It can be installed with pip:
+
+
+
+```bash
+
+pip install langchain-qdrant
+
+```
+
+
+
+The integration supports searching for relevant documents using dense, sparse, and hybrid retrieval.
+
+
+
+Qdrant acts as a vector index that may store the embeddings with the documents used to generate them. There are various ways to use it, but calling `QdrantVectorStore.from_texts` or `QdrantVectorStore.from_documents` is probably the most straightforward way to get started:
+
+
+
+```python
+
+from langchain_qdrant import QdrantVectorStore
+
+from langchain_openai import OpenAIEmbeddings
+
+
+
+embeddings = OpenAIEmbeddings()
+
+
+
+doc_store = QdrantVectorStore.from_texts(
+
+ texts, embeddings, url="""", api_key="""", collection_name=""texts""
+
+)
+
+```
+
+
+
+## Using an existing collection
+
+
+
+To get an instance of `langchain_qdrant.QdrantVectorStore` without loading any new documents or texts, you can use the `QdrantVectorStore.from_existing_collection()` method.
+
+
+
+```python
+
+doc_store = QdrantVectorStore.from_existing_collection(
+
+ embeddings=embeddings,
+
+ collection_name=""my_documents"",
+
+ url="""",
+
+ api_key="""",
+
+)
+
+```
+
+
+
+## Local mode
+
+
+
+The Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things
+
+out and debugging or if you plan to store just a small amount of vectors. The embeddings might be fully kept in memory or
+
+persisted on disk.
+
+
+
+### In-memory
+
+
+
+For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the
+
+client is destroyed - usually at the end of your script/notebook.
+
+
+
+```python
+
+qdrant = QdrantVectorStore.from_documents(
+
+ docs,
+
+ embeddings,
+
+ location="":memory:"", # Local mode with in-memory storage only
+
+ collection_name=""my_documents"",
+
+)
+
+```
+
+
+
+### On-disk storage
+
+
+
+Local mode, without using the Qdrant server, may also store your vectors on disk so they’re persisted between runs.
+
+
+
+```python
+
+qdrant = QdrantVectorStore.from_documents(
+
+ docs,
+
+ embeddings,
+
+ path=""/tmp/local_qdrant"",
+
+ collection_name=""my_documents"",
+
+)
+
+```
+
+
+
+### On-premise server deployment
+
+
+
+No matter if you choose to launch Qdrant locally with [a Docker container](/documentation/guides/installation/), or
+
+select a Kubernetes deployment with [the official Helm chart](https://github.com/qdrant/qdrant-helm), the way you're
+
+going to connect to such an instance will be identical. You'll need to provide a URL pointing to the service.
+
+
+
+```python
+
+url = ""<---qdrant url here --->""
+
+qdrant = QdrantVectorStore.from_documents(
+
+ docs,
+
+ embeddings,
+
+ url=url,
+
+ prefer_grpc=True,
+
+ collection_name=""my_documents"",
+
+)
+
+```
+
+
+
+## Similarity search
+
+
+
+`QdrantVectorStore` supports three modes for similarity search. They can be configured using the `retrieval_mode` parameter when setting up the class.
+
+
+
+- Dense Vector Search (default)
+
+- Sparse Vector Search
+
+- Hybrid Search
+
+
+
+### Dense Vector Search
+
+
+
+To search with only dense vectors,
+
+
+
+- The `retrieval_mode` parameter should be set to `RetrievalMode.DENSE` (the default).
+
+- A [dense embeddings](https://python.langchain.com/v0.2/docs/integrations/text_embedding/) value should be provided for the `embedding` parameter.
+
+
+
+```py
+
+from langchain_qdrant import RetrievalMode
+
+
+
+qdrant = QdrantVectorStore.from_documents(
+
+ docs,
+
+ embedding=embeddings,
+
+ location="":memory:"",
+
+ collection_name=""my_documents"",
+
+ retrieval_mode=RetrievalMode.DENSE,
+
+)
+
+
+
+query = ""What did the president say about Ketanji Brown Jackson""
+
+found_docs = qdrant.similarity_search(query)
+
+```
+
+
+
+### Sparse Vector Search
+
+
+
+To search with only sparse vectors,
+
+
+
+- The `retrieval_mode` parameter should be set to `RetrievalMode.SPARSE`.
+
+- An implementation of the [SparseEmbeddings interface](https://github.com/langchain-ai/langchain/blob/master/libs/partners/qdrant/langchain_qdrant/sparse_embeddings.py) using any sparse embeddings provider has to be provided as value to the `sparse_embedding` parameter.
+
+
+
+The `langchain-qdrant` package provides a [FastEmbed](https://github.com/qdrant/fastembed) based implementation out of the box.
+
+
+
+To use it, install the [FastEmbed package](https://github.com/qdrant/fastembed#-installation).
+
+
+
+```python
+
+from langchain_qdrant import FastEmbedSparse, RetrievalMode
+
+
+
+sparse_embeddings = FastEmbedSparse(model_name=""Qdrant/BM25"")
+
+
+
+qdrant = QdrantVectorStore.from_documents(
+
+ docs,
+
+ sparse_embedding=sparse_embeddings,
+
+ location="":memory:"",
+
+ collection_name=""my_documents"",
+
+ retrieval_mode=RetrievalMode.SPARSE,
+
+)
+
+
+
+query = ""What did the president say about Ketanji Brown Jackson""
+
+found_docs = qdrant.similarity_search(query)
+
+```
+
+
+
+### Hybrid Vector Search
+
+
+
+To perform a hybrid search using dense and sparse vectors with score fusion,
+
+
+
+- The `retrieval_mode` parameter should be set to `RetrievalMode.HYBRID`.
+
+- A [dense embeddings](https://python.langchain.com/v0.2/docs/integrations/text_embedding/) value should be provided for the `embedding` parameter.
+
+- An implementation of the [SparseEmbeddings interface](https://github.com/langchain-ai/langchain/blob/master/libs/partners/qdrant/langchain_qdrant/sparse_embeddings.py) using any sparse embeddings provider has to be provided as value to the `sparse_embedding` parameter.
+
+
+
+```python
+
+from langchain_qdrant import FastEmbedSparse, RetrievalMode
+
+
+
+sparse_embeddings = FastEmbedSparse(model_name=""Qdrant/bm25"")
+
+
+
+qdrant = QdrantVectorStore.from_documents(
+
+ docs,
+
+ embedding=embeddings,
+
+ sparse_embedding=sparse_embeddings,
+
+ location="":memory:"",
+
+ collection_name=""my_documents"",
+
+ retrieval_mode=RetrievalMode.HYBRID,
+
+)
+
+
+
+query = ""What did the president say about Ketanji Brown Jackson""
+
+found_docs = qdrant.similarity_search(query)
+
+```
+
+
+
+Note that if you've added documents with the HYBRID mode, you can switch to any retrieval mode when searching, since both the dense and sparse vectors are available in the collection.
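+
+
+
+If you need the store inside a chain, any `QdrantVectorStore` instance can also be wrapped as a standard LangChain retriever. This is generic LangChain behaviour rather than anything Qdrant-specific; the `k` value below is just an example.
+
+
+
+```python
+
+retriever = qdrant.as_retriever(search_kwargs={""k"": 5})
+
+docs = retriever.invoke(""What did the president say about Ketanji Brown Jackson"")
+
+```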
+
+
+
+## Next steps
+
+
+
+If you'd like to know more about running Qdrant in a Langchain-based application, please read our article
+
+[Question Answering with Langchain and Qdrant without boilerplate](/articles/langchain-integration/). Some more information
+
+might also be found in the [Langchain documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant).
+
+
+
+- [Source Code](https://github.com/langchain-ai/langchain/tree/master/libs%2Fpartners%2Fqdrant)
+",documentation/frameworks/langchain.md
+"---
+
+title: LlamaIndex
+
+aliases:
+
+ - ../integrations/llama-index/
+
+ - /documentation/overview/integrations/llama-index/
+
+---
+
+
+
+# LlamaIndex
+
+
+
+Llama Index acts as an interface between your external data and Large Language Models. So you can bring your
+
+private data and augment LLMs with it. LlamaIndex simplifies data ingestion and indexing, integrating Qdrant as a vector index.
+
+
+
+Installing Llama Index is straightforward if we use pip as a package manager. Qdrant is not installed by default, so we need to
+
+install it separately. The integration of both tools also comes as another package.
+
+
+
+```bash
+
+pip install llama-index llama-index-vector-stores-qdrant
+
+```
+
+
+
+Llama Index requires providing an instance of `QdrantClient`, so it can interact with the Qdrant server.
+
+
+
+```python
+
+from llama_index.core.indices.vector_store.base import VectorStoreIndex
+
+from llama_index.vector_stores.qdrant import QdrantVectorStore
+
+
+
+import qdrant_client
+
+
+
+client = qdrant_client.QdrantClient(
+
+ """",
+
+ api_key="""", # For Qdrant Cloud, None for local instance
+
+)
+
+
+
+vector_store = QdrantVectorStore(client=client, collection_name=""documents"")
+
+index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
+
+
+
+```
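+
+
+
+The snippet above only wires an existing collection into Llama Index. To also ingest documents and ask questions, a typical flow looks roughly like the sketch below; the `./data` directory and the query text are placeholders, and an embedding model and LLM (OpenAI by default) are assumed to be configured.
+
+
+
+```python
+
+from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
+
+
+
+# Read local files and index them into the Qdrant-backed vector store
+
+documents = SimpleDirectoryReader(""./data"").load_data()
+
+storage_context = StorageContext.from_defaults(vector_store=vector_store)
+
+index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
+
+
+
+# Query the index
+
+query_engine = index.as_query_engine()
+
+response = query_engine.query(""What do the documents say about Qdrant?"")
+
+print(response)
+
+```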
+
+
+
+## Further Reading
+
+
+
+- [LlamaIndex Documentation](https://docs.llamaindex.ai/en/stable/examples/vector_stores/QdrantIndexDemo/)
+
+- [Example Notebook](https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/QdrantIndexDemo.ipynb)
+
+- [Source Code](https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/vector_stores/llama-index-vector-stores-qdrant)
+",documentation/frameworks/llama-index.md
+"---
+
+title: DocArray
+
+aliases: [ ../integrations/docarray/ ]
+
+---
+
+
+
+# DocArray
+
+
+
+You can use Qdrant natively in DocArray, where Qdrant serves as a high-performance document store to enable scalable vector search.
+
+
+
+DocArray is a library from Jina AI for nested, unstructured data in transit, including text, image, audio, video, 3D mesh, etc.
+
+It allows deep-learning engineers to efficiently process, embed, search, recommend, store, and transfer the data with a Pythonic API.
+
+
+
+To install DocArray with Qdrant support, run:
+
+
+
+```bash
+
+pip install ""docarray[qdrant]""
+
+```
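+
+
+
+As a quick illustration, a document index backed by Qdrant can be defined with a typed schema and queried by vector. This is a sketch based on the DocArray v2 `QdrantDocumentIndex` API; the schema, embedding size, and host are placeholders, so check the documentation below for the exact options.
+
+
+
+```python
+
+import numpy as np
+
+from docarray import BaseDoc, DocList
+
+from docarray.index import QdrantDocumentIndex
+
+from docarray.typing import NdArray
+
+
+
+class MyDoc(BaseDoc):
+
+    text: str
+
+    embedding: NdArray[384]
+
+
+
+# Connect to a local Qdrant instance and index a few documents
+
+doc_index = QdrantDocumentIndex[MyDoc](host=""localhost"")
+
+doc_index.index(
+
+    DocList[MyDoc](
+
+        MyDoc(text=f""document {i}"", embedding=np.random.rand(384)) for i in range(10)
+
+    )
+
+)
+
+
+
+# Vector search against the embedding field
+
+matches, scores = doc_index.find(np.random.rand(384), search_field=""embedding"", limit=3)
+
+```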
+
+
+
+## Further Reading
+
+
+
+- [DocArray documentations](https://docarray.jina.ai/advanced/document-store/qdrant/).
+
+- [Source Code](https://github.com/docarray/docarray/blob/main/docarray/index/backends/qdrant.py)
+",documentation/frameworks/docarray.md
+"---
+
+title: Pandas-AI
+
+---
+
+
+
+# Pandas-AI
+
+
+
+Pandas-AI is a Python library that uses a generative AI model to interpret natural language queries and translate them into Python code to interact with pandas data frames and return the final results to the user.
+
+
+
+## Installation
+
+
+
+```console
+
+pip install pandasai[qdrant]
+
+```
+
+
+
+## Usage
+
+
+
+You can begin a conversation by instantiating an `Agent` instance based on your Pandas data frame. The default Pandas-AI LLM requires an [API key](https://pandabi.ai).
+
+
+
+You can find the list of all supported LLMs [here](https://docs.pandas-ai.com/en/latest/LLMs/llms/)
+
+
+
+```python
+
+import os
+
+import pandas as pd
+
+from pandasai import Agent
+
+
+
+# Sample DataFrame
+
+sales_by_country = pd.DataFrame(
+
+ {
+
+ ""country"": [
+
+ ""United States"",
+
+ ""United Kingdom"",
+
+ ""France"",
+
+ ""Germany"",
+
+ ""Italy"",
+
+ ""Spain"",
+
+ ""Canada"",
+
+ ""Australia"",
+
+ ""Japan"",
+
+ ""China"",
+
+ ],
+
+ ""sales"": [5000, 3200, 2900, 4100, 2300, 2100, 2500, 2600, 4500, 7000],
+
+ }
+
+)
+
+
+
+os.environ[""PANDASAI_API_KEY""] = ""YOUR_API_KEY""
+
+
+
+agent = Agent(sales_by_country)
+
+agent.chat(""Which are the top 5 countries by sales?"")
+
+# OUTPUT: China, United States, Japan, Germany, Australia
+
+```
+
+
+
+## Qdrant support
+
+
+
+You can train Pandas-AI to understand your data better and improve the quality of the results.
+
+
+
+Qdrant can be configured as a vector store to ingest training data and retrieve semantically relevant content.
+
+
+
+```python
+
+from pandasai.ee.vectorstores.qdrant import Qdrant
+
+
+
+qdrant = Qdrant(
+
+ collection_name="""",
+
+ embedding_model=""sentence-transformers/all-MiniLM-L6-v2"",
+
+ url=""http://localhost:6333"",
+
+ grpc_port=6334,
+
+ prefer_grpc=True
+
+)
+
+
+
+agent = Agent(df, vector_store=qdrant)
+
+
+
+# Train with custom information
+
+agent.train(docs=""The fiscal year starts in April"")
+
+
+
+# Train the q/a pairs of code snippets
+
+query = ""What are the total sales for the current fiscal year?""
+
+response = """"""
+
+import pandas as pd
+
+
+
+df = dfs[0]
+
+
+
+# Calculate the total sales for the current fiscal year
+
+total_sales = df[df['date'] >= pd.to_datetime('today').replace(month=4, day=1)]['sales'].sum()
+
+result = { ""type"": ""number"", ""value"": total_sales }
+
+""""""
+
+agent.train(queries=[query], codes=[response])
+
+
+
+# The model will use the information provided in the training to generate a response
+
+
+
+```
+
+
+
+## Further reading
+
+
+
+- [Getting Started with Pandas-AI](https://pandasai-docs.readthedocs.io/en/latest/getting-started/)
+
+- [Pandas-AI Reference](https://pandasai-docs.readthedocs.io/en/latest/)
+
+- [Source Code](https://github.com/Sinaptik-AI/pandas-ai/blob/main/pandasai/ee/vectorstores/qdrant.py)
+",documentation/frameworks/pandas-ai.md
+"---
+
+title: MemGPT
+
+---
+
+
+
+# MemGPT
+
+
+
+[MemGPT](https://memgpt.ai/) is a system that enables LLMs to manage their own memory and overcome limited context windows to:
+
+
+
+- Create perpetual chatbots that learn about you and change their personalities over time.
+
+- Create perpetual chatbots that can interface with large data stores.
+
+
+
+Qdrant is available as a storage backend in MemGPT for storing and semantically retrieving data.
+
+
+
+## Usage
+
+
+
+#### Installation
+
+
+
+To install the required dependencies, install `pymemgpt` with the `qdrant` extra.
+
+
+
+```sh
+
+pip install 'pymemgpt[qdrant]'
+
+```
+
+
+
+You can configure MemGPT to use either a Qdrant server or an in-memory instance with the `memgpt configure` command.
+
+
+
+#### Configuring the Qdrant server
+
+
+
+When you run `memgpt configure`, go through the prompts as described in the [MemGPT configuration documentation](https://memgpt.readme.io/docs/config).
+
+After you address several `memgpt` questions, you come to the following `memgpt` prompts:
+
+
+
+```console
+
+? Select storage backend for archival data: qdrant
+
+? Select Qdrant backend: server
+
+? Enter the Qdrant instance URI (Default: localhost:6333): https://xyz-example.eu-central.aws.cloud.qdrant.io
+
+```
+
+
+
+You can set an API key for authentication using the `QDRANT_API_KEY` environment variable.
+
+
+
+#### Configuring an in-memory instance
+
+
+
+```console
+
+? Select storage backend for archival data: qdrant
+
+? Select Qdrant backend: local
+
+```
+
+
+
+The data is persisted at the default MemGPT storage directory.
+
+
+
+## Further Reading
+
+
+
+- [MemGPT Examples](https://github.com/cpacker/MemGPT/tree/main/examples)
+
+- [MemGPT Documentation](https://memgpt.readme.io/docs/index).
+",documentation/frameworks/memgpt.md
+"---
+
+title: Vanna.AI
+
+---
+
+
+
+# Vanna.AI
+
+
+
+[Vanna](https://vanna.ai/) is a Python package that uses retrieval augmentation to help you generate accurate SQL queries for your database using LLMs.
+
+
+
+Vanna works in two easy steps - train a RAG ""model"" on your data, and then ask questions which will return SQL queries that can be set up to automatically run on your database.
+
+
+
+Qdrant is available as a supported vector store for ingesting and retrieving your RAG data.
+
+
+
+## Installation
+
+
+
+```console
+
+pip install 'vanna[qdrant]'
+
+```
+
+
+
+## Setup
+
+
+
+You can set up a Vanna agent using Qdrant as your vector store and any of the [LLMs supported by Vanna](https://vanna.ai/docs/postgres-openai-vanna-vannadb/).
+
+
+
+We'll use OpenAI for demonstration.
+
+
+
+```python
+
+from vanna.openai import OpenAI_Chat
+
+from vanna.qdrant import Qdrant_VectorStore
+
+from qdrant_client import QdrantClient
+
+
+
+class MyVanna(Qdrant_VectorStore, OpenAI_Chat):
+
+ def __init__(self, config=None):
+
+ Qdrant_VectorStore.__init__(self, config=config)
+
+ OpenAI_Chat.__init__(self, config=config)
+
+
+
+vn = MyVanna(config={
+
+ 'client': QdrantClient(...),
+
+ 'api_key': 'sk-...',
+
+ 'model': 'gpt-4-...',
+
+})
+
+```
+
+
+
+## Usage
+
+
+
+Once a Vanna agent is instantiated, you can connect it to [any SQL database](https://vanna.ai/docs/FAQ/#can-i-use-this-with-my-sql-database) of your choosing.
+
+
+
+For example, Postgres.
+
+
+
+```python
+
+vn.connect_to_postgres(host='my-host', dbname='my-dbname', user='my-user', password='my-password', port='my-port')
+
+```
+
+
+
+You can now train and begin querying your database with SQL.
+
+
+
+```python
+
+# You can add DDL statements that specify table names, column names, types, and potentially relationships
+
+vn.train(ddl=""""""
+
+ CREATE TABLE IF NOT EXISTS my-table (
+
+ id INT PRIMARY KEY,
+
+ name VARCHAR(100),
+
+ age INT
+
+ )
+
+"""""")
+
+
+
+# You can add documentation about your business terminology or definitions.
+
+vn.train(documentation=""Our business defines OTIF score as the percentage of orders that are delivered on time and in full"")
+
+
+
+# You can also add SQL queries to your training data. This is useful if you have some queries already laying around.
+
+vn.train(sql=""SELECT * FROM my-table WHERE name = 'John Doe'"")
+
+
+
+# You can remove training data if there's obsolete/incorrect information.
+
+vn.remove_training_data(id='1-ddl')
+
+
+
+# Whenever you ask a new question, Vanna will retrieve 10 most relevant pieces of training data and use it as part of the LLM prompt to generate the SQL.
+
+
+
+vn.ask(question="""")
+
+```
+
+
+
+## Further reading
+
+
+
+- [Getting started with Vanna.AI](https://vanna.ai/docs/app/)
+
+- [Vanna.AI documentation](https://vanna.ai/docs/)
+
+- [Source Code](https://github.com/vanna-ai/vanna/tree/main/src/vanna/qdrant)
+",documentation/frameworks/vanna-ai.md
+"---
+
+title: Spring AI
+
+---
+
+
+
+# Spring AI
+
+
+
+[Spring AI](https://docs.spring.io/spring-ai/reference/) is a Java framework that provides a [Spring-friendly](https://spring.io/) API and abstractions for developing AI applications.
+
+
+
+Qdrant is available as a supported vector database for use within your Spring AI projects.
+
+
+
+## Installation
+
+
+
+You can find the Spring AI installation instructions [here](https://docs.spring.io/spring-ai/reference/getting-started.html).
+
+
+
+Add the Qdrant boot starter package.
+
+
+
+```xml
+
+<dependency>
+
+    <groupId>org.springframework.ai</groupId>
+
+    <artifactId>spring-ai-qdrant-store-spring-boot-starter</artifactId>
+
+</dependency>
+
+```
+
+
+
+## Usage
+
+
+
+Configure Qdrant with Spring Boot’s `application.properties`.
+
+
+
+```
+
+spring.ai.vectorstore.qdrant.host=
+
+spring.ai.vectorstore.qdrant.port=
+
+spring.ai.vectorstore.qdrant.api-key=
+
+spring.ai.vectorstore.qdrant.collection-name=
+
+```
+
+
+
+Learn more about these options in the [configuration reference](https://docs.spring.io/spring-ai/reference/api/vectordbs/qdrant.html#qdrant-vectorstore-properties).
+
+
+
+Or you can set up the Qdrant vector store with the `QdrantVectorStoreConfig` options.
+
+
+
+```java
+
+@Bean
+
+public QdrantVectorStoreConfig qdrantVectorStoreConfig() {
+
+
+
+ return QdrantVectorStoreConfig.builder()
+
+ .withHost("""")
+
+ .withPort()
+
+ .withCollectionName("""")
+
+ .withApiKey("""")
+
+ .build();
+
+}
+
+```
+
+
+
+Build the vector store using the config and any of the supported [Spring AI embedding providers](https://docs.spring.io/spring-ai/reference/api/embeddings.html#available-implementations).
+
+
+
+```java
+
+@Bean
+
+public VectorStore vectorStore(QdrantVectorStoreConfig config, EmbeddingClient embeddingClient) {
+
+ return new QdrantVectorStore(config, embeddingClient);
+
+}
+
+```
+
+
+
+You can now use the `VectorStore` instance backed by Qdrant as a vector store in the Spring AI APIs.
+
+
+
+
+
+
+
+## 📚 Further Reading
+
+
+
+- Spring AI [Qdrant reference](https://docs.spring.io/spring-ai/reference/api/vectordbs/qdrant.html)
+
+- Spring AI [API reference](https://docs.spring.io/spring-ai/reference/index.html)
+
+- [Source Code](https://github.com/spring-projects/spring-ai/tree/main/vector-stores/spring-ai-qdrant-store)
+",documentation/frameworks/spring-ai.md
+"---
+
+title: Autogen
+
+aliases: [ ../integrations/autogen/ ]
+
+---
+
+
+
+# Microsoft Autogen
+
+
+
+[AutoGen](https://github.com/microsoft/autogen) is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools.
+
+
+
+- Multi-agent conversations: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM.
+
+- Customization: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ.
+
+- Human participation: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed.
+
+
+
+With the Autogen-Qdrant integration, you can use the `QdrantRetrieveUserProxyAgent` from autogen to build retrieval augmented generation (RAG) services with ease.
+
+
+
+## Installation
+
+
+
+```bash
+
+pip install ""pyautogen[retrievechat]"" ""qdrant_client[fastembed]""
+
+```
+
+
+
+## Usage
+
+
+
+Below is a demo application that generates code based on context without human feedback.
+
+
+
+#### Set your API Endpoint
+
+
+
+The `config_list_from_json` function loads a list of configurations from an environment variable or a JSON file.
+
+
+
+```python
+
+import autogen
+
+from autogen import config_list_from_json
+
+from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent
+
+from autogen.agentchat.contrib.qdrant_retrieve_user_proxy_agent import QdrantRetrieveUserProxyAgent
+
+from qdrant_client import QdrantClient
+
+
+
+config_list = config_list_from_json(
+
+ env_or_file=""OAI_CONFIG_LIST"",
+
+ file_location="".""
+
+)
+
+```
+
+
+
+It first looks for the environment variable ""OAI_CONFIG_LIST"" which needs to be a valid JSON string. If that variable is not found, it then looks for a JSON file named ""OAI_CONFIG_LIST"". The file structure sample can be found [here](https://github.com/microsoft/autogen/blob/main/OAI_CONFIG_LIST_sample).
+
+
+
+#### Construct agents for RetrieveChat
+
+
+
+We start by initializing the RetrieveAssistantAgent and QdrantRetrieveUserProxyAgent. The system message needs to be set to ""You are a helpful assistant."" for RetrieveAssistantAgent. The detailed instructions are given in the user message.
+
+
+
+```python
+
+# Print the generation steps
+
+autogen.ChatCompletion.start_logging()
+
+
+
+# 1. create a RetrieveAssistantAgent instance named ""assistant""
+
+assistant = RetrieveAssistantAgent(
+
+ name=""assistant"",
+
+ system_message=""You are a helpful assistant."",
+
+ llm_config={
+
+ ""request_timeout"": 600,
+
+ ""seed"": 42,
+
+ ""config_list"": config_list,
+
+ },
+
+)
+
+
+
+# 2. create a QdrantRetrieveUserProxyAgent instance named ""qdrantagent""
+
+# By default, the human_input_mode is ""ALWAYS"", i.e. the agent will ask for human input at every step.
+
+# `docs_path` is the path to the docs directory.
+
+# `task` indicates the kind of task we're working on.
+
+# `chunk_token_size` is the chunk token size for the retrieve chat.
+
+# We use an in-memory QdrantClient instance here. Not recommended for production.
+
+
+
+rag_proxy_agent = QdrantRetrieveUserProxyAgent(
+
+ name=""qdrantagent"",
+
+ human_input_mode=""NEVER"",
+
+ max_consecutive_auto_reply=10,
+
+ retrieve_config={
+
+ ""task"": ""code"",
+
+ ""docs_path"": ""./path/to/docs"",
+
+ ""chunk_token_size"": 2000,
+
+ ""model"": config_list[0][""model""],
+
+ ""client"": QdrantClient("":memory:""),
+
+ ""embedding_model"": ""BAAI/bge-small-en-v1.5"",
+
+ },
+
+)
+
+```
+
+
+
+#### Run the retriever service
+
+
+
+```python
+
+# Always reset the assistant before starting a new conversation.
+
+assistant.reset()
+
+
+
+# We use the ragproxyagent to generate a prompt to be sent to the assistant as the initial message.
+
+# The assistant receives the message and generates a response. The response will be sent back to the ragproxyagent for processing.
+
+# The conversation continues until the termination condition is met. In RetrieveChat, with no human in the loop, the termination condition is that no code block is detected.
+
+
+
+# The query used below is for demonstration. It should usually be related to the docs made available to the agent
+
+code_problem = ""How can I use FLAML to perform a classification task?""
+
+rag_proxy_agent.initiate_chat(assistant, problem=code_problem)
+
+```
+
+
+
+## Next steps
+
+
+
+- Autogen [examples](https://microsoft.github.io/autogen/docs/Examples)
+
+- AutoGen [documentation](https://microsoft.github.io/autogen/)
+
+- [Source Code](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/contrib/qdrant_retrieve_user_proxy_agent.py)
+",documentation/frameworks/autogen.md
+"---
+
+title: txtai
+
+aliases: [ ../integrations/txtai/ ]
+
+---
+
+
+
+# txtai
+
+
+
+Qdrant can also be used as an embeddings backend in [txtai](https://neuml.github.io/txtai/) semantic applications.
+
+
+
+txtai simplifies building AI-powered semantic search applications using Transformers. It leverages the neural embeddings and their
+
+properties to encode high-dimensional data in a lower-dimensional space and allows you to find similar objects based on their embeddings'
+
+proximity.
+
+
+
+Qdrant is not a built-in txtai backend and requires installing an additional dependency:
+
+
+
+```bash
+
+pip install qdrant-txtai
+
+```
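+
+
+
+A minimal configuration sketch is shown below. It assumes a local Qdrant instance and uses the `qdrant_txtai.ann.qdrant.Qdrant` backend path from the qdrant-txtai project; adjust the embedding model and connection settings for your setup.
+
+
+
+```python
+
+from txtai.embeddings import Embeddings
+
+
+
+embeddings = Embeddings(
+
+    {
+
+        ""path"": ""sentence-transformers/all-MiniLM-L6-v2"",
+
+        ""backend"": ""qdrant_txtai.ann.qdrant.Qdrant"",
+
+    }
+
+)
+
+
+
+# Index (id, text, tags) tuples and run a semantic search
+
+embeddings.index([(0, ""Qdrant is a vector database"", None), (1, ""txtai builds semantic search"", None)])
+
+print(embeddings.search(""vector search engine"", 1))
+
+```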
+
+
+
+More examples and further information can be found in the [qdrant-txtai repository](https://github.com/qdrant/qdrant-txtai).
+",documentation/frameworks/txtai.md
+"---
+
+title: Frameworks
+
+weight: 15
+
+---
+
+
+
+## Framework Integrations
+
+
+
+| Framework | Description |
+
+| ------------------------------------- | ---------------------------------------------------------------------------------------------------- |
+
+| [AutoGen](./autogen/) | Framework from Microsoft building LLM applications using multiple conversational agents. |
+
+| [Canopy](./canopy/) | Framework from Pinecone for building RAG applications using LLMs and knowledge bases. |
+
+| [Cheshire Cat](./cheshire-cat/) | Framework to create personalized AI assistants using custom data. |
+
+| [DocArray](./docarray/) | Python library for managing data in multi-modal AI applications. |
+
+| [DSPy](./dspy/) | Framework for algorithmically optimizing LM prompts and weights. |
+
+| [Fifty-One](./fifty-one/) | Toolkit for building high-quality datasets and computer vision models. |
+
+| [Genkit](./genkit/) | Framework to build, deploy, and monitor production-ready AI-powered apps. |
+
+| [Haystack](./haystack/) | LLM orchestration framework to build customizable, production-ready LLM applications. |
+
+| [Langchain](./langchain/) | Python framework for building context-aware, reasoning applications using LLMs. |
+
+| [Langchain-Go](./langchain-go/) | Go framework for building context-aware, reasoning applications using LLMs. |
+
+| [Langchain4j](./langchain4j/) | Java framework for building context-aware, reasoning applications using LLMs. |
+
+| [LlamaIndex](./llama-index/) | A data framework for building LLM applications with modular integrations. |
+
+| [MemGPT](./memgpt/) | System to build LLM agents with long term memory & custom tools |
+
+| [Pandas-AI](./pandas-ai/) | Python library to query/visualize your data (CSV, XLSX, PostgreSQL, etc.) in natural language |
+
+| [Semantic Router](./semantic-router/) | Python library to build a decision-making layer for AI applications using vector search. |
+
+| [Spring AI](./spring-ai/) | Java AI framework for building with Spring design principles such as portability and modular design. |
+
+| [Testcontainers](./testcontainers/) | Set of frameworks for running containerized dependencies in tests. |
+
+| [txtai](./txtai/) | Python library for semantic search, LLM orchestration and language model workflows. |
+
+| [Vanna AI](./vanna-ai/) | Python RAG framework for SQL generation and querying. |
+",documentation/frameworks/_index.md
+"---
+
+title: Haystack
+
+aliases:
+
+ - ../integrations/haystack/
+
+ - /documentation/overview/integrations/haystack/
+
+---
+
+
+
+# Haystack
+
+
+
+[Haystack](https://haystack.deepset.ai/) serves as a comprehensive NLP framework, offering a modular methodology for constructing
+
+cutting-edge generative AI, QA, and semantic knowledge base search systems. A critical element in contemporary NLP systems is an
+
+efficient database for storing and retrieving extensive text data. Vector databases excel in this role, as they house vector
+
+representations of text and implement effective methods for swift retrieval. Thus, we are happy to announce the integration
+
+with Haystack - `QdrantDocumentStore`. This document store is unique, as it is maintained externally by the Qdrant team.
+
+
+
+The new document store comes as a separate package and can be updated independently of Haystack:
+
+
+
+```bash
+
+pip install qdrant-haystack
+
+```
+
+
+
+`QdrantDocumentStore` supports [all the configuration properties](/documentation/collections/#create-collection) available in
+
+the Qdrant Python client. If you want to customize the default configuration of the collection used under the hood, you can
+
+provide those settings when you create an instance of the `QdrantDocumentStore`. For example, if you'd like to enable
+
+Scalar Quantization, you can do it in the following way:
+
+
+
+```python
+
+from qdrant_haystack.document_stores import QdrantDocumentStore
+
+from qdrant_client import models
+
+
+
+document_store = QdrantDocumentStore(
+
+ "":memory:"",
+
+ index=""Document"",
+
+ embedding_dim=512,
+
+ recreate_index=True,
+
+ quantization_config=models.ScalarQuantization(
+
+ scalar=models.ScalarQuantizationConfig(
+
+ type=models.ScalarType.INT8,
+
+ quantile=0.99,
+
+ always_ram=True,
+
+ ),
+
+ ),
+
+)
+
+```
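+
+
+
+With the document store in place, a typical flow writes documents, embeds them with a retriever, and queries the store. The sketch below assumes the classic Haystack 1.x `EmbeddingRetriever` API and a 512-dimensional embedding model to match the `embedding_dim` above; adapt both to your Haystack version and model.
+
+
+
+```python
+
+from haystack import Document
+
+from haystack.nodes import EmbeddingRetriever
+
+
+
+document_store.write_documents(
+
+    [
+
+        Document(content=""Qdrant is a vector database written in Rust""),
+
+        Document(content=""Haystack is a framework for building search systems""),
+
+    ]
+
+)
+
+
+
+# The model dimensionality must match the embedding_dim of the store (512 here)
+
+retriever = EmbeddingRetriever(
+
+    document_store=document_store,
+
+    embedding_model=""sentence-transformers/distiluse-base-multilingual-cased-v1"",
+
+)
+
+document_store.update_embeddings(retriever)
+
+
+
+results = retriever.retrieve(query=""What is Qdrant?"", top_k=1)
+
+```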
+
+
+
+## Further Reading
+
+
+
+- [Haystack Documentation](https://haystack.deepset.ai/integrations/qdrant-document-store)
+
+- [Source Code](https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/qdrant)
+",documentation/frameworks/haystack.md
+"---
+
+title: Cheshire Cat
+
+aliases: [ ../integrations/cheshire-cat/ ]
+
+---
+
+
+
+# Cheshire Cat
+
+
+
+[Cheshire Cat](https://cheshirecat.ai/) is an open-source framework that allows you to develop intelligent agents on top of many Large Language Models (LLM). You can develop your custom AI architecture to assist you in a wide range of tasks.
+
+
+
+![Cheshire cat](/documentation/frameworks/cheshire-cat/cat.jpg)
+
+
+
+## Cheshire Cat and Qdrant
+
+
+
+Cheshire Cat uses Qdrant as the default [Vector Memory](https://cheshire-cat-ai.github.io/docs/faq/llm-concepts/vector-memory/) for ingesting and retrieving documents.
+
+
+
+```
+
+# Decide host and port for your Cat. Default will be localhost:1865
+
+CORE_HOST=localhost
+
+CORE_PORT=1865
+
+
+
+# Qdrant server
+
+# QDRANT_HOST=localhost
+
+# QDRANT_PORT=6333
+
+```
+
+
+
+Cheshire Cat takes great advantage of the following features of Qdrant:
+
+
+
+* [Collection Aliases](../../concepts/collections/#collection-aliases) to manage the change from one embedder to another.
+
+* [Quantization](../../guides/quantization/) to obtain a good balance between speed, memory usage and quality of the results.
+
+* [Snapshots](../../concepts/snapshots/) to not miss any information.
+
+* [Community](https://discord.com/invite/tdtYvXjC4h)
+
+
+
+![RAG Pipeline](/documentation/frameworks/cheshire-cat/stregatto.jpg)
+
+
+
+## How to use the Cheshire Cat
+
+
+
+### Requirements
+
+
+
+To run the Cheshire Cat, you need to have [Docker](https://docs.docker.com/engine/install/) and [docker-compose](https://docs.docker.com/compose/install/) already installed on your system.
+
+
+
+```shell
+
+docker run --rm -it -p 1865:80 ghcr.io/cheshire-cat-ai/core:latest
+
+```
+
+
+
+* Chat with the Cheshire Cat on [localhost:1865/admin](http://localhost:1865/admin).
+
+* You can also interact via REST API and try out the endpoints on [localhost:1865/docs](http://localhost:1865/docs)
+
+
+
+Check the [instructions on github](https://github.com/cheshire-cat-ai/core/blob/main/README.md) for a more comprehensive quick start.
+
+
+
+### First configuration of the LLM
+
+
+
+* Open the Admin Portal in your browser at [localhost:1865/admin](http://localhost:1865/admin).
+
+* Configure the LLM in the `Settings` tab.
+
+* If you don't explicitly choose it using the `Settings` tab, the Embedder follows the LLM.
+
+
+
+## Next steps
+
+
+
+For more information, refer to the Cheshire Cat [documentation](https://cheshire-cat-ai.github.io/docs/) and [blog](https://cheshirecat.ai/blog/).
+
+
+
+* [Getting started](https://cheshirecat.ai/hello-world/)
+
+* [How the Cat works](https://cheshirecat.ai/how-the-cat-works/)
+
+* [Write Your First Plugin](https://cheshirecat.ai/write-your-first-plugin/)
+
+* [Cheshire Cat's use of Qdrant - Vector Space](https://cheshirecat.ai/dont-get-lost-in-vector-space/)
+
+* [Cheshire Cat's use of Qdrant - Aliases](https://cheshirecat.ai/the-drunken-cat-effect/)
+
+* [Discord Community](https://discord.com/invite/bHX5sNFCYU)
+",documentation/frameworks/cheshire-cat.md
+"---
+
+title: Understanding Vector Search in Qdrant
+
+weight: 1
+
+social_preview_image: /docs/gettingstarted/vector-social.png
+
+---
+
+
+
+# How Does Vector Search Work in Qdrant?
+
+
+
+
+
+
+
+
+
+
+
+If you are still trying to figure out how vector search works, please read ahead. This document describes how vector search is used, covers Qdrant's place in the larger ecosystem, and outlines how you can use Qdrant to augment your existing projects.
+
+
+
+For those who want to start writing code right away, visit our [Complete Beginners tutorial](/documentation/tutorials/search-beginners/) to build a search engine in 5-15 minutes.
+
+
+
+## A Brief History of Search
+
+
+
+Human memory is unreliable. Thus, as long as we have been trying to collect ‘knowledge’ in written form, we had to figure out how to search for relevant content without rereading the same books repeatedly. That’s why some brilliant minds introduced the inverted index. In the simplest form, it’s an appendix to a book, typically put at its end, with a list of the essential terms and links to the pages they occur on. Terms are put in alphabetical order. Back in the day, that was a manually crafted list requiring lots of effort to prepare. Once digitalization started, it became a lot easier, but still, we kept the same general principles. That worked then, and it still does.
+
+
+
+If you are looking for a specific topic in a particular book, you can try to find a related phrase and quickly get to the correct page. Of course, assuming you know the proper term. If you don’t, you must try and fail several times or find somebody else to help you form the correct query.
+
+
+
+{{< figure src=/docs/gettingstarted/inverted-index.png caption=""A simplified version of the inverted index."" >}}
+
+
+
+Time passed, and we haven’t had much change in that area for quite a long time. But our textual data collection started to grow at a greater pace. So we also started building up many processes around those inverted indexes. For example, we allowed our users to provide many words and started splitting them into pieces. That allowed finding some documents which do not necessarily contain all the query words, but possibly part of them. We also started converting words into their root forms to cover more cases, removing stopwords, etc. Effectively we were becoming more and more user-friendly. Still, the idea behind the whole process is derived from the most straightforward keyword-based search known since the Middle Ages, with some tweaks.
+
+
+
+{{< figure src=/docs/gettingstarted/tokenization.png caption=""The process of tokenization with additional stopword removal and conversion to the root form of a word."" >}}
+
+
+
+Technically speaking, we encode the documents and queries into so-called sparse vectors where each position has a corresponding word from the whole dictionary. If the input text contains a specific word, it gets a non-zero value at that position. But in reality, none of the texts will contain more than hundreds of different words. So the majority of vectors will have thousands of zeros and a few non-zero values. That’s why we call them sparse. And they might be already used to calculate some word-based similarity by finding the documents which have the biggest overlap.
+
+
+
+{{< figure src=/docs/gettingstarted/query.png caption=""An example of a query vectorized to sparse format."" >}}
+
+
+
+Sparse vectors have a relatively **high dimensionality**, equal to the size of the dictionary. The dictionary is obtained automatically from the input data, so if we have a vector, we are able to partially reconstruct the words used in the text that created it.
+
+
+
+## The Tower of Babel
+
+
+
+Every once in a while, when we discover new problems with inverted indexes, we come up with a new heuristic to tackle it, at least to some extent. Once we realized that people might describe the same concept with different words, we started building lists of synonyms to convert the query to a normalized form. But that won’t work for the cases we didn’t foresee. Still, we need to craft and maintain our dictionaries manually, so they can support the language that changes over time. Another difficult issue comes to light with multilingual scenarios. Old methods require setting up separate pipelines and keeping humans in the loop to maintain the quality.
+
+
+
+{{< figure src=/docs/gettingstarted/babel.jpg caption=""The Tower of Babel, Pieter Bruegel."" >}}
+
+
+
+## The Representation Revolution
+
+
+
+The latest research in Machine Learning for NLP is heavily focused on training Deep Language Models. In this process, the neural network takes a large corpus of text as input and creates a mathematical representation of the words in the form of vectors. These vectors are created in such a way that words with similar meanings and occurring in similar contexts are grouped together and represented by similar vectors. And we can also take, for example, an average of all the word vectors to create the vector for a whole text (e.g query, sentence, or paragraph).
+
+
+
+![deep neural](/docs/gettingstarted/deep-neural.png)
+
+
+
+We can take those **dense vectors** produced by the network and use them as a **different data representation**. They are dense because neural networks will rarely produce zeros at any position. In contrast to sparse ones, they have a relatively low dimensionality: hundreds or a few thousand dimensions only. Unfortunately, we can no longer inspect the content of a document just by looking at its vector, because the dimensions no longer represent the presence of specific words.
+
+
+
+Dense vectors can capture the meaning, not the words used in a text. That being said, **Large Language Models can automatically handle synonyms**. Moreover, since those neural networks might have been trained with multilingual corpora, they translate the same sentence, written in different languages, to similar vector representations, also called **embeddings**. We can then compare them to find similar pieces of text by calculating the distance to other vectors in our database.
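+
+
+
+As a tiny illustration of what ""calculating the distance"" means in practice, cosine similarity between two embeddings is just a normalized dot product; the toy vectors below stand in for real model outputs.
+
+
+
+```python
+
+import numpy as np
+
+
+
+def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
+
+    # Dot product of the vectors divided by the product of their norms
+
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+
+
+query = np.array([0.2, 0.9, 0.1])
+
+document = np.array([0.25, 0.8, 0.05])
+
+print(cosine_similarity(query, document))  # close to 1.0 for similar meanings
+
+```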
+
+
+
+{{< figure src=/docs/gettingstarted/input.png caption=""Input queries contain different words, but they are still converted into similar vector representations, because the neural encoder can capture the meaning of the sentences. That feature can capture synonyms but also different languages."" >}}
+
+
+
+**Vector search** is a process of finding similar objects based on their embeddings similarity. The good thing is, you don’t have to design and train your neural network on your own. Many pre-trained models are available, either on **HuggingFace** or by using libraries like [SentenceTransformers](https://www.sbert.net/?ref=hackernoon.com). If you, however, prefer not to get your hands dirty with neural models, you can also create the embeddings with SaaS tools, like [co.embed API](https://docs.cohere.com/reference/embed?ref=hackernoon.com).
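+
+As a rough illustration of how little code this takes, here is a minimal sketch using the SentenceTransformers library; the model name is just one of many publicly available pre-trained models:
+
+```python
+from sentence_transformers import SentenceTransformer
+
+# Any pre-trained sentence embedding model will do; this one is small and popular.
+model = SentenceTransformer('all-MiniLM-L6-v2')
+
+documents = ['The best vector database', 'An open-source neural search engine']
+query = 'Which tool should I use for similarity search?'
+
+# Both documents and queries are converted into dense vectors of the same size.
+document_embeddings = model.encode(documents)
+query_embedding = model.encode(query)
+
+print(document_embeddings.shape)  # (2, 384) for this particular model
+```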
+
+
+
+## Why Qdrant?
+
+
+
+The challenge with vector search arises when we need to find similar documents in a big set of objects. If we want to find the closest examples, the naive approach would require calculating the distance to every document. That might work with dozens or even hundreds of examples but may become a bottleneck if we have more than that. When we work with relational data, we set up database indexes to speed things up and avoid full table scans. And the same is true for vector search. Qdrant is a fully-fledged vector database that speeds up the search process by using a graph-like structure to find the closest objects in sublinear time. So you don’t calculate the distance to every object from the database, but only to a selected set of candidates.
+
+
+
+{{< figure src=/docs/gettingstarted/vector-search.png caption=""Vector search with Qdrant. Thanks to HNSW graph we are able to compare the distance to some of the objects from the database, not to all of them."" >}}
+
+
+
+Doing semantic search at scale — which is what we often call vector search performed on texts — requires a specialized tool to do it effectively, and Qdrant is exactly that kind of tool.
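+
+As a hedged sketch of what that looks like in practice — assuming a local Qdrant instance and a collection named `documents` that already stores the embeddings — a similarity query with the Python client boils down to a single call:
+
+```python
+from qdrant_client import QdrantClient
+
+client = QdrantClient(url='http://localhost:6333')
+
+# In a real application the query vector comes from the same neural encoder
+# that was used to embed the documents; here it is just a placeholder.
+query_embedding = [0.1] * 384
+
+hits = client.search(
+    collection_name='documents',
+    query_vector=query_embedding,
+    limit=5,  # return the 5 closest points instead of scanning the whole collection
+)
+for hit in hits:
+    print(hit.id, hit.score)
+```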
+
+
+
+## Next Steps
+
+
+
+Vector search is an exciting alternative to sparse methods. It solves the issues we had with the keyword-based search without needing to maintain lots of heuristics manually. It requires an additional component, a neural encoder, to convert text into vectors.
+
+
+
+[**Tutorial 1 - Qdrant for Complete Beginners**](/documentation/tutorials/search-beginners/)
+
+Despite its complicated background, vector search is extraordinarily simple to set up. With Qdrant, you can have a search engine up and running in five minutes. Our [Complete Beginners tutorial](../../tutorials/search-beginners/) will show you how.
+
+
+
+[**Tutorial 2 - Question and Answer System**](/articles/qa-with-cohere-and-qdrant/)
+
+You can also use SaaS tools to generate the embeddings and avoid building your own model. Setting up a vector search project with Qdrant Cloud and the Cohere co.embed API is fairly easy if you follow the [Question and Answer system tutorial](/articles/qa-with-cohere-and-qdrant/).
+
+
+
+There is another exciting thing about vector search. You can search for any kind of data as long as there is a neural network that would vectorize your data type. Do you think about a reverse image search? That’s also possible with vector embeddings.
+
+
+
+
+
+
+
+
+",documentation/overview/vector-search.md
+"---
+
+title: What is Qdrant?
+
+weight: 3
+
+aliases:
+
+ - overview
+
+---
+
+
+
+# Introduction
+
+
+
+Vector databases are a relatively new way of interacting with abstract data representations
+
+derived from opaque machine learning models such as deep learning architectures. These
+
+representations are often called vectors or embeddings and they are a compressed version of
+
+the data used to train a machine learning model to accomplish a task like sentiment analysis,
+
+speech recognition, object detection, and many others.
+
+
+
+These new databases shine in many applications like [semantic search](https://en.wikipedia.org/wiki/Semantic_search)
+
+and [recommendation systems](https://en.wikipedia.org/wiki/Recommender_system), and here, we'll
+
+learn about one of the most popular and fastest growing vector databases in the market, [Qdrant](https://github.com/qdrant/qdrant).
+
+
+
+## What is Qdrant?
+
+
+
+[Qdrant](https://github.com/qdrant/qdrant) ""is a vector similarity search engine that provides a production-ready
+
+service with a convenient API to store, search, and manage points (i.e. vectors) with an additional
+
+payload."" You can think of the payloads as additional pieces of information that can help you
+
+home in on your search and also receive useful information that you can give to your users.
+
+
+
+You can get started using Qdrant with the Python `qdrant-client`, by pulling the latest docker
+
+image of `qdrant` and connecting to it locally, or by trying out [Qdrant's Cloud](https://cloud.qdrant.io/)
+
+free tier option until you are ready to make the full switch.
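+
+For illustration, all three options boil down to a one-line client setup; this is a hedged sketch in which the cluster URL and API key are placeholders:
+
+```python
+from qdrant_client import QdrantClient
+
+# Option 1: a local instance, e.g. one started from the official Docker image.
+client = QdrantClient(url='http://localhost:6333')
+
+# Option 2: a throwaway in-memory instance, handy for quick experiments and tests.
+client = QdrantClient(':memory:')
+
+# Option 3: a Qdrant Cloud cluster (URL and API key come from the cloud dashboard).
+client = QdrantClient(url='https://YOUR-CLUSTER-URL.cloud.qdrant.io', api_key='YOUR_API_KEY')
+```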
+
+
+
+With that out of the way, let's talk about what vector databases are.
+
+
+
+## What Are Vector Databases?
+
+
+
+![dbs](https://raw.githubusercontent.com/ramonpzg/mlops-sydney-2023/main/images/databases.png)
+
+
+
+Vector databases are a type of database designed to store and query high-dimensional vectors
+
+efficiently. In traditional [OLTP](https://www.ibm.com/topics/oltp) and [OLAP](https://www.ibm.com/topics/olap)
+
+databases (as seen in the image above), data is organized in rows and columns (and these are
+
+called **Tables**), and queries are performed based on the values in those columns. However,
+
+in certain applications including image recognition, natural language processing, and recommendation
+
+systems, data is often represented as vectors in a high-dimensional space, and these vectors, plus
+
+an id and a payload, are the elements we store in something called a **Collection** within a vector
+
+database like Qdrant.
+
+
+
+A vector in this context is a mathematical representation of an object or data point, where elements of
+
+the vector implicitly or explicitly correspond to specific features or attributes of the object. For example,
+
+in an image recognition system, a vector could represent an image, with each element of the vector
+
+representing a pixel value or a descriptor/characteristic of that pixel. In a music recommendation
+
+system, each vector could represent a song, and elements of the vector would capture song characteristics
+
+such as tempo, genre, lyrics, and so on.
+
+
+
+Vector databases are optimized for **storing** and **querying** these high-dimensional vectors
+
+efficiently, and they often use specialized data structures and indexing techniques such as
+
+Hierarchical Navigable Small World (HNSW) -- which is used to implement Approximate Nearest
+
+Neighbors -- and Product Quantization, among others. These databases enable fast similarity
+
+and semantic search while allowing users to find vectors that are the closest to a given query
+
+vector based on some distance metric. The most commonly used distance metrics are Euclidean
+
+Distance, Cosine Similarity, and Dot Product, and all three are fully supported in Qdrant.
+
+
+
+Here's a quick overview of the three:
+
+- [**Cosine Similarity**](https://en.wikipedia.org/wiki/Cosine_similarity) - Cosine similarity
+
+is a way to measure how similar two vectors are. To simplify, it reflects whether the vectors
+
+have the same direction (similar) or are poles apart. Cosine similarity is often used with text representations
+
+to compare how similar two documents or sentences are to each other. The output of cosine similarity ranges
+
+from -1 to 1, where -1 means the two vectors are completely dissimilar, and 1 indicates maximum similarity.
+
+- [**Dot Product**](https://en.wikipedia.org/wiki/Dot_product) - The dot product similarity metric is another way
+
+of measuring how similar two vectors are. Unlike cosine similarity, it also considers the length of the vectors.
+
+This might be important when, for example, vector representations of your documents are built
+
+based on the term (word) frequencies. The dot product similarity is calculated by multiplying the respective values
+
+in the two vectors and then summing those products. The higher the sum, the more similar the two vectors are.
+
+If you normalize the vectors to unit length, the dot product similarity becomes
+
+the cosine similarity (see the short numeric sketch after this list).
+
+- [**Euclidean Distance**](https://en.wikipedia.org/wiki/Euclidean_distance) - Euclidean
+
+distance is a way to measure the distance between two points in space, similar to how we
+
+measure the distance between two places on a map. It's calculated by finding the square root
+
+of the sum of the squared differences between the two points' coordinates. This distance metric
+
+is also commonly used in machine learning to measure how similar or dissimilar two vectors are.
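+
+The following small numeric sketch (plain Python with NumPy, toy vectors) shows how the three metrics relate to each other:
+
+```python
+import numpy as np
+
+a = np.array([1.0, 2.0, 3.0])
+b = np.array([2.0, 3.0, 4.0])
+
+dot_product = np.dot(a, b)                                                  # 20.0
+cosine_similarity = dot_product / (np.linalg.norm(a) * np.linalg.norm(b))   # ~0.9926
+euclidean_distance = np.linalg.norm(a - b)                                  # ~1.732
+
+# After normalizing both vectors to unit length, the dot product equals the cosine similarity.
+a_unit = a / np.linalg.norm(a)
+b_unit = b / np.linalg.norm(b)
+assert abs(np.dot(a_unit, b_unit) - cosine_similarity) < 1e-9
+```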
+
+
+
+Now that we know what vector databases are and how they are structurally different than other
+
+databases, let's go over why they are important.
+
+
+
+## Why do we need Vector Databases?
+
+
+
+Vector databases play a crucial role in various applications that require similarity search, such
+
+as recommendation systems, content-based image retrieval, and personalized search. By taking
+
+advantage of their efficient indexing and searching techniques, vector databases enable faster
+
+and more accurate retrieval of unstructured data already represented as vectors, which helps
+
+put the most relevant results in front of users.
+
+
+
+In addition, other benefits of using vector databases include:
+
+1. Efficient storage and indexing of high-dimensional data.
+
+2. Ability to handle large-scale datasets with billions of data points.
+
+3. Support for real-time analytics and queries.
+
+4. Ability to handle vectors derived from complex data types such as images, videos, and natural language text.
+
+5. Improved performance and reduced latency in machine learning and AI applications.
+
+6. Reduced development and deployment time and cost compared to building a custom solution.
+
+
+
+Keep in mind that the specific benefits of using a vector database may vary depending on the
+
+use case of your organization and the features of the database you ultimately choose.
+
+
+
+Let's now evaluate, at a high-level, the way Qdrant is architected.
+
+
+
+## High-Level Overview of Qdrant's Architecture
+
+
+
+![qdrant](https://raw.githubusercontent.com/ramonpzg/mlops-sydney-2023/main/images/qdrant_overview_high_level.png)
+
+
+
+The diagram above represents a high-level overview of some of the main components of Qdrant. Here
+
+is the terminology you should get familiar with; a short usage sketch follows the list.
+
+
+
+- [Collections](../concepts/collections/): A collection is a named set of points (vectors with a payload) among which you can search. The vector of each point within the same collection must have the same dimensionality and be compared by a single metric. [Named vectors](../concepts/collections/#collection-with-multiple-vectors) can be used to have multiple vectors in a single point, each of which can have their own dimensionality and metric requirements.
+
+- [Distance Metrics](https://en.wikipedia.org/wiki/Metric_space): These are used to measure
+
+similarities among vectors and they must be selected at the same time you are creating a
+
+collection. The choice of metric depends on the way the vectors were obtained and, in particular,
+
+on the neural network that will be used to encode new queries.
+
+- [Points](../concepts/points/): The points are the central entity that
+
+Qdrant operates with and they consist of a vector and an optional id and payload.
+
+ - id: a unique identifier for your vectors.
+
+ - Vector: a high-dimensional representation of data, for example, an image, a sound, a document, a video, etc.
+
+ - [Payload](../concepts/payload/): A payload is a JSON object with additional data you can add to a vector.
+
+- [Storage](../concepts/storage/): Qdrant can use one of two options for
+
+storage: **In-memory** storage (stores all vectors in RAM and offers the highest speed, since disk
+
+access is required only for persistence), or **Memmap** storage (creates a virtual address
+
+space associated with the file on disk).
+
+- Clients: the programming languages you can use to connect to Qdrant.
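+
+To tie these terms together, here is a minimal, hedged sketch using the Python client; the collection name, vector size, and payloads are made up for illustration:
+
+```python
+from qdrant_client import QdrantClient
+from qdrant_client.models import Distance, VectorParams, PointStruct
+
+client = QdrantClient(url='http://localhost:6333')
+
+# A collection stores points; the vector size and distance metric are fixed at creation time.
+client.create_collection(
+    collection_name='songs',
+    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
+)
+
+# A point is an id plus a vector plus an optional payload.
+client.upsert(
+    collection_name='songs',
+    points=[
+        PointStruct(id=1, vector=[0.1, 0.9, 0.4, 0.2], payload={'genre': 'rock'}),
+        PointStruct(id=2, vector=[0.8, 0.1, 0.3, 0.9], payload={'genre': 'jazz'}),
+    ],
+)
+
+# Search returns the points closest to the query vector under the chosen metric.
+hits = client.search(collection_name='songs', query_vector=[0.2, 0.8, 0.5, 0.1], limit=1)
+```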
+
+
+
+## Next Steps
+
+
+
+Now that you know more about vector databases and Qdrant, you are ready to get started with one
+
+of our tutorials. If you've never used a vector database, go ahead and jump straight into
+
+the **Getting Started** section. Conversely, if you are a seasoned developer in these
+
+technologies, jump to the section most relevant to your use case.
+
+
+
+As you go through the tutorials, please let us know if any questions come up in our
+
+[Discord channel here](https://qdrant.to/discord). 😎
+",documentation/overview/_index.md
+"---
+
+title: Qdrant Web UI
+
+weight: 2
+
+aliases:
+
+ - /documentation/web-ui/
+
+---
+
+
+
+# Qdrant Web UI
+
+
+
+You can manage both local and cloud Qdrant deployments through the Web UI.
+
+
+
+If you've set up a deployment locally with the Qdrant [Quickstart](/documentation/quick-start/),
+
+navigate to http://localhost:6333/dashboard.
+
+
+
+If you've set up a deployment in a cloud cluster, find your Cluster URL in your
+
+cloud dashboard, at https://cloud.qdrant.io. Add `:6333/dashboard` to the end
+
+of the URL.
+
+
+
+## Access the Web UI
+
+
+
+Qdrant's Web UI is an intuitive and efficient graphical interface for your Qdrant Collections, REST API and data points.
+
+
+
+In the **Console**, you may use the REST API to interact with Qdrant, while in **Collections**, you can manage all the collections and upload Snapshots.
+
+
+
+![Qdrant Web UI](/articles_data/qdrant-1.3.x/web-ui.png)
+
+
+
+### Qdrant Web UI features
+
+
+
+In the Qdrant Web UI, you can:
+
+
+
+- Run HTTP-based calls from the console
+
+- List and search existing [collections](/documentation/concepts/collections/)
+
+- Learn from our interactive tutorial
+
+
+
+You can navigate to these options directly. For example, if you used our
+
+[quick start](/documentation/quick-start/) to set up a cluster on localhost,
+
+you can review our tutorial at http://localhost:6333/dashboard#/tutorial.
+",documentation/interfaces/web-ui.md
+"---
+
+title: API & SDKs
+
+weight: 6
+
+aliases:
+
+ - /documentation/interfaces/
+
+---
+
+
+
+# Interfaces
+
+
+
+Qdrant supports these ""official"" clients.
+
+
+
+> **Note:** If you are using a language that is not listed here, you can use the REST API directly or generate a client for your language
+
+using [OpenAPI](https://github.com/qdrant/qdrant/blob/master/docs/redoc/master/openapi.json)
+
+or [protobuf](https://github.com/qdrant/qdrant/tree/master/lib/api/src/grpc/proto) definitions.
+
+
+
+## Client Libraries
+
+||Client Repository|Installation|Version|
+
+|-|-|-|-|
+
+|[![python](/docs/misc/python.webp)](https://python-client.qdrant.tech/)|**[Python](https://github.com/qdrant/qdrant-client)** + **[(Client Docs)](https://python-client.qdrant.tech/)**|`pip install qdrant-client[fastembed]`|[Latest Release](https://github.com/qdrant/qdrant-client/releases)|
+
+|![typescript](/docs/misc/ts.webp)|**[JavaScript / Typescript](https://github.com/qdrant/qdrant-js)**|`npm install @qdrant/js-client-rest`|[Latest Release](https://github.com/qdrant/qdrant-js/releases)|
+
+|![rust](/docs/misc/rust.png)|**[Rust](https://github.com/qdrant/rust-client)**|`cargo add qdrant-client`|[Latest Release](https://github.com/qdrant/rust-client/releases)|
+
+|![golang](/docs/misc/go.webp)|**[Go](https://github.com/qdrant/go-client)**|`go get github.com/qdrant/go-client`|[Latest Release](https://github.com/qdrant/go-client)|
+
+|![.net](/docs/misc/dotnet.webp)|**[.NET](https://github.com/qdrant/qdrant-dotnet)**|`dotnet add package Qdrant.Client`|[Latest Release](https://github.com/qdrant/qdrant-dotnet/releases)|
+
+|![java](/docs/misc/java.webp)|**[Java](https://github.com/qdrant/java-client)**|[Available on Maven Central](https://central.sonatype.com/artifact/io.qdrant/client)|[Latest Release](https://github.com/qdrant/java-client/releases)|
+
+
+
+
+
+## API Reference
+
+
+
+All interaction with Qdrant takes place via the REST API. We recommend using the REST API if you are using Qdrant for the first time or if you are working on a prototype.
+
+
+
+| API | Documentation |
+
+| -------- | ------------------------------------------------------------------------------------ |
+
+| REST API | [OpenAPI Specification](https://api.qdrant.tech/api-reference) |
+
+| gRPC API | [gRPC Documentation](https://github.com/qdrant/qdrant/blob/master/docs/grpc/docs.md) |
+
+
+
+### gRPC Interface
+
+
+
+The gRPC methods follow the same principles as REST. For each REST endpoint, there is a corresponding gRPC method.
+
+
+
+As per the [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml), the gRPC interface is available on the specified port.
+
+
+
+```yaml
+
+service:
+
+ grpc_port: 6334
+
+```
+
+
+
+
+
+Running the service inside of Docker will look like this:
+
+
+
+```bash
+
+docker run -p 6333:6333 -p 6334:6334 \
+
+ -v $(pwd)/qdrant_storage:/qdrant/storage:z \
+
+ qdrant/qdrant
+
+```
+
+
+
+**When to use gRPC:** The choice between gRPC and the REST API is a trade-off between convenience and speed. gRPC is a binary protocol and can be more challenging to debug. We recommend using gRPC if you are already familiar with Qdrant and are trying to optimize the performance of your application.
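+
+For example, with the Python client you can opt into gRPC without changing the rest of your code; a small sketch assuming the default ports shown above:
+
+```python
+from qdrant_client import QdrantClient
+
+# The client talks to the binary gRPC interface on port 6334 where possible,
+# while exposing the same methods as the REST-based client.
+client = QdrantClient(host='localhost', grpc_port=6334, prefer_grpc=True)
+
+print(client.get_collections())
+```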
+
+
+
+
+
+
+
+
+",documentation/interfaces/_index.md
+"---
+
+title: API Reference
+
+weight: 1
+
+type: external-link
+
+external_url: https://api.qdrant.tech/api-reference
+
+sitemapExclude: True
+
+---",documentation/interfaces/api-reference.md
+"---
+
+title: About Us
+
+---",about-us/_index.md
+"---
+
+title: Retrieval Augmented Generation (RAG)
+
+description: Unlock the full potential of your AI with RAG powered by Qdrant. Dive into a new era of intelligent applications that understand and interact with unprecedented accuracy and depth.
+
+startFree:
+
+ text: Get Started
+
+ url: https://cloud.qdrant.io/
+
+learnMore:
+
+ text: Contact Us
+
+ url: /contact-us/
+
+image:
+
+ src: /img/vectors/vector-2.svg
+
+ alt: Retrieval Augmented Generation
+
+sitemapExclude: true
+
+---
+
+
+",retrieval-augmented-generation/retrieval-augmented-generation-hero.md
+"---
+
+title: RAG with Qdrant
+
+description: RAG, powered by Qdrant's efficient data retrieval, elevates AI's capacity to generate rich, context-aware content across text, code, and multimedia, enhancing relevance and precision on a scalable platform. Discover why Qdrant is the perfect choice for your RAG project.
+
+features:
+
+- id: 0
+
+ icon:
+
+ src: /icons/outline/speedometer-blue.svg
+
+ alt: Speedometer
+
+ title: Highest RPS
+
+ description: Qdrant leads with top requests-per-second, outperforming alternative vector databases in various datasets by up to 4x.
+
+- id: 1
+
+ icon:
+
+ src: /icons/outline/time-blue.svg
+
+ alt: Time
+
+ title: Fast Retrieval
+
+ description: ""Qdrant achieves the lowest latency, ensuring quicker response times in data retrieval: 3ms response for 1M Open AI embeddings.""
+
+- id: 2
+
+ icon:
+
+ src: /icons/outline/vectors-blue.svg
+
+ alt: Vectors
+
+ title: Multi-Vector Support
+
+ description: Integrate the strengths of multiple vectors per document, such as title and body, to create search experiences your customers admire.
+
+- id: 3
+
+ icon:
+
+ src: /icons/outline/compression-blue.svg
+
+ alt: Compression
+
+ title: Built-in Compression
+
+ description: Significantly reduce memory usage, improve search performance and save up to 30x cost for high-dimensional vectors with Quantization.
+
+sitemapExclude: true
+
+---
+
+
+",retrieval-augmented-generation/retrieval-augmented-generation-features.md
+"---
+
+title: Learn how to get started with Qdrant for your RAG use case
+
+features:
+
+- id: 0
+
+ image:
+
+ src: /img/retrieval-augmented-generation-use-cases/case1.svg
+
+ srcMobile: /img/retrieval-augmented-generation-use-cases/case1-mobile.svg
+
+ alt: Music recommendation
+
+ title: Question and Answer System with LlamaIndex
+
+ description: Combine Qdrant and LlamaIndex to create a self-updating Q&A system.
+
+ link:
+
+ text: Video Tutorial
+
+ url: https://www.youtube.com/watch?v=id5ql-Abq4Y&t=56s
+
+- id: 1
+
+ image:
+
+ src: /img/retrieval-augmented-generation-use-cases/case2.svg
+
+ srcMobile: /img/retrieval-augmented-generation-use-cases/case2-mobile.svg
+
+ alt: Food discovery
+
+ title: Retrieval Augmented Generation with OpenAI and Qdrant
+
+ description: Basic RAG pipeline with Qdrant and OpenAI SDKs.
+
+ link:
+
+ text: Learn More
+
+ url: /articles/food-discovery-demo/
+
+caseStudy:
+
+ logo:
+
+ src: /img/retrieval-augmented-generation-use-cases/customer-logo.svg
+
+ alt: Logo
+
+ title: See how Dust is using Qdrant for RAG
+
+ description: Dust provides companies with the core platform to execute on their GenAI bet for their teams by deploying LLMs across the organization and providing context aware AI assistants through RAG.
+
+ link:
+
+ text: Read Case Study
+
+ url: /blog/dust-and-qdrant/
+
+ image:
+
+ src: /img/retrieval-augmented-generation-use-cases/case-study.png
+
+ alt: Preview
+
+sitemapExclude: true
+
+---
+
+
+",retrieval-augmented-generation/retrieval-augmented-generation-use-cases.md
+"---
+
+title: RAG Evaluation
+
+descriptionFirstPart: Retrieval Augmented Generation (RAG) harnesses large language models to enhance content generation by effectively leveraging existing information. By amalgamating specific details from various sources, RAG facilitates accurate and relevant query results, making it invaluable across domains such as medical, finance, and academia for content creation, Q&A applications, and information synthesis.
+
+descriptionSecondPart: However, evaluating RAG systems is essential to refine and optimize their performance, ensuring alignment with user expectations and validating their functionality.
+
+image:
+
+ src: /img/retrieval-augmented-generation-evaluation/become-a-partner-graphic.svg
+
+ alt: Graphic
+
+partnersTitle: ""We work with the best in the industry on RAG evaluation:""
+
+logos:
+
+- id: 0
+
+ icon:
+
+ src: /img/retrieval-augmented-generation-evaluation/arize-logo.svg
+
+ alt: Arize logo
+
+- id: 1
+
+ icon:
+
+ src: /img/retrieval-augmented-generation-evaluation/ragas-logo.svg
+
+ alt: Ragas logo
+
+- id: 2
+
+ icon:
+
+ src: /img/retrieval-augmented-generation-evaluation/quotient-logo.svg
+
+ alt: Quotient logo
+
+sitemapExclude: true
+
+---
+
+
+",retrieval-augmented-generation/retrieval-augmented-generation-evaluation.md
+"---
+
+title: Qdrant integrates with all leading LLM providers and frameworks
+
+integrations:
+
+- id: 0
+
+ icon:
+
+ src: /img/integrations/integration-cohere.svg
+
+ alt: Cohere logo
+
+ title: Cohere
+
+ description: Integrate Qdrant with Cohere's co.embed API and Python SDK.
+
+- id: 1
+
+ icon:
+
+ src: /img/integrations/integration-gemini.svg
+
+ alt: Gemini logo
+
+ title: Gemini
+
+ description: Connect Qdrant with Google's Gemini Embedding Model API seamlessly.
+
+- id: 2
+
+ icon:
+
+ src: /img/integrations/integration-open-ai.svg
+
+ alt: OpenAI logo
+
+ title: OpenAI
+
+ description: Easily integrate OpenAI embeddings with Qdrant using the official Python SDK.
+
+- id: 3
+
+ icon:
+
+ src: /img/integrations/integration-aleph-alpha.svg
+
+ alt: Aleph Alpha logo
+
+ title: Aleph Alpha
+
+ description: Integrate Qdrant with Aleph Alpha's multimodal, multilingual embeddings.
+
+- id: 4
+
+ icon:
+
+ src: /img/integrations/integration-jina.svg
+
+ alt: Jina logo
+
+ title: Jina AI
+
+ description: Easily integrate Qdrant with Jina AI's embeddings API.
+
+- id: 5
+
+ icon:
+
+ src: /img/integrations/integration-aws.svg
+
+ alt: AWS logo
+
+ title: AWS Bedrock
+
+ description: Utilize AWS Bedrock's embedding models with Qdrant seamlessly.
+
+- id: 6
+
+ icon:
+
+ src: /img/integrations/integration-lang-chain.svg
+
+ alt: LangChain logo
+
+ title: LangChain
+
+ description: Qdrant seamlessly integrates with LangChain for LLM development.
+
+- id: 7
+
+ icon:
+
+ src: /img/integrations/integration-llama-index.svg
+
+ alt: LlamaIndex logo
+
+ title: LlamaIndex
+
+ description: Qdrant integrates with LlamaIndex for efficient data indexing in LLMs.
+
+sitemapExclude: true
+
+---
+
+
+",retrieval-augmented-generation/retrieval-augmented-generation-integrations.md
+"---
+
+title: ""RAG Use Case: Advanced Vector Search for AI Applications""
+
+description: ""Learn how Qdrant's advanced vector search enhances Retrieval-Augmented Generation (RAG) AI applications, offering scalable and efficient solutions.""
+
+url: rag
+
+build:
+
+ render: always
+
+cascade:
+
+- build:
+
+ list: local
+
+ publishResources: false
+
+ render: never
+
+---
+",retrieval-augmented-generation/_index.md
+"---
+
+title: Qdrant Hybrid Cloud
+
+salesTitle: Hybrid Cloud
+
+description: Bring your own Kubernetes clusters from any cloud provider, on-premise infrastructure, or edge locations and connect them to the Managed Cloud.
+
+cards:
+
+- id: 0
+
+ icon: /icons/outline/separate-blue.svg
+
+ title: Deployment Flexibility
+
+ description: Use your existing infrastructure, whether it be on cloud platforms, on-premise setups, or even at edge locations.
+
+- id: 1
+
+ icon: /icons/outline/money-growth-blue.svg
+
+ title: Unmatched Cost Advantage
+
+ description: Maximum deployment flexibility to leverage the best available resources, in the cloud or on-premise.
+
+- id: 2
+
+ icon: /icons/outline/switches-blue.svg
+
+ title: Transparent Control
+
+ description: Fully managed experience for your Qdrant clusters, while your data remains exclusively yours.
+
+form:
+
+ title: Connect with us
+
+# description:
+
+ id: contact-sales-form
+
+ hubspotFormOptions: '{
+
+ ""region"": ""eu1"",
+
+ ""portalId"": ""139603372"",
+
+ ""formId"": ""f583c7ea-15ff-4c57-9859-650b8f34f5d3"",
+
+ ""submitButtonClass"": ""button button_contained"",
+
+ }'
+
+logosSectionTitle: Qdrant is trusted by top-tier enterprises
+
+---
+
+
+",contact-hybrid-cloud/_index.md
+"---
+
+title: Learn how to get started with Qdrant for your search use case
+
+features:
+
+- id: 0
+
+ image:
+
+ src: /img/advanced-search-use-cases/startup-semantic-search.svg
+
+ alt: Startup Semantic Search
+
+ title: Startup Semantic Search Demo
+
+ description: The demo showcases semantic search for startup descriptions through SentenceTransformer and Qdrant, comparing neural search's accuracy with traditional searches for better content discovery.
+
+ link:
+
+ text: View Demo
+
+ url: https://demo.qdrant.tech/
+
+- id: 1
+
+ image:
+
+ src: /img/advanced-search-use-cases/multimodal-semantic-search.svg
+
+ alt: Multimodal Semantic Search
+
+ title: Multimodal Semantic Search with Aleph Alpha
+
+ description: This tutorial shows you how to run a proper multimodal semantic search system with a few lines of code, without the need to annotate the data or train your networks.
+
+ link:
+
+ text: View Tutorial
+
+ url: /documentation/examples/aleph-alpha-search/
+
+- id: 2
+
+ image:
+
+ src: /img/advanced-search-use-cases/simple-neural-search.svg
+
+ alt: Simple Neural Search
+
+ title: Create a Simple Neural Search Service
+
+ description: This tutorial shows you how to build and deploy your own neural search service.
+
+ link:
+
+ text: View Tutorial
+
+ url: /documentation/tutorials/neural-search/
+
+- id: 3
+
+ image:
+
+ src: /img/advanced-search-use-cases/image-classification.svg
+
+ alt: Image Classification
+
+ title: Image Classification with Qdrant Vector Semantic Search
+
+ description: In this tutorial, you will learn how a semantic search engine for images can help diagnose different types of skin conditions.
+
+ link:
+
+ text: View Tutorial
+
+ url: https://www.youtube.com/watch?v=sNFmN16AM1o
+
+- id: 4
+
+ image:
+
+ src: /img/advanced-search-use-cases/semantic-search-101.svg
+
+ alt: Semantic Search 101
+
+ title: Semantic Search 101
+
+ description: Build a semantic search engine for science fiction books in 5 mins.
+
+ link:
+
+ text: View Tutorial
+
+ url: /documentation/tutorials/search-beginners/
+
+- id: 5
+
+ image:
+
+ src: /img/advanced-search-use-cases/hybrid-search-service-fastembed.svg
+
+ alt: Create a Hybrid Search Service with Fastembed
+
+ title: Create a Hybrid Search Service with Fastembed
+
+ description: This tutorial guides you through building and deploying your own hybrid search service using Fastembed.
+
+ link:
+
+ text: View Tutorial
+
+ url: /documentation/tutorials/hybrid-search-fastembed/
+
+sitemapExclude: true
+
+---
+
+
+",advanced-search/advanced-search-use-cases.md
+"---
+
+title: Search with Qdrant
+
+description: Qdrant enhances search, offering semantic, similarity, multimodal, and hybrid search capabilities for accurate, user-centric results, serving applications in different industries like e-commerce to healthcare.
+
+features:
+
+- id: 0
+
+ icon:
+
+ src: /icons/outline/similarity-blue.svg
+
+ alt: Similarity
+
+ title: Semantic Search
+
+ description: Qdrant optimizes similarity search, identifying the closest database items to any query vector for applications like recommendation systems, RAG and image retrieval, enhancing accuracy and user experience.
+
+ link:
+
+ text: Learn More
+
+ url: /documentation/concepts/search/
+
+- id: 1
+
+ icon:
+
+ src: /icons/outline/search-text-blue.svg
+
+ alt: Search text
+
+ title: Hybrid Search for Text
+
+ description: By combining dense vector embeddings with sparse vectors e.g. BM25, Qdrant powers semantic search to deliver context-aware results, transcending traditional keyword search by understanding the deeper meaning of data.
+
+ link:
+
+ text: Learn More
+
+ url: /documentation/tutorials/hybrid-search-fastembed/
+
+- id: 2
+
+ icon:
+
+ src: /icons/outline/selection-blue.svg
+
+ alt: Selection
+
+ title: Multimodal Search
+
+ description: Qdrant's capability extends to multi-modal search, indexing and retrieving various data forms (text, images, audio) once vectorized, facilitating a comprehensive search experience.
+
+ link:
+
+ text: View Tutorial
+
+ url: /documentation/tutorials/aleph-alpha-search/
+
+- id: 3
+
+ icon:
+
+ src: /icons/outline/filter-blue.svg
+
+ alt: Filter
+
+ title: Single Stage filtering that Works
+
+ description: Qdrant enhances search speeds and control and context understanding through filtering on any nested entry in our payload. Unique architecture allows Qdrant to avoid expensive pre-filtering and post-filtering stages, making search faster and accurate.
+
+ link:
+
+ text: Learn More
+
+ url: /articles/filtrable-hnsw/
+
+sitemapExclude: true
+
+---
+
+
+",advanced-search/advanced-search-features.md
+"---
+
+title: ""Advanced Search Solutions: High-Performance Vector Search""
+
+description: Explore how Qdrant's advanced search solutions enhance accuracy and user interaction depth across various industries, from e-commerce to healthcare.
+
+build:
+
+ render: always
+
+cascade:
+
+- build:
+
+ list: local
+
+ publishResources: false
+
+ render: never
+
+---
+",advanced-search/_index.md
+"---
+
+title: Advanced Search
+
+description: Dive into next-gen search capabilities with Qdrant, offering a smarter way to deliver precise and tailored content to users, enhancing interaction accuracy and depth.
+
+startFree:
+
+ text: Get Started
+
+ url: https://cloud.qdrant.io/
+
+learnMore:
+
+ text: Contact Us
+
+ url: /contact-us/
+
+image:
+
+ src: /img/vectors/vector-0.svg
+
+ alt: Advanced search
+
+sitemapExclude: true
+
+---
+
+
+",advanced-search/advanced-search-hero.md
+"---
+
+title: Qdrant Enterprise Solutions
+
+items:
+
+- id: 0
+
+ image:
+
+ src: /img/enterprise-solutions-use-cases/managed-cloud.svg
+
+ alt: Managed Cloud
+
+ title: Managed Cloud
+
+ description: Qdrant Cloud provides optimal flexibility and offers a suite of features focused on efficient and scalable vector search - fully managed. Available on AWS, Google Cloud, and Azure.
+
+ link:
+
+ text: Learn More
+
+ url: /cloud/
+
+ odd: true
+
+- id: 1
+
+ image:
+
+ src: /img/enterprise-solutions-use-cases/hybrid-cloud.svg
+
+ alt: Hybrid Cloud
+
+ title: Hybrid Cloud
+
+ description: Bring your own Kubernetes clusters from any cloud provider, on-premise infrastructure, or edge locations and connect them to the managed cloud.
+
+ link:
+
+ text: Learn More
+
+ url: /hybrid-cloud/
+
+ odd: false
+
+- id: 2
+
+ image:
+
+ src: /img/enterprise-solutions-use-cases/private-cloud.svg
+
+ alt: Private Cloud
+
+ title: Private Cloud
+
+ description: Experience maximum control and security by deploying Qdrant in your own infrastructure or edge locations.
+
+ link:
+
+ text: Learn More
+
+ url: /private-cloud/
+
+ odd: true
+
+sitemapExclude: true
+
+---
+",enterprise-solutions/enterprise-solutions-use-cases.md
+"---
+
+review: Enterprises like Bosch use Qdrant for unparalleled performance and massive-scale vector search. “With Qdrant, we found the missing piece to develop our own provider independent multimodal generative AI platform at enterprise scale.”
+
+names: Jeremy Teichmann & Daly Singh
+
+positions: Generative AI Expert & Product Owner
+
+avatar:
+
+ src: /img/customers/jeremy-t-daly-singh.svg
+
+ alt: Jeremy Teichmann Avatar
+
+logo:
+
+ src: /img/brands/bosch-gray.svg
+
+ alt: Logo
+
+sitemapExclude: true
+
+---
+
+
+",enterprise-solutions/testimonial.md
+"---
+
+title: Enterprise-Grade Vector Search
+
+description: ""The premier vector database for enterprises: flexible deployment options for low latency and state-of-the-art privacy and security features. High performance at billion vector scale.""
+
+startFree:
+
+ text: Start Free
+
+ url: https://cloud.qdrant.io/
+
+contactUs:
+
+ text: Talk to Sales
+
+ url: /contact-sales/
+
+image:
+
+ src: /img/enterprise-solutions-hero.png
+
+ srcMobile: /img/mobile/enterprise-solutions-hero-mobile.png
+
+ alt: Enterprise-solutions
+
+sitemapExclude: true
+
+---
+
+
+",enterprise-solutions/enterprise-solutions-hero.md
+"---
+
+title: Enterprise Benefits
+
+cards:
+
+- id: 0
+
+ icon:
+
+ src: /icons/outline/security-blue.svg
+
+ alt: Security
+
+ title: Security
+
+ description: Robust access management, backup options, and disaster recovery.
+
+- id: 1
+
+ icon:
+
+ src: /icons/outline/cloud-system-blue.svg
+
+ alt: Cloud System
+
+ title: Data Sovereignty
+
+ description: Keep your sensitive data within your secure premises.
+
+- id: 0
+
+ icon:
+
+ src: /icons/outline/speedometer-blue.svg
+
+ alt: Speedometer
+
+ title: Low-Latency
+
+ description: On-premise deployment for lightning-fast, low-latency access.
+
+- id: 0
+
+ icon:
+
+ src: /icons/outline/chart-bar-blue.svg
+
+ alt: Chart-Bar
+
+ title: Efficiency
+
+ description: Reduce memory usage with built-in compression, multitenancy, and offloading data to disk.
+
+sitemapExclude: true
+
+---
+",enterprise-solutions/enterprise-benefits.md
+"---
+
+title: Enterprise Search Solutions for Your Business | Qdrant
+
+description: Unlock the power of custom vector search with Qdrant's Enterprise Search Solutions. Tailored to your business needs to grow AI capabilities and data management.
+
+url: enterprise-solutions
+
+build:
+
+ render: always
+
+cascade:
+
+- build:
+
+ list: local
+
+ publishResources: false
+
+ render: never
+
+---
+",enterprise-solutions/_index.md
+"---
+
+title: Components
+
+---
+
+
+
+## Buttons
+
+**.button**
+
+
+
+Text
+
+
+
+
+
+
+
+### Variants
+
+
A well-known quote, contained in a blockquote element.
+
+
+
+
+
+
+
+
+
+
A well-known quote, contained in a blockquote element.
+
+
+
+
+
+ Someone famous in Source Title
+
+
+
+
+
+
+
+
+
+
This is a list.
+
+
It appears completely unstyled.
+
+
Structurally, it's still a list.
+
+
However, this style only applies to immediate child elements.
+
+
Nested lists:
+
+
+
+
are unaffected by this style
+
+
will still show a bullet
+
+
and have appropriate left margin
+
+
+
+
+
+
This may still come in handy in some situations.
+
+
+
+",debug.skip/bootstrap.md
+"---
+
+title: Debugging
+
+---
+",debug.skip/_index.md
+"---
+
+title: ""Qdrant 1.7.0 has just landed!""
+
+short_description: ""Qdrant 1.7.0 brought a bunch of new features. Let's take a closer look at them!""
+
+description: ""Sparse vectors, Discovery API, user-defined sharding, and snapshot-based shard transfer. That's what you can find in the latest Qdrant 1.7.0 release!""
+
+social_preview_image: /articles_data/qdrant-1.7.x/social_preview.png
+
+small_preview_image: /articles_data/qdrant-1.7.x/icon.svg
+
+preview_dir: /articles_data/qdrant-1.7.x/preview
+
+weight: -90
+
+author: Kacper Łukawski
+
+author_link: https://kacperlukawski.com
+
+date: 2023-12-10T10:00:00Z
+
+draft: false
+
+keywords:
+
+ - vector search
+
+ - new features
+
+ - sparse vectors
+
+ - discovery
+
+ - exploration
+
+ - custom sharding
+
+ - snapshot-based shard transfer
+
+ - hybrid search
+
+ - bm25
+
+ - tfidf
+
+ - splade
+
+---
+
+
+
+Please welcome the long-awaited [Qdrant 1.7.0 release](https://github.com/qdrant/qdrant/releases/tag/v1.7.0). Except for a handful of minor fixes and improvements, this release brings some cool brand-new features that we are excited to share!
+
+The latest version of your favorite vector search engine finally supports **sparse vectors**. That's the feature many of you requested, so why should we ignore it?
+
+We also decided to continue our journey with [vector similarity beyond search](/articles/vector-similarity-beyond-search/). The new Discovery API covers some utterly new use cases. We're more than excited to see what you will build with it!
+
+But there is more to it! Check out what's new in **Qdrant 1.7.0**!
+
+
+
+1. Sparse vectors: do you want to use keyword-based search? Support for sparse vectors is finally here!
+
+2. Discovery API: an entirely new way of using vectors for restricted search and exploration.
+
+3. User-defined sharding: you can now decide which points should be stored on which shard.
+
+4. Snapshot-based shard transfer: a new option for moving shards between nodes.
+
+
+
+Do you see something missing? Your feedback drives the development of Qdrant, so do not hesitate to [join our Discord community](https://qdrant.to/discord) and help us build the best vector search engine out there!
+
+
+
+## New features
+
+
+
+Qdrant 1.7.0 brings a bunch of new features. Let's take a closer look at them!
+
+
+
+### Sparse vectors
+
+
+
+Traditional keyword-based search mechanisms often rely on algorithms like TF-IDF, BM25, or comparable methods. While these techniques internally utilize vectors, they typically involve sparse vector representations. In these methods, the **vectors are predominantly filled with zeros, containing a relatively small number of non-zero values**.
+
+Those sparse vectors are theoretically high dimensional, definitely way higher than the dense vectors used in semantic search. However, since the majority of dimensions are usually zeros, we store them differently and just keep the non-zero dimensions.
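+
+In practice, that means only the indices of the non-zero dimensions are kept, together with their values. A rough sketch of what this looks like with the Python client (collection name and numbers made up; see the sparse vectors documentation for the exact interface):
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url='http://localhost:6333')
+
+# A collection configured for sparse vectors only.
+client.create_collection(
+    collection_name='sparse-demo',
+    vectors_config={},
+    sparse_vectors_config={'text': models.SparseVectorParams()},
+)
+
+# Only the non-zero dimensions are stored: their indices and the corresponding values.
+client.upsert(
+    collection_name='sparse-demo',
+    points=[
+        models.PointStruct(
+            id=1,
+            vector={'text': models.SparseVector(indices=[6, 57, 11412], values=[1.0, 2.0, 0.5])},
+        ),
+    ],
+)
+```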
+
+
+
+Until now, Qdrant has not been able to handle sparse vectors natively. Some users tried to convert them to dense vectors, but that was neither the best solution nor a recommended approach. We even wrote a piece with [our thoughts on building a hybrid search](/articles/hybrid-search/), and we encouraged you to use a different tool for keyword lookup.
+
+
+
+Things have changed since then, as so many of you wanted a single tool for sparse and dense vectors. And responding to this [popular](https://github.com/qdrant/qdrant/issues/1678) [demand](https://github.com/qdrant/qdrant/issues/1135), we've now introduced sparse vectors!
+
+
+
+If you're coming across the topic of sparse vectors for the first time, our [Brief History of Search](/documentation/overview/vector-search/) explains the difference between sparse and dense vectors.
+
+
+
+Check out the [sparse vectors article](../sparse-vectors/) and [sparse vectors index docs](/documentation/concepts/indexing/#sparse-vector-index) for more details on what this new index means for Qdrant users.
+
+
+
+### Discovery API
+
+
+
+The recently launched [Discovery API](/documentation/concepts/explore/#discovery-api) extends the range of scenarios for leveraging vectors. While its interface mirrors the [Recommendation API](/documentation/concepts/explore/#recommendation-api), it focuses on refining the search parameters for greater precision.
+
+The concept of 'context' refers to a collection of positive-negative pairs that define zones within a space. Each pair effectively divides the space into positive or negative segments. This concept guides the search operation to prioritize points based on their inclusion within positive zones or their avoidance of negative zones. Essentially, the search algorithm favors points that fall within multiple positive zones or steer clear of negative ones.
+
+
+
+The Discovery API can be used in two ways - either with or without the target point. The first case is called a **discovery search**, while the second is called a **context search**.
+
+
+
+#### Discovery search
+
+
+
+*Discovery search* is an operation that uses a target point to find the most relevant points in the collection, while performing the search in the preferred areas only. That is basically a search operation with more control over the search space.
+
+
+
+![Discovery search visualization](/articles_data/qdrant-1.7.x/discovery-search.png)
+
+
+
+Please refer to the [Discovery API documentation on discovery search](/documentation/concepts/explore/#discovery-search) for more details and the internal mechanics of the operation.
+
+
+
+#### Context search
+
+
+
+The mode of *context search* is similar to the discovery search, but it does not use a target point. Instead, the `context` is used to navigate the [HNSW graph](https://arxiv.org/abs/1603.09320) towards preferred zones. It is expected that the results in that mode will be diverse, and not centered around one point.
+
+*Context Search* could serve as a solution for individuals seeking a more exploratory approach to navigate the vector space.
+
+
+
+![Context search visualization](/articles_data/qdrant-1.7.x/context-search.png)
+
+
+
+### User-defined sharding
+
+
+
+Qdrant's collections are divided into shards. A single **shard** is a self-contained store of points, which can be moved between nodes. Up till now, the points were distributed among shards by using a consistent hashing algorithm, so that shards were managing non-intersecting subsets of points.
+
+The latter one remains true, but now you can define your own sharding and decide which points should be stored on which shard. Sounds cool, right? But why would you need that? Well, there are multiple scenarios in which you may want to use custom sharding. For example, you may want to store some points on a dedicated node, or you may want to keep all the points of the same user on the same shard.
+
+
+
+While the existing behavior is still the default one, you can now define the shards when you create a collection. Then, you can assign each point to a shard by providing a `shard_key` in the `upsert` operation. What's more, you can also search over the selected shards only, by providing the `shard_key` parameter in the search operation.
+
+
+
+```http
+
+POST /collections/my_collection/points/search
+
+{
+
+  ""vector"": [0.29, 0.81, 0.75, 0.11],
+
+  ""shard_key"": [""cats"", ""dogs""],
+
+  ""limit"": 10,
+
+  ""with_payload"": true
+
+}
+
+```
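+
+The same flow, expressed with the Python client, might look roughly like this (a sketch; see the sharding documentation for the authoritative interface):
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url='http://localhost:6333')
+
+# Opt into user-defined sharding when the collection is created.
+client.create_collection(
+    collection_name='my_collection',
+    vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
+    sharding_method=models.ShardingMethod.CUSTOM,
+)
+
+# Shard keys are created explicitly before any points are assigned to them.
+client.create_shard_key(collection_name='my_collection', shard_key='cats')
+
+# Each upserted point is routed to the shard identified by the given key.
+client.upsert(
+    collection_name='my_collection',
+    points=[models.PointStruct(id=1, vector=[0.29, 0.81, 0.75, 0.11], payload={'species': 'cat'})],
+    shard_key_selector='cats',
+)
+```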
+
+
+
+If you want to know more about the user-defined sharding, please refer to the [sharding documentation](/documentation/guides/distributed_deployment/#sharding).
+
+
+
+### Snapshot-based shard transfer
+
+
+
+This one is a more in-depth technical improvement for users of the distributed mode: we implemented a new option for the shard transfer mechanism. The new approach is based on a snapshot of the shard, which is transferred to the target node.
+
+
+
+Moving shards is required for dynamic scaling of the cluster. Your data can migrate between nodes, and the way you move it is crucial for the performance of the whole system. The good old `stream_records` method (still the default one) transmits all the records between the machines and indexes them on the target node.
+
+In the case of moving the shard, it's necessary to recreate the HNSW index each time. However, with the introduction of the new `snapshot` approach, the snapshot itself, inclusive of all data and potentially quantized content, is transferred to the target node. This comprehensive snapshot includes the entire index, enabling the target node to seamlessly load it and promptly begin handling requests without the need for index recreation.
+
+
+
+There are multiple scenarios in which you may prefer one over the other. Please check out the docs of the [shard transfer method](/documentation/guides/distributed_deployment/#shard-transfer-method) for more details and head-to-head comparison. As for now, the old `stream_records` method is still the default one, but we may decide to change it in the future.
+
+
+
+## Minor improvements
+
+
+
+Beyond introducing new features, Qdrant 1.7.0 enhances performance and addresses various minor issues. Here's a rundown of the key improvements:
+
+
+
+1. Improvement of HNSW Index Building on High CPU Systems ([PR#2869](https://github.com/qdrant/qdrant/pull/2869)).
+
+
+
+2. Improving [Search Tail Latencies](https://github.com/qdrant/qdrant/pull/2931): improvement for high CPU systems with many parallel searches, directly impacting the user experience by reducing latency.
+
+
+
+3. [Adding Index for Geo Map Payloads](https://github.com/qdrant/qdrant/pull/2768): index for geo map payloads can significantly improve search performance, especially for applications involving geographical data.
+
+
+
+4. Stability of Consensus on Big High Load Clusters: enhancing the stability of consensus in large, high-load environments is critical for ensuring the reliability and scalability of the system ([PR#3013](https://github.com/qdrant/qdrant/pull/3013), [PR#3026](https://github.com/qdrant/qdrant/pull/3026), [PR#2942](https://github.com/qdrant/qdrant/pull/2942), [PR#3103](https://github.com/qdrant/qdrant/pull/3103), [PR#3054](https://github.com/qdrant/qdrant/pull/3054)).
+
+
+
+5. Configurable Timeout for Searches: allowing users to configure the timeout for searches provides greater flexibility and can help optimize system performance under different operational conditions ([PR#2748](https://github.com/qdrant/qdrant/pull/2748), [PR#2771](https://github.com/qdrant/qdrant/pull/2771)).
+
+
+
+## Release notes
+
+
+
+[Our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.7.0) are a place to go if you are interested in more details. Please remember that Qdrant is an open source project, so feel free to [contribute](https://github.com/qdrant/qdrant/issues)!
+",articles/qdrant-1.7.x.md
+"---
+
+title: ""Any* Embedding Model Can Become a Late Interaction Model... If You Give It a Chance!""
+
+short_description: ""Standard dense embedding models perform surprisingly well in late interaction scenarios.""
+
+description: ""We recently discovered that embedding models can become late interaction models & can perform surprisingly well in some scenarios. See what we learned here.""
+
+preview_dir: /articles_data/late-interaction-models/preview
+
+social_preview_image: /articles_data/late-interaction-models/social-preview.png
+
+weight: -160
+
+author: Kacper Łukawski
+
+author_link: https://kacperlukawski.com
+
+date: 2024-08-14T00:00:00.000Z
+
+---
+
+
+
+\* At least any open-source model, since you need access to its internals.
+
+
+
+## You Can Adapt Dense Embedding Models for Late Interaction
+
+
+
+Qdrant 1.10 introduced support for multi-vector representations, with late interaction being a prominent example of this model. In essence, both documents and queries are represented by multiple vectors, and identifying the most relevant documents involves calculating a score based on the similarity between the corresponding query and document embeddings. If you're not familiar with this paradigm, our updated [Hybrid Search](/articles/hybrid-search/) article explains how multi-vector representations can enhance retrieval quality.
+
+
+
+**Figure 1:** We can visualize late interaction between corresponding document-query embedding pairs.
+
+
+
+![Late interaction model](/articles_data/late-interaction-models/late-interaction.png)
+
+
+
+There are many specialized late interaction models, such as [ColBERT](https://qdrant.tech/documentation/fastembed/fastembed-colbert/), but **it appears that regular dense embedding models can also be effectively utilized in this manner**.
+
+
+
+> In this study, we will demonstrate that standard dense embedding models, traditionally used for single-vector representations, can be effectively adapted for late interaction scenarios using output token embeddings as multi-vector representations.
+
+
+
+By testing out retrieval with Qdrant’s multi-vector feature, we will show that these models can rival or surpass specialized late interaction models in retrieval performance, while offering lower complexity and greater efficiency. This work redefines the potential of dense models in advanced search pipelines, presenting a new method for optimizing retrieval systems.
+
+
+
+## Understanding Embedding Models
+
+
+
+The inner workings of embedding models might be surprising to some. The model doesn’t operate directly on the input text; instead, it requires a tokenization step to convert the text into a sequence of token identifiers. Each token identifier is then passed through an embedding layer, which transforms it into a dense vector. Essentially, the embedding layer acts as a lookup table that maps token identifiers to dense vectors. These vectors are then fed into the transformer model as input.
+
+
+
+**Figure 2:** The tokenization step, which takes place before vectors are added to the transformer model.
+
+
+
+![Input token embeddings](/articles_data/late-interaction-models/input-embeddings.png)
+
+
+
+The input token embeddings are context-free and are learned during the model’s training process. This means that each token always receives the same embedding, regardless of its position in the text. At this stage, the token embeddings are unaware of the context in which they appear. It is the transformer model’s role to contextualize these embeddings.
+
+
+
+Much has been discussed about the role of attention in transformer models, but in essence, this mechanism is responsible for capturing cross-token relationships. Each transformer module takes a sequence of token embeddings as input and produces a sequence of output token embeddings. Both sequences are of the same length, with each token embedding being enriched by information from the other token embeddings at the current step.
+
+
+
+**Figure 3:** The mechanism that produces a sequence of output token embeddings.
+
+
+
+![Output token embeddings](/articles_data/late-interaction-models/output-embeddings.png)
+
+
+
+**Figure 4:** The final step performed by the embedding model is pooling the output token embeddings to generate a single vector representation of the input text.
+
+
+
+![Pooling](/articles_data/late-interaction-models/pooling.png)
+
+
+
+There are several pooling strategies, but regardless of which one a model uses, the output is always a single vector representation, which inevitably loses some information about the input. It’s akin to giving someone detailed, step-by-step directions to the nearest grocery store versus simply pointing in the general direction. While the vague direction might suffice in some cases, the detailed instructions are more likely to lead to the desired outcome.
+
+
+
+## Using Output Token Embeddings for Multi-Vector Representations
+
+
+
+We often overlook the output token embeddings, but the fact is—they also serve as multi-vector representations of the input text. So, why not explore their use in a multi-vector retrieval model, similar to late interaction models?
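+
+Getting hold of those embeddings does not require any changes to the model itself. As a rough sketch (exact parameters may differ between library versions), the SentenceTransformers API can return them directly:
+
+```python
+from sentence_transformers import SentenceTransformer
+
+model = SentenceTransformer('all-MiniLM-L6-v2')
+
+# The pooled single vector that is normally used...
+sentence_embedding = model.encode('Vector search is fun')
+
+# ...and the per-token output embeddings that are usually thrown away.
+token_embeddings = model.encode('Vector search is fun', output_value='token_embeddings')
+
+print(sentence_embedding.shape)  # (384,)
+print(token_embeddings.shape)    # (number of tokens, 384)
+```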
+
+
+
+### Experimental Findings
+
+
+
+We conducted several experiments to determine whether output token embeddings could be effectively used in place of traditional late interaction models. The results are quite promising.
+
+
+
+
+
+
+
+
+
+
+| Dataset  | Model                        | Experiment                         | NDCG@10 |
+
+|----------|------------------------------|------------------------------------|---------|
+
+| SciFact  | `prithivida/Splade_PP_en_v1` | sparse vectors                     | 0.70928 |
+
+| SciFact  | `colbert-ir/colbertv2.0`     | late interaction model             | 0.69579 |
+
+| SciFact  | `all-MiniLM-L6-v2`           | single dense vector representation | 0.64508 |
+
+| SciFact  | `all-MiniLM-L6-v2`           | output token embeddings            | 0.70724 |
+
+| SciFact  | `BAAI/bge-small-en`          | single dense vector representation | 0.68213 |
+
+| SciFact  | `BAAI/bge-small-en`          | output token embeddings            | 0.73696 |
+
+| NFCorpus | `prithivida/Splade_PP_en_v1` | sparse vectors                     | 0.34166 |
+
+| NFCorpus | `colbert-ir/colbertv2.0`     | late interaction model             | 0.35036 |
+
+| NFCorpus | `all-MiniLM-L6-v2`           | single dense vector representation | 0.31594 |
+
+| NFCorpus | `all-MiniLM-L6-v2`           | output token embeddings            | 0.35779 |
+
+| NFCorpus | `BAAI/bge-small-en`          | single dense vector representation | 0.29696 |
+
+| NFCorpus | `BAAI/bge-small-en`          | output token embeddings            | 0.37502 |
+
+| ArguAna  | `prithivida/Splade_PP_en_v1` | sparse vectors                     | 0.47271 |
+
+| ArguAna  | `colbert-ir/colbertv2.0`     | late interaction model             | 0.44534 |
+
+| ArguAna  | `all-MiniLM-L6-v2`           | single dense vector representation | 0.50167 |
+
+| ArguAna  | `all-MiniLM-L6-v2`           | output token embeddings            | 0.45997 |
+
+| ArguAna  | `BAAI/bge-small-en`          | single dense vector representation | 0.58857 |
+
+| ArguAna  | `BAAI/bge-small-en`          | output token embeddings            | 0.57648 |
+
+
+
+
+
+
+
+
+
+The [source code for these experiments is open-source](https://github.com/kacperlukawski/beir-qdrant/blob/main/examples/retrieval/search/evaluate_all_exact.py) and utilizes [`beir-qdrant`](https://github.com/kacperlukawski/beir-qdrant), an integration of Qdrant with the [BeIR library](https://github.com/beir-cellar/beir). While this package is not officially maintained by the Qdrant team, it may prove useful for those interested in experimenting with various Qdrant configurations to see how they impact retrieval quality. All experiments were conducted using Qdrant in exact search mode, ensuring the results are not influenced by approximate search.
+
+
+
+Even the simple `all-MiniLM-L6-v2` model can be applied in a late interaction model fashion, resulting in a positive impact on retrieval quality. However, the best results were achieved with the `BAAI/bge-small-en` model, which outperformed both sparse and late interaction models.
+
+
+
+It's important to note that ColBERT has not been trained on BeIR datasets, making its performance fully out of domain. Nevertheless, the `all-MiniLM-L6-v2` [training dataset](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2#training-data) also lacks any BeIR data, yet it still performs remarkably well.
+
+
+
+## Comparative Analysis of Dense vs. Late Interaction Models
+
+
+
+The retrieval quality speaks for itself, but there are other important factors to consider.
+
+
+
+The traditional dense embedding models we tested are less complex than late interaction or sparse models. With fewer parameters, these models are expected to be faster during inference and more cost-effective to maintain. Below is a comparison of the models used in the experiments:
+
+
+
+| Model | Number of parameters |
+
+|------------------------------|----------------------|
+
+| `prithivida/Splade_PP_en_v1` | 109,514,298 |
+
+| `colbert-ir/colbertv2.0` | 109,580,544 |
+
+| `BAAI/bge-small-en` | 33,360,000 |
+
+| `all-MiniLM-L6-v2` | 22,713,216 |
+
+
+
+One argument against using output token embeddings is the increased storage requirements compared to ColBERT-like models. For instance, the `all-MiniLM-L6-v2` model produces 384-dimensional output token embeddings, which is three times more than the 128-dimensional embeddings generated by ColBERT-like models. This increase not only leads to higher memory usage but also impacts the computational cost of retrieval, as calculating distances takes more time. Mitigating this issue through vector compression would make a lot of sense.
+
+
+
+## Exploring Quantization for Multi-Vector Representations
+
+
+
+Binary quantization is generally more effective for high-dimensional vectors, making the `all-MiniLM-L6-v2` model, with its relatively low-dimensional outputs, less ideal for this approach. However, scalar quantization appeared to be a viable alternative. The table below summarizes the impact of quantization on retrieval quality.
+
+
+
+
+
+
+
+
+
+
+| Dataset  | Model              | Experiment                      | NDCG@10 |
+
+|----------|--------------------|---------------------------------|---------|
+
+| SciFact  | `all-MiniLM-L6-v2` | output token embeddings         | 0.70724 |
+
+| SciFact  | `all-MiniLM-L6-v2` | output token embeddings (uint8) | 0.70297 |
+
+| NFCorpus | `all-MiniLM-L6-v2` | output token embeddings         | 0.35779 |
+
+| NFCorpus | `all-MiniLM-L6-v2` | output token embeddings (uint8) | 0.35572 |
+
+
+
+
+
+
+
+
+
+It’s important to note that quantization doesn’t always preserve retrieval quality at the same level, but in this case the impact of scalar quantization on retrieval performance is negligible, while the memory savings are substantial.
+
+
+
+We managed to maintain the original quality while using four times less memory. Additionally, a quantized vector requires 384 bytes, compared to ColBERT’s 512 bytes. This results in a 25% reduction in memory usage, with retrieval quality remaining nearly unchanged.
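+
+
+
+As a minimal sketch of how the `uint8` variant above might be configured (the collection name is hypothetical, and this assumes a Qdrant version and client that support per-vector quantization settings together with multi-vector configs), scalar quantization can be attached directly to the multi-vector field:
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""http://localhost:6333"")
+
+
+
+client.create_collection(
+
+    collection_name=""token-embeddings-uint8"",
+
+    vectors_config={
+
+        ""output-token-embeddings"": models.VectorParams(
+
+            size=384,
+
+            distance=models.Distance.COSINE,
+
+            multivector_config=models.MultiVectorConfig(
+
+                comparator=models.MultiVectorComparator.MAX_SIM
+
+            ),
+
+            # Store each component as int8, cutting memory usage roughly 4x.
+
+            quantization_config=models.ScalarQuantization(
+
+                scalar=models.ScalarQuantizationConfig(
+
+                    type=models.ScalarType.INT8,
+
+                    always_ram=True,
+
+                )
+
+            ),
+
+        ),
+
+    },
+
+)
+
+```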
+
+
+
+## Practical Application: Enhancing Retrieval with Dense Models
+
+
+
+If you’re using one of the sentence transformer models, the output token embeddings are calculated by default. While a single vector representation is more efficient in terms of storage and computation, there’s no need to discard the output token embeddings. According to our experiments, these embeddings can significantly enhance retrieval quality. You can store both the single vector and the output token embeddings in Qdrant, using the single vector for the initial retrieval step and then reranking the results with the output token embeddings.
+
+
+
+**Figure 5:** A single model pipeline that relies solely on the output token embeddings for reranking.
+
+
+
+![Single model reranking](/articles_data/late-interaction-models/single-model-reranking.png)
+
+
+
+To demonstrate this concept, we implemented a simple reranking pipeline in Qdrant. This pipeline uses a dense embedding model for the initial oversampled retrieval and then relies solely on the output token embeddings for the reranking step.
+
+
+
+### Single Model Retrieval and Reranking Benchmarks
+
+
+
+Our tests focused on using the same model for both retrieval and reranking. The reported metric is NDCG@10. In all tests, we applied an oversampling factor of 5x, meaning the retrieval step returned 50 results, which were then narrowed down to 10 during the reranking step. Below are the results for some of the BeIR datasets:
+
+
+
+
+
+
+
+
+
+
+| Dataset     | all-MiniLM-L6-v2 (dense embeddings only) | all-MiniLM-L6-v2 (dense + reranking) | BAAI/bge-small-en (dense embeddings only) | BAAI/bge-small-en (dense + reranking) |
+
+|-------------|------------------------------------------|--------------------------------------|-------------------------------------------|---------------------------------------|
+
+| SciFact     | 0.64508 | 0.70293 | 0.68213 | 0.73053 |
+
+| NFCorpus    | 0.31594 | 0.34297 | 0.29696 | 0.35996 |
+
+| ArguAna     | 0.50167 | 0.45378 | 0.58857 | 0.57302 |
+
+| Touche-2020 | 0.16904 | 0.19693 | 0.13055 | 0.19821 |
+
+| TREC-COVID  | 0.47246 | 0.6379  | 0.45788 | 0.53539 |
+
+| FiQA-2018   | 0.36867 | 0.41587 | 0.31091 | 0.39067 |
+
+
+
+
+
+
+
+
+
+The source code for the benchmark is publicly available, and [you can find it in the repository of the `beir-qdrant` package](https://github.com/kacperlukawski/beir-qdrant/blob/main/examples/retrieval/search/evaluate_reranking.py).
+
+
+
+Overall, adding a reranking step using the same model typically improves retrieval quality. However, the quality of various late interaction models is [often reported based on their reranking performance when BM25 is used for the initial retrieval](https://huggingface.co/mixedbread-ai/mxbai-colbert-large-v1#1-reranking-performance). This experiment aimed to demonstrate how a single model can be effectively used for both retrieval and reranking, and the results are quite promising.
+
+
+
+Now, let's explore how to implement this using the new Query API introduced in Qdrant 1.10.
+
+
+
+## Setting Up Qdrant for Late Interaction
+
+
+
+The new Query API in Qdrant 1.10 enables the construction of even more complex retrieval pipelines. We can use the single vector created after pooling for the initial retrieval step and then rerank the results using the output token embeddings.
+
+
+
+Assuming the collection is named `my-collection` and is configured to store two named vectors: `dense-vector` and `output-token-embeddings`, here’s how such a collection could be created in Qdrant:
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""http://localhost:6333"")
+
+
+
+client.create_collection(
+
+ collection_name=""my-collection"",
+
+ vectors_config={
+
+ ""dense-vector"": models.VectorParams(
+
+ size=384,
+
+ distance=models.Distance.COSINE,
+
+ ),
+
+ ""output-token-embeddings"": models.VectorParams(
+
+ size=384,
+
+ distance=models.Distance.COSINE,
+
+ multivector_config=models.MultiVectorConfig(
+
+ comparator=models.MultiVectorComparator.MAX_SIM
+
+ ),
+
+ ),
+
+ }
+
+)
+
+```
+
+
+
+Both vectors are of the same size since they are produced by the same `all-MiniLM-L6-v2` model.
+
+
+
+```python
+
+from sentence_transformers import SentenceTransformer
+
+
+
+model = SentenceTransformer(""all-MiniLM-L6-v2"")
+
+```
+
+
+
+Now, instead of using the search API with just a single dense vector, we can create a reranking pipeline. First, we retrieve 50 results using the dense vector, and then we rerank them using the output token embeddings to obtain the top 10 results.
+
+
+
+```python
+
+query = ""What else can be done with just all-MiniLM-L6-v2 model?""
+
+
+
+client.query_points(
+
+ collection_name=""my-collection"",
+
+ prefetch=[
+
+ # Prefetch the dense embeddings of the top-50 documents
+
+ models.Prefetch(
+
+ query=model.encode(query).tolist(),
+
+ using=""dense-vector"",
+
+ limit=50,
+
+ )
+
+ ],
+
+ # Rerank the top-50 documents retrieved by the dense embedding model
+
+ # and return just the top-10. Please note we call the same model, but
+
+ # we ask for the token embeddings by setting the output_value parameter.
+
+ query=model.encode(query, output_value=""token_embeddings"").tolist(),
+
+ using=""output-token-embeddings"",
+
+ limit=10,
+
+)
+
+```
+
+## Try the Experiment Yourself
+
+
+
+In a real-world scenario, you might take it a step further by first calculating the token embeddings and then performing pooling to obtain the single vector representation. This approach allows you to complete everything in a single pass.
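+
+
+
+As a rough sketch of that idea (assuming the same `all-MiniLM-L6-v2` model, and that its sentence embedding is obtained by mean pooling followed by L2 normalization), the token embeddings can be computed once and the single dense vector derived from them:
+
+
+
+```python
+
+import torch
+
+from sentence_transformers import SentenceTransformer
+
+
+
+model = SentenceTransformer(""all-MiniLM-L6-v2"")
+
+document = ""Late interaction models keep one embedding per token.""
+
+
+
+# Compute the output token embeddings once...
+
+token_embeddings = model.encode(
+
+    document, output_value=""token_embeddings"", convert_to_tensor=True
+
+)
+
+
+
+# ...and derive the single dense vector from them by mean pooling and normalization.
+
+dense_vector = torch.nn.functional.normalize(
+
+    token_embeddings.mean(dim=0), p=2, dim=0
+
+)
+
+
+
+# Both representations can now be upserted into the two named vectors of the collection.
+
+```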
+
+
+
+The simplest way to start experimenting with building complex reranking pipelines in Qdrant is by using the forever-free cluster on [Qdrant Cloud](https://cloud.qdrant.io/) and reading [Qdrant's documentation](/documentation/).
+
+
+
+The [source code for these experiments is open-source](https://github.com/kacperlukawski/beir-qdrant/blob/main/examples/retrieval/search/evaluate_all_exact.py) and uses [`beir-qdrant`](https://github.com/kacperlukawski/beir-qdrant), an integration of Qdrant with the [BeIR library](https://github.com/beir-cellar/beir).
+
+
+
+## Future Directions and Research Opportunities
+
+
+
+The initial experiments using output token embeddings in the retrieval process have yielded promising results. However, we plan to conduct further benchmarks to validate these findings and explore the incorporation of sparse methods for the initial retrieval. Additionally, we aim to investigate the impact of quantization on multi-vector representations and its effects on retrieval quality. Finally, we will assess retrieval speed, a crucial factor for many applications.",articles/late-interaction-models.md
+"---
+
+title: Metric Learning Tips & Tricks
+
+short_description: How to train an object matching model and serve it in production.
+
+description: Practical recommendations on how to train a matching model and serve it in production. Even with no labeled data.
+
+# external_link: https://vasnetsov93.medium.com/metric-learning-tips-n-tricks-2e4cfee6b75b
+
+social_preview_image: /articles_data/metric-learning-tips/preview/social_preview.jpg
+
+preview_dir: /articles_data/metric-learning-tips/preview
+
+small_preview_image: /articles_data/metric-learning-tips/scatter-graph.svg
+
+weight: 20
+
+author: Andrei Vasnetsov
+
+author_link: https://blog.vasnetsov.com/
+
+date: 2021-05-15T10:18:00.000Z
+
+# aliases: [ /articles/metric-learning-tips/ ]
+
+---
+
+
+
+
+
+## How to train an object matching model with no labeled data and use it in production
+
+
+
+
+
+Currently, most machine-learning-related business cases are solved as classification problems.
+
+Classification algorithms are so well studied in practice that even if the original problem is not directly a classification task, it is usually decomposed or approximately converted into one.
+
+
+
+However, despite its simplicity, the classification task has requirements that could complicate its production integration and scaling.
+
+For example, it requires a fixed number of classes, where each class should have a sufficient number of training samples.
+
+
+
+In this article, I will describe how we overcome these limitations by switching to metric learning.
+
+Using the example of matching job positions and candidates, I will show how to train a metric learning model with no manually labeled data, how to estimate prediction confidence, and how to serve metric learning in production.
+
+
+
+
+
+## What is metric learning and why use it?
+
+
+
+According to Wikipedia, metric learning is the task of learning a distance function over objects.
+
+In practice, it means that we can train a model that returns a number for any pair of given objects.
+
+This number should represent a degree or score of similarity between those objects.
+
+For example, objects with a score of 0.9 would be more similar than objects with a score of 0.5.
+
+Actual scores and their direction could vary among different implementations.
+
+
+
+In practice, there are two main approaches to metric learning and two corresponding types of NN architectures.
+
+The first is the interaction-based approach, which first builds local interactions (i.e., local matching signals) between two objects. Deep neural networks learn hierarchical interaction patterns for matching.
+
+Examples of neural network architectures include MV-LSTM, ARC-II, and MatchPyramid.
+
+
+
+![MV-LSTM, example of interaction-based model](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/mv_lstm.png)
+
+> MV-LSTM, example of interaction-based model, [Shengxian Wan et al.
+
+](https://www.researchgate.net/figure/Illustration-of-MV-LSTM-S-X-and-S-Y-are-the-in_fig1_285271115) via Researchgate
+
+
+
+The second is the representation-based approach.
+
+In this case, the distance function is composed of 2 components:
+
+the Encoder, which transforms an object into an embedded representation - usually a large floating-point vector - and the Comparator, which takes the embeddings of a pair of objects from the Encoder and calculates their similarity.
+
+The most well-known example of this embedding representation is Word2Vec.
+
+
+
+Examples of neural network architectures also include DSSM, C-DSSM, and ARC-I.
+
+
+
+The Comparator is usually a very simple function that can be calculated very quickly.
+
+It might be cosine similarity or even a dot product.
+
+This two-stage schema allows the complex calculations to be performed only once per object.
+
+Once objects are transformed, the Comparator can calculate their similarity independently of the Encoder, and much more quickly.
+
+For more convenience, embeddings can be placed into specialized storages or vector search engines.
+
+These search engines let you manage embeddings through an API and perform searches and other operations with vectors.
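+
+
+
+For intuition, a minimal sketch of such a Comparator (cosine similarity over pre-computed embeddings) is just a few lines:
+
+
+
+```python
+
+import numpy as np
+
+
+
+def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
+
+    # The Comparator: a cheap function applied to embeddings the Encoder already produced.
+
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+```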
+
+
+
+![C-DSSM, example of representation-based model](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/cdssm.png)
+
+> C-DSSM, example of representation-based model, [Xue Li et al.](https://arxiv.org/abs/1901.10710v2) via arXiv
+
+
+
+Pre-trained NNs can also be used. The output of the second-to-last layer could work as an embedded representation.
+
+Further in this article, I will focus on the representation-based approach, as it proved to be more flexible and faster.
+
+
+
+So what are the advantages of using metric learning compared to classification?
+
+The Object Encoder does not assume a fixed number of classes.
+
+So if you can't split your objects into classes,
+
+if the number of classes is too high, or you suspect that it could grow in the future - consider using metric learning.
+
+
+
+In our case, the business goal was to find suitable vacancies for candidates who specify the title of the desired position.
+
+To solve this, we used to apply a classifier to determine the job category of the vacancy and the candidate.
+
+But this solution was limited to only a few hundred categories.
+
+Candidates were complaining that they couldn't find the right category for them.
+
+Training the classifier for new categories would take too long and require new training data for each new category.
+
+Switching to metric learning allowed us to overcome these limitations: the resulting solution could compare any pair of position descriptions, even if we didn't have a reference for that category yet.
+
+
+
+![T-SNE with job samples](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/embeddings.png)
+
+> T-SNE with job samples, Image by Author. Play with [Embedding Projector](https://projector.tensorflow.org/?config=https://gist.githubusercontent.com/generall/7e712425e3b340c2c4dbc1a29f515d91/raw/b45b2b6f6c1d5ab3d3363c50805f3834a85c8879/config.json) yourself.
+
+
+
+With metric learning, we learn not a concrete job type but how to match job descriptions from a candidate's CV and a vacancy.
+
+Secondly, with metric learning, it is easy to add more reference occupations without model retraining.
+
+We can then add the reference to a vector search engine.
+
+The next time we match occupations, this new reference vector will be searchable.
+
+
+
+
+
+## Data for metric learning
+
+
+
+Unlike classifiers, metric learning training does not require specific class labels.
+
+All that is required are examples of similar and dissimilar objects.
+
+We would call them positive and negative samples.
+
+
+
+At the same time, it could be a relative similarity between a pair of objects.
+
+For example, twins look more alike to each other than a pair of random people.
+
+And random people are more similar to each other than a man and a cat.
+
+A model can use such relative examples for learning.
+
+
+
+The good news is that the division into classes is only a special case of determining similarity.
+
+To use such datasets, it is enough to declare samples from one class as positive and samples from another class as negative.
+
+In this way, it is possible to combine several datasets with mismatched classes into one generalized dataset for metric learning, as shown in the sketch below.
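+
+
+
+A minimal sketch of that conversion (with hypothetical `samples` and `labels` lists) could look like this:
+
+
+
+```python
+
+import random
+
+
+
+def make_pairs(samples, labels, num_pairs=1000):
+
+    # Turn any class-labelled dataset into (anchor, other, is_similar) training pairs:
+
+    # same class -> positive pair, different class -> negative pair.
+
+    pairs = []
+
+    for _ in range(num_pairs):
+
+        i, j = random.randrange(len(samples)), random.randrange(len(samples))
+
+        pairs.append((samples[i], samples[j], int(labels[i] == labels[j])))
+
+    return pairs
+
+```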
+
+
+
+Datasets with division into classes are not the only ones suitable for extracting positive and negative examples.
+
+If, for example, there are additional features in the description of the object, the value of these features can also be used as a similarity factor.
+
+It may not be as explicit as class membership, but the relative similarity is also suitable for learning.
+
+
+
+In the case of job descriptions, there are many ontologies of occupations, which we were able to combine into a single dataset thanks to this approach.
+
+We even went a step further and used identical job titles to find similar descriptions.
+
+
+
+As a result, we got a self-supervised universal dataset that did not require any manual labeling.
+
+
+
+Unfortunately, this universality prevents some techniques from being applied during training.
+
+Next, I will describe how to overcome this disadvantage.
+
+
+
+## Training the model
+
+
+
+There are several ways to train a metric learning model.
+
+Among the most popular is the use of Triplet or Contrastive loss functions, but I will not go deep into them in this article.
+
+However, I will tell you about one interesting trick that helped us work with unified training examples.
+
+
+
+One of the most important practices to efficiently train the metric learning model is hard negative mining.
+
+This technique aims to include negative samples on which the model gave the worst predictions during the last training epoch.
+
+Most articles that describe this technique assume that training data consists of many small classes (in most cases it is people's faces).
+
+With data like this, it is easy to find bad samples - if two samples from different classes have a high similarity score, we can use it as a negative sample.
+
+But we had no such classes in our data; the only thing we had was occupation pairs assumed to be similar in some way.
+
+We could not guarantee that a given occupation had no better match than the one it was paired with.
+
+That is why we can't use hard negative mining for our model.
+
+
+
+
+
+![Loss variations](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/losses.png)
+
+> [Alfonso Medela et al.](https://arxiv.org/abs/1905.10675) via arXiv
+
+
+
+
+
+To compensate for this limitation we can try to increase the number of random (weak) negative samples.
+
+One way to achieve this is to train the model longer, so it will see more samples by the end of the training.
+
+But we found a better solution in adjusting our loss function.
+
+In a regular implementation of Triplet or Contrastive loss, each positive pair is compared with one or a few negative samples.
+
+What we did is allow pair comparison across the whole batch.
+
+That means the loss function penalizes any pair of random objects whose score exceeds any of the positive scores in the batch.
+
+This extension gives `~ N * B^2` comparisons, where `B` is the batch size and `N` is the number of batches.
+
+That is much bigger than the `~ N * B` comparisons in regular triplet loss.
+
+This means that increasing the size of the batch significantly increases the number of negative comparisons, and therefore should improve the model performance.
+
+We were able to observe this dependence in our experiments.
+
+We later found a similar idea in the paper [Supervised Contrastive Learning](https://arxiv.org/abs/2004.11362).
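+
+
+
+A simplified sketch of this in-batch comparison (not the exact loss we used, just an illustration in PyTorch, assuming a batch of anchor and positive embeddings where matching rows form the positive pairs):
+
+
+
+```python
+
+import torch
+
+import torch.nn.functional as F
+
+
+
+def in_batch_contrastive_loss(anchor_emb, positive_emb, margin=0.2):
+
+    # Cosine similarity between every anchor and every positive in the batch: (B, B).
+
+    scores = F.normalize(anchor_emb, dim=1) @ F.normalize(positive_emb, dim=1).T
+
+    positive_scores = scores.diag().unsqueeze(1)  # (B, 1), scores of the true pairs
+
+    # Penalize any in-batch negative whose score comes within `margin` of the true pair score.
+
+    mask = ~torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
+
+    return F.relu(scores - positive_scores + margin)[mask].mean()
+
+```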
+
+
+
+
+
+## Model confidence
+
+
+
+In real life, it is often necessary to know how confident the model was in its prediction,
+
+so we can decide whether manual adjustment or validation of the result is required.
+
+
+
+With conventional classification, it is easy to understand by scores how confident the model is in the result.
+
+If the probability values of different classes are close to each other, the model is not confident.
+
+If, on the contrary, the most probable class scores much higher than the others, then the model is confident.
+
+
+
+At first glance, this cannot be applied to metric learning.
+
+Even if the predicted object similarity score is small, it might only mean that the reference set has no proper objects to compare with.
+
+Conversely, the model might assign a large similarity score to garbage objects.
+
+
+
+Fortunately, we found a small modification to the embedding generator, which allows us to define confidence in the same way as it is done in conventional classifiers with a Softmax activation function.
+
+The modification consists of building an embedding as a combination of feature groups.
+
+Each feature group is presented as a one-hot encoded sub-vector in the embedding.
+
+If the model can confidently predict the feature value - the corresponding sub-vector will have a high absolute value in some of its elements.
+
+For a more intuitive understanding, I recommend thinking about embeddings not as points in space, but as a set of binary features.
+
+
+
+To implement this modification and form proper feature groups we would need to change a regular linear output layer to a concatenation of several Softmax layers.
+
+Each softmax component would represent an independent feature and force the neural network to learn them.
+
+
+
+Let's say, for example, that we have 4 softmax components with 128 elements each.
+
+Every such component could be roughly imagined as a one-hot-encoded number in the range of 0 to 127.
+
+Thus, the resulting vector will represent one of `128^4` possible combinations.
+
+If the trained model is good enough, you can even try to interpret the values of singular features individually.
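+
+
+
+A minimal PyTorch sketch of such an output head (an illustration, not our exact architecture) might look like this:
+
+
+
+```python
+
+import torch
+
+from torch import nn
+
+
+
+class SoftmaxFeatureHead(nn.Module):
+
+    # Replaces a plain linear output layer with a concatenation of softmax groups.
+
+    def __init__(self, hidden_dim: int, num_groups: int = 4, group_size: int = 128):
+
+        super().__init__()
+
+        self.projections = nn.ModuleList(
+
+            [nn.Linear(hidden_dim, group_size) for _ in range(num_groups)]
+
+        )
+
+
+
+    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
+
+        # Each group acts as an independent feature; concatenation yields the embedding
+
+        # (4 x 128 = 512 dimensions in this example).
+
+        return torch.cat([proj(hidden).softmax(dim=-1) for proj in self.projections], dim=-1)
+
+```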
+
+
+
+
+
+![Softmax feature embeddings](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/feature_embedding.png)
+
+> Softmax feature embeddings, Image by Author.
+
+
+
+
+
+## Neural rules
+
+
+
+Machine learning models rarely train to 100% accuracy.
+
+In a conventional classifier, errors can only be eliminated by modifying and repeating the training process.
+
+Metric learning, however, is more flexible in this matter and lets you introduce additional steps to correct the errors of an already trained model.
+
+
+
+A common error of a metric learning model is declaring objects close when in reality they are not.
+
+To correct this kind of error, we introduce exclusion rules.
+
+
+
+Each rule consists of 2 object anchors encoded into the vector space.
+
+If the target object falls into the effect area of one of the anchors, it triggers the rule, which excludes all objects in the second anchor's area from the prediction result.
+
+
+
+![Exclusion rules](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/exclusion_rule.png)
+
+> Neural exclusion rules, Image by Author.
+
+
+
+The convenience of working with embeddings is that regardless of the number of rules,
+
+you only need to perform the encoding once per object.
+
+Then, to find a suitable rule, it is enough to compare the target object's embedding with the pre-calculated embeddings of the rules' anchors.
+
+In practice, this translates into just one additional query to the vector search engine.
+
+
+
+
+
+## Vector search in production
+
+
+
+When implementing a metric learning model in production, the question arises about the storage and management of vectors.
+
+It should be easy to add new vectors if new job descriptions appear in the service.
+
+
+
+In our case, we also needed to apply additional conditions to the search.
+
+We needed to filter, for example, by the location of candidates and the level of language proficiency.
+
+
+
+We did not find a ready-made tool for such vector management, so we created [Qdrant](https://github.com/qdrant/qdrant) - an open-source vector search engine.
+
+
+
+It allows you to add and delete vectors with a simple API, independent of the programming language you are using.
+
+You can also assign a payload to vectors.
+
+This payload allows additional filtering during the search request.
+
+
+
+Qdrant has a pre-built Docker image, and starting to work with it is as simple as running
+
+
+
+```bash
+
+docker run -p 6333:6333 qdrant/qdrant
+
+```
+
+
+
+Documentation with examples can be found [here](https://api.qdrant.tech/api-reference).
+
+
+
+
+
+## Conclusion
+
+
+
+In this article, I have shown how metric learning can be more scalable and flexible than classification models.
+
+I suggest trying similar approaches in your own tasks - it might be matching similar texts, images, or audio data.
+
+With the existing variety of pre-trained neural networks and a vector search engine, it is easy to build your own metric learning-based application.
+
+
+
+
+",articles/metric-learning-tips.md
+"---
+
+title: Qdrant 0.10 released
+
+short_description: A short review of all the features introduced in Qdrant 0.10
+
+description: Qdrant 0.10 brings a lot of changes. Check out what's new!
+
+preview_dir: /articles_data/qdrant-0-10-release/preview
+
+small_preview_image: /articles_data/qdrant-0-10-release/new-svgrepo-com.svg
+
+social_preview_image: /articles_data/qdrant-0-10-release/preview/social_preview.jpg
+
+weight: 70
+
+author: Kacper Łukawski
+
+author_link: https://medium.com/@lukawskikacper
+
+date: 2022-09-19T13:30:00+02:00
+
+draft: false
+
+---
+
+
+
+[Qdrant 0.10 is a new version](https://github.com/qdrant/qdrant/releases/tag/v0.10.0) that brings a lot of performance
+
+improvements, but also some new features which were heavily requested by our users. Here is an overview of what has changed.
+
+
+
+## Storing multiple vectors per object
+
+
+
+Previously, if you wanted to use semantic search with multiple vectors per object, you had to create separate collections
+
+for each vector type, even if the vectors shared some other attributes in the payload. With Qdrant 0.10, you can
+
+now store all of these vectors together in the same collection, which allows you to share a single copy of the payload.
+
+This makes it easier to use semantic search with multiple vector types, and reduces the amount of work you need to do to
+
+set up your collections.
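+
+
+
+With a recent Python client, such a multi-vector collection might be configured roughly like this (collection and vector names are hypothetical):
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""http://localhost:6333"")
+
+
+
+# One collection, two named vectors sharing a single copy of the payload.
+
+client.create_collection(
+
+    collection_name=""products"",
+
+    vectors_config={
+
+        ""title"": models.VectorParams(size=384, distance=models.Distance.COSINE),
+
+        ""image"": models.VectorParams(size=512, distance=models.Distance.COSINE),
+
+    },
+
+)
+
+```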
+
+
+
+## Batch vector search
+
+
+
+Previously, you had to send multiple requests to the Qdrant API to perform multiple non-related tasks. However, this
+
+can cause significant network overhead and slow down the process, especially if you have a poor connection speed.
+
+Fortunately, the [new batch search feature](/documentation/concepts/search/#batch-search-api) allows
+
+you to avoid this issue. With just one API call, Qdrant will handle multiple search requests in the most efficient way
+
+possible. This means that you can perform multiple tasks simultaneously without having to worry about network overhead
+
+or slow performance.
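+
+
+
+As a rough sketch with the Python client (assuming a hypothetical collection named `articles` with plain 4-dimensional vectors), several unrelated searches can be sent in one call:
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""http://localhost:6333"")
+
+
+
+results = client.search_batch(
+
+    collection_name=""articles"",
+
+    requests=[
+
+        models.SearchRequest(vector=[0.1, 0.2, 0.3, 0.4], limit=5),
+
+        models.SearchRequest(vector=[0.9, 0.8, 0.7, 0.6], limit=5, with_payload=True),
+
+    ],
+
+)
+
+```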
+
+
+
+## Built-in ARM support
+
+
+
+To make our application accessible to ARM users, we have compiled it specifically for that platform. If it is not
+
+compiled for ARM, the device will have to emulate it, which can slow down performance. To ensure the best possible
+
+experience for ARM users, we have created Docker images specifically for that platform. Keep in mind that using
+
+a limited set of processor instructions may affect the performance of your vector search. Therefore, we have tested
+
+both ARM and non-ARM architectures using similar setups to understand the potential impact on performance.
+
+
+
+## Full-text filtering
+
+
+
+Qdrant is a vector database that allows you to quickly search for the nearest neighbors. However, you may need to apply
+
+additional filters on top of the semantic search. Up until version 0.10, Qdrant only supported keyword filters. With the
+
+release of Qdrant 0.10, [you can now use full-text filters](/documentation/concepts/filtering/#full-text-match)
+
+as well. This new filter type can be used on its own or in combination with other filter types to provide even more
+
+flexibility in your searches.
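+
+
+
+As a hedged sketch with the Python client (collection and field names are hypothetical), you would first index the payload field for full-text search, then combine the text condition with a vector query:
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""http://localhost:6333"")
+
+
+
+# Build a full-text index over a payload field...
+
+client.create_payload_index(
+
+    collection_name=""articles"",
+
+    field_name=""description"",
+
+    field_schema=models.TextIndexParams(
+
+        type=""text"",
+
+        tokenizer=models.TokenizerType.WORD,
+
+        lowercase=True,
+
+    ),
+
+)
+
+
+
+# ...and use it as a filter on top of the semantic search.
+
+hits = client.search(
+
+    collection_name=""articles"",
+
+    query_vector=[0.1, 0.2, 0.3, 0.4],
+
+    query_filter=models.Filter(
+
+        must=[
+
+            models.FieldCondition(
+
+                key=""description"",
+
+                match=models.MatchText(text=""vector search""),
+
+            )
+
+        ]
+
+    ),
+
+    limit=5,
+
+)
+
+```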
+",articles/qdrant-0-10-release.md
+"---
+
+title: ""Using LangChain for Question Answering with Qdrant""
+
+short_description: ""Applications with Large Language Models can be developed fast with modern tools. Here is how!""
+
+description: ""We combined LangChain, a pre-trained LLM from OpenAI, SentenceTransformers & Qdrant to create a question answering system with just a few lines of code. Learn more!""
+
+social_preview_image: /articles_data/langchain-integration/social_preview.png
+
+small_preview_image: /articles_data/langchain-integration/chain.svg
+
+preview_dir: /articles_data/langchain-integration/preview
+
+weight: 6
+
+author: Kacper Łukawski
+
+author_link: https://medium.com/@lukawskikacper
+
+date: 2023-01-31T10:53:20+01:00
+
+draft: false
+
+keywords:
+
+ - vector search
+
+ - langchain
+
+ - llm
+
+ - large language models
+
+ - question answering
+
+ - openai
+
+ - embeddings
+
+---
+
+
+
+# Streamlining Question Answering: Simplifying Integration with LangChain and Qdrant
+
+
+
+Building applications with Large Language Models doesn't have to be complicated. A lot has been going on recently to simplify the development,
+
+so you can utilize already pre-trained models and support even complex pipelines with a few lines of code. [LangChain](https://langchain.readthedocs.io)
+
+provides unified interfaces to different libraries, so you can avoid writing boilerplate code and focus on the value you want to bring.
+
+
+
+## Why Use Qdrant for Question Answering with LangChain?
+
+
+
+It has been reported millions of times recently, but let's say that again. ChatGPT-like models struggle with generating factual statements if no context
+
+is provided. They have some general knowledge but cannot be relied on to produce a valid answer consistently. Thus, it is better to provide some facts we
+
+know are true, so the model can just choose the valid parts and extract them from all the provided contextual data to give a comprehensive answer. [A vector database,
+
+such as Qdrant](https://qdrant.tech/), is of great help here, as its ability to perform a [semantic search](https://qdrant.tech/documentation/tutorials/search-beginners/) over a huge knowledge base is crucial to preselect some possibly valid
+
+documents, so they can be provided into the LLM. That's also one of the **chains** implemented in [LangChain](https://qdrant.tech/documentation/frameworks/langchain/), which is called `VectorDBQA`. And Qdrant got
+
+integrated with the library, so it might be used to build it effortlessly.
+
+
+
+### The Two-Model Approach
+
+
+
+Surprisingly enough, there will be two models required to set things up. First of all, we need an embedding model that will convert the set of facts into
+
+vectors, and store those into Qdrant. That's an identical process to any other semantic search application. We're going to use one of the
+
+`SentenceTransformers` models, so it can be hosted locally. The embeddings created by that model will be put into Qdrant and used to retrieve the most
+
+similar documents, given the query.
+
+
+
+However, when we receive a query, there are two steps involved. First of all, we ask Qdrant to provide the most relevant documents and simply combine all
+
+of them into a single text. Then, we build a prompt to the LLM (in our case [OpenAI](https://openai.com/)), including those documents as a context, of course together with the
+
+question asked. So the input to the LLM looks like the following:
+
+
+
+```text
+
+Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
+
+It's as certain as 2 + 2 = 4
+
+...
+
+
+
+Question: How much is 2 + 2?
+
+Helpful Answer:
+
+```
+
+
+
+There might be several context documents combined, and it is solely up to the LLM to choose the right piece of content. But our expectation is that the model should
+
+respond with just `4`.
+
+
+
+## Why do we need two different models?
+
+Each solves a different task. The first model performs feature extraction by converting the text into vectors, while
+
+the second one helps in text generation or summarization. Disclaimer: This is not the only way to solve that task with LangChain. Such a chain is called `stuff`
+
+in the library nomenclature.
+
+
+
+![](/articles_data/langchain-integration/flow-diagram.png)
+
+
+
+Enough theory! This sounds like a pretty complex application, as it involves several systems. But with LangChain, it might be implemented in just a few lines
+
+of code, thanks to the recent integration with [Qdrant](https://qdrant.tech/). We're not even going to work directly with `QdrantClient`, as everything is already done in the background
+
+by LangChain. If you want to get into the source code right away, all the processing is available as a
+
+[Google Colab notebook](https://colab.research.google.com/drive/19RxxkZdnq_YqBH5kBV10Rt0Rax-kminD?usp=sharing).
+
+
+
+## How to Implement Question Answering with LangChain and Qdrant
+
+
+
+### Step 1: Configuration
+
+
+
+A journey of a thousand miles begins with a single step, in our case with the configuration of all the services. We'll be using [Qdrant Cloud](https://cloud.qdrant.io),
+
+so we need an API key. The same is for OpenAI - the API key has to be obtained from their website.
+
+
+
+![](/articles_data/langchain-integration/code-configuration.png)
+
+
+
+### Step 2: Building the knowledge base
+
+
+
+We also need some facts from which the answers will be generated. There are plenty of public datasets available, and
+
+[Natural Questions](https://ai.google.com/research/NaturalQuestions/visualization) is one of them. It consists of the whole HTML content of the websites they were
+
+scraped from. That means we need some preprocessing to extract plain text content. As a result, we’re going to have two lists of strings - one for questions and
+
+the other one for the answers.
+
+
+
+The answers have to be vectorized with the first of our models. The `sentence-transformers/all-mpnet-base-v2` is one of the possibilities, but there are some
+
+other options available. LangChain will handle that part of the process in a single function call.
+
+
+
+![](/articles_data/langchain-integration/code-qdrant.png)
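+
+
+
+If the screenshot is hard to copy from, the step boils down to something like the following sketch (the exact connection arguments depend on your LangChain version; `answers` stands for the list of plain-text passages, and the Qdrant Cloud URL, API key, and collection name are placeholders):
+
+
+
+```python
+
+from langchain.embeddings import HuggingFaceEmbeddings
+
+from langchain.vectorstores import Qdrant
+
+
+
+answers = [""<plain text of the first answer>"", ""<plain text of the second answer>""]
+
+embeddings = HuggingFaceEmbeddings(model_name=""sentence-transformers/all-mpnet-base-v2"")
+
+
+
+# Embed all the answers and upload them to Qdrant in a single call.
+
+doc_store = Qdrant.from_texts(
+
+    answers,
+
+    embeddings,
+
+    url=""https://your-cluster.cloud.qdrant.io"",
+
+    api_key=""your-qdrant-api-key"",
+
+    collection_name=""natural-questions"",
+
+)
+
+```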
+
+
+
+### Step 3: Setting up QA with Qdrant in a loop
+
+
+
+`VectorDBQA` is a chain that performs the process described above. So it, first of all, loads some facts from Qdrant and then feeds them into OpenAI LLM which
+
+should analyze them to find the answer to a given question. The only last thing to do before using it is to put things together, also with a single function call.
+
+
+
+![](/articles_data/langchain-integration/code-vectordbqa.png)
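+
+
+
+In code, that single call might look roughly like this (a sketch against the LangChain API of that time; newer releases replace `VectorDBQA` with `RetrievalQA`, and `doc_store` is the Qdrant store built in the previous step):
+
+
+
+```python
+
+from langchain.chains import VectorDBQA
+
+from langchain.llms import OpenAI
+
+
+
+qa = VectorDBQA.from_chain_type(
+
+    llm=OpenAI(),          # reads the OpenAI API key from the environment
+
+    chain_type=""stuff"",
+
+    vectorstore=doc_store,
+
+)
+
+
+
+print(qa.run(""who died from the band faith no more""))
+
+```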
+
+
+
+### Step 4: Testing out the chain
+
+
+
+And that's it! We can put some queries, and LangChain will perform all the required processing to find the answer in the provided context.
+
+
+
+![](/articles_data/langchain-integration/code-answering.png)
+
+
+
+```text
+
+> what kind of music is scott joplin most famous for
+
+ Scott Joplin is most famous for composing ragtime music.
+
+
+
+> who died from the band faith no more
+
+ Chuck Mosley
+
+
+
+> when does maggie come on grey's anatomy
+
+ Maggie first appears in season 10, episode 1, which aired on September 26, 2013.
+
+
+
+> can't take my eyes off you lyrics meaning
+
+ I don't know.
+
+
+
+> who lasted the longest on alone season 2
+
+ David McIntyre lasted the longest on Alone season 2, with a total of 66 days.
+
+```
+
+
+
+The great thing about such a setup is that the knowledge base might be easily extended with some new facts and those will be included in the prompts
+
+sent to the LLM later on. Of course, this assumes that those facts are similar enough to the given question to appear in the top results returned by Qdrant.
+
+
+
+If you want to run the chain on your own, the simplest way to reproduce it is to open the
+
+[Google Colab notebook](https://colab.research.google.com/drive/19RxxkZdnq_YqBH5kBV10Rt0Rax-kminD?usp=sharing).
+",articles/langchain-integration.md
+"---
+
+title: ""Optimizing OpenAI Embeddings: Enhance Efficiency with Qdrant's Binary Quantization""
+
+draft: false
+
+slug: binary-quantization-openai
+
+short_description: Use Qdrant's Binary Quantization to enhance OpenAI embeddings
+
+description: Explore how Qdrant's Binary Quantization can significantly improve the efficiency and performance of OpenAI's Ada-003 embeddings. Learn best practices for real-time search applications.
+
+preview_dir: /articles_data/binary-quantization-openai/preview
+
+preview_image: /articles_data/binary-quantization-openai/Article-Image.png
+
+small_preview_image: /articles_data/binary-quantization-openai/icon.svg
+
+social_preview_image: /articles_data/binary-quantization-openai/preview/social-preview.png
+
+title_preview_image: /articles_data/binary-quantization-openai/preview/preview.webp
+
+
+
+date: 2024-02-21T13:12:08-08:00
+
+author: Nirant Kasliwal
+
+author_link: https://nirantk.com/about/
+
+
+
+featured: false
+
+tags:
+
+ - OpenAI
+
+ - binary quantization
+
+ - embeddings
+
+weight: -130
+
+
+
+aliases: [ /blog/binary-quantization-openai/ ]
+
+---
+
+
+
+OpenAI Ada-003 embeddings are a powerful tool for natural language processing (NLP). However, the size of the embeddings is a challenge, especially with real-time search and retrieval. In this article, we explore how you can use Qdrant's Binary Quantization to enhance the performance and efficiency of OpenAI embeddings.
+
+
+
+In this post, we discuss:
+
+
+
+- The significance of OpenAI embeddings and real-world challenges.
+
+- Qdrant's Binary Quantization, and how it can improve the performance of OpenAI embeddings
+
+- Results of an experiment that highlights improvements in search efficiency and accuracy
+
+- Implications of these findings for real-world applications
+
+- Best practices for leveraging Binary Quantization to enhance OpenAI embeddings
+
+
+
+If you're new to Binary Quantization, consider reading our article which walks you through the concept and [how to use it with Qdrant](/articles/binary-quantization/)
+
+
+
+You can also try out these techniques as described in [Binary Quantization OpenAI](https://github.com/qdrant/examples/blob/openai-3/binary-quantization-openai/README.md), which includes Jupyter notebooks.
+
+
+
+## New OpenAI embeddings: performance and changes
+
+
+
+As the technology of embedding models has advanced, demand has grown. Users are looking more for powerful and efficient text-embedding models. OpenAI's Ada-003 embeddings offer state-of-the-art performance on a wide range of NLP tasks, including those noted in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) and [MIRACL](https://openai.com/blog/new-embedding-models-and-api-updates).
+
+
+
+These models include multilingual support in over 100 languages. The transition from text-embedding-ada-002 to text-embedding-3-large has led to a significant jump in performance scores (from 31.4% to 54.9% on MIRACL).
+
+
+
+#### Matryoshka representation learning
+
+
+
+The new OpenAI models have been trained with a novel approach called ""[Matryoshka Representation Learning](https://aniketrege.github.io/blog/2024/mrl/)"". Developers can set up embeddings of different sizes (number of dimensions). In this post, we use the small and large variants. Developers can select embeddings that balance accuracy and size.
+
+
+
+Here, we show how the accuracy of binary quantization is quite good across different dimensions -- for both models.
+
+
+
+## Enhanced performance and efficiency with binary quantization
+
+
+
+By reducing storage needs, you can scale applications with lower costs. This addresses a critical challenge posed by the original embedding sizes. Binary Quantization also speeds up the search process. It simplifies the complex distance calculations between vectors into more manageable bitwise operations, which supports potentially real-time searches across vast datasets.
+
+
+
+The accompanying graph illustrates the promising accuracy levels achievable with binary quantization across different model sizes, showcasing its practicality without severely compromising on performance. This dual advantage of storage reduction and accelerated search capabilities underscores the transformative potential of Binary Quantization in deploying OpenAI embeddings more effectively across various real-world applications.
+
+
+
+![](/blog/openai/Accuracy_Models.png)
+
+
+
+The efficiency gains from Binary Quantization are as follows:
+
+
+
+- Reduced storage footprint: It helps with large-scale datasets. It also saves on memory, and scales up to 30x at the same cost.
+
+- Enhanced speed of data retrieval: Smaller data sizes generally lead to faster searches.
+
+- Accelerated search process: It simplifies distance calculations between vectors into bitwise operations. This enables real-time querying even in extensive databases.
+
+
+
+### Experiment setup: OpenAI embeddings in focus
+
+
+
+To identify Binary Quantization's impact on search efficiency and accuracy, we designed our experiment on OpenAI text-embedding models. These models, which capture nuanced linguistic features and semantic relationships, are the backbone of our analysis. We then delve deep into the potential enhancements offered by Qdrant's Binary Quantization feature.
+
+
+
+This approach not only leverages the high-caliber OpenAI embeddings but also provides a broad basis for evaluating the search mechanism under scrutiny.
+
+
+
+#### Dataset
+
+
+
+The research employs 100K random samples from the [OpenAI 1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) dataset, focusing on 100 randomly selected records. These records serve as queries in the experiment, aiming to assess how Binary Quantization influences search efficiency and precision within the dataset. We then use the embeddings of the queries to search for the nearest neighbors in the dataset.
+
+
+
+#### Parameters: oversampling, rescoring, and search limits
+
+
+
+For each record, we run a parameter sweep over the number of oversampling, rescoring, and search limits. We can then understand the impact of these parameters on search accuracy and efficiency. Our experiment was designed to assess the impact of Binary Quantization under various conditions, based on the following parameters:
+
+
+
+- **Oversampling**: By oversampling, we can limit the loss of information inherent in quantization. This also helps to preserve the semantic richness of your OpenAI embeddings. We experimented with different oversampling factors, and identified the impact on the accuracy and efficiency of search. Spoiler: higher oversampling factors tend to improve the accuracy of searches. However, they usually require more computational resources.
+
+
+
+- **Rescoring**: Rescoring refines the first results of an initial binary search. This process leverages the original high-dimensional vectors to refine the search results, **always** improving accuracy. We toggled rescoring on and off to measure effectiveness, when combined with Binary Quantization. We also measured the impact on search performance.
+
+
+
+- **Search Limits**: We specify the number of results from the search process. We experimented with various search limits to measure their impact on accuracy and efficiency. We explored the trade-offs between search depth and performance. The results provide insight for applications with different precision and speed requirements.
+
+
+
+Through this detailed setup, our experiment sought to shed light on the nuanced interplay between Binary Quantization and the high-quality embeddings produced by OpenAI's models. By meticulously adjusting and observing the outcomes under different conditions, we aimed to uncover actionable insights that could empower users to harness the full potential of Qdrant in combination with OpenAI's embeddings, regardless of their specific application needs.
+
+
+
+### Results: binary quantization's impact on OpenAI embeddings
+
+
+
+To analyze the impact of rescoring (`True` or `False`), we compared results across different model configurations and search limits. Rescoring sets up a more precise search, based on results from an initial query.
+
+
+
+#### Rescoring
+
+
+
+![Graph that measures the impact of rescoring](/blog/openai/Rescoring_Impact.png)
+
+
+
+Here are some key observations, which analyzes the impact of rescoring (`True` or `False`):
+
+
+
+1. **Significantly Improved Accuracy**:
+
+ - Across all models and dimension configurations, enabling rescoring (`True`) consistently results in higher accuracy scores compared to when rescoring is disabled (`False`).
+
+ - The improvement in accuracy is true across various search limits (10, 20, 50, 100).
+
+
+
+2. **Model and Dimension Specific Observations**:
+
+ - For the `text-embedding-3-large` model with 3072 dimensions, rescoring boosts the accuracy from an average of about 76-77% without rescoring to 97-99% with rescoring, depending on the search limit and oversampling rate.
+
+ - The accuracy improvement with increased oversampling is more pronounced when rescoring is enabled, indicating a better utilization of the additional binary codes in refining search results.
+
+ - With the `text-embedding-3-small` model at 512 dimensions, accuracy increases from around 53-55% without rescoring to 71-91% with rescoring, highlighting the significant impact of rescoring, especially at lower dimensions.
+
+
+
+In contrast, for lower dimension models (such as text-embedding-3-small with 512 dimensions), the incremental accuracy gains from increased oversampling levels are less significant, even with rescoring enabled. This suggests a diminishing return on accuracy improvement with higher oversampling in lower dimension spaces.
+
+
+
+3. **Influence of Search Limit**:
+
+ - The performance gain from rescoring seems to be relatively stable across different search limits, suggesting that rescoring consistently enhances accuracy regardless of the number of top results considered.
+
+
+
+In summary, enabling rescoring dramatically improves search accuracy across all tested configurations. It is a crucial feature for applications where precision is paramount. The consistent performance boost provided by rescoring underscores its value in refining search results, particularly when working with complex, high-dimensional data like OpenAI embeddings. This enhancement is critical for applications that demand high accuracy, such as semantic search, content discovery, and recommendation systems, where the quality of search results directly impacts user experience and satisfaction.
+
+
+
+### Dataset combinations
+
+
+
+For those exploring the integration of text embedding models with Qdrant, it's crucial to consider various model configurations for optimal performance. The dataset combinations defined above illustrate different configurations to test against Qdrant. These combinations vary by two primary attributes:
+
+
+
+1. **Model Name**: Signifying the specific text embedding model variant, such as ""text-embedding-3-large"" or ""text-embedding-3-small"". This distinction correlates with the model's capacity, with ""large"" models offering more detailed embeddings at the cost of increased computational resources.
+
+
+
+2. **Dimensions**: This refers to the size of the vector embeddings produced by the model. Options range from 512 to 3072 dimensions. Higher dimensions could lead to more precise embeddings but might also increase the search time and memory usage in Qdrant.
+
+
+
+Optimizing these parameters is a balancing act between search accuracy and resource efficiency. Testing across these combinations allows users to identify the configuration that best meets their specific needs, considering the trade-offs between computational resources and the quality of search results.
+
+
+
+
+
+```python
+
+dataset_combinations = [
+
+ {
+
+ ""model_name"": ""text-embedding-3-large"",
+
+ ""dimensions"": 3072,
+
+ },
+
+ {
+
+ ""model_name"": ""text-embedding-3-large"",
+
+ ""dimensions"": 1024,
+
+ },
+
+ {
+
+ ""model_name"": ""text-embedding-3-large"",
+
+ ""dimensions"": 1536,
+
+ },
+
+ {
+
+ ""model_name"": ""text-embedding-3-small"",
+
+ ""dimensions"": 512,
+
+ },
+
+ {
+
+ ""model_name"": ""text-embedding-3-small"",
+
+ ""dimensions"": 1024,
+
+ },
+
+ {
+
+ ""model_name"": ""text-embedding-3-small"",
+
+ ""dimensions"": 1536,
+
+ },
+
+]
+
+```
+
+#### Exploring dataset combinations and their impacts on model performance
+
+
+
+The code snippet iterates through predefined dataset and model combinations. For each combination, characterized by the model name and its dimensions, the corresponding experiment's results are loaded. These results, which are stored in JSON format, include performance metrics like accuracy under different configurations: with and without oversampling, and with and without a rescore step.
+
+
+
+Following the extraction of these metrics, the code computes the average accuracy across different settings, excluding extreme cases of very low limits (specifically, limits of 1 and 5). This computation groups the results by oversampling, rescore presence, and limit, before calculating the mean accuracy for each subgroup.
+
+
+
+After gathering and processing this data, the average accuracies are organized into a pivot table. This table is indexed by the limit (the number of top results considered), and columns are formed based on combinations of oversampling and rescoring.
+
+
+
+```python
+
+import pandas as pd
+
+
+
+for combination in dataset_combinations:
+
+ model_name = combination[""model_name""]
+
+ dimensions = combination[""dimensions""]
+
+ print(f""Model: {model_name}, dimensions: {dimensions}"")
+
+ results = pd.read_json(f""../results/results-{model_name}-{dimensions}.json"", lines=True)
+
+ average_accuracy = results[results[""limit""] != 1]
+
+ average_accuracy = average_accuracy[average_accuracy[""limit""] != 5]
+
+ average_accuracy = average_accuracy.groupby([""oversampling"", ""rescore"", ""limit""])[
+
+ ""accuracy""
+
+ ].mean()
+
+ average_accuracy = average_accuracy.reset_index()
+
+ acc = average_accuracy.pivot(
+
+ index=""limit"", columns=[""oversampling"", ""rescore""], values=""accuracy""
+
+ )
+
+ print(acc)
+
+```
+
+
+
+Here is a selected slice of these results, with `rescore=True`:
+
+
+
+|Method|Dimensionality|Test Dataset|Recall|Oversampling|
+
+|-|-|-|-|-|
+
+|OpenAI text-embedding-3-large (highest MTEB score from the table) |3072|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-3072-1M) | 0.9966|3x|
+
+|OpenAI text-embedding-3-small|1536|[DBpedia 100K](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-small-1536-100K)| 0.9847|3x|
+
+|OpenAI text-embedding-3-large|1536|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M)| 0.9826|3x|
+
+
+
+#### Impact of oversampling
+
+
+
+In this context, oversampling is a query-time parameter of quantized search: with an oversampling factor of 3 and a limit of 100, Qdrant first retrieves 300 candidates using the fast binary index.
+
+
+
+These extra candidates compensate for the information lost during quantization. With rescoring enabled, they are re-ordered using the original full-precision vectors, and only the requested number of results is returned.
+
+
+
+The graph below shows the effect of the oversampling factor on accuracy across the tested configurations. Higher oversampling factors generally improve accuracy, at the cost of scoring more candidates per query.
+
+
+
+![Measuring the impact of oversampling](/blog/openai/Oversampling_Impact.png)
+
+
+
+### Leveraging binary quantization: best practices
+
+
+
+We recommend the following best practices for leveraging Binary Quantization to enhance OpenAI embeddings:
+
+
+
+1. Embedding Model: Use text-embedding-3-large. It has the highest MTEB score and is the most accurate among those tested.
+
+2. Dimensions: Use the highest dimension available for the model, to maximize accuracy. The results are true for English and other languages.
+
+3. Oversampling: Use an oversampling factor of 3 for the best balance between accuracy and efficiency. This factor is suitable for a wide range of applications.
+
+4. Rescoring: Enable rescoring to improve the accuracy of search results.
+
+5. RAM: Store the full vectors and payload on disk. Limit what you load from memory to the binary quantization index. This helps reduce the memory footprint and improve the overall efficiency of the system. The incremental latency from the disk read is negligible compared to the latency savings from the binary scoring in Qdrant, which uses SIMD instructions where possible.
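+
+
+
+Put together, these recommendations might translate into a configuration like the following sketch (the collection name and query embedding are placeholders):
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""https://your-cluster.cloud.qdrant.io"", api_key=""your-api-key"")
+
+
+
+# Keep the original vectors and payload on disk; keep only the binary index in RAM.
+
+client.create_collection(
+
+    collection_name=""openai-embeddings"",
+
+    vectors_config=models.VectorParams(
+
+        size=3072,  # text-embedding-3-large at its full dimensionality
+
+        distance=models.Distance.COSINE,
+
+        on_disk=True,
+
+    ),
+
+    quantization_config=models.BinaryQuantization(
+
+        binary=models.BinaryQuantizationConfig(always_ram=True),
+
+    ),
+
+)
+
+
+
+# Query with 3x oversampling and rescoring enabled.
+
+query_embedding = [0.0] * 3072  # placeholder; use a real text-embedding-3-large vector here
+
+results = client.search(
+
+    collection_name=""openai-embeddings"",
+
+    query_vector=query_embedding,
+
+    limit=100,
+
+    search_params=models.SearchParams(
+
+        quantization=models.QuantizationSearchParams(rescore=True, oversampling=3.0)
+
+    ),
+
+)
+
+```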
+
+
+
+## What's next?
+
+
+
+Binary quantization is exceptional if you need to work with large volumes of data under high recall expectations. You can try this feature either by spinning up a [Qdrant container image](https://hub.docker.com/r/qdrant/qdrant) locally or, having us create one for you through a [free account](https://cloud.qdrant.io/login) in our cloud hosted service.
+
+
+
+The article gives examples of data sets and configuration you can use to get going. Our documentation covers [adding large datasets to Qdrant](/documentation/tutorials/bulk-upload/) to your Qdrant instance as well as [more quantization methods](/documentation/guides/quantization/).
+
+
+
+Want to discuss these findings and learn more about Binary Quantization? [Join our Discord community.](https://discord.gg/qdrant) ",articles/binary-quantization-openai.md
+"---
+
+title: ""How to Implement Multitenancy and Custom Sharding in Qdrant""
+
+short_description: ""Explore how Qdrant's multitenancy and custom sharding streamline machine-learning operations, enhancing scalability and data security.""
+
+description: ""Discover how multitenancy and custom sharding in Qdrant can streamline your machine-learning operations. Learn how to scale efficiently and manage data securely.""
+
+social_preview_image: /articles_data/multitenancy/social_preview.png
+
+preview_dir: /articles_data/multitenancy/preview
+
+small_preview_image: /articles_data/multitenancy/icon.svg
+
+weight: -120
+
+author: David Myriel
+
+date: 2024-02-06T13:21:00.000Z
+
+draft: false
+
+keywords:
+
+ - multitenancy
+
+ - custom sharding
+
+ - multiple partitions
+
+ - vector database
+
+---
+
+
+
+# Scaling Your Machine Learning Setup: The Power of Multitenancy and Custom Sharding in Qdrant
+
+
+
+We are seeing the topics of [multitenancy](/documentation/guides/multiple-partitions/) and [distributed deployment](/documentation/guides/distributed_deployment/#sharding) pop-up daily on our [Discord support channel](https://qdrant.to/discord). This tells us that many of you are looking to scale Qdrant along with the rest of your machine learning setup.
+
+
+
+Whether you are building a bank fraud-detection system, [RAG](https://qdrant.tech/articles/what-is-rag-in-ai/) for e-commerce, or services for the federal government - you will need to leverage a multitenant architecture to scale your product.
+
+In the world of SaaS and enterprise apps, this setup is the norm. It will considerably increase your application's performance and lower your hosting costs.
+
+
+
+## Multitenancy & custom sharding with Qdrant
+
+
+
+We have developed two major features just for this. __You can now scale a single Qdrant cluster and support all of your customers worldwide.__ Under [multitenancy](/documentation/guides/multiple-partitions/), each customer's data is completely isolated and only accessible by them. At times, if this data is location-sensitive, Qdrant also gives you the option to divide your cluster by region or other criteria that further secure your customer's access. This is called [custom sharding](/documentation/guides/distributed_deployment/#user-defined-sharding).
+
+
+
+Combining these two will result in an efficiently-partitioned architecture that further leverages the convenience of a single Qdrant cluster. This article will briefly explain the benefits and show how you can get started using both features.
+
+
+
+## One collection, many tenants
+
+
+
+When working with Qdrant, you can upsert all your data to a single collection, and then partition each vector via its payload. This means that all your users are leveraging the power of a single Qdrant cluster, but their data is still isolated within the collection. Let's take a look at a two-tenant collection:
+
+
+
+**Figure 1:** Each individual vector is assigned a specific payload that denotes which tenant it belongs to. This is how a large number of different tenants can share a single Qdrant collection.
+
+![Qdrant Multitenancy](/articles_data/multitenancy/multitenancy-single.png)
+
+
+
+Qdrant is built to excel in a single collection with a vast number of tenants. You should only create multiple collections when your data is not homogenous or if users' vectors are created by different embedding models. Creating too many collections may result in resource overhead and cause dependencies. This can increase costs and affect overall performance.
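+
+
+
+As a minimal sketch (the collection name, IDs, and vectors are placeholders), each point carries a `group_id` payload identifying its tenant, and every query is constrained to that tenant with a filter:
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""http://localhost:6333"")
+
+
+
+client.upsert(
+
+    collection_name=""tenant_data"",
+
+    points=[
+
+        models.PointStruct(
+
+            id=1,
+
+            vector=[0.1, 0.2, 0.3, 0.4],
+
+            payload={""group_id"": ""user_1""},
+
+        ),
+
+    ],
+
+)
+
+
+
+hits = client.search(
+
+    collection_name=""tenant_data"",
+
+    query_vector=[0.1, 0.2, 0.3, 0.4],
+
+    query_filter=models.Filter(
+
+        must=[
+
+            models.FieldCondition(key=""group_id"", match=models.MatchValue(value=""user_1"")),
+
+        ]
+
+    ),
+
+    limit=10,
+
+)
+
+```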
+
+
+
+## Sharding your database
+
+
+
+With Qdrant, you can also specify a shard for each vector individually. This feature is useful if you want to [control where your data is kept in the cluster](/documentation/guides/distributed_deployment/#sharding). For example, one set of vectors can be assigned to one shard on its own node, while another set can be on a completely different node.
+
+
+
+During vector search, your operations will be able to hit only the subset of shards they actually need. In massive-scale deployments, __this can significantly improve the performance of operations that do not require the whole collection to be scanned__.
+
+
+
+This works in the other direction as well. Whenever you search for something, you can specify a shard or several shards and Qdrant will know where to find them. It will avoid asking all machines in your cluster for results. This will minimize overhead and maximize performance.
+
+
+
+### Common use cases
+
+
+
+A clear use case for this feature is managing a multitenant collection, where each tenant (be it a user or an organization) is assumed to be segregated, so their data can be stored in separate shards. Sharding also solves the problem of region-based data placement, whereby certain data needs to be kept within specific locations. To do this, however, you will need to [move your shards between nodes](/documentation/guides/distributed_deployment/#moving-shards).
+
+
+
+**Figure 2:** Users can both upsert and query shards that are relevant to them, all within the same collection. Regional sharding can help avoid cross-continental traffic.
+
+![Qdrant Multitenancy](/articles_data/multitenancy/shards.png)
+
+
+
+Custom sharding also gives you precise control over other use cases. With time-based data placement, incoming data streams can be written to shards that represent the latest updates. If you organize your shards by date, you have great control over the recency of retrieved data. This is relevant for social media platforms, which greatly rely on time-sensitive data. A minimal sketch of this idea follows below.
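+
+
+
+For illustration, here is a minimal sketch of date-based shard keys, assuming a collection that is already configured with custom sharding (as shown in the next section); the shard key names and the payload field are made up for the example.
+
+
+
+```python
+
+# Hypothetical monthly shard keys for a collection created with custom sharding
+
+for month in [""2024-01"", ""2024-02""]:
+
+    client.create_shard_key(""{tenant_data}"", month)
+
+
+
+# New points are routed to the shard of the month they belong to
+
+client.upsert(
+
+    collection_name=""{tenant_data}"",
+
+    points=[
+
+        models.PointStruct(
+
+            id=42,
+
+            payload={""published_at"": ""2024-02-06""},
+
+            vector=[0.2, 0.8, 0.1],
+
+        ),
+
+    ],
+
+    shard_key_selector=""2024-02"",
+
+)
+
+```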
+
+
+
+## Before I go any further... how secure is my user data?
+
+
+
+By design, Qdrant offers three levels of isolation. We initially introduced collection-based isolation, but your scaled setup has to move beyond this level. In this scenario, you will leverage payload-based isolation (from multitenancy) and resource-based isolation (from sharding). The ultimate goal is to have a single collection in which you can customize the placement of shards inside your cluster more precisely and avoid unnecessary overhead. The diagram below shows how your data is arranged within a two-tier isolation scheme.
+
+
+
+**Figure 3:** Users can query the collection based on two filters: the `group_id` and the individual `shard_key_selector`. This gives your data two additional levels of isolation.
+
+![Qdrant Multitenancy](/articles_data/multitenancy/multitenancy.png)
+
+
+
+## Create custom shards for a single collection
+
+
+
+When creating a collection, you will need to configure user-defined sharding. This lets you control the shard placement of your data, so that operations can hit only the subset of shards they actually need. In big clusters, this can significantly improve the performance of operations, since you won't need to go through the entire collection to retrieve data.
+
+
+
+```python
+
+client.create_collection(
+
+ collection_name=""{tenant_data}"",
+
+ shard_number=2,
+
+ sharding_method=models.ShardingMethod.CUSTOM,
+
+ # ... other collection parameters
+
+)
+
+client.create_shard_key(""{tenant_data}"", ""canada"")
+
+client.create_shard_key(""{tenant_data}"", ""germany"")
+
+```
+
+In this example, your cluster is divided between Germany and Canada. Canadian and German law differ when it comes to international data transfer. Let's say you are creating a RAG application that supports the healthcare industry. Your Canadian customer data will have to be kept clearly separated from your German customer data for compliance purposes.
+
+
+
+Even though it is part of the same collection, data from each shard is isolated from other shards and can be retrieved as such. For additional examples on shards and retrieval, consult [Distributed Deployments](/documentation/guides/distributed_deployment/) documentation and [Qdrant Client specification](https://python-client.qdrant.tech).
+
+
+
+## Configure a multitenant setup for users
+
+
+
+Let's continue and start adding data. As you upsert your vectors to your new collection, you can add a `group_id` field to each vector. If you do this, Qdrant will assign each vector to its respective group.
+
+
+
+Additionally, each vector can now be allocated to a shard. You can specify the `shard_key_selector` for each individual vector. In this example, you are upserting data belonging to `tenant_1` to the Canadian region.
+
+
+
+```python
+
+client.upsert(
+
+ collection_name=""{tenant_data}"",
+
+ points=[
+
+ models.PointStruct(
+
+ id=1,
+
+ payload={""group_id"": ""tenant_1""},
+
+ vector=[0.9, 0.1, 0.1],
+
+ ),
+
+ models.PointStruct(
+
+ id=2,
+
+ payload={""group_id"": ""tenant_1""},
+
+ vector=[0.1, 0.9, 0.1],
+
+ ),
+
+ ],
+
+ shard_key_selector=""canada"",
+
+)
+
+```
+
+Keep in mind that the data for each `group_id` is isolated. In the example below, `tenant_1` vectors are kept separate from `tenant_2`. The first tenant will be able to access their data in the Canadian portion of the cluster. However, as shown below, `tenant_2` might only be able to retrieve information hosted in Germany.
+
+
+
+```python
+
+client.upsert(
+
+ collection_name=""{tenant_data}"",
+
+ points=[
+
+ models.PointStruct(
+
+ id=3,
+
+ payload={""group_id"": ""tenant_2""},
+
+ vector=[0.1, 0.1, 0.9],
+
+ ),
+
+ ],
+
+ shard_key_selector=""germany"",
+
+)
+
+```
+
+
+
+## Retrieve data via filters
+
+
+
+The access control setup is completed as you specify the criteria for data retrieval. When searching for vectors, you need to use a `query_filter` along with `group_id` to filter vectors for each user.
+
+
+
+```python
+
+client.search(
+
+ collection_name=""{tenant_data}"",
+
+ query_filter=models.Filter(
+
+ must=[
+
+ models.FieldCondition(
+
+ key=""group_id"",
+
+ match=models.MatchValue(
+
+ value=""tenant_1"",
+
+ ),
+
+ ),
+
+ ]
+
+ ),
+
+ query_vector=[0.1, 0.1, 0.9],
+
+ limit=10,
+
+)
+
+```
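+
+
+To combine both isolation levels from Figure 3 in a single request, you can pass the shard key alongside the payload filter. This is only a sketch; whether your client version exposes `shard_key_selector` on search is something to verify against the client docs.
+
+
+
+```python
+
+client.search(
+
+    collection_name=""{tenant_data}"",
+
+    query_vector=[0.1, 0.1, 0.9],
+
+    query_filter=models.Filter(
+
+        must=[
+
+            models.FieldCondition(
+
+                key=""group_id"",
+
+                match=models.MatchValue(value=""tenant_1""),
+
+            ),
+
+        ]
+
+    ),
+
+    shard_key_selector=""canada"",  # only the Canadian shard is queried
+
+    limit=10,
+
+)
+
+```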
+
+
+
+## Performance considerations
+
+
+
+The speed of indexation may become a bottleneck if you are adding large amounts of data in this way, as each user's vector will be indexed into the same collection. To avoid this bottleneck, consider _bypassing the construction of a global vector index_ for the entire collection and building it only for individual groups instead.
+
+
+
+By adopting this strategy, Qdrant will index vectors for each user independently, significantly accelerating the process.
+
+
+
+To implement this approach, you should:
+
+
+
+1. Set `payload_m` in the HNSW configuration to a non-zero value, such as 16.
+
+2. Set `m` in the HNSW config to 0. This will disable building a global index for the whole collection.
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""localhost"", port=6333)
+
+
+
+client.create_collection(
+
+ collection_name=""{tenant_data}"",
+
+ vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
+
+ hnsw_config=models.HnswConfigDiff(
+
+ payload_m=16,
+
+ m=0,
+
+ ),
+
+)
+
+```
+
+
+
+3. Create a keyword payload index for the `group_id` field.
+
+
+
+```python
+
+client.create_payload_index(
+
+ collection_name=""{tenant_data}"",
+
+ field_name=""group_id"",
+
+ field_schema=models.PayloadSchemaType.KEYWORD,
+
+)
+
+```
+
+> Note: Keep in mind that global requests (without the `group_id` filter) will be slower since they will necessitate scanning all groups to identify the nearest neighbors.
+
+
+
+## Explore multitenancy and custom sharding in Qdrant for scalable solutions
+
+
+
+Qdrant is ready to support a massive-scale architecture for your machine learning project. If you want to see whether our [vector database](https://qdrant.tech/) is right for you, try the [quickstart tutorial](/documentation/quick-start/) or read our [docs and tutorials](/documentation/).
+
+
+
+To spin up a free instance of Qdrant, sign up for [Qdrant Cloud](https://qdrant.to/cloud) - no strings attached.
+
+
+
+Get support or share ideas in our [Discord](https://qdrant.to/discord) community. This is where we talk about vector search theory, publish examples and demos and discuss vector database setups.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+",articles/multitenancy.md
+"---
+
+title: ""What is RAG: Understanding Retrieval-Augmented Generation""
+
+draft: false
+
+slug: what-is-rag-in-ai?
+
+short_description: What is RAG?
+
+description: Explore how RAG enables LLMs to retrieve and utilize relevant external data when generating responses, rather than being limited to their original training data alone.
+
+preview_dir: /articles_data/what-is-rag-in-ai/preview
+
+weight: -150
+
+social_preview_image: /articles_data/what-is-rag-in-ai/preview/social_preview.jpg
+
+small_preview_image: /articles_data/what-is-rag-in-ai/icon.svg
+
+date: 2024-03-19T9:29:33-03:00
+
+author: Sabrina Aquino
+
+author_link: https://github.com/sabrinaaquino
+
+featured: true
+
+tags:
+
+ - retrieval augmented generation
+
+ - what is rag
+
+ - embeddings
+
+ - llm rag
+
+ - rag application
+
+
+
+
+
+---
+
+
+
+> Retrieval-augmented generation (RAG) integrates external information retrieval into the process of generating responses by Large Language Models (LLMs). It searches a database for information beyond its pre-trained knowledge base, significantly improving the accuracy and relevance of the generated responses.
+
+
+
+Language models have exploded on the internet ever since ChatGPT came out, and rightfully so. They can write essays, code entire programs, and even make memes (though we’re still deciding on whether that's a good thing).
+
+
+
+But as brilliant as these chatbots become, they still have **limitations** in tasks requiring external knowledge and factual information. Yes, they can describe the honeybee's waggle dance in excruciating detail. But they become far more valuable if they can generate insights from **any data** that we provide, rather than just their original training data. Since retraining those large language models from scratch costs millions of dollars and takes months, we need better ways to give our existing LLMs access to our custom data.
+
+
+
+While you could be more creative with your prompts, it is only a short-term solution. LLMs can consider only a **limited** amount of text in their responses, known as a [context window](https://www.hopsworks.ai/dictionary/context-window-for-llms). Some models like GPT-3 can see up to around 12 pages of text (that’s 4,096 tokens of context). That’s not good enough for most knowledge bases.
+
+
+
+![How a RAG works](/articles_data/what-is-rag-in-ai/how-rag-works.jpg)
+
+
+
+The image above shows how a basic RAG system works. Before forwarding the question to the LLM, we have a layer that searches our knowledge base for the ""relevant knowledge"" to answer the user query. Specifically, in this case, the spending data from the last month. Our LLM can now generate a **relevant non-hallucinated** response about our budget.
+
+
+
+As your data grows, you’ll need efficient ways to identify the most relevant information for your LLM's limited memory. This is where you’ll want a proper way to store and retrieve the specific data you’ll need for your query, without needing the LLM to remember it.
+
+
+
+**Vector databases** store information as **vector embeddings**. This format supports efficient similarity searches to retrieve relevant data for your query. For example, Qdrant is specifically designed to perform fast, even in scenarios dealing with billions of vectors.
+
+
+
+This article will focus on RAG systems and architecture. If you’re interested in learning more about vector search, we recommend the following articles: [What is a Vector Database?](/articles/what-is-a-vector-database/) and [What are Vector Embeddings?](/articles/what-are-embeddings/).
+
+
+
+
+
+## RAG architecture
+
+
+
+At its core, a RAG architecture includes the **retriever** and the **generator**. Let's start by understanding what each of these components does.
+
+
+
+
+
+### The Retriever
+
+
+
+When you ask a question to the retriever, it uses **similarity search** to scan through a vast knowledge base of vector embeddings. It then pulls out the most **relevant** vectors to help answer that query. There are a few different techniques it can use to know what’s relevant:
+
+
+
+
+
+#### How indexing works in RAG retrievers
+
+
+
+The indexing process organizes the data into your vector database in a way that makes it easily searchable. This allows the RAG to access relevant information when responding to a query.
+
+
+
+![How indexing works](/articles_data/what-is-rag-in-ai/how-indexing-works.jpg)
+
+
+
+As shown in the image above, here’s the process:
+
+
+
+
+
+
+
+* Start with a _loader_ that gathers _documents_ containing your data. These documents could be anything from articles and books to web pages and social media posts.
+
+* Next, a _splitter_ divides the documents into smaller chunks, typically sentences or paragraphs, because RAG models work better with smaller pieces of text. In the diagram, these are _document snippets_.
+
+* Each text chunk is then fed into an _embedding machine_. This machine uses complex algorithms to convert the text into [vector embeddings](/articles/what-are-embeddings/).
+
+
+
+All the generated vector embeddings are stored in a knowledge base of indexed information. This supports efficient retrieval of similar pieces of information when needed.
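+
+
+
+As a rough illustration of this pipeline, here is a minimal sketch using the `sentence-transformers` and `qdrant-client` libraries; the model, the naive paragraph splitter, and the collection name are assumptions made only for the example.
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+from sentence_transformers import SentenceTransformer
+
+
+
+documents = [""Long article text ..."", ""Another document ...""]  # output of the loader
+
+
+
+# Splitter: naive paragraph-level chunking, just for illustration
+
+chunks = [p for doc in documents for p in doc.split(""\n\n"") if p.strip()]
+
+
+
+# Embedding machine: turn each chunk into a dense vector
+
+encoder = SentenceTransformer(""all-MiniLM-L6-v2"")
+
+vectors = encoder.encode(chunks)
+
+
+
+# Knowledge base: store the embeddings in a Qdrant collection
+
+client = QdrantClient(""localhost"", port=6333)
+
+client.create_collection(
+
+    collection_name=""knowledge_base"",
+
+    vectors_config=models.VectorParams(
+
+        size=encoder.get_sentence_embedding_dimension(),
+
+        distance=models.Distance.COSINE,
+
+    ),
+
+)
+
+client.upsert(
+
+    collection_name=""knowledge_base"",
+
+    points=[
+
+        models.PointStruct(id=i, vector=vec.tolist(), payload={""text"": chunk})
+
+        for i, (chunk, vec) in enumerate(zip(chunks, vectors))
+
+    ],
+
+)
+
+```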
+
+
+
+
+
+#### Query vectorization
+
+
+
+Once you have vectorized your knowledge base you can do the same to the user query. When the model sees a new query, it uses the same preprocessing and embedding techniques. This ensures that the query vector is compatible with the document vectors in the index.
+
+
+
+![How retrieval works](/articles_data/what-is-rag-in-ai/how-retrieval-works.jpg)
+
+
+
+#### Retrieval of relevant documents
+
+
+
+When the system needs to find the most relevant documents or passages to answer a query, it utilizes vector similarity techniques. **Vector similarity** is a fundamental concept in machine learning and natural language processing (NLP) that quantifies the resemblance between vectors, which are mathematical representations of data points.
+
+
+
+The system can employ different vector similarity strategies depending on the type of vectors used to represent the data:
+
+
+
+
+
+##### Sparse vector representations
+
+
+
+A sparse vector is characterized by a high dimensionality, with most of its elements being zero.
+
+
+
+The classic approach is **keyword search**, which scans documents for the exact words or phrases in the query. The search creates sparse vector representations of documents by counting word occurrences and inversely weighting common words. Queries with rarer words get prioritized.
+
+
+
+
+
+![Sparse vector representation](/articles_data/what-is-rag-in-ai/sparse-vectors.jpg)
+
+
+
+
+
+[TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) (Term Frequency-Inverse Document Frequency) and [BM25](https://en.wikipedia.org/wiki/Okapi_BM25) are two classic related algorithms. They're simple and computationally efficient. However, they can struggle with synonyms and don't always capture semantic similarities.
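+
+
+
+For intuition, here is a tiny TF-IDF sketch using scikit-learn; it only illustrates the sparse, word-counting idea and is not something a RAG retriever has to use verbatim.
+
+
+
+```python
+
+from sklearn.feature_extraction.text import TfidfVectorizer
+
+from sklearn.metrics.pairwise import cosine_similarity
+
+
+
+docs = [
+
+    ""the cat sat on the mat"",
+
+    ""dogs are loyal companions"",
+
+    ""cats and dogs can live together"",
+
+]
+
+
+
+vectorizer = TfidfVectorizer()
+
+doc_vectors = vectorizer.fit_transform(docs)  # sparse matrix, mostly zeros
+
+
+
+query_vector = vectorizer.transform([""my cat sat with the dogs""])
+
+scores = cosine_similarity(query_vector, doc_vectors)[0]
+
+print(scores)  # exact word overlap drives the ranking, rarer words weigh more
+
+```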
+
+
+
+If you’re interested in going deeper, refer to our article on [Sparse Vectors](/articles/sparse-vectors/).
+
+
+
+
+
+##### Dense vector embeddings
+
+
+
+This approach uses large language models like [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)) to encode the query and passages into dense vector embeddings. These models are compact numerical representations that capture semantic meaning. Vector databases like Qdrant store these embeddings, allowing retrieval based on **semantic similarity** rather than just keywords using distance metrics like cosine similarity.
+
+
+
+This allows the retriever to match based on semantic understanding rather than just keywords. So if I ask about ""compounds that cause BO,"" it can retrieve relevant info about ""molecules that create body odor"" even if those exact words weren't used. We explain more about it in our [What are Vector Embeddings](/articles/what-are-embeddings/) article.
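+
+
+
+Continuing the indexing sketch from earlier, a dense retrieval query can look roughly like this; `encoder` and the `knowledge_base` collection are the same assumed names.
+
+
+
+```python
+
+query = ""compounds that cause BO""
+
+query_vector = encoder.encode(query).tolist()
+
+
+
+hits = client.search(
+
+    collection_name=""knowledge_base"",
+
+    query_vector=query_vector,
+
+    limit=3,  # the top passages will later be handed to the generator
+
+)
+
+
+
+for hit in hits:
+
+    print(hit.score, hit.payload[""text""])
+
+```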
+
+
+
+
+
+#### Hybrid search
+
+
+
+However, neither keyword search nor vector search are always perfect. Keyword search may miss relevant information expressed differently, while vector search can sometimes struggle with specificity or neglect important statistical word patterns. Hybrid methods aim to combine the strengths of different techniques.
+
+
+
+
+
+![Hybrid search overview](/articles_data/what-is-rag-in-ai/hybrid-search.jpg)
+
+
+
+
+
+Some common hybrid approaches include:
+
+
+
+
+
+
+
+* Using keyword search to get an initial set of candidate documents. Next, the documents are re-ranked/re-scored using semantic vector representations.
+
+* Starting with semantic vectors to find generally topically relevant documents. Next, the documents are filtered/re-ranked based on keyword matches or other metadata.
+
+* Considering both semantic vector closeness and statistical keyword patterns/weights in a combined scoring model.
+
+* Having multiple stages where different techniques are applied. One example: start with an initial keyword retrieval, followed by semantic re-ranking, then a final re-ranking using even more complex models.
+
+
+
+When you combine the powers of different search methods in a complementary way, you can provide higher quality, more comprehensive results. Check out our article on [Hybrid Search](/articles/hybrid-search/) if you’d like to learn more.
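+
+
+
+As a small, engine-agnostic illustration of combining rankings, reciprocal rank fusion simply rewards documents that appear near the top of either list; this is a generic sketch, not a specific Qdrant feature.
+
+
+
+```python
+
+def reciprocal_rank_fusion(rankings, k=60):
+
+    # Merge several ranked lists of document ids into a single fused ranking
+
+    scores = {}
+
+    for ranking in rankings:
+
+        for rank, doc_id in enumerate(ranking):
+
+            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
+
+    return sorted(scores, key=scores.get, reverse=True)
+
+
+
+keyword_hits = [""doc_3"", ""doc_1"", ""doc_7""]   # from a keyword search
+
+semantic_hits = [""doc_1"", ""doc_5"", ""doc_3""]  # from a vector search
+
+
+
+print(reciprocal_rank_fusion([keyword_hits, semantic_hits]))
+
+# doc_1 and doc_3 rise to the top because both methods agree on them
+
+```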
+
+
+
+
+
+### The Generator
+
+
+
+With the top relevant passages retrieved, it's now the generator's job to produce a final answer by synthesizing and expressing that information in natural language.
+
+
+
+The LLM is typically a model like GPT, BART or T5, trained on massive datasets to understand and generate human-like text. It now takes not only the query (or question) as input but also the relevant documents or passages that the retriever identified as potentially containing the answer to generate its response.
+
+
+
+
+
+![How a Generator works](/articles_data/what-is-rag-in-ai/how-generation-works.png)
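+
+
+
+In code, this step often boils down to stuffing the retrieved passages into the prompt; the `llm_generate` call below is a hypothetical placeholder for whatever LLM client you use.
+
+
+
+```python
+
+def build_prompt(question, passages):
+
+    # The retrieved passages become explicit context for the model
+
+    context = ""\n\n"".join(passages)
+
+    return (
+
+        ""Answer the question using only the context below.\n\n""
+
+        f""Context:\n{context}\n\n""
+
+        f""Question: {question}\nAnswer:""
+
+    )
+
+
+
+passages = [hit.payload[""text""] for hit in hits]  # output of the retriever
+
+prompt = build_prompt(""How much did I spend last month?"", passages)
+
+answer = llm_generate(prompt)  # hypothetical LLM call
+
+```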
+
+
+
+
+
+The retriever and generator don't operate in isolation. The image below shows how the output of the retriever feeds the generator to produce the final response.
+
+
+
+
+
+![The entire architecture of a RAG system](/articles_data/what-is-rag-in-ai/rag-system.jpg)
+
+
+
+
+
+## Where is RAG being used?
+
+
+
+Because of their more knowledgeable and contextual responses, we can find RAG models being applied in many areas today, especially those that need factual accuracy and knowledge depth.
+
+
+
+
+
+### Real-World Applications:
+
+
+
+**Question answering:** This is perhaps the most prominent use case for RAG models. They power advanced question-answering systems that can retrieve relevant information from large knowledge bases and then generate fluent answers.
+
+
+
+**Language generation:** RAG enables more factual and contextualized text generation, such as summarization that draws on multiple sources.
+
+
+
+**Data-to-text generation:** By retrieving relevant structured data, RAG models can generate product or business intelligence reports from databases, or describe insights from data visualizations and charts.
+
+
+
+**Multimedia understanding:** RAG isn't limited to text - it can retrieve multimodal information like images, video, and audio to enhance understanding. For example, it can answer questions about images or videos by retrieving relevant textual context.
+
+
+
+
+
+## Creating your first RAG chatbot with Langchain, Groq, and OpenAI
+
+
+
+Are you ready to create your own RAG chatbot from the ground up? We have a video explaining everything from the beginning. Daniel Romero will guide you through:
+
+
+
+
+
+
+
+* Setting up your chatbot
+
+* Preprocessing and organizing data for your chatbot's use
+
+* Applying vector similarity search algorithms
+
+* Enhancing the efficiency and response quality
+
+
+
+After building your RAG chatbot, you'll be able to evaluate its performance against that of a chatbot powered solely by a Large Language Model (LLM).
+
+
+
+
+
+
+
+
+
+
+
+## What’s next?
+
+
+
+Have a RAG project you want to bring to life? Join our [Discord community](https://discord.gg/qdrant) where we’re always sharing tips and answering questions on vector search and retrieval.
+
+
+
+Learn more about how to properly evaluate your RAG responses: [Evaluating Retrieval Augmented Generation - a framework for assessment](https://superlinked.com/vectorhub/evaluating-retrieval-augmented-generation-a-framework-for-assessment).",articles/what-is-rag-in-ai.md
+"---
+
+title: Semantic Search As You Type
+
+short_description: ""Instant search using Qdrant""
+
+description: To show off Qdrant's performance, we show how to do a quick search-as-you-type that will come back within a few milliseconds.
+
+social_preview_image: /articles_data/search-as-you-type/preview/social_preview.jpg
+
+small_preview_image: /articles_data/search-as-you-type/icon.svg
+
+preview_dir: /articles_data/search-as-you-type/preview
+
+weight: -2
+
+author: Andre Bogus
+
+author_link: https://llogiq.github.io
+
+date: 2023-08-14T00:00:00+01:00
+
+draft: false
+
+keywords: search, semantic, vector, llm, integration, benchmark, recommend, performance, rust
+
+---
+
+
+
+Qdrant is one of the fastest vector search engines out there, so while looking for a demo to show off, we came upon the idea to do a search-as-you-type box with a fully semantic search backend. Now we already have a semantic/keyword hybrid search on our website. But that one is written in Python, which incurs some overhead for the interpreter. Naturally, I wanted to see how fast I could go using Rust.
+
+
+
+Since Qdrant doesn't embed by itself, I had to decide on an embedding model. The prior version used the [SentenceTransformers](https://www.sbert.net/) package, which in turn employs the Bert-based [All-MiniLM-L6-V2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2/tree/main) model. This model is battle-tested and delivers fair results at speed, so rather than experimenting on this front, I took an [ONNX version](https://huggingface.co/optimum/all-MiniLM-L6-v2/tree/main) and ran that within the service.
+
+
+
+The workflow looks like this:
+
+
+
+![Search Qdrant by Embedding](/articles_data/search-as-you-type/Qdrant_Search_by_Embedding.png)
+
+
+
+This will, after tokenizing and embedding, send a `/collections/site/points/search` POST request to Qdrant with the following JSON:
+
+
+
+```json
+
+POST collections/site/points/search
+
+{
+
+ ""vector"": [-0.06716014,-0.056464013, ...(382 values omitted)],
+
+ ""limit"": 5,
+
+ ""with_payload"": true,
+
+}
+
+```
+
+
+
+Even with avoiding a network round-trip, the embedding still takes some time. As always in optimization, if you cannot do the work faster, a good solution is to avoid work altogether (please don't tell my employer). This can be done by pre-computing common prefixes and calculating embeddings for them, then storing them in a `prefix_cache` collection. Now the [`recommend`](https://api.qdrant.tech/api-reference/search/recommend-points) API method can find the best matches without doing any embedding. For now, I use short (up to and including 5 letters) prefixes, but I can also parse the logs to get the most common search terms and add them to the cache later.
+
+
+
+![Qdrant Recommendation](/articles_data/search-as-you-type/Qdrant_Recommendation.png)
+
+
+
+Making that work requires setting up the `prefix_cache` collection with points that have the prefix as their `point_id` and the embedding as their `vector`, which lets us do the lookup with no search or index. The `prefix_to_id` function currently uses the `u64` variant of `PointId`, which can hold eight bytes, enough for this use. If the need arises, one could instead encode the names as UUID, hashing the input. Since I know all our prefixes are within 8 bytes, I decided against this for now.
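+
+
+
+For illustration, a `prefix_to_id` along these lines just packs the prefix bytes into an unsigned 64-bit integer; here is a sketch of the idea in Python, while the actual service does this in Rust.
+
+
+
+```python
+
+def prefix_to_id(prefix: str) -> int:
+
+    # Up to 8 bytes fit into a u64, which Qdrant accepts as a numeric point id
+
+    data = prefix.encode(""utf-8"")[:8]
+
+    return int.from_bytes(data, byteorder=""little"")
+
+
+
+print(prefix_to_id(""qdr""))  # deterministic id for the prefix cache point
+
+```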
+
+
+
+The `recommend` endpoint works roughly the same as `search_points`, but instead of searching for a vector, Qdrant searches for one or more points (you can also give negative example points the search engine will try to avoid in the results). It was built to help drive recommendation engines, saving the round-trip of sending the current point's vector back to Qdrant to find more similar ones. However, Qdrant goes a bit further by allowing us to select a different collection to look up the points, which lets us keep our `prefix_cache` collection separate from the site data. So in our case, Qdrant first looks up the point from the `prefix_cache`, takes its vector and searches for that in the `site` collection, using the precomputed embeddings from the cache. The API endpoint expects a POST of the following JSON to `/collections/site/points/recommend`:
+
+
+
+```json
+
+POST collections/site/points/recommend
+
+{
+
+ ""positive"": [1936024932],
+
+ ""limit"": 5,
+
+ ""with_payload"": true,
+
+ ""lookup_from"": {
+
+ ""collection"": ""prefix_cache""
+
+ }
+
+}
+
+```
+
+
+
+Now I have, in the best Rust tradition, a blazingly fast semantic search.
+
+
+
+To demo it, I used our [Qdrant documentation website](/documentation/)'s page search, replacing our previous Python implementation. So in order to not just spew empty words, here is a benchmark, showing different queries that exercise different code paths.
+
+
+
+Since the operations themselves are far faster than the network whose fickle nature would have swamped most measurable differences, I benchmarked both the Python and Rust services locally. I'm measuring both versions on the same AMD Ryzen 9 5900HX with 16GB RAM running Linux. The table shows the average time and error bound in milliseconds. I only measured up to a thousand concurrent requests. None of the services showed any slowdown with more requests in that range. I do not expect our service to become DDOS'd, so I didn't benchmark with more load.
+
+
+
+Without further ado, here are the results:
+
+
+
+
+
+| query length | Short | Long |
+
+|---------------|-----------|------------|
+
+| Python 🐍 | 16 ± 4 ms | 16 ± 4 ms |
+
+| Rust 🦀 | 1½ ± ½ ms | 5 ± 1 ms |
+
+
+
+The Rust version consistently outperforms the Python version and offers a semantic search even on few-character queries. If the prefix cache is hit (as in the short query length), the semantic search can even get more than ten times faster than the Python version. The general speed-up is due to both the relatively lower overhead of Rust + Actix Web compared to Python + FastAPI (even if that already performs admirably), as well as using ONNX Runtime instead of SentenceTransformers for the embedding. The prefix cache gives the Rust version a real boost by doing a semantic search without doing any embedding work.
+
+
+
+As an aside, while the millisecond differences shown here may mean relatively little for our users, whose latency will be dominated by the network in between, when typing, every millisecond more or less can make a difference in user perception. Also search-as-you-type generates between three and five times as much load as a plain search, so the service will experience more traffic. Less time per request means being able to handle more of them.
+
+
+
+Mission accomplished! But wait, there's more!
+
+
+
+### Prioritizing Exact Matches and Headings
+
+
+
+To improve on the quality of the results, Qdrant can do multiple searches in parallel, and then the service puts the results in sequence, taking the first best matches. The extended code searches:
+
+
+
+1. Text matches in titles
+
+2. Text matches in body (paragraphs or lists)
+
+3. Semantic matches in titles
+
+4. Any Semantic matches
+
+
+
+Those are put together by taking them in the above order, deduplicating as necessary.
+
+
+
+![merge workflow](/articles_data/search-as-you-type/sayt_merge.png)
+
+
+
+Instead of sending a `search` or `recommend` request, one can also send a `search/batch` or `recommend/batch` request, respectively. Each of those contains a `""searches""` property with any number of search/recommend JSON requests:
+
+
+
+```json
+
+POST collections/site/points/search/batch
+
+{
+
+ ""searches"": [
+
+ {
+
+ ""vector"": [-0.06716014,-0.056464013, ...],
+
+ ""filter"": {
+
+ ""must"": [
+
+ { ""key"": ""text"", ""match"": { ""text"": }},
+
+ { ""key"": ""tag"", ""match"": { ""any"": [""h1"", ""h2"", ""h3""] }},
+
+ ]
+
+ }
+
+ ...,
+
+ },
+
+ {
+
+ ""vector"": [-0.06716014,-0.056464013, ...],
+
+ ""filter"": {
+
+ ""must"": [ { ""key"": ""body"", ""match"": { ""text"": }} ]
+
+ }
+
+ ...,
+
+ },
+
+ {
+
+ ""vector"": [-0.06716014,-0.056464013, ...],
+
+ ""filter"": {
+
+ ""must"": [ { ""key"": ""tag"", ""match"": { ""any"": [""h1"", ""h2"", ""h3""] }} ]
+
+ }
+
+ ...,
+
+ },
+
+ {
+
+ ""vector"": [-0.06716014,-0.056464013, ...],
+
+ ...,
+
+ },
+
+ ]
+
+}
+
+```
+
+
+
+As the queries are done in a batch request, there isn't any additional network overhead and only very modest computation overhead, yet the results will be better in many cases.
+
+
+
+The only additional complexity is to flatten the result lists and take the first 5 results, deduplicating by point ID. Now there is one final problem: The query may be short enough to take the recommend code path, but still not be in the prefix cache. In that case, doing the search *sequentially* would mean two round-trips between the service and the Qdrant instance. The solution is to *concurrently* start both requests and take the first successful non-empty result.
+
+
+
+![sequential vs. concurrent flow](/articles_data/search-as-you-type/sayt_concurrency.png)
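+
+
+
+The racing pattern itself is simple; here is a sketch of it in Python `asyncio` terms (the real service is written in Rust, and `recommend_request` / `search_request` are hypothetical coroutines):
+
+
+
+```python
+
+import asyncio
+
+
+
+async def first_non_empty(*requests):
+
+    # Start all requests concurrently and return the first non-empty result
+
+    for finished in asyncio.as_completed(requests):
+
+        results = await finished
+
+        if results:
+
+            return results
+
+    return []
+
+
+
+# results = await first_non_empty(recommend_request(prefix), search_request(query))
+
+```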
+
+
+
+While this means more load for the Qdrant vector search engine, this is not the limiting factor. The relevant data is already in cache in many cases, so the overhead stays within acceptable bounds, and the maximum latency in case of prefix cache misses is measurably reduced.
+
+
+
+The code is available on the [Qdrant GitHub](https://github.com/qdrant/page-search).
+
+
+
+To sum up: Rust is fast, recommend lets us use precomputed embeddings, batch requests are awesome and one can do a semantic search in mere milliseconds.
+",articles/search-as-you-type.md
+"---
+
+title: ""Vector Similarity: Going Beyond Full-Text Search | Qdrant""
+
+short_description: Explore how vector similarity enhances data discovery beyond full-text search, including diversity sampling and more!
+
+description: Discover how vector similarity expands data exploration beyond full-text search. Explore diversity sampling and more for enhanced data discovery!
+
+preview_dir: /articles_data/vector-similarity-beyond-search/preview
+
+small_preview_image: /articles_data/vector-similarity-beyond-search/icon.svg
+
+social_preview_image: /articles_data/vector-similarity-beyond-search/preview/social_preview.jpg
+
+weight: -1
+
+author: Luis Cossío
+
+author_link: https://coszio.github.io/
+
+date: 2023-08-08T08:00:00+03:00
+
+draft: false
+
+keywords:
+
+ - vector similarity
+
+ - exploration
+
+ - dissimilarity
+
+ - discovery
+
+ - diversity
+
+ - recommendation
+
+---
+
+
+
+# Vector Similarity: Unleashing Data Insights Beyond Traditional Search
+
+
+
+When making use of unstructured data, there are traditional go-to solutions that are well-known for developers:
+
+
+
+- **Full-text search** when you need to find documents that contain a particular word or phrase.
+
+- **[Vector search](https://qdrant.tech/documentation/overview/vector-search/)** when you need to find documents that are semantically similar to a given query.
+
+
+
+Sometimes people mix those two approaches, so it might look like the vector similarity is just an extension of full-text search. However, in this article, we will explore some promising new techniques that can be used to expand the use-case of unstructured data and demonstrate that vector similarity creates its own stack of data exploration tools.
+
+
+
+## What is vector similarity search?
+
+
+
+Vector similarity offers a range of powerful functions that go far beyond those available in traditional full-text search engines. From dissimilarity search to diversity and recommendation, these methods can expand the cases in which vectors are useful.
+
+
+
+Vector Databases, which are designed to store and process immense amounts of vectors, are the first candidates to implement these new techniques and allow users to exploit their data to its fullest.
+
+
+
+
+
+## Vector similarity search vs. full-text search
+
+
+
+While there is an intersection in the functionality of these two approaches, there is also a vast area of functions that is unique to each of them.
+
+For example, the exact phrase matching and counting of results are native to full-text search, while vector similarity support for this type of operation is limited.
+
+On the other hand, vector similarity easily allows cross-modal retrieval of images by text or vice-versa, which is impossible with full-text search.
+
+
+
+This mismatch in expectations might sometimes lead to confusion.
+
+Attempting to use a vector similarity as a full-text search can result in a range of frustrations, from slow response times to poor search results, to limited functionality.
+
+As an outcome, they are getting only a fraction of the benefits of vector similarity.
+
+
+
+{{< figure width=70% src=/articles_data/vector-similarity-beyond-search/venn-diagram.png caption=""Full-text search and Vector Similarity Functionality overlap"" >}}
+
+
+
+Below we will explore why the vector similarity stack deserves new interfaces and design patterns that will unlock the full potential of this technology, which can still be used in conjunction with full-text search.
+
+
+
+
+
+## New ways to interact with similarities
+
+
+
+Having a vector representation of unstructured data unlocks new ways of interacting with it.
+
+For example, it can be used to measure semantic similarity between words, to cluster words or documents based on their meaning, to find related images, or even to generate new text.
+
+However, these interactions can go beyond finding their nearest neighbors (kNN).
+
+
+
+There are several other techniques that can be leveraged by vector representations beyond the traditional kNN search. These include dissimilarity search, diversity search, recommendations, and discovery functions.
+
+
+
+
+
+## Dissimilarity search
+
+
+
+The dissimilarity, or farthest, search is the most straightforward concept after the nearest-neighbor search, and it can't be reproduced in a traditional full-text search.
+
+It aims to find the most un-similar or distant documents across the collection.
+
+
+
+
+
+{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/dissimilarity.png caption=""Dissimilarity Search"" >}}
+
+
+
+Unlike a full-text match, vector similarity can compare any pair of documents (or points) and assign a similarity score.
+
+It doesn’t rely on keywords or other metadata.
+
+With vector similarity, we can easily achieve a dissimilarity search by inverting the search objective from maximizing similarity to minimizing it.
+
+
+
+The dissimilarity search can find items in areas where previously no other search could be used.
+
+Let’s look at a few examples.
+
+
+
+### Case: mislabeling detection
+
+
+
+For example, we have a dataset of furniture in which we have classified our items into what kind of furniture they are: tables, chairs, lamps, etc.
+
+To ensure our catalog is accurate, we can use a dissimilarity search to highlight items that are most likely mislabeled.
+
+
+
+To do this, we only need to search for the most dissimilar items using the
+
+embedding of the category title itself as a query.
+
+This can be too broad, so by combining it with filters (a [Qdrant superpower](/articles/filtrable-hnsw/)), we can narrow down the search to a specific category.
+
+
+
+
+
+{{< figure src=/articles_data/vector-similarity-beyond-search/mislabelling.png caption=""Mislabeling Detection"" >}}
+
+
+
+The output of this search can be further processed with heavier models or human supervision to detect actual mislabeling.
+
+
+
+### Case: outlier detection
+
+
+
+In some cases, we might not even have labels, but it is still possible to try to detect anomalies in our dataset.
+
+Dissimilarity search can be used for this purpose as well.
+
+
+
+{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/anomaly-detection.png caption=""Anomaly Detection"" >}}
+
+
+
+The only thing we need is a bunch of reference points that we consider ""normal"".
+
+Then we can search for the most dissimilar points to this reference set and use them as candidates for further analysis.
+
+
+
+
+
+## Diversity search
+
+
+
+Even with no query vector provided, (dis-)similarity information can improve the overall selection of items from the dataset.
+
+
+
+The naive approach is to do random sampling.
+
+However, unless our dataset has a uniform distribution, the results of such sampling might be biased toward more frequent types of items.
+
+
+
+{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/diversity-random.png caption=""Example of random sampling"" >}}
+
+
+
+
+
+The similarity information can increase the diversity of those results and make the first overview more interesting.
+
+That is especially useful when users do not yet know what they are looking for and want to explore the dataset.
+
+
+
+{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/diversity-force.png caption=""Example of similarity-based sampling"" >}}
+
+
+
+
+
+Because vector similarity can compare any two points, it makes a diverse selection of the collection possible without any labeling effort.
+
+By maximizing the distance between all points in the response, we can have an algorithm that will sequentially output dissimilar results.
+
+
+
+{{< figure src=/articles_data/vector-similarity-beyond-search/diversity.png caption=""Diversity Search"" >}}
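+
+
+
+A greedy version of that idea is easy to sketch with plain numpy: repeatedly pick the point that is least similar to everything already selected. This is only an illustration of the principle, not a specific Qdrant API.
+
+
+
+```python
+
+import numpy as np
+
+
+
+def diverse_sample(vectors, k):
+
+    # vectors: (n, dim) array of embeddings
+
+    selected = [0]  # start from an arbitrary point
+
+    while len(selected) < k:
+
+        chosen = vectors[selected]
+
+        sims = vectors @ chosen.T / (
+
+            np.linalg.norm(vectors, axis=1, keepdims=True)
+
+            * np.linalg.norm(chosen, axis=1)
+
+        )
+
+        closeness = sims.max(axis=1)   # similarity to the nearest selected point
+
+        closeness[selected] = np.inf   # never re-pick an already selected point
+
+        selected.append(int(closeness.argmin()))
+
+    return selected
+
+```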
+
+
+
+
+
+Some forms of diversity sampling are already used in the industry and are known as [Maximal Marginal Relevance](https://python.langchain.com/docs/integrations/vectorstores/qdrant#maximum-marginal-relevance-search-mmr) (MMR). Techniques like this were developed to make similarity-based retrieval more useful in general-purpose search APIs.
+
+However, there is still room for new ideas, particularly regarding diversity retrieval.
+
+By utilizing more advanced vector-native engines, it could be possible to take use cases to the next level and achieve even better results.
+
+
+
+
+
+## Vector similarity recommendations
+
+
+
+Vector similarity can go beyond a single query vector.
+
+It can combine multiple positive and negative examples for a more accurate retrieval.
+
+A recommendation API in a vector database can take advantage of the vectors that are already stored, by referencing them in queries via their point ids.
+
+Doing this, we can skip query-time neural network inference and make the recommendation search faster.
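+
+
+
+With the Python client, such a request can reference stored points directly by their ids, roughly like this; the collection name and ids are made up for the example.
+
+
+
+```python
+
+client.recommend(
+
+    collection_name=""items"",
+
+    positive=[1001, 1002],  # ids of liked items, their stored vectors are reused
+
+    negative=[2042],        # id of a disliked item
+
+    limit=10,
+
+)
+
+```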
+
+
+
+There are multiple ways to implement recommendations with vectors.
+
+
+
+### Vector-features recommendations
+
+
+
+The first approach is to take all positive and negative examples and average them to create a single query vector.
+
+In this technique, the components that are also present in the negative examples get canceled out, and the resulting vector is a combination of all the features present in the positive examples, but not in the negative ones.
+
+
+
+{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/feature-based-recommendations.png caption=""Vector-Features Based Recommendations"" >}}
+
+
+
+This approach is already implemented in Qdrant, and while it works great when the vectors are assumed to have each of their dimensions represent some kind of feature of the data, sometimes distances are a better tool to judge negative and positive examples.
+
+
+
+### Relative distance recommendations
+
+
+
+Another approach is to use the distance between negative examples to the candidates to help them create exclusion areas.
+
+In this technique, we perform searches near the positive examples while excluding the points that are closer to a negative example than to a positive one.
+
+
+
+{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/relative-distance-recommendations.png caption=""Relative Distance Recommendations"" >}}
+
+
+
+The main use-case of both approaches —of course— is to take some history of user interactions and recommend new items based on it.
+
+
+
+## Discovery
+
+
+
+In many exploration scenarios, the desired destination is not known in advance.
+
+The search process in this case can consist of multiple steps, where each step would provide a little more information to guide the search in the right direction.
+
+
+
+To get more intuition about the possible ways to implement this approach, let’s take a look at how similarity models are trained in the first place:
+
+
+
+The most well-known loss function used to train similarity models is a [triplet-loss](https://en.wikipedia.org/wiki/Triplet_loss).
+
+In this loss, the model is trained by fitting the information of relative similarity of 3 objects: the Anchor, Positive, and Negative examples.
+
+
+
+{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/triplet-loss.png caption=""Triplet Loss"" >}}
+
+
+
+Using the same mechanics, we can look at the training process from the other side.
+
+Given a trained model, the user can provide positive and negative examples, and the goal of the discovery process is then to find suitable anchors across the stored collection of vectors.
+
+
+
+
+
+{{< figure width=60% src=/articles_data/vector-similarity-beyond-search/discovery.png caption=""Reversed triplet loss"" >}}
+
+
+
+Multiple positive-negative pairs can be provided to make the discovery process more accurate.
+
+It is worth mentioning that, as in neural network training, the dataset may contain noise and some contradictory information, so a discovery process should be tolerant of such data imperfections.
+
+
+
+
+
+
+
+{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/discovery-noise.png caption=""Sample pairs"" >}}
+
+
+
+The important difference between this and the recommendation method is that the positive-negative pairs in the discovery method don’t assume that the final result should be close to the positive example; they only assume that it should be closer to the positive than to the negative one.
+
+
+
+{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/discovery-vs-recommendations.png caption=""Discovery vs Recommendation"" >}}
+
+
+
+In combination with filtering or similarity search, the additional context information provided by the discovery pairs can be used as a re-ranking factor.
+
+
+
+## A new API stack for vector databases
+
+
+
+When you introduce vector similarity capabilities into your text search engine, you extend its functionality.
+
+However, it doesn't work the other way around, as the vector similarity as a concept is much broader than some task-specific implementations of full-text search.
+
+
+
+[Vector databases](https://qdrant.tech/), which introduce built-in full-text functionality, must make several compromises:
+
+
+
+- Choose a specific full-text search variant.
+
+- Either sacrifice API consistency or limit vector similarity functionality to only basic kNN search.
+
+- Introduce additional complexity to the system.
+
+
+
+Qdrant, on the contrary, puts vector similarity in the center of its API and architecture, such that it allows us to move towards a new stack of vector-native operations.
+
+We believe that this is the future of vector databases, and we are excited to see what new use-cases will be unlocked by these techniques.
+
+
+
+## Key takeaways:
+
+
+
+- Vector similarity offers advanced data exploration tools beyond traditional full-text search, including dissimilarity search, diversity sampling, and recommendation systems.
+
+- Practical applications of vector similarity include improving data quality through mislabeling detection and anomaly identification.
+
+- Enhanced user experiences are achieved by leveraging advanced search techniques, providing users with intuitive data exploration, and improving decision-making processes.
+
+
+
+Ready to unlock the full potential of your data? [Try a free demo](https://qdrant.tech/contact-us/) to explore how vector similarity can revolutionize your data insights and drive smarter decision-making.
+
+
+",articles/vector-similarity-beyond-search.md
+"---
+
+title: Q&A with Similarity Learning
+
+short_description: A complete guide to building a Q&A system with similarity learning.
+
+description: A complete guide to building a Q&A system using Quaterion and SentenceTransformers.
+
+social_preview_image: /articles_data/faq-question-answering/preview/social_preview.jpg
+
+preview_dir: /articles_data/faq-question-answering/preview
+
+small_preview_image: /articles_data/faq-question-answering/icon.svg
+
+weight: 9
+
+author: George Panchuk
+
+author_link: https://medium.com/@george.panchuk
+
+date: 2022-06-28T08:57:07.604Z
+
+# aliases: [ /articles/faq-question-answering/ ]
+
+---
+
+
+
+# Question-answering system with Similarity Learning and Quaterion
+
+
+
+
+
+Many problems in modern machine learning are approached as classification tasks.
+
+Some are classification tasks by design, but others are artificially transformed into them.
+
+And when you try to apply an approach, which does not naturally fit your problem, you risk coming up with over-complicated or bulky solutions.
+
+In some cases, you would even get worse performance.
+
+
+
+Imagine that you got a new task and decided to solve it with a good old classification approach.
+
+Firstly, you will need labeled data.
+
+If it came on a plate with the task, you're lucky, but if it didn't, you might need to label it manually.
+
+And I guess you are already familiar with how painful it might be.
+
+
+
+Let's assume you somehow labeled all the required data and trained a model.
+
+It shows good performance - well done!
+
+But a day later, your manager told you about a bunch of new data with new classes, which your model has to handle.
+
+You repeat your pipeline.
+
+Then, two days later, you are contacted one more time.
+
+You need to update the model again, and again, and again.
+
+Sounds tedious and expensive to me, doesn't it to you?
+
+
+
+## Automating customer support
+
+
+
+Let's now take a look at a concrete example: the pressing problem of automating customer support.
+
+The service should be capable of answering user questions and retrieving relevant articles from the documentation without any human involvement.
+
+
+
+With the classification approach, you need to build a hierarchy of classification models to determine the question's topic.
+
+You have to collect and label a whole custom dataset of your private documentation topics to train that.
+
+And then, each time you have a new topic in your documentation, you have to re-train the whole pile of classifiers with additionally labeled data.
+
+Can we make it easier?
+
+
+
+## Similarity option
+
+
+
+One of the possible alternatives is Similarity Learning, which we are going to discuss in this article.
+
+It suggests getting rid of the classes and making decisions based on the similarity between objects instead.
+
+To do it quickly, we would need some intermediate representation - embeddings.
+
+Embeddings are high-dimensional vectors with semantic information accumulated in them.
+
+
+
+As embeddings are vectors, one can apply a simple function to calculate the similarity score between them, for example, cosine or euclidean distance.
+
+So with similarity learning, all we need to do is provide pairs of correct questions and answers.
+
+And then, the model will learn to distinguish proper answers by the similarity of embeddings.
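+
+
+
+For instance, with two embeddings at hand, the similarity score is a one-liner; this is a generic numpy sketch, not tied to any particular model.
+
+
+
+```python
+
+import numpy as np
+
+
+
+def cosine_similarity(a, b):
+
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+
+
+question_emb = np.array([0.2, 0.9, 0.1])   # embedding of a question
+
+answer_emb = np.array([0.25, 0.8, 0.15])   # embedding of a candidate answer
+
+
+
+print(cosine_similarity(question_emb, answer_emb))  # close to 1.0 means similar
+
+```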
+
+
+
+>If you want to learn more about similarity learning and applications, check out this [article](/documentation/tutorials/neural-search/) which might be an asset.
+
+
+
+## Let's build
+
+
+
+The similarity learning approach seems a lot simpler than classification in this case, and if you have some
+
+doubts on your mind, let me dispel them.
+
+
+
+As I had no resource with an exhaustive F.A.Q. that might serve as a dataset, I scraped one from the sites of popular cloud providers.
+
+The dataset consists of just 8.5k question-answer pairs; you can take a closer look at it [here](https://github.com/qdrant/demo-cloud-faq).
+
+
+
+Once we have data, we need to obtain embeddings for it.
+
+It is not a novel technique in NLP to represent texts as embeddings.
+
+There are plenty of algorithms and models to calculate them.
+
+You may have heard of Word2Vec, GloVe, ELMo, and BERT; all these models can provide text embeddings.
+
+
+
+However, it is better to produce embeddings with a model trained for semantic similarity tasks.
+
+For instance, we can find such models at [sentence-transformers](https://www.sbert.net/docs/pretrained_models.html).
+
+Authors claim that `all-mpnet-base-v2` provides the best quality, but let's pick `all-MiniLM-L6-v2` for our tutorial
+
+as it is 5x faster and still offers good results.
+
+
+
+Having all this, we can test our approach. We won't take our whole dataset at the moment, but only
+
+a part of it. To measure the model's performance we will use two metrics:
+
+[mean reciprocal rank](https://en.wikipedia.org/wiki/Mean_reciprocal_rank) and
+
+[precision@1](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Precision_at_k).
+
+We have a [ready script](https://github.com/qdrant/demo-cloud-faq/blob/experiments/faq/baseline.py)
+
+for this experiment, let's just launch it now.
+
+
+
+
+
+
+
+That's already quite decent quality, but maybe we can do better?
+
+
+
+## Improving results with fine-tuning
+
+
+
+Actually, we can! The model we used has good natural language understanding, but it has never seen
+
+our data. An approach called `fine-tuning` might be helpful to overcome this issue. With
+
+fine-tuning you don't need to design a task-specific architecture; you take a model pre-trained on
+
+another task, apply a couple of layers on top, and train its parameters.
+
+
+
+Sounds good, but as similarity learning is not as common as classification, it might be a bit inconvenient to fine-tune a model with traditional tools.
+
+For this reason we will use [Quaterion](https://github.com/qdrant/quaterion) - a framework for fine-tuning similarity learning models.
+
+Let's see how we can train models with it.
+
+
+
+First, create our project and call it `faq`.
+
+
+
+> All project dependencies, utils scripts not covered in the tutorial can be found in the
+
+> [repository](https://github.com/qdrant/demo-cloud-faq/tree/tutorial).
+
+
+
+### Configure training
+
+
+
+The main entity in Quaterion is [TrainableModel](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html).
+
+This class makes the model-building process fast and convenient.
+
+
+
+`TrainableModel` is a wrapper around [pytorch_lightning.LightningModule](https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html).
+
+
+
+[Lightning](https://www.pytorchlightning.ai/) handles all the training process complexities, like the training loop, device management, etc., and saves the user from having to implement all this routine manually.
+
+Lightning's modularity is also worth mentioning.
+
+It improves the separation of responsibilities and makes code more readable, robust, and easy to write.
+
+All these features make PyTorch Lightning a perfect training backend for Quaterion.
+
+
+
+To use `TrainableModel` you need to inherit your model class from it.
+
+The same way you would use `LightningModule` in pure `pytorch_lightning`.
+
+Mandatory methods are `configure_loss`, `configure_encoders`, `configure_head`,
+
+`configure_optimizers`.
+
+
+
+The majority of the mentioned methods are quite easy to implement; you'll probably just need a couple of
+
+imports to do that. But `configure_encoders` requires some code :)
+
+
+
+Let's create a `model.py` with model's template and a placeholder for `configure_encoders`
+
+for the moment.
+
+
+
+```python
+
+from typing import Union, Dict, Optional
+
+
+
+from torch.optim import Adam
+
+
+
+from quaterion import TrainableModel
+
+from quaterion.loss import MultipleNegativesRankingLoss, SimilarityLoss
+
+from quaterion_models.encoders import Encoder
+
+from quaterion_models.heads import EncoderHead
+
+from quaterion_models.heads.skip_connection_head import SkipConnectionHead
+
+
+
+
+
+class FAQModel(TrainableModel):
+
+ def __init__(self, lr=10e-5, *args, **kwargs):
+
+ self.lr = lr
+
+ super().__init__(*args, **kwargs)
+
+
+
+ def configure_optimizers(self):
+
+ return Adam(self.model.parameters(), lr=self.lr)
+
+
+
+ def configure_loss(self) -> SimilarityLoss:
+
+ return MultipleNegativesRankingLoss(symmetric=True)
+
+
+
+ def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]:
+
+ ... # ToDo
+
+
+
+ def configure_head(self, input_embedding_size: int) -> EncoderHead:
+
+ return SkipConnectionHead(input_embedding_size)
+
+```
+
+
+
+- `configure_optimizers` is a method provided by Lightning. An eagle-eyed reader may notice the
+
+mysterious `self.model`; it is actually a [SimilarityModel](https://quaterion-models.qdrant.tech/quaterion_models.model.html) instance. We will cover it later.
+
+- `configure_loss` is a loss function to be used during training. You can choose a ready-made implementation from Quaterion.
+
+However, since Quaterion's purpose is not to cover all possible losses, or other entities and
+
+features of similarity learning, but to provide a convenient framework to build and use such models,
+
+there might not be a desired loss. In this case it is possible to use [PytorchMetricLearningWrapper](https://quaterion.qdrant.tech/quaterion.loss.extras.pytorch_metric_learning_wrapper.html)
+
+to bring required loss from [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/) library, which has a rich collection of losses.
+
+You can also implement a custom loss yourself.
+
+- `configure_head` - model built via Quaterion is a combination of encoders and a top layer - head.
+
+As with losses, some head implementations are provided. They can be found at [quaterion_models.heads](https://quaterion-models.qdrant.tech/quaterion_models.heads.html).
+
+
+
+At our example we use [MultipleNegativesRankingLoss](https://quaterion.qdrant.tech/quaterion.loss.multiple_negatives_ranking_loss.html).
+
+This loss is especially good for training retrieval tasks.
+
+It assumes that we pass only positive pairs (similar objects) and considers all other objects as negative examples.
+
+
+
+`MultipleNegativesRankingLoss` uses cosine to measure distance under the hood, but this is a configurable parameter.
+
+Quaterion provides implementation for other distances as well. You can find available ones at [quaterion.distances](https://quaterion.qdrant.tech/quaterion.distances.html).
+
+
+
+Now we can come back to `configure_encoders`:)
+
+
+
+### Configure Encoder
+
+
+
+The encoder task is to convert objects into embeddings.
+
+They usually take advantage of some pre-trained models, in our case `all-MiniLM-L6-v2` from `sentence-transformers`.
+
+In order to use it in Quaterion, we need to create a wrapper inherited from the [Encoder](https://quaterion-models.qdrant.tech/quaterion_models.encoders.encoder.html) class.
+
+
+
+Let's create our encoder in `encoder.py`
+
+
+
+```python
+
+import os
+
+
+
+from torch import Tensor, nn
+
+from sentence_transformers.models import Transformer, Pooling
+
+
+
+from quaterion_models.encoders import Encoder
+
+from quaterion_models.types import TensorInterchange, CollateFnType
+
+
+
+
+
+class FAQEncoder(Encoder):
+
+ def __init__(self, transformer, pooling):
+
+ super().__init__()
+
+ self.transformer = transformer
+
+ self.pooling = pooling
+
+ self.encoder = nn.Sequential(self.transformer, self.pooling)
+
+
+
+ @property
+
+ def trainable(self) -> bool:
+
+ # Defines if we want to train encoder itself, or head layer only
+
+ return False
+
+
+
+ @property
+
+ def embedding_size(self) -> int:
+
+ return self.transformer.get_word_embedding_dimension()
+
+
+
+ def forward(self, batch: TensorInterchange) -> Tensor:
+
+ return self.encoder(batch)[""sentence_embedding""]
+
+
+
+ def get_collate_fn(self) -> CollateFnType:
+
+ return self.transformer.tokenize
+
+
+
+ @staticmethod
+
+ def _transformer_path(path: str):
+
+ return os.path.join(path, ""transformer"")
+
+
+
+ @staticmethod
+
+ def _pooling_path(path: str):
+
+ return os.path.join(path, ""pooling"")
+
+
+
+ def save(self, output_path: str):
+
+ transformer_path = self._transformer_path(output_path)
+
+ os.makedirs(transformer_path, exist_ok=True)
+
+ pooling_path = self._pooling_path(output_path)
+
+ os.makedirs(pooling_path, exist_ok=True)
+
+ self.transformer.save(transformer_path)
+
+ self.pooling.save(pooling_path)
+
+
+
+ @classmethod
+
+ def load(cls, input_path: str) -> Encoder:
+
+ transformer = Transformer.load(cls._transformer_path(input_path))
+
+ pooling = Pooling.load(cls._pooling_path(input_path))
+
+ return cls(transformer=transformer, pooling=pooling)
+
+```
+
+
+
+As you may notice, there are more methods implemented than we've discussed so far. Let's go
+
+through them now!
+
+- In `__init__` we register our pre-trained layers, just as you would in a [torch.nn.Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html) descendant.
+
+
+
+- `trainable` defines whether the current `Encoder` layers should be updated during training or not. If `trainable=False`, then all layers will be frozen.
+
+
+
+- `embedding_size` is the size of the encoder's output; it is required for proper `head` configuration.
+
+
+
+- `get_collate_fn` is a tricky one. Here you should return a method which prepares a batch of raw
+
+data into input suitable for the encoder. If `get_collate_fn` is not overridden, then [default_collate](https://pytorch.org/docs/stable/data.html#torch.utils.data.default_collate) will be used.
+
+
+
+
+
+The remaining methods should be self-explanatory.
+
+
+
+Now that our encoder is ready, we are able to fill in `configure_encoders`.
+
+Just insert the following code into `model.py`:
+
+
+
+```python
+
+...
+
+from sentence_transformers import SentenceTransformer
+
+from sentence_transformers.models import Transformer, Pooling
+
+from faq.encoder import FAQEncoder
+
+
+
+class FAQModel(TrainableModel):
+
+ ...
+
+ def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]:
+
+ pre_trained_model = SentenceTransformer(""all-MiniLM-L6-v2"")
+
+ transformer: Transformer = pre_trained_model[0]
+
+ pooling: Pooling = pre_trained_model[1]
+
+ encoder = FAQEncoder(transformer, pooling)
+
+ return encoder
+
+```
+
+
+
+### Data preparation
+
+
+
+Okay, we have raw data and a trainable model. But we don't know yet how to feed this data to our model.
+
+
+
+
+
+Currently, Quaterion takes two types of similarity representation - pairs and groups.
+
+
+
+The groups format assumes that all objects are split into groups of similar objects. All objects inside
+
+one group are similar, and all objects outside this group are considered dissimilar to them.
+
+
+
+But in the case of pairs, we can only assume similarity between explicitly specified pairs of objects.
+
+
+
+We could apply either approach to our data, but the pairs format seems more intuitive.
+
+
+
+The format in which similarity is represented determines which loss can be used.
+
+For example, _ContrastiveLoss_ and _MultipleNegativesRankingLoss_ work with the pairs format.
+
+
+
+[SimilarityPairSample](https://quaterion.qdrant.tech/quaterion.dataset.similarity_samples.html#quaterion.dataset.similarity_samples.SimilarityPairSample) could be used to represent pairs.
+
+Let's take a look at it:
+
+
+
+```python
+
+@dataclass
+
+class SimilarityPairSample:
+
+ obj_a: Any
+
+ obj_b: Any
+
+ score: float = 1.0
+
+ subgroup: int = 0
+
+```
+
+
+
+A couple of questions might arise here: what are `score` and `subgroup`?
+
+
+
+Well, `score` is a measure of the expected similarity between samples.
+
+If you only need to specify whether two samples are similar or not, you can use `1.0` and `0.0` respectively.
+
+
+
+The `subgroup` parameter allows a more granular description of what the negative examples could be.
+
+By default, all pairs belong to subgroup zero.
+
+That means we would need to specify all negative examples manually.
+
+But in most cases, we can avoid this by assigning different subgroups.
+
+All objects from different subgroups will be considered as negative examples in the loss, and thus it
+
+provides a way to set negative examples implicitly.
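+
+
+
+For instance, here is a toy illustration (made-up questions, not part of our dataset) of two pairs placed in different subgroups:
+
+
+
+```python
+
+from quaterion.dataset.similarity_samples import SimilarityPairSample
+
+
+
+pair_1 = SimilarityPairSample(
+
+    obj_a=""what is a vector database?"",
+
+    obj_b=""a database optimized for storing and searching embeddings"",
+
+    subgroup=1,
+
+)
+
+pair_2 = SimilarityPairSample(
+
+    obj_a=""how do i create a collection?"",
+
+    obj_b=""use the create_collection method of the client"",
+
+    subgroup=2,
+
+)
+
+# Since the pairs are in different subgroups, the answer of `pair_2` implicitly
+
+# serves as a negative example for the question of `pair_1`, and vice versa.
+
+```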
+
+
+
+
+
+With this knowledge, we can now create our `Dataset` class in `dataset.py` to feed our model:
+
+
+
+```python
+
+import json
+
+from typing import List, Dict
+
+
+
+from torch.utils.data import Dataset
+
+from quaterion.dataset.similarity_samples import SimilarityPairSample
+
+
+
+
+
+class FAQDataset(Dataset):
+
+ """"""Dataset class to process .jsonl files with FAQ from popular cloud providers.""""""
+
+
+
+ def __init__(self, dataset_path):
+
+ self.dataset: List[Dict[str, str]] = self.read_dataset(dataset_path)
+
+
+
+ def __getitem__(self, index) -> SimilarityPairSample:
+
+ line = self.dataset[index]
+
+ question = line[""question""]
+
+ # All questions have a unique subgroup
+
+ # Meaning that all other answers are considered negative pairs
+
+ subgroup = hash(question)
+
+ return SimilarityPairSample(
+
+ obj_a=question,
+
+ obj_b=line[""answer""],
+
+ score=1,
+
+ subgroup=subgroup
+
+ )
+
+
+
+ def __len__(self):
+
+ return len(self.dataset)
+
+
+
+ @staticmethod
+
+ def read_dataset(dataset_path) -> List[Dict[str, str]]:
+
+ """"""Read jsonl-file into a memory.""""""
+
+ with open(dataset_path, ""r"") as fd:
+
+ return [json.loads(json_line) for json_line in fd]
+
+```
+
+
+
+We assigned a unique subgroup to each question, so all objects with a different question will be considered as negative examples.
+
+
+
+### Evaluation Metric
+
+
+
+We still haven't added any metrics to the model. For this purpose Quaterion provides `configure_metrics`.
+
+We just need to override it and attach the metrics we are interested in.
+
+
+
+Quaterion has some popular retrieval metrics implemented - such as _precision @ k_ or _mean reciprocal rank_.
+
+They can be found in the [quaterion.eval](https://quaterion.qdrant.tech/quaterion.eval.html) package.
+
+But since only a few metrics are built in, it is assumed that the ones you need will be implemented by you or taken from other libraries.
+
+You will probably need to inherit from `PairMetric` or `GroupMetric` to implement a new one.
+
+
+
+In `configure_metrics` we need to return a list of `AttachedMetric`.
+
+They are just wrappers around metric instances and help to log metrics more easily.
+
+Under the hood, logging is handled by `pytorch-lightning`.
+
+You can configure it as you want - pass the required parameters as keyword arguments to `AttachedMetric`.
+
+For additional info, visit the [logging documentation page](https://pytorch-lightning.readthedocs.io/en/stable/extensions/logging.html).
+
+
+
+Let's add the mentioned metrics to our `FAQModel`.
+
+Add this code to `model.py`:
+
+
+
+```python
+
+...
+
+from quaterion.eval.pair import RetrievalPrecision, RetrievalReciprocalRank
+
+from quaterion.eval.attached_metric import AttachedMetric
+
+
+
+
+
+class FAQModel(TrainableModel):
+
+ def __init__(self, lr=10e-5, *args, **kwargs):
+
+ self.lr = lr
+
+ super().__init__(*args, **kwargs)
+
+
+
+ ...
+
+ def configure_metrics(self):
+
+ return [
+
+ AttachedMetric(
+
+ ""RetrievalPrecision"",
+
+ RetrievalPrecision(k=1),
+
+ prog_bar=True,
+
+ on_epoch=True,
+
+ ),
+
+ AttachedMetric(
+
+ ""RetrievalReciprocalRank"",
+
+ RetrievalReciprocalRank(),
+
+ prog_bar=True,
+
+ on_epoch=True
+
+ ),
+
+ ]
+
+```
+
+
+
+### Fast training with Cache
+
+
+
+Quaterion has one more cherry on top of the cake when it comes to non-trainable encoders.
+
+If encoders are frozen, they are deterministic and emit exactly the same embeddings for the same input data on each epoch.
+
+This provides a way to avoid repeated calculations and reduce training time.
+
+For this purpose, Quaterion has cache functionality.
+
+
+
+
+
+Before training starts, the cache runs one epoch to pre-calculate all embeddings with the frozen encoders and then stores them on the device you chose (currently CPU or GPU).
+
+All you need to do is define which encoders are trainable and configure the cache settings.
+
+And that's it: Quaterion will handle everything else for you.
+
+
+
+To configure the cache, you need to override the `configure_caches` method of `TrainableModel`.
+
+This method should return an instance of [CacheConfig](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheConfig).
+
+
+
+Let's add cache to our model:
+
+```python
+
+...
+
+from quaterion.train.cache import CacheConfig, CacheType
+
+...
+
+class FAQModel(TrainableModel):
+
+ ...
+
+ def configure_caches(self) -> Optional[CacheConfig]:
+
+ return CacheConfig(CacheType.AUTO)
+
+ ...
+
+```
+
+
+
+[CacheType](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheType) determines how the cache will be stored in memory.
+
+
+
+
+
+### Training
+
+
+
+Now we need to combine all our code together in `train.py` and launch a training process.
+
+
+
+```python
+
+import torch
+
+import pytorch_lightning as pl
+
+
+
+from quaterion import Quaterion
+
+from quaterion.dataset import PairsSimilarityDataLoader
+
+
+
+from faq.dataset import FAQDataset
+
+
+
+
+
+def train(model, train_dataset_path, val_dataset_path, params):
+
+ use_gpu = params.get(""cuda"", torch.cuda.is_available())
+
+
+
+ trainer = pl.Trainer(
+
+ min_epochs=params.get(""min_epochs"", 1),
+
+ max_epochs=params.get(""max_epochs"", 500),
+
+ auto_select_gpus=use_gpu,
+
+ log_every_n_steps=params.get(""log_every_n_steps"", 1),
+
+ gpus=int(use_gpu),
+
+ )
+
+ train_dataset = FAQDataset(train_dataset_path)
+
+ val_dataset = FAQDataset(val_dataset_path)
+
+ train_dataloader = PairsSimilarityDataLoader(
+
+ train_dataset, batch_size=1024
+
+ )
+
+ val_dataloader = PairsSimilarityDataLoader(
+
+ val_dataset, batch_size=1024
+
+ )
+
+
+
+ Quaterion.fit(model, trainer, train_dataloader, val_dataloader)
+
+
+
+if __name__ == ""__main__"":
+
+ import os
+
+ from pytorch_lightning import seed_everything
+
+ from faq.model import FAQModel
+
+ from faq.config import DATA_DIR, ROOT_DIR
+
+ seed_everything(42, workers=True)
+
+ faq_model = FAQModel()
+
+ train_path = os.path.join(
+
+ DATA_DIR,
+
+ ""train_cloud_faq_dataset.jsonl""
+
+ )
+
+ val_path = os.path.join(
+
+ DATA_DIR,
+
+ ""val_cloud_faq_dataset.jsonl""
+
+ )
+
+ train(faq_model, train_path, val_path, {})
+
+ faq_model.save_servable(os.path.join(ROOT_DIR, ""servable""))
+
+```
+
+
+
+Here are a couple of classes we haven't seen yet: `PairsSimilarityDataLoader`, which is a native dataloader for
+
+`SimilarityPairSample` objects, and `Quaterion`, the entry point to the training process.
+
+
+
+### Dataset-wise evaluation
+
+
+
+Up to this moment we've calculated only batch-wise metrics.
+
+Such metrics can fluctuate a lot depending on the batch size and can be misleading.
+
+It would be more helpful to calculate a metric on a whole dataset, or at least a large part of it.
+
+Raw data may consume a huge amount of memory, and usually we can't fit it into one batch.
+
+Embeddings, on the contrary, will most probably consume less.
+
+
+
+That's where `Evaluator` enters the scene.
+
+Given a dataset of `SimilaritySample` objects, `Evaluator` first encodes it via `SimilarityModel` and computes the corresponding labels.
+
+After that, it calculates a metric value, which can be more representative than batch-wise ones.
+
+
+
+However, you can still find yourself in a situation where evaluation becomes too slow, or there is not enough memory left.
+
+A bottleneck might be the square distance matrix, which has to be calculated to compute a retrieval metric.
+
+You can mitigate this bottleneck by calculating a rectangular matrix of reduced size.
+
+`Evaluator` accepts a `sampler` with a sample size to select only a specified number of embeddings.
+
+If the sample size is not specified, evaluation is performed on all embeddings.
+
+
+
+Enough words! Let's add the evaluator to our code and finish `train.py`.
+
+
+
+```python
+
+...
+
+from quaterion.eval.evaluator import Evaluator
+
+from quaterion.eval.pair import RetrievalReciprocalRank, RetrievalPrecision
+
+from quaterion.eval.samplers.pair_sampler import PairSampler
+
+...
+
+
+
+def train(model, train_dataset_path, val_dataset_path, params):
+
+ ...
+
+
+
+ metrics = {
+
+ ""rrk"": RetrievalReciprocalRank(),
+
+ ""rp@1"": RetrievalPrecision(k=1)
+
+ }
+
+ sampler = PairSampler()
+
+ evaluator = Evaluator(metrics, sampler)
+
+ results = Quaterion.evaluate(evaluator, val_dataset, model.model)
+
+ print(f""results: {results}"")
+
+```
+
+
+
+### Train Results
+
+
+
+At this point we can train our model. I do it via `python3 -m faq.train`.
+
+
+
+
+
+
+
+After training, all the metrics have improved.
+
+And this training was done in just 3 minutes on a single GPU!
+
+There is no overfitting and the results grow steadily, although I think there is still room for improvement and experimentation.
+
+
+
+## Model serving
+
+
+
+As you may have already noticed, the Quaterion framework is split into two separate libraries: `quaterion`
+
+and [quaterion-models](https://quaterion-models.qdrant.tech/).
+
+The former contains training-related things such as losses, the cache, the `pytorch-lightning` dependency, etc.,
+
+while the latter contains only the modules necessary for serving: encoders, heads and `SimilarityModel` itself.
+
+
+
+The reasons for this separation are:
+
+
+
+- fewer entities you need to operate in a production environment
+
+- reduced memory footprint
+
+
+
+It is essential to isolate training dependencies from the serving environment because the training step is usually more complicated.
+
+Training dependencies quickly get out of control, significantly slowing down deployment and serving and increasing unnecessary resource usage.
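+
+
+
+In practice this split is reflected in two separate packages, so a serving environment only needs the lightweight one (assuming you install them from PyPI under their default names):
+
+
+
+```
+
+pip install quaterion         # training environment: losses, cache, pytorch-lightning, etc.
+
+pip install quaterion-models  # serving environment: encoders, heads and SimilarityModel only
+
+```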
+
+
+
+
+
+The very last line of `train.py` - `faq_model.save_servable(...)` - saves the encoders and the model in a fashion that eliminates all Quaterion dependencies and stores only the data necessary to run the model in production.
+
+
+
+In `serve.py` we load and encode all the answers and then look for the closest vectors to the questions we are interested in:
+
+
+
+```python
+
+import os
+
+import json
+
+
+
+import torch
+
+from quaterion_models.model import SimilarityModel
+
+from quaterion.distances import Distance
+
+
+
+from faq.config import DATA_DIR, ROOT_DIR
+
+
+
+
+
+if __name__ == ""__main__"":
+
+ device = ""cuda:0"" if torch.cuda.is_available() else ""cpu""
+
+ model = SimilarityModel.load(os.path.join(ROOT_DIR, ""servable""))
+
+ model.to(device)
+
+ dataset_path = os.path.join(DATA_DIR, ""val_cloud_faq_dataset.jsonl"")
+
+
+
+ with open(dataset_path) as fd:
+
+ answers = [json.loads(json_line)[""answer""] for json_line in fd]
+
+
+
+ # everything is ready, let's encode our answers
+
+ answer_embeddings = model.encode(answers, to_numpy=False)
+
+
+
+ # Some prepared questions and answers to ensure that our model works as intended
+
+ questions = [
+
+ ""what is the pricing of aws lambda functions powered by aws graviton2 processors?"",
+
+ ""can i run a cluster or job for a long time?"",
+
+ ""what is the dell open manage system administrator suite (omsa)?"",
+
+ ""what are the differences between the event streams standard and event streams enterprise plans?"",
+
+ ]
+
+ ground_truth_answers = [
+
+ ""aws lambda functions powered by aws graviton2 processors are 20% cheaper compared to x86-based lambda functions"",
+
+ ""yes, you can run a cluster for as long as is required"",
+
+ ""omsa enables you to perform certain hardware configuration tasks and to monitor the hardware directly via the operating system"",
+
+ ""to find out more information about the different event streams plans, see choosing your plan"",
+
+ ]
+
+
+
+ # encode our questions and find the closest to them answer embeddings
+
+ question_embeddings = model.encode(questions, to_numpy=False)
+
+ distance = Distance.get_by_name(Distance.COSINE)
+
+ question_answers_distances = distance.distance_matrix(
+
+ question_embeddings, answer_embeddings
+
+ )
+
+ answers_indices = question_answers_distances.min(dim=1)[1]
+
+ for q_ind, a_ind in enumerate(answers_indices):
+
+ print(""Q:"", questions[q_ind])
+
+ print(""A:"", answers[a_ind], end=""\n\n"")
+
+ assert (
+
+ answers[a_ind] == ground_truth_answers[q_ind]
+
+ ), f""<{answers[a_ind]}> != <{ground_truth_answers[q_ind]}>""
+
+```
+
+
+
+We stored our collection of answer embeddings in memory and performed the search directly in Python.
+
+For production purposes, it's better to use some sort of vector search engine like [Qdrant](https://github.com/qdrant/qdrant).
+
+It provides durability, speed boost, and a bunch of other features.
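+
+
+
+As a rough sketch (assuming a locally running Qdrant instance and the `qdrant-client` package; the collection name is made up), the answers from the snippet above could be uploaded and queried like this:
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""localhost"", port=6333)
+
+
+
+# `answers`, `answer_embeddings` and `question_embeddings` come from serve.py above
+
+client.create_collection(
+
+    collection_name=""faq_answers"",
+
+    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
+
+)
+
+client.upsert(
+
+    collection_name=""faq_answers"",
+
+    points=[
+
+        models.PointStruct(id=idx, vector=emb.tolist(), payload={""answer"": answer})
+
+        for idx, (emb, answer) in enumerate(zip(answer_embeddings, answers))
+
+    ],
+
+)
+
+
+
+hits = client.search(
+
+    collection_name=""faq_answers"",
+
+    query_vector=question_embeddings[0].tolist(),
+
+    limit=1,
+
+)
+
+print(hits[0].payload[""answer""])
+
+```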
+
+
+
+So far, we've implemented a whole training process, prepared a model for serving and even applied the
+
+trained model, all with `Quaterion`.
+
+
+
+Thank you for your time and attention!
+
+I hope you enjoyed this huge tutorial and will use `Quaterion` for your similarity learning projects.
+
+
+
+All ready to use code can be found [here](https://github.com/qdrant/demo-cloud-faq/tree/tutorial).
+
+
+
+Stay tuned!:)",articles/faq-question-answering.md
+"---
+
+title: ""Discovery needs context""
+
+short_description: Discover points by constraining the vector space.
+
+description: Discovery Search, an innovative way to constrain the vector space in which a search is performed, relying only on vectors.
+
+social_preview_image: /articles_data/discovery-search/social_preview.jpg
+
+small_preview_image: /articles_data/discovery-search/icon.svg
+
+preview_dir: /articles_data/discovery-search/preview
+
+weight: -110
+
+author: Luis Cossío
+
+author_link: https://coszio.github.io
+
+date: 2024-01-31T08:00:00-03:00
+
+draft: false
+
+keywords:
+
+ - why use a vector database
+
+ - specialty
+
+ - search
+
+ - multimodal
+
+ - state-of-the-art
+
+ - vector-search
+
+---
+
+
+
+# Discovery needs context
+
+
+
+When Christopher Columbus and his crew sailed to cross the Atlantic Ocean, they were not looking for the Americas. They were looking for a new route to India because they were convinced that the Earth was round. They didn't know anything about a new continent, but since they were going west, they stumbled upon it.
+
+
+
+They couldn't reach their _target_, because the geography didn't let them, but once they realized it wasn't India, they claimed it as a new ""discovery"" for their crown. If we consider that sailors need water to sail, then we can establish a _context_ which is positive in the water, and negative on land. Once the sailors' search was stopped by the land, they could not go any further, and a new route was found. Let's keep these concepts of _target_ and _context_ in mind as we explore the new functionality of Qdrant: __Discovery search__.
+
+
+
+## What is discovery search?
+
+
+
+In version 1.7, Qdrant [released](/articles/qdrant-1.7.x/) this novel API that lets you constrain the space in which a search is performed, relying only on pure vectors. This is a powerful tool that lets you explore the vector space in a more controlled way. It can be used to find points that are not necessarily closest to the target, but are still relevant to the search.
+
+
+
+You can already select which points are available to the search by using payload filters. This by itself is very versatile because it allows us to craft complex filters that show only the points that satisfy their criteria deterministically. However, the payload associated with each point is arbitrary and cannot tell us anything about their position in the vector space. In other words, filtering out irrelevant points can be seen as creating a _mask_ rather than a hyperplane that cuts between the positive and negative vectors in the space.
+
+
+
+## Understanding context
+
+
+
+This is where a __vector _context___ can help. We define _context_ as a list of pairs. Each pair is made up of a positive and a negative vector. With a context, we can define hyperplanes within the vector space, which always prefer the positive over the negative vectors. This effectively partitions the space where the search is performed. After the space is partitioned, we then need a _target_ to return the points that are more similar to it.
+
+
+
+![Discovery search visualization](/articles_data/discovery-search/discovery-search.png)
+
+
+
+While positive and negative vectors might suggest the use of the recommendation interface, in the case of _context_ they need to be paired up in a positive-negative fashion. This is inspired by the machine-learning concept of _triplet loss_, where you have three vectors: an anchor, a positive, and a negative. Triplet loss is an evaluation of how much the anchor is closer to the positive than to the negative vector, so that learning happens by ""moving"" the positive and negative points to try to get a better evaluation. However, during discovery, we consider the positive and negative vectors as static points, and we search through the whole dataset for the ""anchors"", or result candidates, which fit this characteristic better.
+
+
+
+![Triplet loss](/articles_data/discovery-search/triplet-loss.png)
+
+
+
+[__Discovery search__](#discovery-search), then, is made up of two main inputs:
+
+
+
+- __target__: the main point of interest
+
+- __context__: the pairs of positive and negative points we just defined.
+
+
+
+However, it is not the only way to use it. Alternatively, you can __only__ provide a context, which invokes a [__Context Search__](#context-search). This is useful when you want to explore the space defined by the context, but don't have a specific target in mind. But hold your horses, we'll get to that [later ↪](#context-search).
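+
+
+
+To make the two modes concrete, here is a rough sketch with the Python client (assuming `qdrant-client` 1.7 or later, a collection named `food`, and made-up point IDs; raw vectors work as well):
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""localhost"", port=6333)
+
+
+
+# Discovery search: a target plus positive/negative context pairs
+
+discovered = client.discover(
+
+    collection_name=""food"",
+
+    target=42,  # the main point of interest
+
+    context=[
+
+        models.ContextExamplePair(positive=7, negative=11),
+
+        models.ContextExamplePair(positive=21, negative=35),
+
+    ],
+
+    limit=10,
+
+)
+
+
+
+# Context search: the same call without a target explores the constrained zone instead
+
+context_only = client.discover(
+
+    collection_name=""food"",
+
+    context=[models.ContextExamplePair(positive=7, negative=11)],
+
+    limit=10,
+
+)
+
+```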
+
+
+
+## Real-world discovery search applications
+
+
+
+Let's talk about the first case: context with a target.
+
+
+
+To understand why this is useful, let's take a look at a real-world example: using a multimodal encoder like [CLIP](https://openai.com/blog/clip/) to search for images, from text __and__ images.
+
+CLIP is a neural network that can embed both images and text into the same vector space. This means that you can search for images using either a text query or an image query. For this example, we'll reuse our [food recommendations demo](https://food-discovery.qdrant.tech/) by typing ""burger"" in the text input:
+
+
+
+![Burger text input in food demo](/articles_data/discovery-search/search-for-burger.png)
+
+
+
+This is basically nearest neighbor search, and while technically we have only images of burgers, one of them is a logo representation of a burger. We're looking for actual burgers, though. Let's try to exclude images like that by adding it as a negative example:
+
+
+
+![Try to exclude burger drawing](/articles_data/discovery-search/try-to-exclude-non-burger.png)
+
+
+
+Wait a second, what has just happened? These pictures have __nothing__ to do with burgers, and still, they appear on the first results. Is the demo broken?
+
+
+
+Turns out, multimodal encoders might not work how you expect them to. Images and text are embedded in the same space, but they are not necessarily close to each other. This means that we can create a mental model of the distribution as two separate planes, one for images and one for text.
+
+
+
+![Mental model of CLIP embeddings](/articles_data/discovery-search/clip-mental-model.png)
+
+
+
+This is where discovery excels because it allows us to constrain the space considering the same mode (images) while using a target from the other mode (text).
+
+
+
+![Cross-modal search with discovery](/articles_data/discovery-search/clip-discovery.png)
+
+
+
+Discovery search also lets us keep giving feedback to the search engine in the shape of more context pairs, so we can keep refining our search until we find what we are looking for.
+
+
+
+Another intuitive example: imagine you're looking for a fish pizza, but pizza names can be confusing, so you can just type ""pizza"", and prefer a fish over meat. Discovery search will let you use these inputs to suggest a fish pizza... even if it's not called fish pizza!
+
+
+
+![Simple discovery example](/articles_data/discovery-search/discovery-example-with-images.png)
+
+
+
+## Context search
+
+
+
+Now, the second case: only providing context.
+
+
+
+Ever been caught in the same recommendations on your favorite music streaming service? This may be caused by getting stuck in a similarity bubble. As user input gets more complex, diversity becomes scarce, and it becomes harder to force the system to recommend something different.
+
+
+
+![Context vs recommendation search](/articles_data/discovery-search/context-vs-recommendation.png)
+
+
+
+__Context search__ solves this by de-focusing the search around a single point. Instead, it selects points randomly from within a zone in the vector space. This search is the most influenced by _triplet loss_, as the score can be thought of as _""how much a point is closer to a negative than a positive vector?""_. If it is closer to the positive one, then its score will be zero, same as any other point within the same zone. But if it is on the negative side, it will be assigned a more and more negative score the further it gets.
+
+
+
+![Context search visualization](/articles_data/discovery-search/context-search.png)
+
+
+
+Creating complex tastes in a high-dimensional space becomes easier since you can just add more context pairs to the search. This way, you should be able to constrain the space enough so you select points from a per-search ""category"" created just from the context in the input.
+
+
+
+![A more complex context search](/articles_data/discovery-search/complex-context-search.png)
+
+
+
+This way you can give refreshing recommendations, while still being in control by providing positive and negative feedback, or even by trying out different permutations of pairs.
+
+
+
+## Key takeaways:
+
+- Discovery search is a powerful tool for controlled exploration in vector spaces.
+
+Context, consisting of positive and negative vectors, constrains the search space, while a target guides the search.
+
+- Real-world applications include multimodal search, diverse recommendations, and context-driven exploration.
+
+- Ready to learn more about the math behind it and how to use it? Check out the [documentation](/documentation/concepts/explore/#discovery-api).
+"---
+
+title: ""FastEmbed: Qdrant's Efficient Python Library for Embedding Generation""
+
+short_description: ""FastEmbed: Quantized Embedding models for fast CPU Generation""
+
+description: ""Learn how to accurately and efficiently create text embeddings with FastEmbed.""
+
+social_preview_image: /articles_data/fastembed/preview/social_preview.jpg
+
+small_preview_image: /articles_data/fastembed/preview/lightning.svg
+
+preview_dir: /articles_data/fastembed/preview
+
+weight: -60
+
+author: Nirant Kasliwal
+
+author_link: https://nirantk.com/about/
+
+date: 2023-10-18T10:00:00+03:00
+
+draft: false
+
+keywords:
+
+ - vector search
+
+ - embedding models
+
+ - Flag Embedding
+
+ - OpenAI Ada
+
+ - NLP
+
+ - embeddings
+
+ - ONNX Runtime
+
+ - quantized embedding model
+
+---
+
+
+
+Data Science and Machine Learning practitioners often find themselves navigating through a labyrinth of models, libraries, and frameworks. Which model to choose, what embedding size, and how to approach tokenizing, are just some questions you are faced with when starting your work. We understood that many data scientists wanted an easier and more intuitive way to do their embedding work. This is why we built FastEmbed, a Python library engineered for speed, efficiency, and usability. We have created easy-to-use default workflows that handle 80% of use cases in NLP embedding.
+
+
+
+## Current State of Affairs for Generating Embeddings
+
+
+
+Usually, you generate embeddings by utilizing PyTorch or TensorFlow models under the hood. However, using these libraries comes at a cost in terms of ease of use and computational speed. This is at least in part because they are built for both model inference and improvement, e.g. via fine-tuning.
+
+
+
+To tackle these problems we built a small library focused on the task of quickly and efficiently creating text embeddings. We also decided to start with only a small sample of best-in-class transformer models. By keeping it small and focused on a particular use case, we could keep our library lean, without all the extraneous dependencies. We ship with a limited set of models, quantize the model weights and seamlessly integrate them with the ONNX Runtime. FastEmbed strikes a balance between inference time, resource utilization and performance (recall/accuracy).
+
+
+
+## Quick Embedding Text Document Example
+
+
+
+Here is an example of how simple we have made embedding text documents:
+
+
+
+```python
+
+from typing import List
+
+import numpy as np
+
+from fastembed.embedding import DefaultEmbedding
+
+
+
+documents: List[str] = [
+
+ ""Hello, World!"",
+
+ ""fastembed is supported by and maintained by Qdrant.""
+
+]
+
+embedding_model = DefaultEmbedding()
+
+embeddings: List[np.ndarray] = list(embedding_model.embed(documents))
+
+```
+
+
+
+These few lines of code do a lot of heavy lifting for you: they download the quantized model, load it using ONNX Runtime, and then run a batched embedding creation of your documents.
+
+
+
+### Code Walkthrough
+
+
+
+Let’s delve into a more advanced example code snippet line-by-line:
+
+
+
+```python
+
+from fastembed.embedding import DefaultEmbedding
+
+```
+
+
+
+Here, we import the DefaultEmbedding class from FastEmbed, a thin wrapper around the FlagEmbedding class. This is the core class responsible for generating embeddings based on your chosen text model, and by default it uses [BAAI/bge-small-en-v1.5](https://huggingface.co/baai/bge-small-en-v1.5).
+
+
+
+```python
+
+documents: List[str] = [
+
+ ""passage: Hello, World!"",
+
+ ""query: How is the World?"",
+
+ ""passage: This is an example passage."",
+
+ ""fastembed is supported by and maintained by Qdrant.""
+
+]
+
+```
+
+
+
+In this list called documents, we define four text strings that we want to convert into embeddings.
+
+
+
+Note the use of prefixes “passage” and “query” to differentiate the types of embeddings to be generated. This is inherited from the cross-encoder implementation of the BAAI/bge series of models themselves. This is particularly useful for retrieval and we strongly recommend using this as well.
+
+
+
+The use of text prefixes like “query” and “passage” isn’t merely syntactic sugar; it informs the algorithm on how to treat the text for embedding generation. A “query” prefix often triggers the model to generate embeddings that are optimized for similarity comparisons, while “passage” embeddings are fine-tuned for contextual understanding. If you omit the prefix, the default behavior is applied, although specifying it is recommended for more nuanced results.
+
+
+
+Next, we initialize the Embedding model with the default model: [BAAI/bge-small-en-v1.5](https://huggingface.co/baai/bge-small-en-v1.5).
+
+
+
+```python
+
+embedding_model = DefaultEmbedding()
+
+```
+
+
+
+The default model and several other models have a context window of a maximum of 512 tokens. This maximum limit comes from the embedding model training and design itself. If you'd like to embed sequences larger than that, we'd recommend using some pooling strategy to get a single vector out of the sequence. For example, you can use the mean of the embeddings of different chunks of a document. This is also what the [SBERT Paper recommends](https://lilianweng.github.io/posts/2021-05-31-contrastive/#sentence-bert)
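+
+
+
+As a rough illustration of that idea (plain NumPy on top of FastEmbed, not a built-in feature; the chunk size of 256 words is arbitrary), you could embed chunks of a long document and average them:
+
+
+
+```python
+
+from typing import List
+
+
+
+import numpy as np
+
+from fastembed.embedding import DefaultEmbedding
+
+
+
+long_text = ""a very long document "" * 2000
+
+words = long_text.split()
+
+
+
+# naive chunking by words; in practice you would rather chunk by tokens
+
+chunks: List[str] = ["" "".join(words[i : i + 256]) for i in range(0, len(words), 256)]
+
+
+
+embedding_model = DefaultEmbedding()
+
+chunk_embeddings = list(embedding_model.embed(chunks))
+
+
+
+# mean-pool the chunk embeddings into a single document vector
+
+document_embedding = np.mean(np.stack(chunk_embeddings), axis=0)
+
+```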
+
+
+
+This model strikes a balance between speed and accuracy, ideal for real-world applications.
+
+
+
+```python
+
+embeddings: List[np.ndarray] = list(embedding_model.embed(documents))
+
+```
+
+
+
+Finally, we call the `embed()` method on our embedding_model object, passing in the documents list. The method returns a Python generator, so we convert it to a list to get all the embeddings. These embeddings are NumPy arrays, optimized for fast mathematical operations.
+
+
+
+The result is a list of NumPy arrays, each corresponding to the embedding of a document in your original documents list. The dimensions of these arrays are determined by the model you chose e.g. for “BAAI/bge-small-en-v1.5” it’s a 384-dimensional vector.
+
+
+
+You can easily parse these NumPy arrays for any downstream application—be it clustering, similarity comparison, or feeding them into a machine learning model for further analysis.
+
+
+
+## 3 Key Features of FastEmbed
+
+
+
+FastEmbed is built for inference speed, without sacrificing (too much) performance:
+
+
+
+1. 50% faster than PyTorch Transformers
+
+2. Better performance than Sentence Transformers and OpenAI Ada-002
+
+3. Cosine similarity of quantized and original model vectors is 0.92
+
+
+
+We use `BAAI/bge-small-en-v1.5` as our DefaultEmbedding, hence we've chosen that for comparison:
+
+
+
+![](/articles_data/fastembed/throughput.png)
+
+
+
+## Under the Hood of FastEmbed
+
+
+
+**Quantized Models**: We quantize the models for CPU (and Mac Metal), giving you the best bang for your buck in compute. Our default model is so small, you can run this in AWS Lambda if you’d like!
+
+
+
+Shout out to Huggingface's [Optimum](https://github.com/huggingface/optimum) – which made it easier to quantize models.
+
+
+
+**Reduced Installation Time**:
+
+
+
+FastEmbed sets itself apart by maintaining a low minimum RAM/Disk usage.
+
+
+
+It’s designed to be agile and fast, useful for businesses looking to integrate text embedding for production usage. For FastEmbed, the list of dependencies is refreshingly brief:
+
+
+
+> - onnx: Version ^1.11 – We’ll try to drop this also in the future if we can!
+
+> - onnxruntime: Version ^1.15
+
+> - tqdm: Version ^4.65 – used only at Download
+
+> - requests: Version ^2.31 – used only at Download
+
+> - tokenizers: Version ^0.13
+
+
+
+This minimized list serves two purposes. First, it significantly reduces the installation time, allowing for quicker deployments. Second, it limits the amount of disk space required, making it a viable option even for environments with storage limitations.
+
+
+
+Notably absent from the dependency list are bulky libraries like PyTorch, and there’s no requirement for CUDA drivers. This is intentional. FastEmbed is engineered to deliver optimal performance right on your CPU, eliminating the need for specialized hardware or complex setups.
+
+
+
+**ONNXRuntime**: The ONNX Runtime gives us the ability to support multiple providers. The quantization we do is currently limited to CPU (Intel), but we intend to support GPU versions of the same in the future as well. This allows for greater customization and optimization, further aligning with your specific performance and computational requirements.
+
+
+
+## Current Models
+
+
+
+We’ve started with a small set of supported models.
+
+
+
+All the models we support are [quantized](https://pytorch.org/docs/stable/quantization.html) to enable even faster computation!
+
+
+
+If you're using FastEmbed and you've got ideas or need certain features, feel free to let us know. Just drop an issue on our GitHub page. That's where we look first when we're deciding what to work on next. Here's where you can do it: [FastEmbed GitHub Issues](https://github.com/qdrant/fastembed/issues).
+
+
+
+When it comes to FastEmbed's DefaultEmbedding model, we're committed to supporting the best Open Source models.
+
+
+
+If anything changes, you'll see a new version number pop up, like going from 0.0.6 to 0.1. So, it's a good idea to lock in the FastEmbed version you're using to avoid surprises.
+
+
+
+## Using FastEmbed with Qdrant
+
+
+
+Qdrant is a Vector Store, offering comprehensive, efficient, and scalable [enterprise solutions](https://qdrant.tech/enterprise-solutions/) for modern machine learning and AI applications. Whether you are dealing with billions of data points, require a low latency performant [vector database solution](https://qdrant.tech/qdrant-vector-database/), or specialized quantization methods – [Qdrant is engineered](/documentation/overview/) to meet those demands head-on.
+
+
+
+The fusion of FastEmbed with Qdrant’s vector store capabilities enables a transparent workflow for seamless embedding generation, storage, and retrieval. This simplifies the API design — while still giving you the flexibility to make significant changes e.g. you can use FastEmbed to make your own embedding other than the DefaultEmbedding and use that with Qdrant.
+
+
+
+Below is a detailed guide on how to get started with FastEmbed in conjunction with Qdrant.
+
+
+
+### Step 1: Installation
+
+
+
+Before diving into the code, the initial step involves installing the Qdrant Client along with the FastEmbed library. This can be done using pip:
+
+
+
+```
+
+pip install qdrant-client[fastembed]
+
+```
+
+
+
+For those using zsh as their shell, you might encounter syntax issues. In such cases, wrap the package name in quotes:
+
+
+
+```
+
+pip install 'qdrant-client[fastembed]'
+
+```
+
+
+
+### Step 2: Initializing the Qdrant Client
+
+
+
+After successful installation, the next step involves initializing the Qdrant Client. This can be done either in-memory or by specifying a database path:
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+# Initialize the client
+
+client = QdrantClient("":memory:"") # or QdrantClient(path=""path/to/db"")
+
+```
+
+
+
+### Step 3: Preparing Documents, Metadata, and IDs
+
+
+
+Once the client is initialized, prepare the text documents you wish to embed, along with any associated metadata and unique IDs:
+
+
+
+```python
+
+docs = [
+
+ ""Qdrant has Langchain integrations"",
+
+ ""Qdrant also has Llama Index integrations""
+
+]
+
+metadata = [
+
+ {""source"": ""Langchain-docs""},
+
+ {""source"": ""LlamaIndex-docs""},
+
+]
+
+ids = [42, 2]
+
+```
+
+
+
+Note that the `add` method we’ll use is overloaded: if you skip the `ids`, we’ll generate those for you. The `metadata` is also optional. So, you can simply use this too:
+
+
+
+```python
+
+docs = [
+
+ ""Qdrant has Langchain integrations"",
+
+ ""Qdrant also has Llama Index integrations""
+
+]
+
+```
+
+
+
+### Step 4: Adding Documents to a Collection
+
+
+
+With your documents, metadata, and IDs ready, you can proceed to add these to a specified collection within Qdrant using the add method:
+
+
+
+```python
+
+client.add(
+
+ collection_name=""demo_collection"",
+
+ documents=docs,
+
+ metadata=metadata,
+
+ ids=ids
+
+)
+
+```
+
+
+
+Inside this function, the Qdrant Client uses FastEmbed to create the text embeddings, generate ids if they’re missing, and then add them to the index with the metadata. This uses the DefaultEmbedding model: [BAAI/bge-small-en-v1.5](https://huggingface.co/baai/bge-small-en-v1.5)
+
+
+
+![INDEX TIME: Sequence Diagram for Qdrant and FastEmbed](/articles_data/fastembed/generate-embeddings-from-docs.png)
+
+
+
+### Step 5: Performing Queries
+
+
+
+Finally, you can perform queries on your stored documents. Qdrant offers a robust querying capability, and the query results can be easily retrieved as follows:
+
+
+
+```python
+
+search_result = client.query(
+
+ collection_name=""demo_collection"",
+
+ query_text=""This is a query document""
+
+)
+
+print(search_result)
+
+```
+
+
+
+Behind the scenes, we first convert the query_text to the embedding and use that to query the vector index.
+
+
+
+![QUERY TIME: Sequence Diagram for Qdrant and FastEmbed integration](/articles_data/fastembed/generate-embeddings-query.png)
+
+
+
+By following these steps, you effectively utilize the combined capabilities of FastEmbed and Qdrant, thereby streamlining your embedding generation and retrieval tasks.
+
+
+
+Qdrant is designed to handle large-scale datasets with billions of data points. Its architecture employs techniques like [binary quantization](https://qdrant.tech/articles/binary-quantization/) and [scalar quantization](https://qdrant.tech/articles/scalar-quantization/) for efficient storage and retrieval. When you inject FastEmbed’s CPU-first design and lightweight nature into this equation, you end up with a system that can scale seamlessly while maintaining low latency.
+
+
+
+## Summary
+
+
+
+If you're curious about how FastEmbed and Qdrant can make your search tasks a breeze, why not take it for a spin? You get a real feel for what it can do. Here are two easy ways to get started:
+
+
+
+1. **Cloud**: Get started with a free plan on the [Qdrant Cloud](https://qdrant.to/cloud?utm_source=qdrant&utm_medium=website&utm_campaign=fastembed&utm_content=article).
+
+
+
+2. **Docker Container**: If you're the DIY type, you can set everything up on your own machine. Here's a quick guide to help you out: [Quick Start with Docker](/documentation/quick-start/?utm_source=qdrant&utm_medium=website&utm_campaign=fastembed&utm_content=article).
+
+
+
+So, go ahead, take it for a test drive. We're excited to hear what you think!
+
+
+
+Lastly, If you find FastEmbed useful and want to keep up with what we're doing, giving our GitHub repo a star would mean a lot to us. Here's the link to [star the repository](https://github.com/qdrant/fastembed).
+
+
+
+If you ever have questions about FastEmbed, please ask them on the Qdrant Discord: [https://discord.gg/Qy6HCJK9Dc](https://discord.gg/Qy6HCJK9Dc)
+",articles/fastembed.md
+"---
+
+title: ""Product Quantization in Vector Search | Qdrant""
+
+short_description: ""Vector search with low memory? Try out our brand-new Product Quantization!""
+
+description: ""Discover product quantization in vector search technology. Learn how it optimizes storage and accelerates search processes for high-dimensional data.""
+
+social_preview_image: /articles_data/product-quantization/social_preview.png
+
+small_preview_image: /articles_data/product-quantization/product-quantization-icon.svg
+
+preview_dir: /articles_data/product-quantization/preview
+
+weight: 4
+
+author: Kacper Łukawski
+
+author_link: https://medium.com/@lukawskikacper
+
+date: 2023-05-30T09:45:00+02:00
+
+draft: false
+
+keywords:
+
+ - vector search
+
+ - product quantization
+
+ - memory optimization
+
+aliases: [ /articles/product_quantization/ ]
+
+---
+
+
+
+# Product Quantization Demystified: Streamlining Efficiency in Data Management
+
+
+
+Qdrant 1.1.0 brought support for [Scalar Quantization](/articles/scalar-quantization/),
+
+a technique for reducing the memory footprint by up to four times, by using `int8` to represent
+
+the values that would normally be represented by `float32`.
+
+
+
+The memory usage in [vector search](https://qdrant.tech/solutions/) might be reduced even further! Please welcome **Product
+
+Quantization**, a brand-new feature of Qdrant 1.2.0!
+
+
+
+## What is Product Quantization?
+
+
+
+Product Quantization converts floating-point numbers into integers like every other quantization
+
+method. However, the process is slightly more complicated than [Scalar Quantization](https://qdrant.tech/articles/scalar-quantization/) and is more customizable, so you can find the sweet spot between memory usage and search precision. This article
+
+covers all the steps required to perform Product Quantization and the way it's implemented in Qdrant.
+
+
+
+## How Does Product Quantization Work?
+
+
+
+Let’s assume we have a few vectors being added to the collection and that our optimizer decided
+
+to start creating a new segment.
+
+
+
+![A list of raw vectors](/articles_data/product-quantization/raw-vectors.png)
+
+
+
+### Cutting the vector into pieces
+
+
+
+First of all, our vectors are going to be divided into **chunks** aka **subvectors**. The number
+
+of chunks is configurable, but as a rule of thumb - the lower it is, the higher the compression rate.
+
+That also comes with reduced search precision, but in some cases, you may prefer to keep the memory
+
+usage as low as possible.
+
+
+
+![A list of chunked vectors](/articles_data/product-quantization/chunked-vectors.png)
+
+
+
+Qdrant API allows choosing the compression ratio from 4x up to 64x. In our example, we selected 16x,
+
+so each subvector will consist of 4 floats (16 bytes), and it will eventually be represented by
+
+a single byte.
+
+
+
+### Clustering
+
+
+
+The chunks of our vectors are then used as input for clustering. Qdrant uses the K-means algorithm,
+
+with $ K = 256 $. It was selected a priori, as this is the maximum number of values a single byte
+
+represents. As a result, we receive a list of 256 centroids for each chunk and assign each of them
+
+a unique id. **The clustering is done separately for each group of chunks.**
+
+
+
+![Clustered chunks of vectors](/articles_data/product-quantization/chunks-clustering.png)
+
+
+
+Each chunk of a vector might now be mapped to the closest centroid. That’s where we lose the precision,
+
+as a single point will only represent a whole subspace. Instead of using a subvector, we can store
+
+the id of the closest centroid. If we repeat that for each chunk, we can approximate the original
+
+embedding as a vector of subsequent ids of the centroids. The dimensionality of the created vector
+
+is equal to the number of chunks, in our case 2.
+
+
+
+![A new vector built from the ids of the centroids](/articles_data/product-quantization/vector-of-ids.png)
+
+
+
+### Full process
+
+
+
+All those steps build the following pipeline of Product Quantization:
+
+
+
+![Full process of Product Quantization](/articles_data/product-quantization/full-process.png)
+
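+
+
+For the curious, here is a toy sketch of the same pipeline in plain Python, using scikit-learn's K-means purely for illustration (this is not how Qdrant implements it internally):
+
+
+
+```python
+
+import numpy as np
+
+from sklearn.cluster import KMeans
+
+
+
+# 1000 random 8-dimensional vectors, split into 2 chunks of 4 floats each
+
+vectors = np.random.rand(1000, 8).astype(np.float32)
+
+n_chunks, n_centroids = 2, 256
+
+chunks = np.split(vectors, n_chunks, axis=1)
+
+
+
+codebooks, codes = [], []
+
+for chunk in chunks:
+
+    kmeans = KMeans(n_clusters=n_centroids, n_init=10).fit(chunk)
+
+    codebooks.append(kmeans.cluster_centers_)      # 256 centroids per chunk
+
+    codes.append(kmeans.labels_.astype(np.uint8))  # one byte per chunk per vector
+
+
+
+# each vector is now approximated by 2 centroid ids: 2 bytes instead of 32
+
+quantized = np.stack(codes, axis=1)
+
+print(quantized.shape)  # (1000, 2)
+
+```
+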
+
+
+## Measuring the distance
+
+
+
+Vector search relies on the distances between points. Enabling Product Quantization slightly changes
+
+the way the distance has to be calculated. The query vector is divided into chunks, and the overall
+
+distance is then computed as a sum of distances between each query subvector and the centroid whose id is stored in
+
+the corresponding chunk of the vector we compare to. We know the coordinates of the centroids, so that's easy.
+
+
+
+![Calculating the distance between the query and the stored vector](/articles_data/product-quantization/distance-calculation.png)
+
+
+
+#### Qdrant implementation
+
+
+
+A search operation requires calculating the distance to multiple points. Since we calculate the
+
+distance to a finite set of centroids, those distances might be precomputed and reused. Qdrant creates
+
+a lookup table for each query, so it can then simply sum up several precomputed terms to measure the
+
+distance between the query and a stored vector.
+
+
+
+| | Centroid 0 | Centroid 1 | ... |
+
+|-------------|------------|------------|-----|
+
+| **Chunk 0** | 0.14213 | 0.51242 | |
+
+| **Chunk 1** | 0.08421 | 0.00142 | |
+
+| **...** | ... | ... | ... |
+
+
+
+## Product Quantization Benchmarks
+
+
+
+Product Quantization comes with a cost - there are some additional operations to perform, so
+
+performance might be reduced. However, memory usage might be reduced drastically as
+
+well. As usual, we ran some benchmarks to give you a brief understanding of what you may expect.
+
+
+
+Again, we reused the same pipeline as in [the other benchmarks we published](/benchmarks/). We
+
+selected [Arxiv-titles-384-angular-no-filters](https://github.com/qdrant/ann-filtering-benchmark-datasets)
+
+and [Glove-100](https://github.com/erikbern/ann-benchmarks/) datasets to measure the impact
+
+of Product Quantization on precision and time. Both experiments were launched with $ EF = 128 $.
+
+The results are summarized in the tables:
+
+
+
+#### Glove-100
+
+
+
+
+
+
+
+
+
+
+
+
+| | Original | 1D clusters | 2D clusters | 3D clusters |
+
+|---|---|---|---|---|
+
+| **Mean precision** | 0.7158 | 0.7143 | 0.6731 | 0.5854 |
+
+| **Mean search time** | 2336 µs | 2750 µs | 2597 µs | 2534 µs |
+
+| **Compression** | x1 | x4 | x8 | x12 |
+
+| **Upload & indexing time** | 147 s | 339 s | 217 s | 178 s |
+
+
+
+
+
+
+
+
+
+Product Quantization increases both indexing and searching time. The higher the compression ratio,
+
+the lower the search precision. The main benefit is undoubtedly the reduced usage of memory.
+
+
+
+#### Arxiv-titles-384-angular-no-filters
+
+
+
+
+
+
+
+
+
+
+
+
+| | Original | 1D clusters | 2D clusters | 4D clusters | 8D clusters |
+
+|---|---|---|---|---|---|
+
+| **Mean precision** | 0.9837 | 0.9677 | 0.9143 | 0.8068 | 0.6618 |
+
+| **Mean search time** | 2719 µs | 4134 µs | 2947 µs | 2175 µs | 2053 µs |
+
+| **Compression** | x1 | x4 | x8 | x16 | x32 |
+
+| **Upload & indexing time** | 332 s | 921 s | 597 s | 481 s | 474 s |
+
+
+
+
+
+
+
+
+
+It turns out that in some cases, Product Quantization may not only reduce the memory usage,
+
+but also the search time.
+
+
+
+## Product Quantization vs Scalar Quantization
+
+
+
+Compared to [Scalar Quantization](https://qdrant.tech/articles/scalar-quantization/), Product Quantization offers a higher compression rate. However, this comes with considerable trade-offs in accuracy, and at times, in-RAM search speed.
+
+
+
+Product Quantization tends to be favored in certain specific scenarios:
+
+
+
+- Deployment in a low-RAM environment where the limiting factor is the number of disk reads rather than the vector comparison itself
+
+- Situations where the dimensionality of the original vectors is sufficiently high
+
+- Cases where indexing speed is not a critical factor
+
+
+
+In circumstances that do not align with the above, Scalar Quantization should be the preferred choice.
+
+
+
+## Using Qdrant for Product Quantization
+
+
+
+
+
+If you’re already a Qdrant user, we have documentation on [Product Quantization](/documentation/guides/quantization/#setting-up-product-quantization) that will help you set up and configure the new quantization for your data and achieve
+
+up to 64x memory reduction.
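+
+
+
+For reference, here is a minimal sketch of enabling it when creating a collection with the Python client (the collection name and vector size are made up; see the documentation above for all options):
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""localhost"", port=6333)
+
+
+
+client.create_collection(
+
+    collection_name=""pq_collection"",
+
+    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
+
+    quantization_config=models.ProductQuantization(
+
+        product=models.ProductQuantizationConfig(
+
+            compression=models.CompressionRatio.X16,  # 16x, as in the example above
+
+            always_ram=True,
+
+        )
+
+    ),
+
+)
+
+```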
+
+
+
+Ready to experience the power of Product Quantization? [Sign up now](https://cloud.qdrant.io/) for a free Qdrant demo and optimize your data management today!",articles/product-quantization.md
+"---
+
+title: ""What is a Vector Database?""
+
+draft: false
+
+slug: what-is-a-vector-database?
+
+short_description: What is a Vector Database? Use Cases & Examples | Qdrant
+
+description: Discover what a vector database is, its core functionalities, and real-world applications. Unlock advanced data management with our comprehensive guide.
+
+preview_dir: /articles_data/what-is-a-vector-database/preview
+
+weight: -100
+
+social_preview_image: /articles_data/what-is-a-vector-database/preview/social-preview.jpg
+
+small_preview_image: /articles_data/what-is-a-vector-database/icon.svg
+
+date: 2024-01-25T09:29:33-03:00
+
+author: Sabrina Aquino
+
+featured: true
+
+tags:
+
+ - vector-search
+
+ - vector-database
+
+ - embeddings
+
+
+
+aliases: [ /blog/what-is-a-vector-database/ ]
+
+---
+
+
+
+# Why use a Vector Database & How Does it Work?
+
+
+
+In the ever-evolving landscape of data management and artificial intelligence, [vector databases](https://qdrant.tech/qdrant-vector-database/) have emerged as a revolutionary tool for efficiently handling complex, high-dimensional data. But what exactly is a vector database? This comprehensive guide delves into the fundamentals of vector databases, exploring their unique capabilities, core functionalities, and real-world applications.
+
+
+
+## What is a Vector Database?
+
+
+
+A [Vector Database](https://qdrant.tech/qdrant-vector-database/) is a specialized database system designed for efficiently indexing, querying, and retrieving high-dimensional vector data. Those systems enable advanced data analysis and similarity-search operations that extend well beyond the traditional, structured query approach of conventional databases.
+
+
+
+## Why use a Vector Database?
+
+
+
+The data flood is real.
+
+
+
+In 2024, we're drowning in unstructured data like images, text, and audio that doesn't fit into neatly organized tables. Still, we need a way to easily tap into the value within this chaos of almost 330 million terabytes of data being created each day.
+
+
+
+Traditional databases, even with extensions that provide some vector handling capabilities, struggle with the complexities and demands of high-dimensional vector data.
+
+
+
+Handling vector data is extremely resource-intensive. A typical vector takes around 6 KB. You can see how scaling to millions of vectors can demand substantial system memory and computational resources, which is at least very challenging for traditional [OLTP](https://www.ibm.com/topics/oltp) and [OLAP](https://www.ibm.com/topics/olap) databases to manage.
+
+
+
+![](/articles_data/what-is-a-vector-database/Why-Use-Vector-Database.jpg)
+
+
+
+Vector databases allow you to understand the **context** or **conceptual similarity** of unstructured data by representing them as **vectors**, enabling advanced analysis and retrieval based on data similarity.
+
+
+
+For example, in recommendation systems, vector databases can analyze user behavior and item characteristics to suggest products or content with a high degree of personal relevance.
+
+
+
+In search engines and research databases, they enhance the user experience by providing results that are **semantically** similar to the query. They do not rely solely on the exact words typed into the search bar.
+
+
+
+If you're new to the vector search space, this article explains the key concepts and relationships that you need to know.
+
+
+
+So let's get into it.
+
+
+
+
+
+## What is Vector Data?
+
+
+
+To understand vector databases, let's begin by defining what is a 'vector' or 'vector data'.
+
+
+
+Vectors are a **numerical representation** of some type of complex information.
+
+
+
+To represent textual data, for example, a vector will encapsulate the nuances of language, such as semantics and context.
+
+
+
+With an image, the vector data encapsulates aspects like color, texture, and shape. The **dimensions** relate to the complexity and the amount of information each image contains.
+
+
+
+Each pixel in an image can be seen as one dimension, as it holds data (like color intensity values for red, green, and blue channels in a color image). So even a small image with thousands of pixels translates to thousands of dimensions.
+
+
+
+So from now on, when we talk about high-dimensional data, we mean that the data contains a large number of data points (pixels, features, semantics, syntax).
+
+
+
+The **creation** of vector data (so we can store this high-dimensional data on our vector database) is primarily done through **embeddings**.
+
+
+
+![](/articles_data/what-is-a-vector-database/Vector-Data.jpg)
+
+
+
+### How do Embeddings Work?
+
+
+
+[Embeddings](https://qdrant.tech/articles/what-are-embeddings/) translate this high-dimensional data into a more manageable, **lower-dimensional** vector form that's more suitable for machine learning and data processing applications, typically through **neural network models**.
+
+
+
+In creating dimensions for text, for example, the process involves analyzing the text to capture its linguistic elements.
+
+
+
+Transformer-based neural networks like **BERT** (Bidirectional Encoder Representations from Transformers) and **GPT** (Generative Pre-trained Transformer), are widely used for creating text embeddings.
+
+
+
+Each layer extracts different levels of features, such as context, semantics, and syntax.
+
+
+
+![](/articles_data/what-is-a-vector-database/How-Do-Embeddings-Work_.jpg)
+
+
+
+
+
+The final layers of the network condense this information into a vector that is a compact, lower-dimensional representation of the input but still retains the essential information.
+
+
+
+
+
+## The Core Functionalities of Vector Databases
+
+
+
+### Vector Database Indexing
+
+
+
+Have you ever tried to find a specific face in a massive crowd photo? Well, vector databases face a similar challenge when dealing with tons of high-dimensional vectors.
+
+
+
+Now, imagine dividing the crowd into smaller groups based on hair color, then eye color, then clothing style. Each layer gets you closer to who you’re looking for. Vector databases use similar **multi-layered** structures called indexes to organize vectors based on their ""likeness.""
+
+
+
+This way, finding similar images becomes a quick hop across related groups, instead of scanning every picture one by one.
+
+
+
+
+
+![](/articles_data/what-is-a-vector-database/Indexing.jpg)
+
+
+
+
+
+Different indexing methods exist, each with its strengths. [HNSW](/articles/filtrable-hnsw/) balances speed and accuracy like a well-connected network of shortcuts in the crowd. Others, like IVF or Product Quantization, focus on specific tasks or memory efficiency.
+
+
+
+
+
+### Binary Quantization
+
+
+
+Quantization is a technique used for reducing the total size of the database. It works by compressing vectors into a more compact representation at the cost of accuracy.
+
+
+
+[Binary Quantization](/articles/binary-quantization/) is a fast indexing and data compression method used by Qdrant. It supports vector comparisons, which can dramatically speed up query processing times (up to 40x faster!).
+
+
+
+Think of each data point as a ruler. Binary quantization splits this ruler in half at a certain point, marking everything above as ""1"" and everything below as ""0"". This [binarization](https://deepai.org/machine-learning-glossary-and-terms/binarization) process results in a string of bits, representing the original vector.
+
+
+
+
+
+
+
+![](/articles_data/what-is-a-vector-database/Binary-Quant.png)
+
+
+
+
+
+This ""quantized"" code is much smaller and easier to compare. Especially for OpenAI embeddings, this type of quantization has proven to achieve a massive performance improvement at a lower cost of accuracy.
+
+
+
+
+
+### Similarity Search
+
+
+
+[Similarity search](/documentation/concepts/search/) allows you to search not by keywords but by meaning. This way you can run searches such as finding songs that evoke the same mood, images that match your artistic vision, or even emotional patterns in text.
+
+
+
+The way it works is, when the user queries the database, this query is also converted into a vector (the query vector). The [vector search](/documentation/overview/vector-search/) starts at the top layer of the HNSW index, where the algorithm quickly identifies the region of the graph likely to contain vectors closest to the query vector. Within that region, the algorithm compares the query vector to candidate vectors, using metrics like ""distance"" or ""similarity"" to gauge how close they are.
+
+
+
+The search then moves down through the layers, progressively narrowing in on more closely related vectors. The goal is to narrow down the dataset to the most relevant items. The image below illustrates this.
+
+
+
+
+
+![](/articles_data/what-is-a-vector-database/Similarity-Search-and-Retrieval.jpg)
+
+
+
+
+
+Once the closest vectors are identified at the bottom layer, these points translate back to actual data, like images or music, representing your search results.
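+
+In code, a similarity search boils down to embedding the query and asking the database for its nearest neighbors. Below is a hedged sketch using the Python client; the collection name, embedding model, and limit are placeholders.
+
+```python
+# Sketch: embed a free-text query and retrieve the closest points.
+from qdrant_client import QdrantClient
+from sentence_transformers import SentenceTransformer
+
+client = QdrantClient(url='http://localhost:6333')
+encoder = SentenceTransformer('all-MiniLM-L6-v2')  # example embedding model
+
+query_vector = encoder.encode('calm rainy-day songs').tolist()
+
+hits = client.search(
+    collection_name='songs',
+    query_vector=query_vector,
+    limit=5,  # the five most similar items
+)
+
+for hit in hits:
+    print(hit.id, hit.score, hit.payload)
+```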
+
+
+
+
+
+### Scalability
+
+
+
+[Vector databases](https://qdrant.tech/qdrant-vector-database/) often deal with datasets that comprise billions of high-dimensional vectors. This data isn't just large in volume but also complex in nature, requiring more computing power and memory to process. Scalable systems can handle this increased complexity without performance degradation. This is achieved through a combination of a **distributed architecture**, **dynamic resource allocation**, **data partitioning**, **load balancing**, and **optimization techniques**.
+
+
+
+Qdrant exemplifies scalability in vector databases. It [leverages Rust's efficiency](https://qdrant.tech/articles/why-rust/) in **memory management** and **performance**, which allows the handling of large-scale data with optimized resource usage.
+
+
+
+
+
+### Efficient Query Processing
+
+
+
+The key to efficient query processing in these databases is linked to their **indexing methods**, which enable quick navigation through complex data structures. By mapping and accessing the high-dimensional vector space, HNSW and similar indexing techniques significantly reduce the time needed to locate and retrieve relevant data.
+
+
+
+
+
+
+
+![](/articles_data/what-is-a-vector-database/search-query.jpg)
+
+
+
+
+
+Other techniques like **handling computational load** and **parallel processing** are used for performance, especially when managing multiple simultaneous queries. Complementing them, **strategic caching** is also employed to store frequently accessed data, facilitating a quicker retrieval for subsequent queries.
+
+
+
+
+
+### Using Metadata and Filters
+
+
+
+Filters use metadata to refine search queries within the database. For example, in a database containing text documents, a user might want to search for documents not only based on textual similarity but also filter the results by publication date or author.
+
+
+
+When a query is made, the system can use **both** the vector data and the metadata to process the query. In other words, the database doesn’t just look for the closest vectors. It also considers the additional criteria set by the metadata filters, creating a more customizable search experience.
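+
+A sketch of such a filtered query with the Python client might look like the following; the field names (`author`, `published_year`) and values are hypothetical.
+
+```python
+# Sketch: combine vector similarity with metadata conditions.
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url='http://localhost:6333')
+
+query_vector = [0.2] * 384  # placeholder; normally produced by your embedding model
+
+hits = client.search(
+    collection_name='documents',
+    query_vector=query_vector,
+    query_filter=models.Filter(
+        must=[
+            models.FieldCondition(key='author', match=models.MatchValue(value='Jane Doe')),
+            models.FieldCondition(key='published_year', range=models.Range(gte=2020)),
+        ]
+    ),
+    limit=10,
+)
+```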
+
+
+
+
+
+![](/articles_data/what-is-a-vector-database/metadata.jpg)
+
+
+
+
+
+
+
+### Data Security and Access Control
+
+
+
+Vector databases often store sensitive information. This could include personal data in customer databases, confidential images, or proprietary text documents. Ensuring data security means protecting this information from unauthorized access, breaches, and other forms of cyber threats.
+
+
+
+At Qdrant, this includes mechanisms such as:
+
+
+
+ - User authentication
+
+ - Encryption for data at rest and in transit
+
+ - Keeping audit trails
+
+ - Advanced database monitoring and anomaly detection
+
+
+
+
+
+## What is the Architecture of a Vector Database?
+
+
+
+A vector database is made of multiple different entities and relations. Here's a high-level overview of Qdrant's terminologies and how they fit into the larger picture:
+
+
+
+
+
+![](/articles_data/what-is-a-vector-database/Architecture-of-a-Vector-Database.jpg)
+
+
+
+
+
+**Collections**: A [collection](/documentation/concepts/collections/) is a named set of data points, where each point is a vector with an associated payload. All vectors within a collection must have the same dimensionality and be comparable using a single metric.
+
+
+
+**Distance Metrics**: These metrics are used to measure the similarity between vectors. The choice of distance metric is made when creating a collection. It depends on the nature of the vectors and how they were generated, considering the neural network used for the encoding.
+
+
+
+**Points**: Each [point](/documentation/concepts/points/) consists of a **vector** and can also include an optional **identifier** (ID) and **[payload](/documentation/concepts/payload/)**. The vector represents the high-dimensional data and the payload carries metadata information in a JSON format, giving the data point more context or attributes.
+
+
+
+**Storage Options**: There are two primary storage options. The in-memory storage option keeps all vectors in RAM, which allows for the highest speed in data access since disk access is only required for persistence.
+
+
+
+Alternatively, the Memmap storage option creates a virtual address space linked with the file on disk, giving a balance between memory usage and access speed.
+
+
+
+**Clients**: Qdrant supports various programming languages for client interaction, such as Python, Go, Rust, and TypeScript. This way, developers can connect to and interact with Qdrant using the programming language they prefer.
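+
+Putting these pieces together, here is a hedged sketch of creating a collection and upserting a single point with a payload using the Python client; the names, vector size, and values are placeholders chosen for brevity.
+
+```python
+# Sketch: a collection, plus one point with a vector, an ID, and a JSON payload.
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url='http://localhost:6333')
+
+client.create_collection(
+    collection_name='documents',
+    vectors_config=models.VectorParams(size=4, distance=models.Distance.DOT),
+)
+
+client.upsert(
+    collection_name='documents',
+    points=[
+        models.PointStruct(
+            id=1,                             # optional identifier
+            vector=[0.05, 0.61, 0.76, 0.74],  # the high-dimensional data (tiny here for brevity)
+            payload={'author': 'Jane Doe', 'published_year': 2021},  # metadata in JSON format
+        )
+    ],
+)
+```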
+
+
+
+
+
+## Vector Database Use Cases
+
+
+
+If we had to summarize the [use cases for vector databases](https://qdrant.tech/use-cases/) into a single word, it would be ""match"". They are great at finding non-obvious ways to correspond or “match” data with a given query. Whether it's through similarity in images, text, user preferences, or patterns in data.
+
+
+
+Here are some examples of how to take advantage of using vector databases:
+
+
+
+[Personalized recommendation systems](https://qdrant.tech/recommendations/) to analyze and interpret complex user data, such as preferences, behaviors, and interactions. For example, on Spotify, if a user frequently listens to the same song or skips it, the recommendation engine takes note of this to personalize future suggestions.
+
+
+
+[Semantic search](https://qdrant.tech/documentation/tutorials/search-beginners/) allows systems to capture the deeper semantic meaning of words and text. In modern search engines, if someone searches for ""tips for planting in spring,"" the engine tries to understand the intent and contextual meaning behind the query instead of just matching the words themselves.
+
+
+
+Here’s an example of a [vector search engine for Startups](https://demo.qdrant.tech/) made with Qdrant:
+
+
+
+
+
+![](/articles_data/what-is-a-vector-database/semantic-search.png)
+
+
+
+There are many other use cases, such as **fraud detection and anomaly analysis**, used in sectors like finance and cybersecurity to detect anomalies and potential fraud, and **Content-Based Image Retrieval (CBIR)**, which searches images by comparing vector representations rather than metadata or tags.
+
+
+
+Those are just a few examples. The ability of vector databases to “match” data with queries makes them essential for multiple types of applications. Here are some more [use case examples](/use-cases/) you can take a look at.
+
+
+
+
+
+### Get Started With Qdrant’s Vector Database Today
+
+
+
+Now that you're familiar with the core concepts around vector databases, it’s time to get your hands dirty. [Start by building your own semantic search engine](/documentation/tutorials/search-beginners/) for science fiction books in just about 5 minutes with the help of Qdrant. You can also watch our [video tutorial](https://www.youtube.com/watch?v=AASiqmtKo54).
+
+
+
+Feeling ready to dive into a more complex project? Take the next step and get started building an actual [Neural Search Service with a complete API and a dataset](/documentation/tutorials/neural-search/).
+
+
+
+Let’s get into action!
+",articles/what-is-a-vector-database.md
+"---
+
+title: Layer Recycling and Fine-tuning Efficiency
+
+short_description: Tradeoff between speed and performance in layer recycling
+
+description: Learn when and how to use layer recycling to achieve different performance targets.
+
+preview_dir: /articles_data/embedding-recycling/preview
+
+small_preview_image: /articles_data/embedding-recycling/icon.svg
+
+social_preview_image: /articles_data/embedding-recycling/preview/social_preview.jpg
+
+weight: 10
+
+author: Yusuf Sarıgöz
+
+author_link: https://medium.com/@yusufsarigoz
+
+date: 2022-08-23T13:00:00+03:00
+
+draft: false
+
+aliases: [ /articles/embedding-recycler/ ]
+
+---
+
+
+
+A recent [paper](https://arxiv.org/abs/2207.04993)
+
+by Allen AI has attracted attention in the NLP community as they cache the output of a certain intermediate layer
+
+in the training and inference phases to achieve a speedup of ~83%
+
+with a negligible loss in model performance.
+
+This technique is quite similar to [the caching mechanism in Quaterion](https://quaterion.qdrant.tech/tutorials/cache_tutorial.html),
+
+but the latter is intended for any data modality, while the former focuses only on language models,
+
+although it presents important insights from the authors' experiments.
+
+In this post, I will share our findings combined with those,
+
+hoping to provide the community with a wider perspective on layer recycling.
+
+
+
+## How layer recycling works
+
+The main idea of layer recycling is to accelerate the training (and inference)
+
+by avoiding repeated passes of the same data object through the frozen layers.
+
+Instead, it is possible to pass objects through those layers only once,
+
+cache the output
+
+and use them as inputs to the unfrozen layers in future epochs.
+
+
+
+In the paper, they usually cache 50% of the layers, e.g., the output of the 6th multi-head self-attention block in a 12-block encoder.
+
+However, they find out that it does not work equally for all the tasks.
+
+For example, the question answering task suffers from a more significant degradation in performance with 50% of the layers recycled,
+
+and they choose to lower it down to 25% for this task,
+
+so they suggest determining the level of caching based on the task at hand.
+
+They also note that caching provides a more considerable speedup for larger models and on lower-end machines.
+
+
+
+In layer recycling, the cache is hit for exactly the same object.
+
+It is easy to achieve this in textual data as it is easily hashable,
+
+but you may need more advanced tricks to generate keys for the cache
+
+when you want to generalize this technique to diverse data types.
+
+For instance, hashing PyTorch tensors [does not work as you may expect](https://github.com/joblib/joblib/issues/1282).
+
+Quaterion comes with an intelligent key extractor that may be applied to any data type,
+
+but you can also customize it with a callable passed as an argument.
+
+Thanks to this flexibility, we were able to run a variety of experiments in different setups,
+
+and I believe that these findings will be helpful for your future projects.
+
+
+
+## Experiments
+
+We conducted different experiments to test the performance with:
+
+1. Different numbers of layers recycled in [the similar cars search example](https://quaterion.qdrant.tech/tutorials/cars-tutorial.html).
+
+2. Different numbers of samples in the dataset for training and fine-tuning for similar cars search.
+
+3. Different numbers of layers recycled in [the question answering example](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html).
+
+
+
+## Easy layer recycling with Quaterion
+
+The easiest way of caching layers in Quaterion is to compose a [TrainableModel](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel)
+
+with a frozen [Encoder](https://quaterion-models.qdrant.tech/quaterion_models.encoders.encoder.html#quaterion_models.encoders.encoder.Encoder)
+
+and an unfrozen [EncoderHead](https://quaterion-models.qdrant.tech/quaterion_models.heads.encoder_head.html#quaterion_models.heads.encoder_head.EncoderHead).
+
+Therefore, we modified the `TrainableModel` in the [example](https://github.com/qdrant/quaterion/blob/master/examples/cars/models.py)
+
+as in the following:
+
+
+
+```python
+
+class Model(TrainableModel):
+
+ # ...
+
+
+
+
+
+ def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]:
+
+ pre_trained_encoder = torchvision.models.resnet34(pretrained=True)
+
+ self.avgpool = copy.deepcopy(pre_trained_encoder.avgpool)
+
+ self.finetuned_block = copy.deepcopy(pre_trained_encoder.layer4)
+
+ modules = []
+
+
+
+ for name, child in pre_trained_encoder.named_children():
+
+ modules.append(child)
+
+ if name == ""layer3"":
+
+ break
+
+
+
+ pre_trained_encoder = nn.Sequential(*modules)
+
+
+
+ return CarsEncoder(pre_trained_encoder)
+
+
+
+ def configure_head(self, input_embedding_size) -> EncoderHead:
+
+ return SequentialHead(self.finetuned_block,
+
+ self.avgpool,
+
+ nn.Flatten(),
+
+ SkipConnectionHead(512, dropout=0.3, skip_dropout=0.2),
+
+ output_size=512)
+
+
+
+
+
+ # ...
+
+```
+
+
+
+This trick lets us finetune one more layer from the base model as a part of the `EncoderHead`
+
+while still benefiting from the speedup in the frozen `Encoder` provided by the cache.
+
+
+
+
+
+## Experiment 1: Percentage of layers recycled
+
+The paper states that recycling 50% of the layers yields little to no loss in performance when compared to full fine-tuning.
+
+In this setup, we compared performances of four methods:
+
+1. Freeze the whole base model and train only `EncoderHead`.
+
+2. Move one of the four residual blocks to `EncoderHead` and train it together with the head layer while freezing the rest (75% layer recycling).
+
+3. Move two of the four residual blocks to `EncoderHead` while freezing the rest (50% layer recycling).
+
+4. Train the whole base model together with `EncoderHead`.
+
+
+
+**Note**: During these experiments, we used ResNet34 instead of ResNet152 as the pretrained model
+
+in order to be able to use a reasonable batch size in full training.
+
+The baseline score with ResNet34 is 0.106.
+
+
+
+| Model | RRP |
+
+| ------------- | ---- |
+
+| Full training | 0.32 |
+
+| 50% recycling | 0.31 |
+
+| 75% recycling | 0.28 |
+
+| Head only | 0.22 |
+
+| Baseline | 0.11 |
+
+
+
+As is seen in the table, the performance in 50% layer recycling is very close to that in full training.
+
+Additionally, we can still have a considerable speedup in 50% layer recycling with only a small drop in performance.
+
+Although 75% layer recycling is better than training only `EncoderHead`,
+
+its performance drops quickly when compared to 50% layer recycling and full training.
+
+
+
+## Experiment 2: Amount of available data
+
+In the second experiment setup, we compared performances of fine-tuning strategies with different dataset sizes.
+
+We sampled 50% of the training set randomly while still evaluating models on the whole validation set.
+
+
+
+| Model | RRP |
+
+| ------------- | ---- |
+
+| Full training | 0.27 |
+
+| 50% recycling | 0.26 |
+
+| 75% recycling | 0.25 |
+
+| Head only | 0.21 |
+
+| Baseline | 0.11 |
+
+
+
+This experiment shows that the smaller the available dataset is,
+
+the bigger the drop in performance we observe in full training, 50% and 75% layer recycling.
+
+On the other hand, the level of degradation in training only `EncoderHead` is really small when compared to others.
+
+When we further reduce the dataset size, full training becomes infeasible at some point,
+
+while we can still improve over the baseline by training only `EncoderHead`.
+
+
+
+
+
+## Experiment 3: Layer recycling in question answering
+
+We also wanted to test layer recycling in a different domain
+
+as one of the most important takeaways of the paper is that
+
+the performance of layer recycling is task-dependent.
+
+To this end, we set up an experiment with the code from the [Question Answering with Similarity Learning tutorial](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html).
+
+
+
+| Model | RP@1 | RRK |
+
+| ------------- | ---- | ---- |
+
+| Full training | 0.76 | 0.65 |
+
+| 50% recycling | 0.75 | 0.63 |
+
+| 75% recycling | 0.69 | 0.59 |
+
+| Head only | 0.67 | 0.58 |
+
+| Baseline | 0.64 | 0.55 |
+
+
+
+
+
+In this task, 50% layer recycling can still do a good job with only a small drop in performance when compared to full training.
+
+Moreover, the level of degradation is smaller than that in the similar cars search example.
+
+This can be attributed to several factors such as the pretrained model quality, dataset size and task definition,
+
+and it can be the subject of a more elaborate and comprehensive research project.
+
+Another observation is that the performance of 75% layer recycling is closer to that of training only `EncoderHead`
+
+than 50% layer recycling.
+
+
+
+## Conclusion
+
+We set up several experiments to test layer recycling under different constraints
+
+and confirmed that layer recycling yields varying performances with different tasks and domains.
+
+One of the most important observations is the fact that the level of degradation in layer recycling
+
+is sublinear compared to full training, i.e., we lose a smaller percentage of performance than
+
+the percentage of layers we recycle. Additionally, training only `EncoderHead`
+
+is more resistant to small dataset sizes.
+
+There is even a critical size under which full training does not work at all.
+
+The issue of performance differences shows that there is still room for further research on layer recycling,
+
+and luckily Quaterion is flexible enough to run such experiments quickly.
+
+We will continue to report our findings on fine-tuning efficiency.
+
+
+
+**Fun fact**: The preview image for this article was created with Dall.e with the following prompt: ""Photo-realistic robot using a tuning fork to adjust a piano.""
+
+[Click here](/articles_data/embedding-recycling/full.png)
+
+to see it in full size!",articles/embedding-recycler.md
+"---
+
+title: ""What are Vector Embeddings? - Revolutionize Your Search Experience""
+
+draft: false
+
+slug: what-are-embeddings?
+
+short_description: Explore the power of vector embeddings. Learn to use numerical machine learning representations to build a personalized Neural Search Service with Fastembed.
+
+description: Discover the power of vector embeddings. Learn how to harness the potential of numerical machine learning representations to create a personalized Neural Search Service with FastEmbed.
+
+preview_dir: /articles_data/what-are-embeddings/preview
+
+weight: -102
+
+social_preview_image: /articles_data/what-are-embeddings/preview/social-preview.jpg
+
+small_preview_image: /articles_data/what-are-embeddings/icon.svg
+
+date: 2024-02-06T15:29:33-03:00
+
+author: Sabrina Aquino
+
+author_link: https://github.com/sabrinaaquino
+
+featured: true
+
+tags:
+
+ - vector-search
+
+ - vector-database
+
+ - embeddings
+
+ - machine-learning
+
+ - artificial intelligence
+
+
+
+---
+
+
+
+> **Embeddings** are numerical machine learning representations of the semantics of the input data. They capture the meaning of complex, high-dimensional data, like text, images, or audio, into vectors, enabling algorithms to process and analyze the data more efficiently.
+
+
+
+You know when you’re scrolling through your social media feeds and the content just feels incredibly tailored to you? There's the news you care about, followed by a perfect tutorial with your favorite tech stack, and then a meme that makes you laugh so hard you snort.
+
+
+
+Or what about how YouTube recommends videos you end up loving, by creators you’ve never even heard of, even though you never sent YouTube a note about your ideal content lineup?
+
+
+
+This is the magic of embeddings.
+
+
+
+These are the result of **deep learning models** analyzing the data of your interactions online: your likes, shares, comments, searches, the kind of content you linger on, and even the content you decide to skip. This analysis also allows the algorithm to predict future content that you are likely to appreciate.
+
+
+
+The same embeddings can be repurposed for search, ads, and other features, creating a highly personalized user experience.
+
+
+
+
+
+![How embeddings are applied to perform recommendantions and other use cases](/articles_data/what-are-embeddings/Embeddings-Use-Case.jpg)
+
+
+
+
+
+They make [high-dimensional](https://www.sciencedirect.com/topics/computer-science/high-dimensional-data) data more manageable. This reduces storage requirements, improves computational efficiency, and makes sense of a ton of **unstructured** data.
+
+
+
+
+
+## Why use vector embeddings?
+
+
+
+The **nuances** of natural language or the hidden **meaning** in large datasets of images, sounds, or user interactions are hard to fit into a table. Traditional relational databases can't efficiently query most types of data being currently used and produced, making the **retrieval** of this information very limited.
+
+
+
+In the embeddings space, synonyms tend to appear in similar contexts and end up having similar embeddings. The space is a system smart enough to understand that ""pretty"" and ""attractive"" are playing for the same team. Without being explicitly told so.
+
+
+
+That’s the magic.
+
+
+
+At their core, vector embeddings are about semantics. They take the idea that ""a word is known by the company it keeps"" and apply it on a grand scale.
+
+
+
+
+
+![Example of how synonyms are placed closer together in the embeddings space](/articles_data/what-are-embeddings/Similar-Embeddings.jpg)
+
+
+
+
+
+This capability is crucial for creating search systems, recommendation engines, retrieval augmented generation (RAG) and any application that benefits from a deep understanding of content.
+
+
+
+## How do embeddings work?
+
+
+
+Embeddings are created through neural networks. They capture complex relationships and semantics into [dense vectors](https://www1.se.cuhk.edu.hk/~seem5680/lecture/semantics-with-dense-vectors-2018.pdf) which are more suitable for machine learning and data processing applications. They can then project these vectors into a proper **high-dimensional** space, specifically, a [Vector Database](/articles/what-is-a-vector-database/).
+
+
+
+
+
+
+
+![The process for turning raw data into embeddings and placing them into the vector space](/articles_data/what-are-embeddings/How-Embeddings-Work.jpg)
+
+
+
+
+
+The meaning of a data point is implicitly defined by its **position** on the vector space. After the vectors are stored, we can use their spatial properties to perform [nearest neighbor searches](https://en.wikipedia.org/wiki/Nearest_neighbor_search#:~:text=Nearest%20neighbor%20search%20(NNS)%2C,the%20larger%20the%20function%20values.). These searches retrieve semantically similar items based on how close they are in this space.
+
+
+
+> The quality of the vector representations drives the performance. The embedding model that works best for you depends on your use case.
+
+
+
+
+
+### Creating vector embeddings
+
+
+
+Embeddings translate the complexities of human language into a format that computers can understand. They use neural networks to assign **numerical values** to the input data, in such a way that similar data has similar values.
+
+
+
+
+
+![The process of using Neural Networks to create vector embeddings](/articles_data/what-are-embeddings/How-Do-Embeddings-Work_.jpg)
+
+
+
+
+
+For example, if I want to make my computer understand the word 'right', I can assign a number like 1.3. So when my computer sees 1.3, it sees the word 'right’.
+
+
+
+Now I want to make my computer understand the context of the word ‘right’. I can use a two-dimensional vector, such as [1.3, 0.8], to represent 'right'. The first number 1.3 still identifies the word 'right', but the second number 0.8 specifies the context.
+
+
+
+We can introduce more dimensions to capture more nuances. For example, a third dimension could represent formality of the word, a fourth could indicate its emotional connotation (positive, neutral, negative), and so on.
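+
+To make this idea tangible, here is a toy sketch in Python. The numbers and the meaning of each dimension are invented purely for illustration; real embedding models learn hundreds of dimensions automatically.
+
+```python
+import numpy as np
+
+# Invented 3-dimensional 'embeddings': [word identity, direction-ness, formality]
+right_as_direction = np.array([1.3, 0.9, 0.4])  # 'turn right at the corner'
+right_as_correct = np.array([1.3, 0.1, 0.5])    # 'your answer is right'
+left_as_direction = np.array([2.7, 0.9, 0.4])   # a different word used in a similar context
+
+def cosine_similarity(a, b):
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+print(cosine_similarity(right_as_direction, right_as_correct))   # same word, different sense
+print(cosine_similarity(right_as_direction, left_as_direction))  # different word, similar usage
+```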
+
+
+
+The evolution of this concept led to the development of embedding models like [Word2Vec](https://en.wikipedia.org/wiki/Word2vec) and [GloVe](https://en.wikipedia.org/wiki/GloVe). They learn to understand the context in which words appear to generate high-dimensional vectors for each word, capturing far more complex properties.
+
+
+
+
+
+
+
+![How Word2Vec model creates the embeddings for a word](/articles_data/what-are-embeddings/Word2Vec-model.jpg)
+
+
+
+
+
+However, these models still have limitations. They generate a single vector per word, based on its usage across texts. This means all the nuances of the word ""right"" are blended into one vector representation. That is not enough information for computers to fully understand the context.
+
+
+
+So, how do we help computers grasp the nuances of language in different contexts? In other words, how do we differentiate between:
+
+
+
+
+
+
+
+* ""your answer is right""
+
+* ""turn right at the corner""
+
+* ""everyone has the right to freedom of speech""
+
+
+
+Each of these sentences uses the word 'right', but with a different meaning.
+
+
+
+More advanced models like [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)) and [GPT](https://en.wikipedia.org/wiki/Generative_pre-trained_transformer) use deep learning models based on the [transformer architecture](https://arxiv.org/abs/1706.03762), which helps computers consider the full context of a word. These models pay attention to the entire context. The model understands the specific use of a word in its **surroundings**, and then creates different embeddings for each.
+
+
+
+
+
+
+
+![How the BERT model creates the embeddings for a word](/articles_data/what-are-embeddings/BERT-model.jpg)
+
+
+
+
+
+But how does this process of understanding and interpreting work in practice? Think of the term: ""biophilic design"", for example. To generate its embedding, the transformer architecture can use the following contexts:
+
+
+
+
+
+
+
+* ""Biophilic design incorporates natural elements into architectural planning.""
+
+* ""Offices with biophilic design elements report higher employee well-being.""
+
+* ""...plant life, natural light, and water features are key aspects of biophilic design.""
+
+
+
+And then it compares contexts to known architectural and design principles:
+
+
+
+
+
+
+
+* ""Sustainable designs prioritize environmental harmony.""
+
+* ""Ergonomic spaces enhance user comfort and health.""
+
+
+
+The model creates a vector embedding for ""biophilic design"" that encapsulates the concept of integrating natural elements into man-made environments, augmented with attributes that highlight the correlation between this integration and its positive impact on health, well-being, and environmental sustainability.
+
+
+
+
+
+### Integration with embedding APIs
+
+
+
+Selecting the right embedding model for your use case is crucial to your application performance. Qdrant makes it easier by offering seamless integration with the best selection of embedding APIs, including [Cohere](/documentation/embeddings/cohere/), [Gemini](/documentation/embeddings/gemini/), [Jina Embeddings](/documentation/embeddings/jina-embeddings/), [OpenAI](/documentation/embeddings/openai/), [Aleph Alpha](/documentation/embeddings/aleph-alpha/), [Fastembed](https://github.com/qdrant/fastembed), and [AWS Bedrock](/documentation/embeddings/bedrock/).
+
+
+
+If you’re looking for NLP and rapid prototyping, including language translation, question-answering, and text generation, OpenAI is a great choice. Gemini is ideal for image search, duplicate detection, and clustering tasks.
+
+
+
+Fastembed, which we’ll use in the example below, is designed for efficiency and speed, great for applications needing low-latency responses, such as autocomplete and instant content recommendations.
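+
+As a quick, hedged sketch, generating embeddings with a recent version of FastEmbed can be as short as the snippet below; the model name is just one of the options it supports and is used here only as an example.
+
+```python
+# Sketch: turning a few documents into dense vectors with FastEmbed.
+from fastembed import TextEmbedding
+
+model = TextEmbedding(model_name='BAAI/bge-small-en-v1.5')  # example model
+
+documents = [
+    'Qdrant is a vector database and similarity search engine.',
+    'FastEmbed generates embeddings with low latency.',
+]
+
+embeddings = list(model.embed(documents))
+print(len(embeddings), len(embeddings[0]))  # number of vectors and their dimensionality
+```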
+
+
+
+We plan to go deeper into selecting the best model based on performance, cost, integration ease, and scalability in a future post.
+
+
+
+## Create a neural search service with FastEmbed
+
+
+
+Now that you’re familiar with the core concepts around vector embeddings, how about starting to build your own [Neural Search Service](/documentation/tutorials/neural-search/)?
+
+
+
+The tutorial guides you through a practical application of how to use Qdrant for document management based on descriptions of companies from [startups-list.com](https://www.startups-list.com/). It covers embedding the data, integrating it with Qdrant's vector database, constructing a search API, and finally deploying your solution with FastAPI.
+
+
+
+Check out what the final version of this project looks like on the [live online demo](https://qdrant.to/semantic-search-demo).
+
+
+
+Let us know what you’re building with embeddings! Join our [Discord](https://discord.gg/qdrant-907569970500743200) community and share your projects!",articles/what-are-embeddings.md
+"---
+
+title: ""Scalar Quantization: Background, Practices & More | Qdrant""
+
+short_description: ""Discover scalar quantization for optimized data storage and improved performance, including data compression benefits and efficiency enhancements.""
+
+description: ""Discover the efficiency of scalar quantization for optimized data storage and enhanced performance. Learn about its data compression benefits and efficiency improvements.""
+
+social_preview_image: /articles_data/scalar-quantization/social_preview.png
+
+small_preview_image: /articles_data/scalar-quantization/scalar-quantization-icon.svg
+
+preview_dir: /articles_data/scalar-quantization/preview
+
+weight: 5
+
+author: Kacper Łukawski
+
+author_link: https://medium.com/@lukawskikacper
+
+date: 2023-03-27T10:45:00+01:00
+
+draft: false
+
+keywords:
+
+ - vector search
+
+ - scalar quantization
+
+ - memory optimization
+
+---
+
+# Efficiency Unleashed: The Power of Scalar Quantization
+
+
+
+High-dimensional vector embeddings can be memory-intensive, especially when working with
+
+large datasets consisting of millions of vectors. Memory footprint really starts being
+
+a concern when we scale things up. A simple choice of the data type used to store a single
+
+number impacts even billions of numbers and can drive the memory requirements crazy. The
+
+higher the precision of your type, the more accurately you can represent the numbers.
+
+The more accurate your vectors, the more precise the distance calculation. But the
+
+advantages stop paying off when you need to order more and more memory.
+
+
+
+Qdrant chose `float32` as the default type used to store the numbers of your embeddings.
+
+So a single number needs 4 bytes of memory and a 512-dimensional vector occupies
+
+2 kB. That's only the memory used to store the vector. There is also an overhead of the
+
+HNSW graph, so as a rule of thumb we estimate the memory size with the following formula:
+
+
+
+```text
+
+memory_size = 1.5 * number_of_vectors * vector_dimension * 4 bytes
+
+```
+
+
+
+While Qdrant offers various options to store some parts of the data on disk, starting
+
+from version 1.1.0, you can also optimize your memory by compressing the embeddings.
+
+We've implemented the mechanism of **Scalar Quantization**! It turns out to have not
+
+only a positive impact on memory but also on the performance.
+
+
+
+## Scalar quantization
+
+
+
+Scalar quantization is a data compression technique that converts floating point values
+
+into integers. In the case of Qdrant, `float32` gets converted into `int8`, so a single number
+
+needs 75% less memory. It's not a simple rounding though! It's a process that makes that
+
+transformation partially reversible, so we can also revert integers back to floats with
+
+a small loss of precision.
+
+
+
+### Theoretical background
+
+
+
+Assume we have a collection of `float32` vectors and denote a single value as `f32`.
+
+In reality, neural embeddings do not cover the whole range represented by the floating
+
+point numbers, but rather a small subrange. Since we know all the vectors in advance, we can
+
+establish some statistics of all the numbers. For example, the distribution of the values
+
+will be typically normal:
+
+
+
+![A distribution of the vector values](/articles_data/scalar-quantization/float32-distribution.png)
+
+
+
+Our example shows that 99% of the values come from a `[-2.0, 5.0]` range. And the
+
+conversion to `int8` will surely lose some precision, so we prefer to keep the
+
+representation accurate within the range covering 99% of the most probable values and to ignore
+
+the precision of the outliers. The width of that range can be chosen differently:
+
+any value from the range `[0, 1]`, where `0` means an empty range and `1` would
+
+keep all the values. That's a hyperparameter of the procedure called `quantile`. A value
+
+of `0.95` or `0.99` is typically a reasonable choice, but in general `quantile ∈ [0, 1]`.
+
+
+
+#### Conversion to integers
+
+
+
+Let's talk about the conversion to `int8`. Integers also have a finite set of values that
+
+might be represented. Within a single byte they may represent up to 256 different values,
+
+either from `[-128, 127]` or `[0, 255]`.
+
+
+
+![Value ranges represented by int8](/articles_data/scalar-quantization/int8-value-range.png)
+
+
+
+Since we put some boundaries on the numbers that might be represented by the `f32`, and
+
+`i8` has some natural boundaries, the process of converting the values between those
+
+two ranges is quite natural:
+
+
+
+$$ f32 = \alpha \times i8 + offset $$
+
+
+
+$$ i8 = \frac{f32 - offset}{\alpha} $$
+
+
+
+The parameters $ \alpha $ and $ offset $ have to be calculated for a given set of vectors,
+
+but that comes easily by plugging the minimum and maximum of the represented range for
+
+both `f32` and `i8` into the conversion equations.
+
+
+
+![Float32 to int8 conversion](/articles_data/scalar-quantization/float32-to-int8-conversion.png)
+
+
+
+For the unsigned `int8` it will go as following:
+
+
+
+$$ \begin{equation}
+
+\begin{cases} -2 = \alpha \times 0 + offset \\\\ 5 = \alpha \times 255 + offset \end{cases}
+
+\end{equation} $$
+
+
+
+In case of signed `int8`, we'll just change the represented range boundaries:
+
+
+
+$$ \begin{equation}
+
+\begin{cases} -2 = \alpha \times (-128) + offset \\\\ 5 = \alpha \times 127 + offset \end{cases}
+
+\end{equation} $$
+
+
+
+For any set of vector values we can simply calculate the $ \alpha $ and $ offset $, and
+
+those values have to be stored along with the collection to enable the conversion between
+
+the types.
+
+
+
+#### Distance calculation
+
+
+
+We do not store the vectors in the collections as `int8` instead of `float32`
+
+just for the sake of compressing the memory. The coordinates are also used while we
+
+calculate the distance between the vectors. Both dot product and cosine distance require
+
+multiplying the corresponding coordinates of two vectors, so that's an operation we
+
+perform quite often on `float32`. Here is how it looks if we perform the
+
+conversion to `int8`:
+
+
+
+$$ f32 \times f32' = $$
+
+$$ = (\alpha \times i8 + offset) \times (\alpha \times i8' + offset) = $$
+
+$$ = \alpha^{2} \times i8 \times i8' + \underbrace{offset \times \alpha \times i8' + offset \times \alpha \times i8 + offset^{2}}_\text{pre-compute} $$
+
+
+
+The first term, $ \alpha^{2} \times i8 \times i8' $ has to be calculated when we measure the
+
+distance as it depends on both vectors. However, both the second and the third term
+
+($ offset \times \alpha \times i8' $ and $ offset \times \alpha \times i8 $ respectively),
+
+depend only on a single vector and those might be precomputed and kept for each vector.
+
+The last term, $ offset^{2} $ does not depend on any of the values, so it might be even
+
+computed once and reused.
+
+
+
+If we had to calculate all the terms to measure the distance, the performance could have
+
+been even worse than without the conversion. But thanks to the fact that we can precompute
+
+the majority of the terms, things get simpler. And it turns out that scalar
+
+quantization has a positive impact not only on the memory usage, but also on the
+
+performance. As usual, we performed some benchmarks to support this statement!
+
+
+
+## Benchmarks
+
+
+
+We simply used the same approach as we use in all [the other benchmarks we publish](/benchmarks/).
+
+Both [Arxiv-titles-384-angular-no-filters](https://github.com/qdrant/ann-filtering-benchmark-datasets)
+
+and [Gist-960](https://github.com/erikbern/ann-benchmarks/) datasets were chosen to make
+
+the comparison between non-quantized and quantized vectors. The results are summarized
+
+in the tables:
+
+
+
+#### Arxiv-titles-384-angular-no-filters
+
+
+
+
+
+
+
+
+
+
+
+
+| | Upload and indexing time | Mean search precision (ef = 128) | Mean search time (ef = 128) | Mean search precision (ef = 256) | Mean search time (ef = 256) | Mean search precision (ef = 512) | Mean search time (ef = 512) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Non-quantized vectors | 649 s | 0.989 | 0.0094 | 0.994 | 0.0932 | 0.996 | 0.161 |
+| Scalar Quantization | 496 s | 0.986 | 0.0037 | 0.993 | 0.060 | 0.996 | 0.115 |
+| Difference | -23.57% | -0.3% | -60.64% | -0.1% | -35.62% | 0% | -28.57% |
+
+
+
+
+
+
+
+
+
+A slight decrease in search precision results in a considerable improvement in the
+
+latency. Unless you aim for the highest precision possible, you should not notice the
+
+difference in your search quality.
+
+
+
+#### Gist-960
+
+
+
+
+
+
+
+
+
+
+
+
+| | Upload and indexing time | Mean search precision (ef = 128) | Mean search time (ef = 128) | Mean search precision (ef = 256) | Mean search time (ef = 256) | Mean search precision (ef = 512) | Mean search time (ef = 512) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Non-quantized vectors | 452 s | 0.802 | 0.077 | 0.887 | 0.135 | 0.941 | 0.231 |
+| Scalar Quantization | 312 s | 0.802 | 0.043 | 0.888 | 0.077 | 0.941 | 0.135 |
+| Difference | -30.79% | 0% | -44.16% | +0.11% | -42.96% | 0% | -41.56% |
+
+
+
+
+
+
+
+
+
+In all the cases, the decrease in search precision is negligible, but we keep a latency
+
+reduction of at least 28.57%, even up to 60.64%, while searching. As a rule of thumb,
+
+the higher the dimensionality of the vectors, the lower the precision loss.
+
+
+
+### Oversampling and rescoring
+
+
+
+A distinctive feature of the Qdrant architecture is the ability to combine the search for quantized and original vectors in a single query.
+
+This enables the best combination of speed, accuracy, and RAM usage.
+
+
+
+Qdrant stores the original vectors, so it is possible to rescore the top-k results with
+
+the original vectors after doing the neighbours search in quantized space. That obviously
+
+has some impact on the performance, but in order to measure how big it is, we made the
+
+comparison in different search scenarios.
+
+We used a machine with a very slow network-mounted disk and tested the following scenarios with different amounts of allowed RAM:
+
+
+
+| Setup | RPS | Precision |
+
+|-----------------------------|------|-----------|
+
+| 4.5GB memory | 600 | 0.99 |
+
+| 4.5GB memory + SQ + rescore | 1000 | 0.989 |
+
+
+
+And another group with more strict memory limits:
+
+
+
+| Setup | RPS | Precision |
+
+|------------------------------|------|-----------|
+
+| 2GB memory | 2 | 0.99 |
+
+| 2GB memory + SQ + rescore | 30 | 0.989 |
+
+| 2GB memory + SQ + no rescore | 1200 | 0.974 |
+
+
+
+In those experiments, throughput was mainly defined by the number of disk reads, and quantization efficiently reduces it by allowing more vectors in RAM.
+
+Read more about on-disk storage in Qdrant and how we measure its performance in our article: [Minimal RAM you need to serve a million vectors
+
+](/articles/memory-consumption/).
+
+
+
+The mechanism of Scalar Quantization with rescoring disabled pushes the limits of low-end
+
+machines even further. It seems like handling lots of requests does not require an
+
+expensive setup if you can agree to a small decrease in the search precision.
+
+
+
+### Accessing best practices
+
+
+
+Qdrant documentation on [Scalar Quantization](/documentation/quantization/#setting-up-quantization-in-qdrant)
+
+is a great resource describing different scenarios and strategies to achieve up to 4x
+
+lower memory footprint and even up to 2x performance increase.
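+
+
+For reference, here is a minimal sketch of what this can look like with the Python client; the collection name, vector size, and parameter values are placeholders, and the documentation linked above remains the authoritative description of the API.
+
+```python
+# Sketch: create a collection with int8 scalar quantization and query it with rescoring.
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url='http://localhost:6333')
+
+client.create_collection(
+    collection_name='arxiv-titles',
+    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
+    quantization_config=models.ScalarQuantization(
+        scalar=models.ScalarQuantizationConfig(
+            type=models.ScalarType.INT8,
+            quantile=0.99,    # ignore the most extreme 1% of values
+            always_ram=True,  # keep the compressed vectors in memory
+        ),
+    ),
+)
+
+hits = client.search(
+    collection_name='arxiv-titles',
+    query_vector=[0.0] * 384,  # placeholder query embedding
+    search_params=models.SearchParams(
+        quantization=models.QuantizationSearchParams(rescore=True),  # re-rank with original vectors
+    ),
+    limit=10,
+)
+```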
+",articles/scalar-quantization.md
+"---
+
+title: Extending ChatGPT with a Qdrant-based knowledge base
+
+short_description: ""ChatGPT factuality might be improved with semantic search. Here is how.""
+
+description: ""ChatGPT factuality might be improved with semantic search. Here is how.""
+
+social_preview_image: /articles_data/chatgpt-plugin/social_preview.jpg
+
+small_preview_image: /articles_data/chatgpt-plugin/chatgpt-plugin-icon.svg
+
+preview_dir: /articles_data/chatgpt-plugin/preview
+
+weight: 7
+
+author: Kacper Łukawski
+
+author_link: https://medium.com/@lukawskikacper
+
+date: 2023-03-23T18:01:00+01:00
+
+draft: false
+
+keywords:
+
+ - openai
+
+ - chatgpt
+
+ - chatgpt plugin
+
+ - knowledge base
+
+ - similarity search
+
+---
+
+
+
+In recent months, ChatGPT has revolutionised the way we communicate, learn, and interact
+
+with technology. Our social platforms got flooded with prompts, responses to them, whole
+
+articles and countless other examples of using Large Language Models to generate content
+
+indistinguishable from content written by a human.
+
+
+
+Despite their numerous benefits, these models have flaws, as evidenced by the phenomenon
+
+of hallucination - the generation of incorrect or nonsensical information in response to
+
+user input. This issue, which can compromise the reliability and credibility of
+
+AI-generated content, has become a growing concern among researchers and users alike.
+
+Those concerns started another wave of entirely new libraries, such as Langchain, trying
+
+to overcome those issues, for example, by combining tools like vector databases to bring
+
+the required context into the prompts. And that is, so far, the best way to incorporate
+
+new and rapidly changing knowledge into the neural model. So good that OpenAI decided to
+
+introduce a way to extend the model capabilities with external plugins at the model level.
+
+These plugins, designed to enhance the model's performance, serve as modular extensions
+
+that seamlessly interface with the core system. By adding a knowledge base plugin to
+
+ChatGPT, we can effectively provide the AI with a curated, trustworthy source of
+
+information, ensuring that the generated content is more accurate and relevant. Qdrant
+
+may act as a vector database where all the facts will be stored and served to the model
+
+upon request.
+
+
+
+If you’d like to ask ChatGPT questions about your data sources, such as files, notes, or
+
+emails, starting with the official [ChatGPT retrieval plugin repository](https://github.com/openai/chatgpt-retrieval-plugin)
+
+is the easiest way. Qdrant is already integrated, so that you can use it right away. In
+
+the following sections, we will guide you through setting up the knowledge base using
+
+Qdrant and demonstrate how this powerful combination can significantly improve ChatGPT's
+
+performance and output quality.
+
+
+
+## Implementing a knowledge base with Qdrant
+
+
+
+The official ChatGPT retrieval plugin uses a vector database to build your knowledge base.
+
+Your documents are chunked and vectorized with the OpenAI's text-embedding-ada-002 model
+
+to be stored in Qdrant. That enables semantic search capabilities. So, whenever ChatGPT
+
+thinks it might be relevant to check the knowledge base, it forms a query and sends it
+
+to the plugin to incorporate the results into its response. You can now modify the
+
+knowledge base, and ChatGPT will always know the most recent facts. No model fine-tuning
+
+is required. Let’s implement that for your documents. In our case, this will be Qdrant’s
+
+documentation, so you can ask even technical questions about Qdrant directly in ChatGPT.
+
+
+
+Everything starts with cloning the plugin's repository.
+
+
+
+```bash
+
+git clone git@github.com:openai/chatgpt-retrieval-plugin.git
+
+```
+
+
+
+Please use your favourite IDE to open the project once cloned.
+
+
+
+### Prerequisites
+
+
+
+You’ll need to ensure three things before we start:
+
+
+
+1. Create an OpenAI API key, so you can use their embeddings model programmatically. If
+
+ you already have an account, you can generate one at https://platform.openai.com/account/api-keys.
+
+ Otherwise, registering an account might be required.
+
+2. Run a Qdrant instance. The instance has to be reachable from the outside, so you
+
+ either need to launch it on-premise or use the [Qdrant Cloud](https://cloud.qdrant.io/)
+
+ offering. A free 1GB cluster is available, which might be enough in many cases. We’ll
+
+ use the cloud.
+
+3. Since ChatGPT will interact with your service through the network, you must deploy it,
+
+ making it possible to connect from the Internet. Unfortunately, localhost is not an
+
+ option, but any provider, such as Heroku or fly.io, will work perfectly. We will use
+
+ [fly.io](https://fly.io/), so please register an account. You may also need to install
+
+ the flyctl tool for the deployment. The process is described on the homepage of fly.io.
+
+
+
+### Configuration
+
+
+
+The retrieval plugin is a FastAPI-based application, and its default functionality might
+
+be enough in most cases. However, some configuration is required so ChatGPT knows how and
+
+when to use it. Before that, we can start setting up Fly.io, as we need to know the service's
+
+hostname to configure it fully.
+
+
+
+First, let’s login into the Fly CLI:
+
+
+
+```bash
+
+flyctl auth login
+
+```
+
+
+
+That will open the browser, so you can simply provide the credentials, and all the further
+
+commands will be executed with your account. If you have never used fly.io, you may need
+
+to give the credit card details before running any instance, but there is a Hobby Plan
+
+you won’t be charged for.
+
+
+
+Let’s try to launch the instance already, but do not deploy it. We’ll get the hostname
+
+assigned and have all the details to fill in the configuration. The retrieval plugin
+
+uses TCP port 8080, so we need to configure fly.io, so it redirects all the traffic to it
+
+as well.
+
+
+
+```bash
+
+flyctl launch --no-deploy --internal-port 8080
+
+```
+
+
+
+We’ll be prompted about the application name and the region it should be deployed to.
+
+Please choose whatever works best for you. After that, we should see the hostname of the
+
+newly created application:
+
+
+
+```text
+
+...
+
+Hostname: your-application-name.fly.dev
+
+...
+
+```
+
+
+
+Let’s note it down. We’ll need it for the configuration of the service. But we’re going
+
+to start with setting all the application's secrets:
+
+
+
+```bash
+
+flyctl secrets set DATASTORE=qdrant \
+
+ OPENAI_API_KEY= \
+
+ QDRANT_URL=https://.aws.cloud.qdrant.io \
+
+ QDRANT_API_KEY= \
+
+ BEARER_TOKEN=eyJhbGciOiJIUzI1NiJ9.e30.ZRrHA1JJJW8opsbCGfG_HACGpVUMN_a9IV7pAx_Zmeo
+
+```
+
+
+
+The secrets will be staged for the first deployment. There is an example of a minimal
+
+Bearer token generated by https://jwt.io/. **Please adjust the token and do not expose
+
+it publicly, but you can keep the same value for the demo.**
+
+
+
+Right now, let’s dive into the application config files. You can optionally provide your
+
+icon and keep it as `.well-known/logo.png` file, but there are two additional files we’re
+
+going to modify.
+
+
+
+The `.well-known/openapi.yaml` file describes the exposed API in the OpenAPI format.
+
+Lines 3 to 5 might be filled with the application title and description, but the essential
+
+part is setting the server URL the application will run. Eventually, the top part of the
+
+file should look like the following:
+
+
+
+```yaml
+
+openapi: 3.0.0
+
+info:
+
+ title: Qdrant Plugin API
+
+ version: 1.0.0
+
+ description: Plugin for searching through the Qdrant doc…
+
+servers:
+
+ - url: https://your-application-name.fly.dev
+
+...
+
+```
+
+
+
+There is another file in the same directory, and that’s the most crucial piece to
+
+configure. It contains the description of the plugin we’re implementing, and ChatGPT
+
+uses this description to determine if it should communicate with our knowledge base.
+
+The file is called `.well-known/ai-plugin.json`, and let’s edit it before we finally
+
+deploy the app. There are various properties we need to fill in:
+
+
+
+| **Property** | **Meaning** | **Example** |
+
+|-------------------------|----------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+| `name_for_model` | Name of the plugin for the ChatGPT model | *qdrant* |
+
+| `name_for_human` | Human-friendly model name, to be displayed in ChatGPT UI | *Qdrant Documentation Plugin* |
+
+| `description_for_model` | Description of the purpose of the plugin, so ChatGPT knows in what cases it should be using it to answer a question. | *Plugin for searching through the Qdrant documentation to find answers to questions and retrieve relevant information. Use it whenever a user asks something that might be related to Qdrant vector database or semantic vector search* |
+
+| `description_for_human` | Short description of the plugin, also to be displayed in the ChatGPT UI. | *Search through Qdrant docs* |
+
+| `auth` | Authorization scheme used by the application. By default, the bearer token has to be configured. | ```{""type"": ""user_http"", ""authorization_type"": ""bearer""}``` |
+
+| `api.url` | Link to the OpenAPI schema definition. Please adjust based on your application URL. | *https://your-application-name.fly.dev/.well-known/openapi.yaml* |
+
+| `logo_url` | Link to the application logo. Please adjust based on your application URL. | *https://your-application-name.fly.dev/.well-known/logo.png* |
+
+
+
+A complete file may look as follows:
+
+
+
+```json
+
+{
+
+ ""schema_version"": ""v1"",
+
+ ""name_for_model"": ""qdrant"",
+
+ ""name_for_human"": ""Qdrant Documentation Plugin"",
+
+ ""description_for_model"": ""Plugin for searching through the Qdrant documentation to find answers to questions and retrieve relevant information. Use it whenever a user asks something that might be related to Qdrant vector database or semantic vector search"",
+
+ ""description_for_human"": ""Search through Qdrant docs"",
+
+ ""auth"": {
+
+ ""type"": ""user_http"",
+
+ ""authorization_type"": ""bearer""
+
+ },
+
+ ""api"": {
+
+ ""type"": ""openapi"",
+
+ ""url"": ""https://your-application-name.fly.dev/.well-known/openapi.yaml"",
+
+ ""has_user_authentication"": false
+
+ },
+
+ ""logo_url"": ""https://your-application-name.fly.dev/.well-known/logo.png"",
+
+ ""contact_email"": ""email@domain.com"",
+
+ ""legal_info_url"": ""email@domain.com""
+
+}
+
+```
+
+
+
+That was the last step before running the final command. The command that will deploy
+
+the application on the server:
+
+
+
+```bash
+
+flyctl deploy
+
+```
+
+
+
+The command will build the image using the Dockerfile and deploy the service at a given
+
+URL. Once the command is finished, the service should be running on the hostname we got
+
+previously:
+
+
+
+```text
+
+https://your-application-name.fly.dev
+
+```
+
+
+
+## Integration with ChatGPT
+
+
+
+Once we have deployed the service, we can point ChatGPT to it, so the model knows how to
+
+connect. When you open the ChatGPT UI, you should see a dropdown with a Plugins tab
+
+included:
+
+
+
+![](/articles_data/chatgpt-plugin/step-1.png)
+
+
+
+Once selected, you should be able to choose one of the installed plugins or check the plugin store:
+
+
+
+![](/articles_data/chatgpt-plugin/step-2.png)
+
+
+
+There are some premade plugins available, but there’s also a possibility to install your
+
+own plugin by clicking on the ""*Develop your own plugin*"" option in the bottom right
+
+corner:
+
+
+
+![](/articles_data/chatgpt-plugin/step-3.png)
+
+
+
+We need to confirm our plugin is ready, but since we relied on the official retrieval
+
+plugin from OpenAI, this should be all fine:
+
+
+
+![](/articles_data/chatgpt-plugin/step-4.png)
+
+
+
+After clicking on ""*My manifest is ready*"", we can already point ChatGPT to our newly
+
+created service:
+
+
+
+![](/articles_data/chatgpt-plugin/step-5.png)
+
+
+
+A successful plugin installation should end up with the following information:
+
+
+
+![](/articles_data/chatgpt-plugin/step-6.png)
+
+
+
+There is a name and a description of the plugin we provided. Let’s click on ""*Done*"" and
+
+return to the ""*Plugin store*"" window again. There is another option we need to choose in
+
+the bottom right corner:
+
+
+
+![](/articles_data/chatgpt-plugin/step-7.png)
+
+
+
+Our plugin is not officially verified, but we can, of course, use it freely. The
+
+installation requires just the service URL:
+
+
+
+![](/articles_data/chatgpt-plugin/step-8.png)
+
+
+
+OpenAI cannot guarantee the plugin provides factual information, so there is a warning
+
+we need to accept:
+
+
+
+![](/articles_data/chatgpt-plugin/step-9.png)
+
+
+
+Finally, we need to provide the Bearer token again:
+
+
+
+![](/articles_data/chatgpt-plugin/step-10.png)
+
+
+
+Our plugin is now ready to be tested. Since there is no data inside the knowledge base,
+
+extracting any facts is impossible, but we’re going to put some data using the Swagger UI
+
+exposed by our service at https://your-application-name.fly.dev/docs. We need to authorize
+
+first, and then call the upsert method with some docs. For demo purposes, we can just
+
+put a single document extracted from the Qdrant documentation to see whether the integration
+
+works properly:
+
+
+
+![](/articles_data/chatgpt-plugin/step-11.png)
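+
+
+If you prefer to script this step instead of clicking through the Swagger UI, a rough sketch of the same call with Python's `requests` is shown below. The exact request schema comes from the plugin's OpenAPI definition, so treat the field names here as an approximation and double-check them against the `/docs` page of your deployment.
+
+```python
+# Sketch: upsert a single document into the retrieval plugin's knowledge base.
+import requests
+
+BASE_URL = 'https://your-application-name.fly.dev'
+BEARER_TOKEN = '<the token configured as a fly.io secret>'
+
+response = requests.post(
+    f'{BASE_URL}/upsert',
+    headers={'Authorization': f'Bearer {BEARER_TOKEN}'},
+    json={
+        'documents': [
+            {
+                'id': 'qdrant-docs-1',
+                'text': 'Qdrant is a vector similarity search engine and vector database...',
+            }
+        ]
+    },
+)
+print(response.status_code, response.json())
+```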
+
+
+
+We can come back to ChatGPT UI, and send a prompt, but we need to make sure the plugin
+
+is selected:
+
+
+
+![](/articles_data/chatgpt-plugin/step-12.png)
+
+
+
+Now if our prompt seems somehow related to the plugin description provided, the model
+
+will automatically form a query and send it to the HTTP API. The query will get vectorized
+
+by our app, and then used to find some relevant documents that will be used as a context
+
+to generate the response.
+
+
+
+![](/articles_data/chatgpt-plugin/step-13.png)
+
+
+
+We have a powerful language model that can interact with our knowledge base to return
+
+not only grammatically correct but also factual information. And this is how your
+
+interactions with the model may start to look:
+
+
+
+
+
+
+
+However, a single document is not enough to enable the full power of the plugin. If you
+
+want to put more documents that you have collected, there are already some scripts
+
+available in the `scripts/` directory that allow converting JSON, JSON Lines, or even
+
+zip archives.
+",articles/chatgpt-plugin.md
+"---
+
+title: Deliver Better Recommendations with Qdrant’s new API
+
+short_description: Qdrant 1.6 brings recommendations strategies and more flexibility to the Recommendation API.
+
+description: Qdrant 1.6 brings recommendations strategies and more flexibility to the Recommendation API.
+
+preview_dir: /articles_data/new-recommendation-api/preview
+
+social_preview_image: /articles_data/new-recommendation-api/preview/social_preview.png
+
+small_preview_image: /articles_data/new-recommendation-api/icon.svg
+
+weight: -80
+
+author: Kacper Łukawski
+
+author_link: https://medium.com/@lukawskikacper
+
+date: 2023-10-25T09:46:00.000Z
+
+---
+
+
+
+The most popular use case for vector search engines, such as Qdrant, is semantic search with a single query vector. Given the
+
+query, we can vectorize (embed) it and find the closest points in the index. But [Vector Similarity beyond Search](/articles/vector-similarity-beyond-search/)
+
+does exist, and recommendation systems are a great example. Recommendations might be seen as a multi-objective search, where we want
+
+to find items close to positive and far from negative examples. This use of vector databases has many applications, including
+
+recommendation systems for e-commerce, content, or even dating apps.
+
+
+
+Qdrant has provided the [Recommendation API](/documentation/concepts/search/#recommendation-api) for a while, and with the latest release, [Qdrant 1.6](https://github.com/qdrant/qdrant/releases/tag/v1.6.0),
+
+we're glad to give you more flexibility and control over the Recommendation API.
+
+Here, we'll discuss some internals and show how they may be used in practice.
+
+
+
+### Recap of the old recommendations API
+
+
+
+The previous [Recommendation API](/documentation/concepts/search/#recommendation-api) in Qdrant came with some limitations. First of all, it was required to pass vector IDs for
+
+both positive and negative example points. If you wanted to use vector embeddings directly, you had to either create a new point
+
+in a collection or mimic the behaviour of the Recommendation API by using the [Search API](/documentation/concepts/search/#search-api).
+
+Moreover, in the previous releases of Qdrant, you were always asked to provide at least one positive example. This requirement
+
+was based on the algorithm used to combine multiple samples into a single query vector. It was a simple, yet effective approach.
+
+However, if the only information you had was that your user dislikes some items, you couldn't use it directly.
+
+
+
+Qdrant 1.6 brings a more flexible API. You can now provide both IDs and vectors of positive and negative examples. You can even
+
+combine them within a single request. That makes the new implementation backward compatible, so you can easily upgrade an existing
+
+Qdrant instance without any changes in your code. And the default behaviour of the API is still the same as before. However, we
+
+extended the API, so **you can now choose the strategy of how to find the recommended points**.
+
+
+
+```http
+
+POST /collections/{collection_name}/points/recommend
+
+{
+
+ ""positive"": [100, 231],
+
+ ""negative"": [718, [0.2, 0.3, 0.4, 0.5]],
+
+ ""filter"": {
+
+ ""must"": [
+
+ {
+
+ ""key"": ""city"",
+
+ ""match"": {
+
+ ""value"": ""London""
+
+ }
+
+ }
+
+ ]
+
+ },
+
+ ""strategy"": ""average_vector"",
+
+ ""limit"": 3
+
+}
+
+```
+
+
+
+There are two key changes in the request. First of all, we can adjust the search strategy and set it to `average_vector` (the
+
+default) or `best_score`. Moreover, we can pass both IDs (`718`) and embeddings (`[0.2, 0.3, 0.4, 0.5]`) as positive or
+
+negative examples.
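+
+
+
+The same request can also be sent with the Python client. Here is a sketch assuming `qdrant-client` 1.6 or newer and a locally running instance; the collection name is a placeholder:
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""http://localhost:6333"")
+
+
+
+results = client.recommend(
+
+    collection_name=""my_collection"",  # placeholder collection name
+
+    positive=[100, 231],
+
+    negative=[718, [0.2, 0.3, 0.4, 0.5]],
+
+    query_filter=models.Filter(
+
+        must=[
+
+            models.FieldCondition(
+
+                key=""city"",
+
+                match=models.MatchValue(value=""London""),
+
+            )
+
+        ]
+
+    ),
+
+    strategy=models.RecommendStrategy.AVERAGE_VECTOR,
+
+    limit=3,
+
+)
+
+```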
+
+
+
+## HNSW ANN example and strategy
+
+
+
+Let’s start with an example to help you understand the [HNSW graph](/articles/filtrable-hnsw/). Assume you want
+
+to travel to a small city on another continent:
+
+
+
+1. You start from your hometown and take a bus to the local airport.
+
+2. Then, take a flight to one of the closest hubs.
+
+3. From there, you have to take another flight to a hub on your destination continent.
+
+4. Hopefully, one last flight to your destination city.
+
+5. You still have one more leg on local transport to get to your final address.
+
+
+
+This journey is similar to the HNSW graph’s use in Qdrant's approximate nearest neighbours search.
+
+
+
+![Transport network](/articles_data/new-recommendation-api/example-transport-network.png)
+
+
+
+HNSW is a multilayer graph of vectors (embeddings), with connections based on vector proximity. The top layer has the least
+
+points, and the distances between those points are the biggest. The deeper we go, the more points we have, and the distances
+
+get closer. The graph is built in a way that the points are connected to their closest neighbours at every layer.
+
+
+
+All the points from a particular layer are also in the layer below, so switching the search layer while staying in the same
+
+location is possible. In the case of transport networks, the top layer would be the airline hubs, well-connected but with big
+
+distances between the airports. Local airports, along with railways and buses, with higher density and smaller distances, make
+
+up the middle layers. Lastly, our bottom layer consists of local means of transport, which is the densest and has the smallest
+
+distances between the points.
+
+
+
+You don’t have to check all the possible connections when you travel. You select an intercontinental flight, then a local one,
+
+and finally a bus or a taxi. All the decisions are made based on the distance between the points.
+
+
+
+The search process in HNSW is based on a similar traversal of the graph. Start from the entry point in the top layer, find
+
+its closest point and then use that point as the entry point into the next densest layer. This process repeats until we reach
+
+the bottom layer. Visited points and distances to the original query vector are kept in memory. If none of the neighbours of
+
+the current point is better than the best match, we can stop the traversal, as this is a local minimum. We start at the biggest
+
+scale, and then gradually zoom in.
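+
+
+
+To make the traversal a bit more concrete, below is an oversimplified sketch in Python. It only shows the greedy, layer-by-layer descent described above and ignores the candidate lists and heuristics a real HNSW implementation uses; all the names are illustrative:
+
+
+
+```python
+
+import numpy as np
+
+
+
+def greedy_hnsw_descent(query, layers, vectors, entry_point):
+
+    # layers: adjacency dicts from the top (sparsest) to the bottom (densest) layer,
+
+    # each mapping a point id to the ids of its neighbours
+
+    # vectors: mapping from point id to its embedding
+
+    def dist(point_id):
+
+        return np.linalg.norm(vectors[point_id] - query)
+
+
+
+    current = entry_point
+
+    for graph in layers:
+
+        improved = True
+
+        while improved:
+
+            improved = False
+
+            for neighbour in graph.get(current, []):
+
+                if dist(neighbour) < dist(current):
+
+                    current = neighbour  # move closer to the query
+
+                    improved = True
+
+        # the best point found so far becomes the entry point one layer below
+
+    return current
+
+```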
+
+
+
+In this simplified travel analogy, we assumed that the distance between the points is the only factor that matters. In reality, we
+
+might want to consider other criteria, such as the ticket price, or avoid some specific locations due to certain restrictions.
+
+That means, there are various strategies for choosing the best match, which is also true in the case of vector recommendations.
+
+We can use different approaches to determine the path of traversing the HNSW graph by changing how we calculate the score of a
+
+candidate point during traversal. The default behaviour is based on pure distance, but Qdrant 1.6 exposes two strategies for the
+
+recommendation API.
+
+
+
+### Average vector
+
+
+
+The default strategy, called `average_vector`, is the previous one, based on the average of positive and negative examples. It
+
+simplifies the recommendation process and converts it into a single vector search. It supports both point IDs and vectors as
+
+parameters. For example, you can get recommendations based on past interactions with existing points, combined with a query vector
+
+embedding. Internally, that mechanism averages the positive and negative examples and calculates the query vector with the
+
+following formula:
+
+
+
+$$
+
+\text{average vector} = \text{avg}(\text{positive vectors}) + \left( \text{avg}(\text{positive vectors}) - \text{avg}(\text{negative vectors}) \right)
+
+$$
+
+
+
+The `average_vector` converts the problem of recommendations into a single vector search.
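+
+
+
+For illustration, here is a minimal sketch of that formula in Python with NumPy; the vectors are made-up examples standing in for the embeddings of the positive and negative samples:
+
+
+
+```python
+
+import numpy as np
+
+
+
+# Hypothetical example embeddings of the positive and negative samples
+
+positive_vectors = np.array([[0.1, 0.2, 0.3, 0.4], [0.2, 0.1, 0.4, 0.3]])
+
+negative_vectors = np.array([[0.9, 0.8, 0.7, 0.6]])
+
+
+
+# average vector = avg(positives) + (avg(positives) - avg(negatives))
+
+positive_avg = positive_vectors.mean(axis=0)
+
+negative_avg = negative_vectors.mean(axis=0)
+
+average_vector = positive_avg + (positive_avg - negative_avg)
+
+
+
+# The resulting vector is then used as a regular single-vector search query
+
+print(average_vector)
+
+```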
+
+
+
+### The new hotness - Best score
+
+
+
+The new strategy is called `best_score`. It does not rely on averages and is more flexible. It allows you to pass just negative
+
+samples and uses a slightly more sophisticated algorithm under the hood.
+
+
+
+The best score is chosen at every step of HNSW graph traversal. We separately calculate the distance between a traversed point
+
+and every positive and negative example. In the case of the best score strategy, **there is no single query vector anymore, but a
+
+bunch of positive and negative queries**. As a result, for each traversed point, we get a set of distances, one for each
+
+sample in the query. In the next step, we simply take the best score among the positives and the best score among the
+
+negatives, creating two separate values: the closest distances of the point to the positives and to the negatives. The idea is: **if a point is closer to any negative than to
+
+any positive example, we do not want it**. We penalize being close to the negatives, so instead of using the similarity value
+
+directly, we check if it’s closer to positives or negatives. The following formula is used to calculate the score of a traversed
+
+potential point:
+
+
+
+```rust
+
+if best_positive_score > best_negative_score {
+
+ score = best_positive_score
+
+} else {
+
+ score = -(best_negative_score * best_negative_score)
+
+}
+
+```
+
+
+
+If the point is closer to the negatives, we penalize it by taking the negative squared value of the best negative score. For a
+
+closer negative, the score of the candidate point will always be lower or equal to zero, making the chances of choosing that point
+
+significantly lower. However, even among the points that are closer to a negative than to any positive, we still prefer those
+
+that are further away from the negatives. That effectively **pulls the traversal away from the negative examples**.
+
+
+
+If you want to know more about the internals of HNSW, you can check out the article about the
+
+[Filtrable HNSW](/articles/filtrable-hnsw/) that covers the topic thoroughly.
+
+
+
+## Food Discovery demo
+
+
+
+Our [Food Discovery demo](/articles/food-discovery-demo/) is an application built on top of the new [Recommendation API](/documentation/concepts/search/#recommendation-api).
+
+It allows you to find a meal based on liked and disliked photos. There are some updates, enabled by the new Qdrant release:
+
+
+
+* **Ability to include multiple textual queries in the recommendation request.** Previously, we only allowed passing a single
+
+ query to solve the cold start problem. Right now, you can pass multiple queries and mix them with the liked/disliked photos.
+
+ This became possible because of the new flexibility in parameters. We can pass both point IDs and embedding vectors in the same
+
+ request, and user queries are obviously not a part of the collection.
+
+* **Switch between the recommendation strategies.** You can now choose between the `average_vector` and the `best_score` scoring
+
+ algorithm.
+
+
+
+### Differences between the strategies
+
+
+
+The UI of the Food Discovery demo allows you to switch between the strategies. The `best_score` is the default one, but with just
+
+a single switch, you can see how the results differ when using the previous `average_vector` strategy.
+
+
+
+If you select just a single positive example, both algorithms work identically.
+
+
+
+##### One positive example
+
+
+
+
+
+
+
+The difference only becomes apparent when you start adding more examples, especially if you choose some negatives.
+
+
+
+##### One positive and one negative example
+
+
+
+
+
+
+
+The more likes and dislikes we add, the more diverse the results of the `best_score` strategy will be. In the old strategy, there
+
+is just a single vector, so all the examples are similar to it. The new one takes into account all the examples separately, making
+
+the variety richer.
+
+
+
+##### Multiple positive and negative examples
+
+
+
+
+
+
+
+Choosing the right strategy is dataset-dependent, and the embeddings play a significant role here. Thus, it’s always worth trying
+
+both of them and comparing the results in a particular case.
+
+
+
+#### Handling the negatives only
+
+
+
+In the case of our Food Discovery demo, passing just the negative images can work as an outlier detection mechanism. While the dataset
+
+was supposed to contain only food photos, this is not actually true. A simple way to find these outliers is to pass in food item photos
+
+as negatives, leading to the results being the most ""unlike"" food images. In our case you will see pill bottles and books.
+
+
+
+**The `average_vector` strategy still requires providing at least one positive example!** However, since cosine distance is set up
+
+for the collection used in the demo, we faked it using [a trick described in the previous article](/articles/food-discovery-demo/#negative-feedback-only).
+
+In a nutshell, if you only pass negative examples, their vectors will be averaged, and the negated resulting vector will be used as
+
+a query to the search endpoint.
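+
+
+
+A minimal sketch of that trick, again with made-up vectors: average the negative examples and negate the result, which, with cosine distance, points towards the least similar region of the space:
+
+
+
+```python
+
+import numpy as np
+
+
+
+# Hypothetical embeddings of the disliked (negative) examples only
+
+negative_vectors = np.array([[0.9, 0.8, 0.7, 0.6], [0.8, 0.9, 0.6, 0.7]])
+
+
+
+# Negate the average and use it as a regular query vector for the search endpoint
+
+query_vector = -negative_vectors.mean(axis=0)
+
+```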
+
+
+
+##### Negatives only
+
+
+
+
+
+
+
+Still, both methods return different results, so they each have their place depending on the questions being asked and the datasets
+
+being used.
+
+
+
+#### Challenges with multimodality
+
+
+
+Food Discovery uses the [CLIP embeddings model](https://huggingface.co/sentence-transformers/clip-ViT-B-32), which is multimodal,
+
+allowing both images and texts to be encoded into the same vector space. Using this model allows for image queries, text queries, or both of
+
+them combined. We utilized that mechanism in the updated demo, allowing you to pass the textual queries to filter the results further.
+
+
+
+##### A single text query
+
+
+
+
+
+
+
+Text queries might be mixed with the liked and disliked photos, so you can combine them in a single request. However, you might be
+
+surprised by the results achieved with the new strategy, if you start adding the negative examples.
+
+
+
+##### A single text query with negative example
+
+
+
+
+
+
+
+This is an issue related to the embeddings themselves. Our dataset contains a bunch of image embeddings that are pretty close to each
+
+other. On the other hand, our text queries are quite far from most of the image embeddings, but relatively close to some of them, so the
+
+text-to-image search seems to work well. When all query items come from the same domain, such as only text, everything works fine.
+
+However, if we mix positive text and negative image embeddings, the results of the `best_score` are overwhelmed by the negative samples,
+
+which are simply closer to the dataset embeddings. If you experience such a problem, the `average_vector` strategy might be a better
+
+choice.
+
+
+
+### Check out the demo
+
+
+
+The [Food Discovery Demo](https://food-discovery.qdrant.tech/) is available online, so you can test and see the difference.
+
+This is an open source project, so you can easily deploy it on your own. The source code is available in the [GitHub repository
+
+](https://github.com/qdrant/demo-food-discovery/) and the [README](https://github.com/qdrant/demo-food-discovery/blob/main/README.md) describes the process of setting it up.
+
+Since calculating the embeddings takes a while, we precomputed them and exported them as a [snapshot](https://storage.googleapis.com/common-datasets-snapshots/wolt-clip-ViT-B-32.snapshot),
+
+which might be easily imported into any Qdrant instance. [Qdrant Cloud is the easiest way to start](https://cloud.qdrant.io/), though!
+",articles/new-recommendation-api.md
+"---
+
+title: ""Data Privacy with Qdrant: Implementing Role-Based Access Control (RBAC)"" #required
+
+short_description: ""Secure Your Data with Qdrant: Implementing RBAC""
+
+description: Discover how Qdrant's Role-Based Access Control (RBAC) ensures data privacy and compliance for your AI applications. Build secure and scalable systems with ease. Read more now!
+
+social_preview_image: /articles_data/data-privacy/preview/social_preview.jpg # This image will be used in social media previews, should be 1200x630px. Required.
+
+preview_dir: /articles_data/data-privacy/preview # This directory contains images that will be used in the article preview. They can be generated from one image. Read more below. Required.
+
+weight: -110 # This is the order of the article in the list of articles at the footer. The lower the number, the higher the article will be in the list.
+
+author: Qdrant Team # Author of the article. Required.
+
+author_link: https://qdrant.tech/ # Link to the author's page. Required.
+
+date: 2024-06-18T08:00:00-03:00 # Date of the article. Required.
+
+draft: false # If true, the article will not be published
+
+keywords: # Keywords for SEO
+
+ - Role-Based Access Control (RBAC)
+
+ - Data Privacy in Vector Databases
+
+ - Secure AI Data Management
+
+ - Qdrant Data Security
+
+ - Enterprise Data Compliance
+
+---
+
+
+
+Data stored in vector databases is often proprietary to the enterprise and may include sensitive information like customer records, legal contracts, electronic health records (EHR), financial data, and intellectual property. As a result, strong security measures are critical for safeguarding this data. If the data stored in a vector database is not secured, it may open a vulnerability known as ""[embedding inversion attack](https://arxiv.org/abs/2004.00053),"" where malicious actors could potentially [reconstruct the original data from the embeddings](https://arxiv.org/pdf/2305.03010) themselves.
+
+
+
+Strict compliance regulations govern data stored in vector databases across various industries. For instance, healthcare must comply with HIPAA, which dictates how protected health information (PHI) is stored, transmitted, and secured. Similarly, the financial services industry follows PCI DSS to safeguard sensitive financial data. These regulations require developers to ensure data storage and transmission comply with industry-specific legal frameworks across different regions. **As a result, features that enable data privacy, security and sovereignty are deciding factors when choosing the right vector database.**
+
+
+
+This article explores various strategies to ensure the security of your critical data while leveraging the benefits of vector search. Implementing some of these security approaches can help you build privacy-enhanced similarity search algorithms and integrate them into your AI applications.
+
+Additionally, you will learn how to build a fully data-sovereign architecture, allowing you to retain control over your data and comply with relevant data laws and regulations.
+
+
+
+> To skip right to the code implementation, [click here](/articles/data-privacy/#jwt-on-qdrant).
+
+
+
+## Vector Database Security: An Overview
+
+
+
+Vector databases are often unsecured by default to facilitate rapid prototyping and experimentation. This approach allows developers to quickly ingest data, build vector representations, and test similarity search algorithms without initial security concerns. However, in production environments, unsecured databases pose significant data breach risks.
+
+
+
+For production use, robust security systems are essential. Authentication, particularly using static API keys, is a common approach to control access and prevent unauthorized modifications. Yet, simple API authentication is insufficient for enterprise data, which requires granular control.
+
+
+
+The primary challenge with static API keys is their all-or-nothing access, inadequate for role-based data segregation in enterprise applications. Additionally, a compromised key could grant attackers full access to manipulate or steal data. To strengthen the security of the vector database, developers typically need the following:
+
+
+
+1. **Encryption**: This ensures that sensitive data is scrambled as it travels between the application and the vector database. This safeguards against Man-in-the-Middle ([MitM](https://en.wikipedia.org/wiki/Man-in-the-middle_attack)) attacks, where malicious actors can attempt to intercept and steal data during transmission.
+
+2. **Role-Based Access Control**: As mentioned before, traditional static API keys grant all-or-nothing access, which is a significant security risk in enterprise environments. RBAC offers a more granular approach by defining user roles and assigning specific data access permissions based on those roles. For example, an analyst might have read-only access to specific datasets, while an administrator might have full CRUD (Create, Read, Update, Delete) permissions across the database.
+
+3. **Deployment Flexibility**: Data residency regulations like GDPR (General Data Protection Regulation) and industry-specific compliance requirements dictate where data can be stored, processed, and accessed. Developers would need to choose a database solution which offers deployment options that comply with these regulations. This might include on-premise deployments within a company's private cloud or geographically distributed cloud deployments that adhere to data residency laws.
+
+
+
+## How Qdrant Handles Data Privacy and Security
+
+
+
+One of the cornerstones of our design choices at Qdrant has been the focus on security. We have built in a range of features with the enterprise user in mind, which allow building granular access control on a fully data-sovereign architecture.
+
+
+
+A Qdrant instance is unsecured by default. However, when you are ready to deploy in production, Qdrant offers a range of security features that allow you to control access to your data, protect it from breaches, and adhere to regulatory requirements. Using Qdrant, you can build granular access control, segregate roles and privileges, and create a fully data sovereign architecture.
+
+
+
+### API Keys and TLS Encryption
+
+
+
+For simpler use cases, Qdrant offers API key-based authentication. This includes both regular API keys and read-only API keys. Regular API keys grant full access to read, write, and delete operations, while read-only keys restrict access to data retrieval operations only, preventing write actions.
+
+
+
+On Qdrant Cloud, you can create API keys using the [Cloud Dashboard](https://qdrant.to/cloud). This allows you to generate API keys that give you access to a single node or cluster, or multiple clusters. You can read the steps to do so [here](/documentation/cloud/authentication/).
+
+
+
+![web-ui](/articles_data/data-privacy/web-ui.png)
+
+
+
+For on-premise or local deployments, you'll need to configure API key authentication. This involves specifying a key in either the Qdrant configuration file or as an environment variable. This ensures that all requests to the server must include a valid API key sent in the header.
+
+
+
+When using the simple API key-based authentication, you should also turn on TLS encryption. Otherwise, you are exposing the connection to sniffing and MitM attacks. To secure your connection using TLS, you would need to create a certificate and private key, and then [enable TLS](/documentation/guides/security/#tls) in the configuration.
+
+
+
+API authentication, coupled with TLS encryption, offers a first layer of security for your Qdrant instance. However, to enable more granular access control, the recommended approach is to leverage JSON Web Tokens (JWTs).
+
+
+
+### JWT on Qdrant
+
+
+
+JSON Web Tokens (JWTs) are a compact, URL-safe, and stateless means of representing _claims_ to be transferred between two parties. These claims are encoded as a JSON object and are cryptographically signed.
+
+
+
+JWT is composed of three parts: a header, a payload, and a signature, which are concatenated with dots (.) to form a single string. The header contains the type of token and algorithm being used. The payload contains the claims (explained in detail later). The signature is a cryptographic hash and ensures the token’s integrity.
+
+
+
+In Qdrant, JWT forms the foundation through which powerful access controls can be built. Let’s understand how.
+
+
+
+JWT is enabled on the Qdrant instance by specifying the API key and turning on the **jwt_rbac** feature in the configuration (alternatively, they can be set as environment variables). For any subsequent request, the API key is used to encode or decode the token.
+
+
+
+With JWT, the API key alone is enough to generate a token; no communication with the Qdrant instance or server is required. There are several libraries that help generate tokens by encoding a payload, such as [PyJWT](https://pyjwt.readthedocs.io/en/stable/) (for Python), [jsonwebtoken](https://www.npmjs.com/package/jsonwebtoken) (for JavaScript), and [jsonwebtoken](https://crates.io/crates/jsonwebtoken) (for Rust). Qdrant uses the HS256 algorithm to encode or decode the tokens.
+
+
+
+We will look at the payload structure shortly, but here’s how you can generate a token using PyJWT.
+
+
+
+```python
+
+import jwt
+
+import datetime
+
+
+
+# Define your API key and other payload data
+
+api_key = ""your_api_key""
+
+payload = { ...
+
+}
+
+
+
+token = jwt.encode(payload, api_key, algorithm=""HS256"")
+
+print(token)
+
+```
+
+
+
+Once you have generated the token, you should include it in the subsequent requests. You can do so by providing it as a bearer token in the Authorization header, or in the API Key header of your requests.
+
+
+
+Below is an example of how to do so using QdrantClient in Python:
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+qdrant_client = QdrantClient(
+
+ ""http://localhost:6333"",
+
+ api_key="""", # the token goes here
+
+)
+
+# Example search vector
+
+search_vector = [0.1, 0.2, 0.3, 0.4]
+
+
+
+# Example similarity search request
+
+response = qdrant_client.search(
+
+ collection_name=""demo_collection"",
+
+ query_vector=search_vector,
+
+ limit=5 # Number of results to retrieve
+
+)
+
+```
+
+
+
+For convenience, we have added a JWT generation tool in the Qdrant Web UI, which is present under the 🔑 tab. For your local deployments, you will find it at [http://localhost:6333/dashboard#/jwt](http://localhost:6333/dashboard#/jwt).
+
+
+
+### Payload Configuration
+
+
+
+There are several different options (claims) you can use in the JWT payload that help control access and functionality. Let’s look at them one by one.
+
+
+
+**exp**: This claim is the expiration time of the token, and is a unix timestamp in seconds. After the expiration time, the token will be invalid.
+
+
+
+**value_exists**: This claim validates the token against a specific key-value stored in a collection. By using this claim, you can revoke access by simply changing a value without having to invalidate the API key.
+
+
+
+**access**: This claim defines the access level of the token. The access level can be global read (r) or manage (m). It can also be specific to a collection, or even a subset of a collection, using read (r) and read-write (rw).
+
+
+
+Let’s look at a few example JWT payload configurations.
+
+
+
+**Scenario 1: 1-hour expiry time, and read-only access to a collection**
+
+```json
+
+{
+
+ ""exp"": 1690995200, // Set to 1 hour from the current time (Unix timestamp)
+
+ ""access"": [
+
+ {
+
+ ""collection"": ""demo_collection"",
+
+ ""access"": ""r"" // Read-only access
+
+ }
+
+ ]
+
+}
+
+
+
+```
+
+
+
+**Scenario 2: 1-hour expiry time, and access to user with a specific role**
+
+
+
+Suppose you have a ‘users’ collection and have defined specific roles for each user, such as ‘developer’, ‘manager’, ‘admin’, ‘analyst’, and ‘revoked’. In such a scenario, you can use a combination of **exp** and **value_exists**.
+
+```json
+
+{
+
+ ""exp"": 1690995200,
+
+ ""value_exists"": {
+
+ ""collection"": ""users"",
+
+ ""matches"": [
+
+ { ""key"": ""username"", ""value"": ""john"" },
+
+ { ""key"": ""role"", ""value"": ""developer"" }
+
+    ]
+
+  }
+
+}
+
+
+
+```
+
+
+
+
+
+
+
+Now, if you ever want to revoke access for a user, simply change the value of their role. All future requests made with a token carrying the above payload will be invalid.
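+
+
+
+For example, a sketch of such a revocation with the Python client could look like this; the point id and the values are hypothetical, and the only thing that matters is that the stored value no longer matches the one referenced in the token:
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(""http://localhost:6333"", api_key=""your_api_key"")
+
+
+
+# Hypothetical id of the point that stores the user ""john"" in the ""users"" collection.
+
+# Once the role changes, every token that requires role == ""developer"" becomes invalid.
+
+client.set_payload(
+
+    collection_name=""users"",
+
+    payload={""role"": ""revoked""},
+
+    points=[42],
+
+)
+
+```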
+
+
+
+**Scenario 3: 1-hour expiry time, and read-write access to a subset of a collection**
+
+
+
+You can even specify access levels specific to subsets of a collection. This can be especially useful when you are leveraging [multitenancy](/documentation/guides/multiple-partitions/), and want to segregate access.
+
+```json
+
+{
+
+ ""exp"": 1690995200,
+
+ ""access"": [
+
+ {
+
+ ""collection"": ""demo_collection"",
+
+ ""access"": ""r"",
+
+ ""payload"": {
+
+ ""user_id"": ""user_123456""
+
+ }
+
+ }
+
+ ]
+
+}
+
+```
+
+
+
+
+
+By combining the claims, you can fully customize the access level that a user or a role has within the vector store.
+
+
+
+### Creating Role-Based Access Control (RBAC) Using JWT
+
+
+
+As we saw above, JWT claims create powerful levers through which you can create granular access control on Qdrant. Let’s bring it all together and understand how it helps you create Role-Based Access Control (RBAC).
+
+
+
+In a typical enterprise application, you will have a segregation of users based on their roles and permissions. These could be:
+
+
+
+1. **Admin or Owner:** with full access, and can generate API keys.
+
+2. **Editor:** with read-write access levels to specific collections.
+
+3. **Viewer:** with read-only access to specific collections.
+
+4. **Data Scientist or Analyst:** with read-only access to specific collections.
+
+5. **Developer:** with read-write access to development- or testing-specific collections, but limited access to production data.
+
+6. **Guest:** with limited read-only access to publicly available collections.
+
+
+
+In addition, you can create access levels within sections of a collection. In a multi-tenant application, where you have used payload-based partitioning, you can create read-only access for specific user roles for a subset of the collection that belongs to that user.
+
+
+
+Your application requirements will eventually help you decide the roles and access levels you should create. For example, in an application managing customer data, you could create additional roles such as:
+
+
+
+**Customer Support Representative**: read-write access to customer service-related data but no access to billing information.
+
+
+
+**Billing Department**: read-only access to billing data and read-write access to payment records.
+
+
+
+**Marketing Analyst**: read-only access to anonymized customer data for analytics.
+
+
+
+Each role can be assigned a JWT with claims that specify expiration times, read/write permissions for collections, and validating conditions.
+
+
+
+In such an application, an example JWT payload for a customer support representative role could be:
+
+
+
+```json
+
+{
+
+ ""exp"": 1690995200,
+
+ ""access"": [
+
+ {
+
+ ""collection"": ""customer_data"",
+
+ ""access"": ""rw"",
+
+ ""payload"": {
+
+ ""department"": ""support""
+
+ }
+
+ }
+
+ ],
+
+ ""value_exists"": {
+
+ ""collection"": ""departments"",
+
+ ""matches"": [
+
+ { ""key"": ""department"", ""value"": ""support"" }
+
+ ]
+
+ }
+
+}
+
+```
+
+
+
+
+
+As you can see, by implementing RBAC, you can ensure proper segregation of roles and their privileges, and avoid privacy loopholes in your application.
+
+
+
+## Qdrant Hybrid Cloud and Data Sovereignty
+
+
+
+Data governance varies by country, especially for global organizations dealing with different regulations on data privacy, security, and access. This often necessitates deploying infrastructure within specific geographical boundaries.
+
+
+
+To address these needs, the vector database you choose should support deployment and scaling within your controlled infrastructure. [Qdrant Hybrid Cloud](/documentation/hybrid-cloud/) offers this flexibility, along with features like sharding, replicas, JWT authentication, and monitoring.
+
+
+
+Qdrant Hybrid Cloud integrates Kubernetes clusters from various environments—cloud, on-premises, or edge—into a unified managed service. This allows organizations to manage Qdrant databases through the Qdrant Cloud UI while keeping the databases within their infrastructure.
+
+
+
+With JWT and RBAC, Qdrant Hybrid Cloud provides a secure, private, and sovereign vector store. Enterprises can scale their AI applications geographically, comply with local laws, and maintain strict data control.
+
+
+
+## Conclusion
+
+
+
+Vector similarity is increasingly becoming the backbone of AI applications that leverage unstructured data. By transforming data into vectors – their numerical representations – organizations can build powerful applications that harness semantic search, ranging from better recommendation systems to algorithms that help with personalization, or powerful customer support chatbots.
+
+
+
+However, to fully leverage the power of AI in production, organizations need to choose a vector database that offers strong privacy and security features, while also helping them adhere to local laws and regulations.
+
+
+
+Qdrant provides exceptional efficiency and performance, along with the capability to implement granular access control to data, Role-Based Access Control (RBAC), and the ability to build a fully data-sovereign architecture.
+
+
+
+Interested in mastering vector search security and deployment strategies? [Join our Discord community](https://discord.gg/qdrant) to explore more advanced search strategies, connect with other developers and researchers in the industry, and stay updated on the latest innovations!
+",articles/data-privacy.md
+"---
+
+title: Question Answering as a Service with Cohere and Qdrant
+
+short_description: ""End-to-end Question Answering system for the biomedical data with SaaS tools: Cohere co.embed API and Qdrant""
+
+description: ""End-to-end Question Answering system for the biomedical data with SaaS tools: Cohere co.embed API and Qdrant""
+
+social_preview_image: /articles_data/qa-with-cohere-and-qdrant/social_preview.png
+
+small_preview_image: /articles_data/qa-with-cohere-and-qdrant/q-and-a-article-icon.svg
+
+preview_dir: /articles_data/qa-with-cohere-and-qdrant/preview
+
+weight: 7
+
+author: Kacper Łukawski
+
+author_link: https://medium.com/@lukawskikacper
+
+date: 2022-11-29T15:45:00+01:00
+
+draft: false
+
+keywords:
+
+ - vector search
+
+ - question answering
+
+ - cohere
+
+ - co.embed
+
+ - embeddings
+
+---
+
+
+
+Bi-encoders are probably the most efficient way of setting up a semantic Question Answering system.
+
+This architecture relies on the same neural model that creates vector embeddings for both questions and answers.
+
+The assumption is that both question and answer should have representations close to each other in the latent space,
+
+because they both describe the same semantic concept. That doesn't apply
+
+to answers like ""Yes"" or ""No"" though, but standard FAQ-like problems are a bit easier, as there is typically
+
+an overlap between both texts, not necessarily in terms of wording, but in their semantics.
+
+
+
+![Bi-encoder structure. Both queries (questions) and documents (answers) are vectorized by the same neural encoder.
+
+Output embeddings are then compared by a chosen distance function, typically cosine similarity.](/articles_data/qa-with-cohere-and-qdrant/biencoder-diagram.png)
+
+
+
+And yeah, you need to **bring your own embeddings** in order to even start. There are various ways
+
+to obtain them, but using the Cohere [co.embed API](https://docs.cohere.ai/reference/embed) is probably
+
+the easiest and most convenient method.
+
+
+
+## Why do co.embed API and Qdrant go well together?
+
+
+
+Maintaining a **Large Language Model** might be hard and expensive. Scaling it up and down when the traffic
+
+changes requires even more effort and becomes unpredictable. That might definitely be a blocker for any semantic
+
+search system. But if you want to start right away, you may consider using a SaaS model, Cohere’s
+
+[co.embed API](https://docs.cohere.ai/reference/embed) in particular. It gives you state-of-the-art language
+
+models available as a highly available HTTP service, with no need to train or maintain your own infrastructure. As all
+
+the communication is done with JSON, you can simply provide the co.embed output as Qdrant input.
+
+
+
+```python
+
+from qdrant_client.http import models as rest
+
+
+
+# Putting the co.embed API response directly as Qdrant method input
+
+qdrant_client.upsert(
+
+ collection_name=""collection"",
+
+ points=rest.Batch(
+
+ ids=[...],
+
+ vectors=cohere_client.embed(...).embeddings,
+
+ payloads=[...],
+
+ ),
+
+)
+
+```
+
+
+
+Both tools are easy to combine, so you can start working with semantic search in a few minutes, not days.
+
+
+
+And what if your needs are so specific that you need to fine-tune a general usage model? Co.embed API goes beyond
+
+pre-trained encoders and allows providing some custom datasets to
+
+[customize the embedding model with your own data](https://docs.cohere.com/docs/finetuning).
+
+As a result, you get the quality of domain-specific models, but without worrying about infrastructure.
+
+
+
+## System architecture overview
+
+
+
+In real systems, answers get vectorized and stored in an efficient vector search database. We typically don’t
+
+even need to provide specific answers, but just use sentences or paragraphs of text and vectorize them instead.
+
+Still, if a longer piece of text contains the answer to a particular question, its embedding should not be that far
+
+from the question embedding, and it should certainly be closer than all the other, non-matching answers. Storing the
+
+answer embeddings in a vector database makes the search process way easier.
+
+
+
+![Building the database of possible answers. All the texts are converted into their vector embeddings and those
+
+embeddings are stored in a vector database, i.e. Qdrant.](/articles_data/qa-with-cohere-and-qdrant/vector-database.png)
+
+
+
+## Looking for the correct answer
+
+
+
+Once our database is working and all the answer embeddings are already in place, we can start querying it.
+
+We basically perform the same vectorization on a given question and ask the database to provide its nearest neighbours.
+
+We rely on the embeddings to be close to each other, so we expect the points with the smallest distance in the latent
+
+space to contain the proper answer.
+
+
+
+![While searching, a question gets vectorized by the same neural encoder. Vector database is a component that looks
+
+for the closest answer vectors using i.e. cosine similarity. A proper system, like Qdrant, will make the lookup
+
+process more efficient, as it won’t calculate the distance to all the answer embeddings. Thanks to HNSW, it will
+
+be able to find the nearest neighbours with sublinear complexity.](/articles_data/qa-with-cohere-and-qdrant/search-with-vector-database.png)
+
+
+
+## Implementing the QA search system with SaaS tools
+
+
+
+We don’t want to maintain our own service for the neural encoder, nor even set up a Qdrant instance. There are SaaS
+
+solutions for both — Cohere’s [co.embed API](https://docs.cohere.ai/reference/embed)
+
+and [Qdrant Cloud](https://qdrant.to/cloud), so we’ll use them instead of on-premise tools.
+
+
+
+### Question Answering on biomedical data
+
+
+
+We’re going to implement the Question Answering system for the biomedical data. There is a
+
+*[pubmed_qa](https://huggingface.co/datasets/pubmed_qa)* dataset, with its *pqa_labeled* subset containing 1,000 examples
+
+of questions and answers labelled by domain experts. Our system is going to be fed with the embeddings generated by
+
+co.embed API and we’ll load them to Qdrant. Using Qdrant Cloud vs your own instance does not matter much here.
+
+There is a subtle difference in how to connect to the cloud instance, but all the other operations are executed
+
+in the same way.
+
+
+
+```python
+
+from datasets import load_dataset
+
+
+
+# Loading the dataset from HuggingFace hub. It consists of several columns: pubid,
+
+# question, context, long_answer and final_decision. For the purposes of our system,
+
+# we’ll use question and long_answer.
+
+dataset = load_dataset(""pubmed_qa"", ""pqa_labeled"")
+
+```
+
+
+
+| **pubid** | **question** | **context** | **long_answer** | **final_decision** |
+
+|-----------|---------------------------------------------------|-------------|---------------------------------------------------|--------------------|
+
+| 18802997 | Can calprotectin predict relapse risk in infla... | ... | Measuring calprotectin may help to identify UC... | maybe |
+
+| 20538207 | Should temperature be monitorized during kidne... | ... | The new storage can affords more stable temper... | no |
+
+| 25521278 | Is plate clearing a risk factor for obesity? | ... | The tendency to clear one's plate when eating ... | yes |
+
+| 17595200 | Is there an intrauterine influence on obesity? | ... | Comparison of mother-offspring and father-offs.. | no |
+
+| 15280782 | Is unsafe sexual behaviour increasing among HI... | ... | There was no evidence of a trend in unsafe sex... | no |
+
+
+
+### Using Cohere and Qdrant to build the answers database
+
+
+
+In order to start generating the embeddings, you need to [create a Cohere account](https://dashboard.cohere.ai/welcome/register).
+
+That will start your trial period, so you’ll be able to vectorize the texts for free. Once logged in, your default API key will
+
+be available in [Settings](https://dashboard.cohere.ai/api-keys). We’ll need it to call the co.embed API with the official Python package.
+
+
+
+```python
+
+import cohere
+
+
+
+cohere_client = cohere.Client(COHERE_API_KEY)
+
+
+
+# Generating the embeddings with Cohere client library
+
+embeddings = cohere_client.embed(
+
+ texts=[""A test sentence""],
+
+ model=""large"",
+
+)
+
+vector_size = len(embeddings.embeddings[0])
+
+print(vector_size) # output: 4096
+
+```
+
+
+
+Let’s connect to the Qdrant instance first and create a collection with the proper configuration, so we can put some embeddings into it later on.
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+# Connecting to Qdrant Cloud with qdrant-client requires providing the api_key.
+
+# If you use an on-premise instance, it has to be skipped.
+
+qdrant_client = QdrantClient(
+
+ host=""xyz-example.eu-central.aws.cloud.qdrant.io"",
+
+ prefer_grpc=True,
+
+ api_key=QDRANT_API_KEY,
+
+)
+
+```
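+
+
+
+The collection itself can then be created with a configuration matching the co.embed output; a sketch, reusing the `vector_size` obtained above and assuming cosine distance:
+
+
+
+```python
+
+# Creating the collection that will store the answer embeddings
+
+qdrant_client.recreate_collection(
+
+    collection_name=""pubmed_qa"",
+
+    vectors_config=rest.VectorParams(
+
+        size=vector_size,  # 4096 for the co.embed ""large"" model
+
+        distance=rest.Distance.COSINE,
+
+    ),
+
+)
+
+```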
+
+
+
+Now we’re able to vectorize all the answers. They are going to form our collection, so we can also put them already into Qdrant, along with the
+
+payloads and identifiers. That will make our dataset easily searchable.
+
+
+
+```python
+
+answer_response = cohere_client.embed(
+
+ texts=dataset[""train""][""long_answer""],
+
+ model=""large"",
+
+)
+
+vectors = [
+
+ # Conversion to float is required for Qdrant
+
+ list(map(float, vector))
+
+ for vector in answer_response.embeddings
+
+]
+
+ids = [entry[""pubid""] for entry in dataset[""train""]]
+
+
+
+# Filling up Qdrant collection with the embeddings generated by Cohere co.embed API
+
+qdrant_client.upsert(
+
+ collection_name=""pubmed_qa"",
+
+ points=rest.Batch(
+
+ ids=ids,
+
+ vectors=vectors,
+
+ payloads=list(dataset[""train""]),
+
+ )
+
+)
+
+```
+
+
+
+And that’s it. Without even setting up a single server on our own, we created a system that might be easily asked a question. I don’t want to call
+
+it serverless, as this term is already taken, but co.embed API with Qdrant Cloud makes everything way easier to maintain.
+
+
+
+### Answering the questions with semantic search — the quality
+
+
+
+It’s high time to query our database with some questions. It might be interesting to measure the quality of the system in general.
+
+In those kinds of problems we typically use *top-k accuracy*. We assume the prediction of the system was correct if the correct answer
+
+was present in the first *k* results.
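+
+
+
+The snippet below assumes the questions have been vectorized with the same co.embed model as the answers; here is that step, together with the `tqdm` import used for the progress bar:
+
+
+
+```python
+
+from tqdm import tqdm
+
+
+
+# Vectorizing all the questions with the same model, so they share the latent space with the answers
+
+question_response = cohere_client.embed(
+
+    texts=dataset[""train""][""question""],
+
+    model=""large"",
+
+)
+
+```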
+
+
+
+```python
+
+# Finding the position at which Qdrant provided the expected answer for each question.
+
+# That allows to calculate accuracy@k for different values of k.
+
+k_max = 10
+
+answer_positions = []
+
+for embedding, pubid in tqdm(zip(question_response.embeddings, ids)):
+
+ response = qdrant_client.search(
+
+ collection_name=""pubmed_qa"",
+
+ query_vector=embedding,
+
+ limit=k_max,
+
+ )
+
+
+
+ answer_ids = [record.id for record in response]
+
+ if pubid in answer_ids:
+
+ answer_positions.append(answer_ids.index(pubid))
+
+ else:
+
+ answer_positions.append(-1)
+
+```
+
+
+
+Saved answer positions allow us to calculate the metric for different *k* values.
+
+
+
+```python
+
+# Prepared answer positions are being used to calculate different values of accuracy@k
+
+for k in range(1, k_max + 1):
+
+ correct_answers = len(
+
+ list(
+
+ filter(lambda x: 0 <= x < k, answer_positions)
+
+ )
+
+ )
+
+ print(f""accuracy@{k} ="", correct_answers / len(dataset[""train""]))
+
+```
+
+
+
+Here are the values of the top-k accuracy for different values of k:
+
+
+
+| **metric** | **value** |
+
+|-------------|-----------|
+
+| accuracy@1 | 0.877 |
+
+| accuracy@2 | 0.921 |
+
+| accuracy@3 | 0.942 |
+
+| accuracy@4 | 0.950 |
+
+| accuracy@5 | 0.956 |
+
+| accuracy@6 | 0.960 |
+
+| accuracy@7 | 0.964 |
+
+| accuracy@8 | 0.971 |
+
+| accuracy@9 | 0.976 |
+
+| accuracy@10 | 0.977 |
+
+
+
+It seems like our system worked pretty well even if we consider just the first result, with the lowest distance.
+
+We failed on around 12% of the questions, but the numbers get better for higher values of k. It might also be
+
+valuable to check out the questions our system failed to answer, their expected matches, and our guesses.
+
+
+
+We managed to implement a working Question Answering system within just a few lines of code. If you are fine
+
+with the results achieved, then you can start using it right away. Still, if you feel you need a slight improvement,
+
+then fine-tuning the model is a way to go. If you want to check out the full source code,
+
+it is available on [Google Colab](https://colab.research.google.com/drive/1YOYq5PbRhQ_cjhi6k4t1FnWgQm8jZ6hm?usp=sharing).
+",articles/qa-with-cohere-and-qdrant.md
+"---
+
+title: ""Is RAG Dead? The Role of Vector Databases in Vector Search | Qdrant""
+
+short_description: Learn how Qdrant’s vector database enhances enterprise AI with superior accuracy and cost-effectiveness.
+
+description: Uncover the necessity of vector databases for RAG and learn how Qdrant's vector database empowers enterprise AI with unmatched accuracy and cost-effectiveness.
+
+social_preview_image: /articles_data/rag-is-dead/preview/social_preview.jpg
+
+small_preview_image: /articles_data/rag-is-dead/icon.svg
+
+preview_dir: /articles_data/rag-is-dead/preview
+
+weight: -131
+
+author: David Myriel
+
+author_link: https://github.com/davidmyriel
+
+date: 2024-02-27T00:00:00.000Z
+
+draft: false
+
+keywords:
+
+ - vector database
+
+ - vector search
+
+ - retrieval augmented generation
+
+ - gemini 1.5
+
+---
+
+
+
+# Is RAG Dead? The Role of Vector Databases in AI Efficiency and Vector Search
+
+
+
+When Anthropic came out with a context window of 100K tokens, they said: “*[Vector search](https://qdrant.tech/solutions/) is dead. LLMs are getting more accurate and won’t need RAG anymore.*”
+
+
+
+Google’s Gemini 1.5 now offers a context window of 10 million tokens. [Their supporting paper](https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf) claims victory over accuracy issues, even when applying Greg Kamradt’s [NIAH methodology](https://twitter.com/GregKamradt/status/1722386725635580292).
+
+
+
+*It’s over. [RAG](https://qdrant.tech/articles/what-is-rag-in-ai/) (Retrieval Augmented Generation) must be completely obsolete now. Right?*
+
+
+
+No.
+
+
+
+Larger context windows are never the solution. Let me repeat. Never. They require more computational resources and lead to slower processing times.
+
+
+
+The community is already stress testing Gemini 1.5:
+
+
+
+![RAG and Gemini 1.5](/articles_data/rag-is-dead/rag-is-dead-1.png)
+
+
+
+This is not surprising. LLMs require massive amounts of compute and memory to run. To cite Grant, running such a model by itself “would deplete a small coal mine to generate each completion”. Also, who is waiting 30 seconds for a response?
+
+
+
+## Context stuffing is not the solution
+
+
+
+> Relying on context is expensive, and it doesn’t improve response quality in real-world applications. Retrieval based on [vector search](https://qdrant.tech/solutions/) offers much higher precision.
+
+
+
+If you solely rely on an [LLM](https://qdrant.tech/articles/what-is-rag-in-ai/) to perfect retrieval and precision, you are doing it wrong.
+
+
+
+A large context window makes it harder to focus on relevant information. This increases the risk of errors or hallucinations in its responses.
+
+
+
+Google found Gemini 1.5 significantly more accurate than GPT-4 at shorter context lengths and “a very small decrease in recall towards 1M tokens”. The recall is still below 0.8.
+
+
+
+![Gemini 1.5 Data](/articles_data/rag-is-dead/rag-is-dead-2.png)
+
+
+
+We don’t think 60-80% is good enough. The LLM might retrieve enough relevant facts in its context window, but it still loses up to 40% of the available information.
+
+
+
+> The whole point of vector search is to circumvent this process by efficiently picking the information your app needs to generate the best response. A [vector database](https://qdrant.tech/) keeps the compute load low and the query response fast. You don’t need to wait for the LLM at all.
+
+
+
+Qdrant’s benchmark results are strongly in favor of accuracy and efficiency. We recommend that you consider them before deciding that an LLM is enough. Take a look at our [open-source benchmark reports](/benchmarks/) and [try out the tests](https://github.com/qdrant/vector-db-benchmark) yourself.
+
+
+
+## Vector search in compound systems
+
+
+
+The future of AI lies in careful system engineering. As per [Zaharia et al.](https://bair.berkeley.edu/blog/2024/02/18/compound-ai-systems/), results from Databricks find that “60% of LLM applications use some form of RAG, while 30% use multi-step chains.”
+
+
+
+Even Gemini 1.5 demonstrates the need for a complex strategy. When looking at [Google’s MMLU Benchmark](https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf), the model was called 32 times to reach a score of 90.0% accuracy. This shows us that even a basic compound arrangement is superior to monolithic models.
+
+
+
+As a retrieval system, a [vector database](https://qdrant.tech/) perfectly fits the need for compound systems. Introducing them into your design opens the possibilities for superior applications of LLMs. It is superior because it’s faster, more accurate, and much cheaper to run.
+
+
+
+> The key advantage of RAG is that it allows an LLM to pull in real-time information from up-to-date internal and external knowledge sources, making it more dynamic and adaptable to new information. - Oliver Molander, CEO of IMAGINAI
+
+>
+
+
+
+## Qdrant scales to enterprise RAG scenarios
+
+
+
+People still don’t understand the economic benefit of vector databases. Why would a large corporate AI system need a standalone vector database like [Qdrant](https://qdrant.tech/)? In our minds, this is the most important question. Let’s pretend that LLMs cease struggling with context thresholds altogether.
+
+
+
+**How much would all of this cost?**
+
+
+
+If you are running a RAG solution in an enterprise environment with petabytes of private data, your compute bill will be unimaginable. Let's assume 1 cent per 1K input tokens (which is the current GPT-4 Turbo pricing). Whatever you are doing, every time you go 100 thousand tokens deep, it will cost you $1.
+
+
+
+That’s a buck a question.
+
+
+
+> According to our estimations, vector search queries are **at least** 100 million times cheaper than queries made by LLMs.
+
+
+
+Conversely, the only up-front investment with vector databases is the indexing (which requires more compute). After this step, everything else is a breeze. Once set up, Qdrant scales easily via [features like Multitenancy and Sharding](/articles/multitenancy/). This lets you scale up your reliance on the vector retrieval process and minimize your use of the compute-heavy LLMs. As an optimization measure, Qdrant is irreplaceable.
+
+
+
+Julien Simon from HuggingFace says it best:
+
+
+
+> RAG is not a workaround for limited context size. For mission-critical enterprise use cases, RAG is a way to leverage high-value, proprietary company knowledge that will never be found in public datasets used for LLM training. At the moment, the best place to index and query this knowledge is some sort of vector index. In addition, RAG downgrades the LLM to a writing assistant. Since built-in knowledge becomes much less important, a nice small 7B open-source model usually does the trick at a fraction of the cost of a huge generic model.
+
+
+
+
+
+## Get superior accuracy with Qdrant's vector database
+
+
+
+As LLMs continue to require enormous computing power, users will need to leverage vector search and [RAG](https://qdrant.tech/).
+
+
+
+Our customers remind us of this fact every day. As a product, [our vector database](https://qdrant.tech/) is highly scalable and business-friendly. We develop our features strategically to follow our company’s Unix philosophy.
+
+
+
+We want to keep Qdrant compact, efficient and with a focused purpose. This purpose is to empower our customers to use it however they see fit.
+
+
+
+When large enterprises release their generative AI into production, they need to keep costs under control, while retaining the best possible quality of responses. Qdrant has the [vector search solutions](https://qdrant.tech/solutions/) to do just that. Revolutionize your vector search capabilities and get started with [a Qdrant demo](https://qdrant.tech/contact-us/).",articles/rag-is-dead.md
+"---
+
+title: ""BM42: New Baseline for Hybrid Search""
+
+short_description: ""Introducing next evolutionary step in lexical search.""
+
+description: ""Introducing BM42 - a new sparse embedding approach, which combines the benefits of exact keyword search with the intelligence of transformers.""
+
+social_preview_image: /articles_data/bm42/social-preview.jpg
+
+preview_dir: /articles_data/bm42/preview
+
+weight: -140
+
+author: Andrey Vasnetsov
+
+date: 2024-07-01T12:00:00+03:00
+
+draft: false
+
+keywords:
+
+ - hybrid search
+
+ - sparse embeddings
+
+ - bm25
+
+---
+
+
+
+
+
+
+
+
+
+For the last 40 years, BM25 has served as the standard for search engines.
+
+It is a simple yet powerful algorithm that has been used by many search engines, including Google, Bing, and Yahoo.
+
+
+
+Though it seemed that the advent of vector search would diminish its influence, it did so only partially.
+
+The current state-of-the-art approach to retrieval tries to incorporate BM25 along with embeddings into a hybrid search system.
+
+
+
+However, the use case of text retrieval has significantly shifted since the introduction of RAG.
+
+Many assumptions upon which BM25 was built are no longer valid.
+
+
+
+For example, the typical length of documents and queries vary significantly between traditional web search and modern RAG systems.
+
+
+
+In this article, we will recap what made BM25 relevant for so long and why alternatives have struggled to replace it. Finally, we will discuss BM42, as the next step in the evolution of lexical search.
+
+
+
+## Why has BM25 stayed relevant for so long?
+
+
+
+To understand why, we need to analyze its components.
+
+
+
+The famous BM25 formula is defined as:
+
+
+
+$$
+
+\text{score}(D,Q) = \sum_{i=1}^{N} \text{IDF}(q_i) \times \frac{f(q_i, D) \cdot (k_1 + 1)}{f(q_i, D) + k_1 \cdot \left(1 - b + b \cdot \frac{|D|}{\text{avgdl}}\right)}
+
+$$
+
+
+
+Let's simplify this to gain a better understanding.
+
+
+
+- The $score(D, Q)$ - means that we compute the score for each pair of document $D$ and query $Q$.
+
+
+
+- The $\sum_{i=1}^{N}$ - means that each of $N$ terms in the query contribute to the final score as a part of the sum.
+
+
+
+- The $\text{IDF}(q_i)$ - is the inverse document frequency. The rarer the term $q_i$ is, the more it contributes to the score. A simplified formula for this is:
+
+
+
+$$
+
+\text{IDF}(q_i) = \frac{\text{Number of documents}}{\text{Number of documents with } q_i}
+
+$$
+
+
+
+It is fair to say that the `IDF` is the most important part of the BM25 formula.
+
+`IDF` selects the most important terms in the query relative to the specific document collection.
+
+So intuitively, we can interpret the `IDF` as **term importance within the corpora**.
+
+
+
+That explains why BM25 is so good at handling queries that dense embeddings consider out-of-domain.
+
+
+
+The last component of the formula can be intuitively interpreted as **term importance within the document**.
+
+This might look a bit complicated, so let's break it down.
+
+
+
+$$
+
+\text{Term importance in document }(q_i) = \color{red}\frac{f(q_i, D)\color{black} \cdot \color{blue}(k_1 + 1) \color{black} }{\color{red}f(q_i, D)\color{black} + \color{blue}k_1\color{black} \cdot \left(1 - \color{blue}b\color{black} + \color{blue}b\color{black} \cdot \frac{|D|}{\text{avgdl}}\right)}
+
+$$
+
+
+
+- The $\color{red}f(q_i, D)\color{black}$ - is the frequency of the term $q_i$ in the document $D$. Or in other words, the number of times the term $q_i$ appears in the document $D$.
+
+- The $\color{blue}k_1\color{black}$ and $\color{blue}b\color{black}$ are the hyperparameters of the BM25 formula. In most implementations, they are constants set to $k_1=1.5$ and $b=0.75$. Those constants define the relative impact of the term frequency and the document length in the formula.
+
+- The $\frac{|D|}{\text{avgdl}}$ - is the relative length of the document $D$ compared to the average document length in the corpora. The intuition behind this part is the following: if the token is found in a smaller document, it is more likely that this token is important for this document.
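+
+
+
+To make the formula more tangible, here is a small Python sketch that follows it literally, using the simplified `IDF` from above. It is meant purely as an illustration, not as a production-ready BM25 implementation:
+
+
+
+```python
+
+def bm25_score(query_terms, document_terms, corpus, k1=1.5, b=0.75):
+
+    # corpus is a list of tokenized documents; document_terms is one of them
+
+    avgdl = sum(len(doc) for doc in corpus) / len(corpus)
+
+    score = 0.0
+
+    for term in query_terms:
+
+        df = sum(1 for doc in corpus if term in doc)  # number of documents with the term
+
+        if df == 0:
+
+            continue
+
+        idf = len(corpus) / df                        # simplified IDF from above
+
+        tf = document_terms.count(term)               # f(q_i, D)
+
+        norm = k1 * (1 - b + b * len(document_terms) / avgdl)
+
+        score += idf * (tf * (k1 + 1)) / (tf + norm)
+
+    return score
+
+```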
+
+
+
+#### Will BM25 term importance in the document work for RAG?
+
+
+
+As we can see, the *term importance in the document* heavily depends on the statistics within the document. Moreover, these statistics are only reliable if the document is long enough.
+
+Therefore, it is suitable for searching webpages, books, articles, etc.
+
+
+
+However, would it work as well for modern search applications, such as RAG? Let's see.
+
+
+
+The typical length of a document in RAG is much shorter than that of web search. In fact, even if we are working with webpages and articles, we would prefer to split them into chunks so that
+
+a) Dense models can handle them and
+
+b) We can pinpoint the exact part of the document which is relevant to the query
+
+
+
+As a result, the document size in RAG is small and fixed.
+
+
+
+That effectively renders the term importance in the document part of the BM25 formula useless.
+
+The term frequency in the document is always 0 or 1, and the relative length of the document is always 1.
+
+
+
+So, the only part of the BM25 formula that is still relevant for RAG is `IDF`. Let's see how we can leverage it.
+
+
+
+## Why SPLADE is not always the answer
+
+
+
+Before discussing our new approach, let's examine the current state-of-the-art alternative to BM25 - SPLADE.
+
+
+
+The idea behind SPLADE is interesting—what if we let a smart, end-to-end trained model generate a bag-of-words representation of the text for us?
+
+It will assign all the weights to the tokens, so we won't need to bother with statistics and hyperparameters.
+
+The documents are then represented as a sparse embedding, where each token is represented as an element of the sparse vector.
+
+
+
+And it works in academic benchmarks. Many papers report that SPLADE outperforms BM25 in terms of retrieval quality.
+
+This performance, however, comes at a cost.
+
+
+
+* **Inappropriate Tokenizer**: To incorporate transformers for this task, SPLADE models require using a standard transformer tokenizer. These tokenizers are not designed for retrieval tasks. For example, if the word is not in the (quite limited) vocabulary, it will be either split into subwords or replaced with a `[UNK]` token. This behavior works well for language modeling but is completely destructive for retrieval tasks.
+
+
+
+* **Expensive Token Expansion**: To compensate for the tokenization issues, SPLADE uses a *token expansion* technique. This means that we generate a set of similar tokens for each token in the query. There are a few problems with this approach:
+
+ - It is computationally and memory expensive. We need to generate more values for each token in the document, which increases both the storage size and retrieval time.
+
+ - It is not always clear where to stop with the token expansion. The more tokens we generate, the more likely we are to get the relevant one. But simultaneously, the more tokens we generate, the more likely we are to get irrelevant results.
+
+ - Token expansion dilutes the interpretability of the search. We can't say which tokens were used in the document and which were generated by the token expansion.
+
+
+
+* **Domain and Language Dependency**: SPLADE models are trained on specific corpora. This means that they are not always generalizable to new or rare domains. As they don't use any statistics from the corpora, they cannot adapt to the new domain without fine-tuning.
+
+
+
+* **Inference Time**: Additionally, currently available SPLADE models are quite big and slow. They usually require a GPU to make the inference in a reasonable time.
+
+
+
+At Qdrant, we acknowledge the aforementioned problems and are looking for a solution.
+
+Our idea was to combine the best of both worlds - the simplicity and interpretability of BM25 and the intelligence of transformers while avoiding the pitfalls of SPLADE.
+
+
+
+And here is what we came up with.
+
+
+
+## The best of both worlds
+
+
+
+As previously mentioned, `IDF` is the most important part of the BM25 formula. In fact, it is so important that we decided to build its calculation into the Qdrant engine itself.
+
+Check out our latest [release notes](https://github.com/qdrant/qdrant/releases/tag/v1.10.0). This type of separation allows streaming updates of the sparse embeddings while keeping the `IDF` calculation up-to-date.
+
+
+
+As for the second part of the formula, *the term importance within the document* needs to be rethought.
+
+
+
+Since we can't rely on the statistics within the document, we can try to use the semantics of the document instead.
+
+And semantics is what transformers are good at. Therefore, we only need to solve two problems:
+
+
+
+- How does one extract the importance information from the transformer?
+
+- How can tokenization issues be avoided?
+
+
+
+
+
+### Attention is all you need
+
+
+
+Transformer models, even those used to generate embeddings, produce a number of different outputs.
+
+Some of those outputs are used to generate the embeddings themselves.
+
+
+
+Others are used to solve other kinds of tasks, such as classification, text generation, etc.
+
+
+
+The one particularly interesting output for us is the attention matrix.
+
+
+
+{{< figure src=""/articles_data/bm42/attention-matrix.png"" alt=""Attention matrix"" caption=""Attention matrix"" width=""60%"" >}}
+
+
+
+The attention matrix is a square matrix, where each row and column corresponds to a token in the input sequence.
+
+It represents how important each token in the input sequence is for every other token.
+
+
+
+The classical transformer models are trained to predict masked tokens in the context, so the attention weights define which context tokens influence the masked token most.
+
+
+
+Apart from regular text tokens, the transformer model also has a special token called `[CLS]`. This token represents the whole sequence in the classification tasks, which is exactly what we need.
+
+
+
+By looking at the attention row for the `[CLS]` token, we can get the importance of each token in the document for the whole document.
+
+
+
+
+
+```python
+
+sentences = ""Hello, World - is the starting point in most programming languages""
+
+
+
+features = transformer.tokenize(sentences)
+
+
+
+# ...
+
+
+
+attentions = transformer.auto_model(**features, output_attentions=True).attentions
+
+
+
+weights = torch.mean(attentions[-1][0,:,0], axis=0)
+
+# ▲ ▲ ▲ ▲
+
+# │ │ │ └─── [CLS] token is the first one
+
+# │ │ └─────── First item of the batch
+
+# │ └────────── Last transformer layer
+
+# └────────────────────────── Average all attention heads
+
+
+
+for weight, token in zip(weights, tokens):
+
+ print(f""{token}: {weight}"")
+
+
+
+# [CLS] : 0.434 // Filter out the [CLS] token
+
+# hello : 0.039
+
+# , : 0.039
+
+# world : 0.107 // <-- The most important token
+
+# - : 0.033
+
+# is : 0.024
+
+# the : 0.031
+
+# starting : 0.054
+
+# point : 0.028
+
+# in : 0.018
+
+# most : 0.016
+
+# programming : 0.060 // <-- The third most important token
+
+# languages : 0.062 // <-- The second most important token
+
+# [SEP] : 0.047 // Filter out the [SEP] token
+
+
+
+```
+
+
+
+
+
+The resulting formula for the BM42 score would look like this:
+
+
+
+$$
+
+\text{score}(D,Q) = \sum_{i=1}^{N} \text{IDF}(q_i) \times \text{Attention}(\text{CLS}, q_i)
+
+$$
+
+
+
+
+
+Note that classical transformers have multiple attention heads, so we can get multiple importance vectors for the same document. The simplest way to combine them is to average them.
+
+
+
+These averaged attention vectors make up the importance information we were looking for.
+
+The best part is, one can get them from any transformer model, without any additional training.
+
+Therefore, BM42 can support any natural language as long as there is a transformer model for it.
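+
+
+
+As a rough illustration of the formula above, here is how attention-derived token weights could be combined with IDF values to score a document for a query (all numbers are made up; in Qdrant the IDF part is applied by the engine via the `idf` modifier, so you normally don't compute it client-side):
+
+
+
+```python
+
+# Hypothetical per-token weights for one document, taken from the [CLS] attention row
+
+attention_weight = {""world"": 0.107, ""programming"": 0.060, ""languages"": 0.062}
+
+# Hypothetical corpus-level IDF values for the query terms
+
+idf = {""programming"": 2.1, ""languages"": 1.7, ""cooking"": 3.0}
+
+query = [""programming"", ""languages"", ""cooking""]
+
+# BM42: sum IDF(q_i) * Attention(CLS, q_i) over the query terms
+
+score = sum(idf[term] * attention_weight.get(term, 0.0) for term in query)
+
+print(score)  # 0.060 * 2.1 + 0.062 * 1.7 + 0.0 = 0.2314
+
+```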
+
+
+
+In our implementation, we use the `sentence-transformers/all-MiniLM-L6-v2` model, which gives a huge boost in the inference speed compared to the SPLADE models. In practice, any transformer model can be used.
+
+It doesn't require any additional training and can be easily adapted to work as a BM42 backend.
+
+
+
+
+
+### WordPiece retokenization
+
+
+
+The final piece of the puzzle we need to solve is the tokenization issue. In order to get attention vectors, we need to use native transformer tokenization.
+
+But this tokenization is not suitable for retrieval tasks. What can we do about it?
+
+
+
+Actually, the solution we came up with is quite simple. We reverse the tokenization process after we get the attention vectors.
+
+
+
+Transformers use [WordPiece](https://huggingface.co/learn/nlp-course/en/chapter6/6) tokenization.
+
+If it encounters a word that is not in the vocabulary, it splits it into subwords.
+
+
+
+Here is how that looks:
+
+
+
+```text
+
+""unbelievable"" -> [""un"", ""##believ"", ""##able""]
+
+```
+
+
+
+We can merge the subwords back into words. Luckily, the subwords are marked with the `##` prefix, so we can easily detect them.
+
+Since the attention weights are normalized, we can simply sum the attention weights of the subwords to get the attention weight of the word.
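+
+
+
+Here is a minimal sketch of that reversal, merging `##`-prefixed subwords back into whole words and summing their attention weights (the tokens and weights are illustrative):
+
+
+
+```python
+
+def merge_subwords(tokens, weights):
+
+    # Re-assemble WordPiece subwords into whole words, summing their attention weights
+
+    merged = {}
+
+    current_word, current_weight = """", 0.0
+
+    for token, weight in zip(tokens, weights):
+
+        if token.startswith(""##""):
+
+            current_word += token[2:]
+
+            current_weight += weight
+
+        else:
+
+            if current_word:
+
+                merged[current_word] = merged.get(current_word, 0.0) + current_weight
+
+            current_word, current_weight = token, weight
+
+    if current_word:
+
+        merged[current_word] = merged.get(current_word, 0.0) + current_weight
+
+    return merged
+
+print(merge_subwords([""un"", ""##believ"", ""##able"", ""story""], [0.10, 0.05, 0.02, 0.30]))
+
+# {'unbelievable': 0.17, 'story': 0.3} (up to floating point rounding)
+
+```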
+
+
+
+After that, we can apply the same traditional NLP techniques, such as:
+
+
+
+- Removing of the stop-words
+
+- Removing of the punctuation
+
+- Lemmatization
+
+
+
+In this way, we can significantly reduce the number of tokens and therefore minimize the memory footprint of the sparse embeddings, without compromising the ability to match (almost) exact tokens.
+
+
+
+## Practical examples
+
+
+
+
+
+| Trait | BM25 | SPLADE | BM42 |
+
+|-------------------------|--------------|--------------|--------------|
+
+| Interpretability | High ✅ | Ok 🆗 | High ✅ |
+
+| Document Inference speed| Very high ✅ | Slow 🐌 | High ✅ |
+
+| Query Inference speed | Very high ✅ | Slow 🐌 | Very high ✅ |
+
+| Memory footprint | Low ✅ | High ❌ | Low ✅ |
+
+| In-domain accuracy | Ok 🆗 | High ✅ | High ✅ |
+
+| Out-of-domain accuracy | Ok 🆗 | Low ❌ | Ok 🆗 |
+
+| Small documents accuracy| Low ❌ | High ✅ | High ✅ |
+
+| Large documents accuracy| High ✅ | Low ❌ | Ok 🆗 |
+
+| Unknown tokens handling | Yes ✅ | Bad ❌ | Yes ✅ |
+
+| Multi-lingual support | Yes ✅ | No ❌ | Yes ✅ |
+
+| Best Match | Yes ✅ | No ❌ | Yes ✅ |
+
+
+
+
+
+Starting from Qdrant v1.10.0, BM42 can be used in Qdrant via FastEmbed inference.
+
+
+
+Let's see how you can set up a collection for hybrid search with BM42 and [jina.ai](https://jina.ai/embeddings/) dense embeddings.
+
+
+
+```http
+
+PUT collections/my-hybrid-collection
+
+{
+
+ ""vectors"": {
+
+ ""jina"": {
+
+ ""size"": 768,
+
+ ""distance"": ""Cosine""
+
+ }
+
+ },
+
+ ""sparse_vectors"": {
+
+ ""bm42"": {
+
+ ""modifier"": ""idf"" // <--- This parameter enables the IDF calculation
+
+ }
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient()
+
+
+
+client.create_collection(
+
+ collection_name=""my-hybrid-collection"",
+
+ vectors_config={
+
+ ""jina"": models.VectorParams(
+
+ size=768,
+
+ distance=models.Distance.COSINE,
+
+ )
+
+ },
+
+ sparse_vectors_config={
+
+ ""bm42"": models.SparseVectorParams(
+
+ modifier=models.Modifier.IDF,
+
+ )
+
+ }
+
+)
+
+```
+
+
+
+The search query will retrieve the documents with both dense and sparse embeddings and combine the scores
+
+using the Reciprocal Rank Fusion (RRF) algorithm.
+
+
+
+```python
+
+from fastembed import SparseTextEmbedding, TextEmbedding
+
+
+
+query_text = ""best programming language for beginners?""
+
+
+
+model_bm42 = SparseTextEmbedding(model_name=""Qdrant/bm42-all-minilm-l6-v2-attentions"")
+
+model_jina = TextEmbedding(model_name=""jinaai/jina-embeddings-v2-base-en"")
+
+
+
+sparse_embedding = list(model_bm42.query_embed(query_text))[0]
+
+dense_embedding = list(model_jina.query_embed(query_text))[0]
+
+
+
+client.query_points(
+
+ collection_name=""my-hybrid-collection"",
+
+ prefetch=[
+
+ models.Prefetch(query=sparse_embedding.as_object(), using=""bm42"", limit=10),
+
+ models.Prefetch(query=dense_embedding.tolist(), using=""jina"", limit=10),
+
+ ],
+
+ query=models.FusionQuery(fusion=models.Fusion.RRF), # <--- Combine the scores
+
+ limit=10
+
+)
+
+
+
+```
+
+
+
+### Benchmarks
+
+
+
+To prove the point further, we have conducted some benchmarks to highlight the cases where BM42 outperforms BM25.
+
+Please note that we didn't intend to make an exhaustive evaluation, as we are presenting a new approach, not a new model.
+
+
+
+For our experiments, we chose the [quora](https://huggingface.co/datasets/BeIR/quora) dataset, which represents a question-deduplication task ~~the Question-Answering task~~.
+
+
+
+
+
+The typical example of the dataset is the following:
+
+
+
+```text
+
+{""_id"": ""109"", ""text"": ""How GST affects the CAs and tax officers?""}
+
+{""_id"": ""110"", ""text"": ""Why can't I do my homework?""}
+
+{""_id"": ""111"", ""text"": ""How difficult is it get into RSI?""}
+
+```
+
+
+
+As you can see, the texts are pretty short, so there is not much statistical information to rely on.
+
+
+
+After encoding with BM42, the average vector size is only **5.6 elements per document**.
+
+
+
+With `datatype: uint8` available in Qdrant, the total size of the sparse vector index is about **13MB** for ~530k documents.
+
+
+
+As a reference point, we use:
+
+
+
+- BM25 with tantivy
+
+- the [sparse vector BM25 implementation](https://github.com/qdrant/bm42_eval/blob/master/index_bm25_qdrant.py) with the same preprocessing pipeline as for BM42: tokenization, stop-word removal, and lemmatization
+
+
+
+| | BM25 (tantivy) | BM25 (Sparse) | BM42 |
+
+|----------------------|-------------------|---------------|----------|
+
+| ~~Precision @ 10~~ * | ~~0.45~~ | ~~0.45~~ | ~~0.49~~ |
+
+| Recall @ 10 | ~~0.71~~ **0.89** | 0.83 | 0.85 |
+
+
+
+
+
+ \* - values were corrected after the publication due to a mistake in the evaluation script.
+
+
+
+
+
+
+
+To make our benchmarks transparent, we have published scripts we used for the evaluation: see [github repo](https://github.com/qdrant/bm42_eval).
+
+
+
+
+
+Please note that neither BM25 nor BM42 works well on its own in a production environment.
+
+Best results are achieved with a combination of sparse and dense embeddings in a hybrid approach.
+
+In this scenario, the two models are complementary to each other.
+
+The sparse model is responsible for exact token matching, while the dense model is responsible for semantic matching.
+
+
+
+Some more advanced models might outperform the default `sentence-transformers/all-MiniLM-L6-v2` model we used.
+
+We encourage developers involved in training embedding models to include a way to extract attention weights and contribute to the BM42 backend.
+
+
+
+## Fostering curiosity and experimentation
+
+
+
+Despite all of its advantages, BM42 is not always a silver bullet.
+
+For large documents without chunks, BM25 might still be a better choice.
+
+
+
+There might be a smarter way to extract the importance information from the transformer. There could be a better method to weigh IDF against attention scores.
+
+
+
+Qdrant does not specialize in model training. Our core project is the search engine itself. However, we understand that we are not operating in a vacuum. By introducing BM42, we are stepping up to empower our community with novel tools for experimentation.
+
+
+
+We truly believe that the sparse vector approach sits at exactly the right level of abstraction to yield both powerful and flexible results.
+
+
+
+Many of you are sharing your recent Qdrant projects in our [Discord channel](https://discord.com/invite/qdrant). Feel free to try out BM42 and let us know what you come up with.
+
+
+",articles/bm42.md
+"---
+
+title: ""Binary Quantization - Vector Search, 40x Faster ""
+
+short_description: ""Binary Quantization is a newly introduced mechanism of reducing the memory footprint and increasing performance""
+
+description: ""Binary Quantization is a newly introduced mechanism of reducing the memory footprint and increasing performance""
+
+social_preview_image: /articles_data/binary-quantization/social_preview.png
+
+small_preview_image: /articles_data/binary-quantization/binary-quantization-icon.svg
+
+preview_dir: /articles_data/binary-quantization/preview
+
+weight: -40
+
+author: Nirant Kasliwal
+
+author_link: https://nirantk.com/about/
+
+date: 2023-09-18T13:00:00+03:00
+
+draft: false
+
+keywords:
+
+ - vector search
+
+ - binary quantization
+
+ - memory optimization
+
+---
+
+
+
+# Optimizing High-Dimensional Vectors with Binary Quantization
+
+
+
+Qdrant is built to handle typical scaling challenges: high throughput, low latency and efficient indexing. **Binary quantization (BQ)** is our latest attempt to give our customers the edge they need to scale efficiently. This feature is particularly excellent for collections with large vector lengths and a large number of points.
+
+
+
+Our results are dramatic: Using BQ will reduce your memory consumption and improve retrieval speeds by up to 40x.
+
+
+
+As is the case with other quantization methods, these benefits come at the cost of recall degradation. However, our implementation lets you balance the tradeoff between speed and recall accuracy at time of search, rather than time of index creation.
+
+
+
+The rest of this article will cover:
+
+1. The importance of binary quantization
+
+2. Basic implementation using our Python client
+
+3. Benchmark analysis and usage recommendations
+
+
+
+## What is Binary Quantization?
+
+Binary quantization (BQ) converts any vector embedding of floating point numbers into a vector of binary or boolean values. This feature is an extension of our past work on [scalar quantization](/articles/scalar-quantization/) where we convert `float32` to `uint8` and then leverage a specific SIMD CPU instruction to perform fast vector comparison.
+
+
+
+![What is binary quantization](/articles_data/binary-quantization/bq-2.png)
+
+
+
+**This binarization function is how we convert a range to binary values. All numbers greater than zero are marked as 1; numbers that are zero or less become 0.**
+
+
+
+The benefit of reducing the vector embeddings to binary values is that boolean operations are very fast and need significantly fewer CPU instructions. In exchange for reducing our 32-bit embeddings to 1-bit embeddings, we can see up to a 40x retrieval speed gain!
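+
+
+
+As a minimal sketch of the idea (not Qdrant's internal implementation), binarization and the resulting cheap comparison look roughly like this with NumPy:
+
+
+
+```python
+
+import numpy as np
+
+embedding = np.array([0.12, -0.4, 0.03, -0.01, 0.9, -0.33, 0.0, 0.25], dtype=np.float32)
+
+# Binarize: values greater than zero become 1, everything else becomes 0
+
+binary = (embedding > 0).astype(np.uint8)
+
+print(binary)  # [1 0 1 0 1 0 0 1]
+
+# Similarity between binary vectors reduces to the Hamming distance,
+
+# which maps to cheap XOR + popcount instructions on packed bits
+
+other = np.array([1, 0, 1, 1, 1, 0, 0, 0], dtype=np.uint8)
+
+print(int(np.count_nonzero(binary != other)))  # 2
+
+```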
+
+
+
+One of the reasons vector search still works with such a high compression rate is that these large vectors are over-parameterized for retrieval. This is because they are designed for ranking, clustering, and similar use cases, which typically need more information encoded in the vector.
+
+
+
+For example, the 1536-dimension OpenAI embedding performs worse at retrieval and ranking than open-source counterparts with 384 dimensions. Specifically, it scores 49.25 on the same [Embedding Retrieval Benchmark](https://huggingface.co/spaces/mteb/leaderboard) where the open-source `bge-small` scores 51.82. This 2.57-point difference adds up quickly.
+
+
+
+Our implementation of quantization achieves a good balance between full, large vectors at ranking time and binary vectors at search and retrieval time. It also has the ability for you to adjust this balance depending on your use case.
+
+
+
+## Faster search and retrieval
+
+
+
+Unlike product quantization, binary quantization does not rely on reducing the search space for each probe. Instead, we build a binary index that helps us achieve large increases in search speed.
+
+
+
+![Speed by quantization method](/articles_data/binary-quantization/bq-3.png)
+
+
+
+HNSW is an approximate nearest neighbor search algorithm. This means our accuracy improves, up to a point of diminishing returns, as we check the index for more similar candidates. In the context of binary quantization, this is referred to as the **oversampling rate**.
+
+
+
+For example, if `oversampling=2.0` and `limit=100`, then 200 vectors will first be selected using the quantized index. For those 200 vectors, the full 32-bit vectors are then used to rescore and produce a much more accurate 100-item result set. Instead of doing a full HNSW search, we oversample a preliminary search and then only do the full-precision comparison on this much smaller set of vectors.
+
+
+
+## Improved storage efficiency
+
+
+
+The following diagram shows the binarization function, whereby we reduce 32 bits of storage to 1 bit of information.
+
+
+
+Text embeddings can contain over 1024 elements of 32-bit floating point numbers. For example, OpenAI embeddings are 1536-element vectors, which means each vector takes about 6 kB just to store (1536 × 4 bytes).
+
+
+
+![Improved storage efficiency](/articles_data/binary-quantization/bq-4.png)
+
+
+
+In addition to storing the vector, we also need to maintain an index for faster search and retrieval. Qdrant’s formula to estimate overall memory consumption is:
+
+
+
+`memory_size = 1.5 * number_of_vectors * vector_dimension * 4 bytes`
+
+
+
+For 100K OpenAI Embedding (`ada-002`) vectors we would need 900 Megabytes of RAM and disk space. This consumption can start to add up rapidly as you create multiple collections or add more items to the database.
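+
+
+
+Plugging the numbers into the formula above as a quick sanity check:
+
+
+
+```python
+
+number_of_vectors = 100_000
+
+vector_dimension = 1536  # OpenAI ada-002
+
+memory_bytes = 1.5 * number_of_vectors * vector_dimension * 4
+
+print(f""{memory_bytes / 1024 ** 2:.0f} MB"")  # ~879 MB, roughly the 900 MB quoted above
+
+```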
+
+
+
+**With binary quantization, those same 100K OpenAI vectors only require 128 MB of RAM.** We benchmarked this result using methods similar to those covered in our [Scalar Quantization memory estimation](/articles/scalar-quantization/#benchmarks).
+
+
+
+This reduction in RAM usage is achieved through the compression that happens in the binary conversion. HNSW and quantized vectors will live in RAM for quick access, while original vectors can be offloaded to disk only. For searching, quantized HNSW will provide oversampled candidates, then they will be re-evaluated using their disk-stored original vectors to refine the final results. All of this happens under the hood without any additional intervention on your part.
+
+
+
+### When should you not use BQ?
+
+
+
+Since this method exploits the over-parameterization of embeddings, you can expect poorer results for small embeddings, i.e. those with fewer than 1024 dimensions. With a smaller number of elements, not enough information is maintained in the binary vector to achieve good results.
+
+
+
+You will still get faster boolean operations and reduced RAM usage, but the accuracy degradation might be too high.
+
+
+
+## Sample implementation
+
+
+
+Now that we have introduced you to binary quantization, let’s try out a basic implementation. In this example, we will be using OpenAI and Cohere with Qdrant.
+
+
+
+#### Create a collection with Binary Quantization enabled
+
+
+
+Here is what you should do at indexing time when you create the collection:
+
+
+
+1. We store all the ""full"" vectors on disk.
+
+2. Then we set the binary embeddings to be in RAM.
+
+
+
+By default, both the full vectors and the BQ vectors get stored in RAM. We move the full vectors to disk because this saves memory and allows us to store more vectors in RAM. We then explicitly keep the binary vectors in memory by setting `always_ram=True`.
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+# Connect to our Qdrant server
+
+client = QdrantClient(
+
+ url=""http://localhost:6333"",
+
+ prefer_grpc=True,
+
+)
+
+
+
+#Create the collection to hold our embeddings
+
+# on_disk=True and the quantization_config are the areas to focus on
+
+collection_name = ""binary-quantization""
+
+if not client.collection_exists(collection_name):
+
+ client.create_collection(
+
+ collection_name=f""{collection_name}"",
+
+ vectors_config=models.VectorParams(
+
+ size=1536,
+
+ distance=models.Distance.DOT,
+
+ on_disk=True,
+
+ ),
+
+ optimizers_config=models.OptimizersConfigDiff(
+
+ default_segment_number=5,
+
+ indexing_threshold=0,
+
+ ),
+
+ quantization_config=models.BinaryQuantization(
+
+ binary=models.BinaryQuantizationConfig(always_ram=True),
+
+ ),
+
+ )
+
+```
+
+
+
+#### What is happening in the OptimizerConfig?
+
+
+
+We're setting `indexing_threshold` to 0, i.e. disabling indexing. This allows faster uploads of vectors and payloads. We will turn it back on below, once all the data is loaded.
+
+
+
+#### Next, we upload our vectors to this collection and then re-enable indexing:
+
+
+
+```python
+
+batch_size = 10000
+
+client.upload_collection(
+
+ collection_name=collection_name,
+
+ ids=range(len(dataset)),
+
+ vectors=dataset[""openai""],
+
+ payload=[
+
+ {""text"": x} for x in dataset[""text""]
+
+ ],
+
+    batch_size=batch_size,
+
+    parallel=10, # based on the machine
+
+)
+
+```
+
+
+
+Enable indexing again:
+
+
+
+```python
+
+client.update_collection(
+
+ collection_name=f""{collection_name}"",
+
+ optimizer_config=models.OptimizersConfigDiff(
+
+ indexing_threshold=20000
+
+ )
+
+)
+
+```
+
+#### Configure the search parameters:
+
+
+
+When setting search parameters, we specify that we want to use `oversampling` and `rescore`. Here is an example snippet:
+
+
+
+```python
+
+client.search(
+
+ collection_name=""{collection_name}"",
+
+ query_vector=[0.2, 0.1, 0.9, 0.7, ...],
+
+ search_params=models.SearchParams(
+
+ quantization=models.QuantizationSearchParams(
+
+ ignore=False,
+
+ rescore=True,
+
+ oversampling=2.0,
+
+ )
+
+ )
+
+)
+
+```
+
+
+
+After Qdrant pulls the oversampled vector set, the full vectors (say, 1536 dimensions for OpenAI) are then pulled up from disk. Qdrant computes the nearest neighbors with the query vector and returns the accurate, rescored order. This method produces much more accurate results. We enabled this by setting `rescore=True`.
+
+
+
+These two parameters are how you are going to balance speed versus accuracy. The larger the size of your oversample, the more items you need to read from disk and the more elements you have to search with the relatively slower full vector index. On the other hand, doing this will produce more accurate results.
+
+
+
+If you have lower accuracy requirements, you can even try a small oversample without rescoring. Or, depending on your dataset and your accuracy versus speed requirements, you can search the binary index alone with no rescoring, i.e. leave those two parameters out of the search query.
+
+
+
+## Benchmark results
+
+
+
+We retrieved some early results on the relationship between limit and oversampling using the DBpedia OpenAI 1M vector dataset. We ran all these experiments on a Qdrant instance where 100K vectors were indexed and used 100 random queries.
+
+
+
+We varied the 3 parameters that will affect query time and accuracy: limit, rescore and oversampling. We offer these as an initial exploration of this new feature. You are highly encouraged to reproduce these experiments with your data sets.
+
+
+
+> Aside: Since this is a new innovation in vector databases, we are keen to hear feedback and results. [Join our Discord server](https://discord.gg/Qy6HCJK9Dc) for further discussion!
+
+
+
+**Oversampling:**
+
+In the figure below, we illustrate the relationship between recall and number of candidates:
+
+
+
+![Correct vs candidates](/articles_data/binary-quantization/bq-5.png)
+
+
+
+We see that ""correct"" results i.e. recall increases as the number of potential ""candidates"" increase (limit x oversampling). To highlight the impact of changing the `limit`, different limit values are broken apart into different curves. For example, we see that the lowest recall for limit 50 is around 94 correct, with 100 candidates. This also implies we used an oversampling of 2.0
+
+
+
+As oversampling increases, we see a general improvement in results – but that does not hold in every case.
+
+
+
+**Rescore:**
+
+As expected, rescoring increases the time it takes to return a query.
+
+We also repeated the experiment with oversampling except this time we looked at how rescore impacted result accuracy.
+
+
+
+![Relationship between limit and rescore on correct](/articles_data/binary-quantization/bq-7.png)
+
+
+
+**Limit:**
+
+We experimented with limits from Top 1 to Top 50 and were able to reach 100% recall at limit 50, with rescore=True, in an index with 100K vectors.
+
+
+
+## Recommendations
+
+
+
+Quantization gives you the option to make tradeoffs against other parameters:
+
+- Dimension count/embedding size
+
+- Throughput and latency requirements
+
+- Recall requirements
+
+
+
+If you're working with OpenAI or Cohere embeddings, we recommend the following oversampling settings:
+
+
+
+|Method|Dimensionality|Test Dataset|Recall|Oversampling|
+
+|-|-|-|-|-|
+
+|OpenAI text-embedding-3-large|3072|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-3072-1M) | 0.9966|3x|
+
+|OpenAI text-embedding-3-small|1536|[DBpedia 100K](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-small-1536-100K)| 0.9847|3x|
+
+|OpenAI text-embedding-3-large|1536|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M)| 0.9826|3x|
+
+|Cohere AI embed-english-v2.0|4096|[Wikipedia](https://huggingface.co/datasets/nreimers/wikipedia-22-12-large/tree/main) 1M|0.98|2x|
+
+|OpenAI text-embedding-ada-002|1536|[DbPedia 1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) |0.98|4x|
+
+|Gemini|768|No Open Data| 0.9563|3x|
+
+|Mistral Embed|768|No Open Data| 0.9445 |3x|
+
+
+
+If you determine that binary quantization is appropriate for your datasets and queries then we suggest the following:
+
+- Binary Quantization with always_ram=True
+
+- Vectors stored on disk
+
+- Oversampling=2.0 (or more)
+
+- Rescore=True
+
+
+
+## What's next?
+
+
+
+Binary quantization is exceptional if you need to work with large volumes of data under high recall expectations. You can try this feature either by spinning up a [Qdrant container image](https://hub.docker.com/r/qdrant/qdrant) locally or by having us create one for you through a [free account](https://cloud.qdrant.io/login) in our cloud-hosted service.
+
+
+
+This article gives examples of datasets and configurations you can use to get going. Our documentation covers [adding large datasets](/documentation/tutorials/bulk-upload/) to your Qdrant instance as well as [more quantization methods](/documentation/guides/quantization/).
+
+
+
+If you have any feedback, drop us a note on Twitter or LinkedIn to tell us about your results. [Join our lively Discord Server](https://discord.gg/Qy6HCJK9Dc) if you want to discuss BQ with like-minded people!
+",articles/binary-quantization.md
+"---
+
+title: Introducing Qdrant 0.11
+
+short_description: Check out what's new in Qdrant 0.11
+
+description: Replication support is the most important change introduced by Qdrant 0.11. Check out what else has been added!
+
+preview_dir: /articles_data/qdrant-0-11-release/preview
+
+small_preview_image: /articles_data/qdrant-0-11-release/announcement-svgrepo-com.svg
+
+social_preview_image: /articles_data/qdrant-0-11-release/preview/social_preview.jpg
+
+weight: 65
+
+author: Kacper Łukawski
+
+author_link: https://medium.com/@lukawskikacper
+
+date: 2022-10-26T13:55:00+02:00
+
+draft: false
+
+---
+
+
+
+We are excited to [announce the release of Qdrant v0.11](https://github.com/qdrant/qdrant/releases/tag/v0.11.0),
+
+which introduces a number of new features and improvements.
+
+
+
+## Replication
+
+
+
+One of the key features in this release is replication support, which allows Qdrant to provide a high availability
+
+setup with distributed deployment out of the box. This, combined with sharding, enables you to horizontally scale
+
+both the size of your collections and the throughput of your cluster. This means that you can use Qdrant to handle
+
+large amounts of data without sacrificing performance or reliability.
+
+
+
+## Administration API
+
+
+
+Another new feature is the administration API, which allows you to disable write operations to the service. This is
+
+useful in situations where search availability is more critical than updates, and can help prevent issues like memory
+
+usage watermarks from affecting your searches.
+
+
+
+## Exact search
+
+
+
+We have also added the ability to report indexed payload points in the info API, which allows you to verify that
+
+payload values were properly formatted for indexing. In addition, we have introduced a new `exact` search parameter
+
+that allows you to force exact searches of vectors, even if an ANN index is built. This can be useful for validating
+
+the accuracy of your HNSW configuration.
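+
+
+
+For instance, a full-scan search that bypasses the ANN index can be requested roughly like this with the Python client (the collection name and query vector are placeholders):
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(""http://localhost:6333"")
+
+exact_results = client.search(
+
+    collection_name=""my-collection"",
+
+    query_vector=[0.2, 0.1, 0.9, 0.7],
+
+    search_params=models.SearchParams(exact=True),  # force exact (non-ANN) search
+
+    limit=10,
+
+)
+
+```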
+
+
+
+## Backward compatibility
+
+
+
+This release is backward compatible with v0.10.5 storage in single node deployment, but unfortunately, distributed
+
+deployment is not compatible with previous versions due to the large number of changes required for the replica set
+
+implementation. However, clients are tested for backward compatibility with the v0.10.x service.
+",articles/qdrant-0-11-release.md
+"---
+
+title: Finding errors in datasets with Similarity Search
+
+short_description: Finding errors in datasets with distance-based methods
+
+description: Improving quality of text-and-images datasets on the online furniture marketplace example.
+
+preview_dir: /articles_data/dataset-quality/preview
+
+social_preview_image: /articles_data/dataset-quality/preview/social_preview.jpg
+
+small_preview_image: /articles_data/dataset-quality/icon.svg
+
+weight: 8
+
+author: George Panchuk
+
+author_link: https://medium.com/@george.panchuk
+
+date: 2022-07-18T10:18:00.000Z
+
+# aliases: [ /articles/dataset-quality/ ]
+
+---
+
+Nowadays, people create a huge number of applications of various types and solve problems in different areas.
+
+Despite such diversity, they have something in common - they need to process data.
+
+Real-world data is a living structure: it grows day by day, changes a lot, and becomes harder to work with.
+
+
+
+In some cases, you need to categorize or label your data, which can be a tough problem given its scale.
+
+The process of splitting or labelling is error-prone and these errors can be very costly.
+
+Imagine that you failed to achieve the desired quality of the model due to inaccurate labels.
+
+Worse, your users are faced with a lot of irrelevant items, unable to find what they need and getting annoyed by it.
+
+Thus, you get poor retention, and it directly impacts company revenue.
+
+It is really important to avoid such errors in your data.
+
+
+
+## Furniture web-marketplace
+
+
+
+Let’s say you work on an online furniture marketplace.
+
+
+
+{{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/furniture_marketplace.png caption=""Furniture marketplace"" >}}
+
+
+
+In this case, to ensure a good user experience, you need to split items into different categories: tables, chairs, beds, etc.
+
+One can arrange all the items manually and spend a lot of money and time on this.
+
+There is also another way: train a classification or similarity model and rely on it.
+
+With both approaches it is difficult to avoid mistakes.
+
+Manual labelling is a tedious task, but it requires concentration.
+
+Once you get distracted or your eyes glaze over, mistakes won't keep you waiting.
+
+The model also can be wrong.
+
+You can analyse the most uncertain predictions and fix them, but the other errors will still leak to the site.
+
+There is no silver bullet. You should validate your dataset thoroughly, and you need tools for this.
+
+
+
+When you are sure that there are not many objects placed in the wrong category, they can be considered outliers or anomalies.
+
+Thus, you can train a model or a bunch of models capable of looking for anomalies, e.g. an autoencoder and a classifier on top of it.
+
+However, this is again a resource-intensive task, both in terms of time and manual labour, since labels have to be provided for classification.
+
+On the contrary, if the proportion of out-of-place elements is high enough, outlier search methods are likely to be useless.
+
+
+
+### Similarity search
+
+
+
+The idea behind similarity search is to measure semantic similarity between related parts of the data.
+
+E.g. between category title and item images.
+
+The hypothesis is that unsuitable items will be less similar.
+
+
+
+We can't directly compare text and image data.
+
+For this we need an intermediate representation - embeddings.
+
+Embeddings are just numeric vectors containing semantic information.
+
+We can apply a pre-trained model to our data to produce these vectors.
+
+After embeddings are created, we can measure the distances between them.
+
+
+
+Assume we want to search for something other than a single bed in the «Single beds» category.
+
+
+
+{{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/similarity_search.png caption=""Similarity search"" >}}
+
+
+
+One of the possible pipelines would look like this:
+
+- Take the name of the category as an anchor and calculate the anchor embedding.
+
+- Calculate embeddings for images of each object placed into this category.
+
+- Compare obtained anchor and object embeddings.
+
+- Find the furthest.
+
+
+
+For instance, we can do it with the [CLIP](https://huggingface.co/sentence-transformers/clip-ViT-B-32-multilingual-v1) model.
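+
+
+
+A minimal sketch of this pipeline with the `sentence-transformers` CLIP model (using `clip-ViT-B-32` here for simplicity; the category name and image paths are placeholders) could look like this:
+
+
+
+```python
+
+from PIL import Image
+
+from sentence_transformers import SentenceTransformer, util
+
+model = SentenceTransformer(""clip-ViT-B-32"")
+
+# Anchor: the category title
+
+anchor_embedding = model.encode(""Single beds"")
+
+# Objects: images of the items currently placed in this category
+
+image_paths = [""item_1.jpg"", ""item_2.jpg"", ""item_3.jpg""]
+
+image_embeddings = model.encode([Image.open(path) for path in image_paths])
+
+# Cosine similarity between the anchor and each item image
+
+similarities = util.cos_sim(anchor_embedding, image_embeddings)[0]
+
+# The least similar items are the most suspicious candidates for being misplaced
+
+for path, score in sorted(zip(image_paths, similarities.tolist()), key=lambda x: x[1]):
+
+    print(path, round(score, 3))
+
+```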
+
+
+
+{{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/category_vs_image_transparent.png caption=""Category vs. Image"" >}}
+
+
+
+We can also calculate embeddings for titles instead of images, or even for both of them to find more errors.
+
+
+
+{{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/category_vs_name_and_image_transparent.png caption=""Category vs. Title and Image"" >}}
+
+
+
+As you can see, different approaches can find new errors or the same ones.
+
+Stacking several techniques or even the same techniques with different models may provide better coverage.
+
+Hint: Caching embeddings for the same models and reusing them among different methods can significantly speed up your lookup.
+
+
+
+### Diversity search
+
+
+
+Since pre-trained models have only general knowledge about the data, they can still leave some misplaced items undetected.
+
+You might find yourself in a situation where the model focuses on unimportant features, selects a lot of irrelevant elements, and fails to find genuine errors.
+
+To mitigate this issue, you can perform a diversity search.
+
+
+
+Diversity search is a method for finding the most distinctive examples in the data.
+
+Like similarity search, it also operates on embeddings and measures the distances between them.
+
+The difference lies in deciding which point should be extracted next.
+
+
+
+Let's imagine how to get 3 points with similarity search and then with diversity search.
+
+
+
+Similarity:
+
+1. Calculate distance matrix
+
+2. Choose your anchor
+
+3. Get a vector corresponding to the distances from the selected anchor from the distance matrix
+
+4. Sort fetched vector
+
+5. Get top-3 embeddings
+
+
+
+Diversity:
+
+1. Calculate distance matrix
+
+2. Initialize starting point (randomly or according to the certain conditions)
+
+3. Get a distance vector for the selected starting point from the distance matrix
+
+4. Find the furthest point
+
+5. Get a distance vector for the new point
+
+6. Find the furthest point from all of the already fetched points (see the sketch after the figure below)
+
+
+
+{{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/diversity_transparent.png caption=""Diversity search"" >}}
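+
+
+
+Here is a small sketch of the diversity selection steps above (farthest-point sampling over a precomputed distance matrix; the embeddings are random placeholders):
+
+
+
+```python
+
+import numpy as np
+
+rng = np.random.default_rng(42)
+
+embeddings = rng.normal(size=(100, 32))  # placeholder embeddings
+
+# Pairwise Euclidean distance matrix
+
+distances = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
+
+def diversity_search(distances, n_points, start=0):
+
+    selected = [start]
+
+    min_dist = distances[start].copy()  # distance of every point to the closest selected point
+
+    for _ in range(n_points - 1):
+
+        next_point = int(np.argmax(min_dist))  # the furthest point from the selected set
+
+        selected.append(next_point)
+
+        min_dist = np.minimum(min_dist, distances[next_point])
+
+    return selected
+
+print(diversity_search(distances, n_points=3))
+
+```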
+
+
+
+Diversity search utilizes the very same embeddings, and you can reuse them.
+
+If your data is huge and does not fit into memory, vector search engines like [Qdrant](https://github.com/qdrant/qdrant) might be helpful.
+
+
+
+Although the described methods can be used independently, they are easy to combine, which improves detection capabilities.
+
+If the quality remains insufficient, you can fine-tune the models using a similarity learning approach (e.g. with [Quaterion](https://quaterion.qdrant.tech)), both to provide a better representation of your data and to pull apart dissimilar objects in space.
+
+
+
+## Conclusion
+
+
+
+In this article, we highlighted distance-based methods for finding errors in categorized datasets.
+
+We showed how to find incorrectly placed items in a furniture web store.
+
+I hope these methods will help you catch sneaky samples that leaked into the wrong categories in your data and make your users' experience more enjoyable.
+
+
+
+Poke the [demo](https://dataset-quality.qdrant.tech).
+
+
+
+Stay tuned :)
+
+
+
+
+
+
+",articles/dataset-quality.md
+"---
+
+title: ""What is a Sparse Vector? How to Achieve Vector-based Hybrid Search""
+
+short_description: ""Discover sparse vectors, their function, and significance in modern data processing, including methods like SPLADE for efficient use.""
+
+description: ""Learn what sparse vectors are, how they work, and their importance in modern data processing. Explore methods like SPLADE for creating and leveraging sparse vectors efficiently.""
+
+social_preview_image: /articles_data/sparse-vectors/social_preview.png
+
+small_preview_image: /articles_data/sparse-vectors/sparse-vectors-icon.svg
+
+preview_dir: /articles_data/sparse-vectors/preview
+
+weight: -100
+
+author: Nirant Kasliwal
+
+author_link: https://nirantk.com/about
+
+date: 2023-12-09T13:00:00+03:00
+
+draft: false
+
+keywords:
+
+ - sparse vectors
+
+ - SPLADE
+
+ - hybrid search
+
+ - vector search
+
+---
+
+
+
+Think of a library with a vast index card system. Each index card only has a few keywords marked out (sparse vector) of a large possible set for each book (document). This is what sparse vectors enable for text.
+
+
+
+## What are sparse and dense vectors?
+
+
+
+Sparse vectors are like the Marie Kondo of data—keeping only what sparks joy (or relevance, in this case).
+
+
+
+Consider a simplified example of 2 documents, each with 200 words. A dense vector would have several hundred non-zero values, whereas a sparse vector could have much fewer, say only 20 non-zero values.
+
+
+
+In this example, we assume it selects only 2 words or tokens from each document; the rest of the values are zero. This is why it's called a sparse vector.
+
+
+
+```python
+
+dense = [0.2, 0.3, 0.5, 0.7, ...] # several hundred floats
+
+sparse = [{331: 0.5}, {14136: 0.7}] # 20 key value pairs
+
+```
+
+
+
+The numbers 331 and 14136 map to specific tokens in the vocabulary, e.g. `['chocolate', 'icecream']`.
+
+
+
+The tokens aren't always words, though; sometimes they can be sub-words: `['ch', 'ocolate']` too.
+
+
+
+They're pivotal in information retrieval, especially in ranking and search systems. BM25, a standard ranking function used by search engines like [Elasticsearch](https://www.elastic.co/blog/practical-bm25-part-2-the-bm25-algorithm-and-its-variables?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors), exemplifies this. BM25 calculates the relevance of documents to a given search query.
+
+
+
+BM25's capabilities are well-established, yet it has its limitations.
+
+
+
+BM25 relies solely on the frequency of words in a document and does not attempt to comprehend the meaning or the contextual importance of the words. Additionally, it requires the computation of the entire corpus's statistics in advance, posing a challenge for large datasets.
+
+
+
+Sparse vectors harness the power of neural networks to surmount these limitations while retaining the ability to query exact words and phrases.
+
+They excel at handling large text data, making them crucial in modern data processing and marking an advancement over traditional methods such as BM25.
+
+
+
+# Understanding sparse vectors
+
+
+
+Sparse vectors are a representation where each dimension corresponds to a word or subword, greatly aiding in interpreting document rankings. This clarity is why sparse vectors are essential in modern search and recommendation systems, complementing the meaning-rich dense embedding vectors.
+
+
+
+Dense vectors from models like OpenAI Ada-002 or Sentence Transformers contain non-zero values for every element. In contrast, sparse vectors focus on relative word weights per document, with most values being zero. This results in a more efficient and interpretable system, especially in text-heavy applications like search.
+
+
+
+Sparse Vectors shine in domains and scenarios where many rare keywords or specialized terms are present.
+
+For example, in the medical domain, many rare terms are not present in the general vocabulary, so general-purpose dense vectors cannot capture the nuances of the domain.
+
+
+
+
+
+| Feature | Sparse Vectors | Dense Vectors |
+
+|---------------------------|---------------------------------------------|----------------------------------------------|
+
+| **Data Representation** | Majority of elements are zero | All elements are non-zero |
+
+| **Computational Efficiency** | Generally higher, especially in operations involving zero elements | Lower, as operations are performed on all elements |
+
+| **Information Density** | Less dense, focuses on key features | Highly dense, capturing nuanced relationships |
+
+| **Example Applications** | Text search, Hybrid search | [RAG](https://qdrant.tech/articles/what-is-rag-in-ai/), many general machine learning tasks |
+
+
+
+Where do sparse vectors fail though? They're not great at capturing nuanced relationships between words. For example, they can't capture the relationship between ""king"" and ""queen"" as well as dense vectors.
+
+
+
+# SPLADE
+
+
+
+Let's check out [SPLADE](https://europe.naverlabs.com/research/computer-science/splade-a-sparse-bi-encoder-bert-based-model-achieves-effective-and-efficient-full-text-document-ranking/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors), an excellent way to make sparse vectors. Let's look at some numbers first. Higher is better:
+
+
+
+| Model | MRR@10 (MS MARCO Dev) | Type |
+
+|--------------------|---------|----------------|
+
+| BM25 | 0.184 | Sparse |
+
+| TCT-ColBERT | 0.359 | Dense |
+
+| doc2query-T5 [link](https://github.com/castorini/docTTTTTquery) | 0.277 | Sparse |
+
+| SPLADE | 0.322 | Sparse |
+
+| SPLADE-max | 0.340 | Sparse |
+
+| SPLADE-doc | 0.322 | Sparse |
+
+| DistilSPLADE-max | 0.368 | Sparse |
+
+
+
+All numbers are from [SPLADEv2](https://arxiv.org/abs/2109.10086). MRR is [Mean Reciprocal Rank](https://www.wikiwand.com/en/Mean_reciprocal_rank#References), a standard metric for ranking. [MS MARCO](https://microsoft.github.io/MSMARCO-Passage-Ranking/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) is a dataset for evaluating ranking and retrieval for passages.
+
+
+
+SPLADE is quite flexible as a method, with regularization knobs that can be tuned to obtain [different models](https://github.com/naver/splade) as well:
+
+
+
+> SPLADE is more a class of models rather than a model per se: depending on the regularization magnitude, we can obtain different models (from very sparse to models doing intense query/doc expansion) with different properties and performance.
+
+
+
+First, let's look at how to create a sparse vector. Then, we'll look at the concepts behind SPLADE.
+
+
+
+## Creating a sparse vector
+
+
+
+We'll explore two different ways to create a sparse vector. The higher-performance way is to create sparse vectors with dedicated document and query encoders. Here, we'll look at a simpler approach -- using the same model for both document and query. We will get a dictionary of token ids and their corresponding weights for a sample text representing a document.
+
+
+
+If you'd like to follow along, here's a [Colab Notebook](https://colab.research.google.com/gist/NirantK/ad658be3abefc09b17ce29f45255e14e/splade-single-encoder.ipynb), [alternate link](https://gist.github.com/NirantK/ad658be3abefc09b17ce29f45255e14e) with all the code.
+
+
+
+### Setting Up
+
+```python
+
+from transformers import AutoModelForMaskedLM, AutoTokenizer
+
+
+
+model_id = ""naver/splade-cocondenser-ensembledistil""
+
+
+
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+model = AutoModelForMaskedLM.from_pretrained(model_id)
+
+
+
+text = """"""Arthur Robert Ashe Jr. (July 10, 1943 – February 6, 1993) was an American professional tennis player. He won three Grand Slam titles in singles and two in doubles.""""""
+
+```
+
+
+
+### Computing the sparse vector
+
+```python
+
+import torch
+
+
+
+
+
+def compute_vector(text):
+
+ """"""
+
+ Computes a vector from logits and attention mask using ReLU, log, and max operations.
+
+ """"""
+
+ tokens = tokenizer(text, return_tensors=""pt"")
+
+ output = model(**tokens)
+
+ logits, attention_mask = output.logits, tokens.attention_mask
+
+ relu_log = torch.log(1 + torch.relu(logits))
+
+ weighted_log = relu_log * attention_mask.unsqueeze(-1)
+
+ max_val, _ = torch.max(weighted_log, dim=1)
+
+ vec = max_val.squeeze()
+
+
+
+ return vec, tokens
+
+
+
+
+
+vec, tokens = compute_vector(text)
+
+print(vec.shape)
+
+```
+
+
+
+You'll notice that there are 38 tokens in the text based on this tokenizer. This will be different from the number of tokens in the vector. In TF-IDF, we'd assign weights only to these tokens or words. In SPLADE, we assign weights to all the tokens in the vocabulary via this vector, using our learned model.
+
+
+
+## Term expansion and weights
+
+```python
+
+def extract_and_map_sparse_vector(vector, tokenizer):
+
+ """"""
+
+ Extracts non-zero elements from a given vector and maps these elements to their human-readable tokens using a tokenizer. The function creates and returns a sorted dictionary where keys are the tokens corresponding to non-zero elements in the vector, and values are the weights of these elements, sorted in descending order of weights.
+
+
+
+ This function is useful in NLP tasks where you need to understand the significance of different tokens based on a model's output vector. It first identifies non-zero values in the vector, maps them to tokens, and sorts them by weight for better interpretability.
+
+
+
+ Args:
+
+ vector (torch.Tensor): A PyTorch tensor from which to extract non-zero elements.
+
+ tokenizer: The tokenizer used for tokenization in the model, providing the mapping from tokens to indices.
+
+
+
+ Returns:
+
+ dict: A sorted dictionary mapping human-readable tokens to their corresponding non-zero weights.
+
+ """"""
+
+
+
+ # Extract indices and values of non-zero elements in the vector
+
+ cols = vector.nonzero().squeeze().cpu().tolist()
+
+ weights = vector[cols].cpu().tolist()
+
+
+
+ # Map indices to tokens and create a dictionary
+
+ idx2token = {idx: token for token, idx in tokenizer.get_vocab().items()}
+
+ token_weight_dict = {
+
+ idx2token[idx]: round(weight, 2) for idx, weight in zip(cols, weights)
+
+ }
+
+
+
+ # Sort the dictionary by weights in descending order
+
+ sorted_token_weight_dict = {
+
+ k: v
+
+ for k, v in sorted(
+
+ token_weight_dict.items(), key=lambda item: item[1], reverse=True
+
+ )
+
+ }
+
+
+
+ return sorted_token_weight_dict
+
+
+
+
+
+# Usage example
+
+sorted_tokens = extract_and_map_sparse_vector(vec, tokenizer)
+
+sorted_tokens
+
+```
+
+
+
+There will be 102 sorted tokens in total. This has expanded to include tokens that weren't in the original text. This is the term expansion we will talk about next.
+
+
+
+Here are some terms that are added: ""Berlin"", and ""founder"" - despite having no mention of Arthur's race (which leads to Owen's Berlin win) and his work as the founder of Arthur Ashe Institute for Urban Health. Here are the top few `sorted_tokens` with a weight of more than 1:
+
+
+
+```python
+
+{
+
+ ""ashe"": 2.95,
+
+ ""arthur"": 2.61,
+
+ ""tennis"": 2.22,
+
+ ""robert"": 1.74,
+
+ ""jr"": 1.55,
+
+ ""he"": 1.39,
+
+ ""founder"": 1.36,
+
+ ""doubles"": 1.24,
+
+ ""won"": 1.22,
+
+ ""slam"": 1.22,
+
+ ""died"": 1.19,
+
+ ""singles"": 1.1,
+
+ ""was"": 1.07,
+
+ ""player"": 1.06,
+
+ ""titles"": 0.99,
+
+ ...
+
+}
+
+```
+
+
+
+If you're interested in using the higher-performance approach, check out the following models:
+
+
+
+1. [naver/efficient-splade-VI-BT-large-doc](https://huggingface.co/naver/efficient-splade-vi-bt-large-doc)
+
+2. [naver/efficient-splade-VI-BT-large-query](https://huggingface.co/naver/efficient-splade-vi-bt-large-doc)
+
+
+
+## Why SPLADE works: term expansion
+
+
+
+Consider a query ""solar energy advantages"". SPLADE might expand this to include terms like ""renewable,"" ""sustainable,"" and ""photovoltaic,"" which are contextually relevant but not explicitly mentioned. This process is called term expansion, and it's a key component of SPLADE.
+
+
+
+SPLADE learns the query/document expansion to include other relevant terms. This is a crucial advantage over other sparse methods which include the exact word, but completely miss the contextually relevant ones.
+
+
+
+This expansion has a direct relationship with what we can control when making a SPLADE model: sparsity via regularization, i.e. the number of tokens (BERT wordpieces) we use to represent each document. If we use more tokens, we can represent more terms, but the vectors become denser. This number is typically between 20 and 200 per document. As a reference point, the dense BERT vector has 768 dimensions, the OpenAI embedding has 1536 dimensions, and the sparse vector has around 30 non-zero dimensions.
+
+
+
+For example, assume a 1M document corpus. Say, we use 100 sparse token ids + weights per document. Correspondingly, dense BERT vector would be 768M floats, the OpenAI Embedding would be 1.536B floats, and the sparse vector would be a maximum of 100M integers + 100M floats. This could mean a **10x reduction in memory usage**, which is a huge win for large-scale systems:
+
+
+
+| Vector Type | Memory (GB) |
+
+|-------------------|-------------------------|
+
+| Dense BERT Vector | 6.144 |
+
+| OpenAI Embedding | 12.288 |
+
+| Sparse Vector | 1.12 |
+
+
+
+## How SPLADE works: leveraging BERT
+
+
+
+SPLADE leverages a transformer architecture to generate sparse representations of documents and queries, enabling efficient retrieval. Let's dive into the process.
+
+
+
+The output logits from the transformer backbone are inputs upon which SPLADE builds. The transformer architecture can be something familiar like BERT. Rather than producing dense probability distributions, SPLADE utilizes these logits to construct sparse vectors—think of them as a distilled essence of tokens, where each dimension corresponds to a term from the vocabulary and its associated weight in the context of the given document or query.
+
+
+
+This sparsity is critical; it mirrors the probability distributions from a typical [Masked Language Modeling](http://jalammar.github.io/illustrated-bert/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) task but is tuned for retrieval effectiveness, emphasizing terms that are both:
+
+
+
+1. Contextually relevant: Terms that represent a document well should be given more weight.
+
+2. Discriminative across documents: Terms that a document has, and other documents don't, should be given more weight.
+
+
+
+The token-level distributions that you'd expect in a standard transformer model are now transformed into token-level importance scores in SPLADE. These scores reflect the significance of each term in the context of the document or query, guiding the model to allocate more weight to terms that are likely to be more meaningful for retrieval purposes.
+
+
+
+The resulting sparse vectors are not only memory-efficient but also tailored for precise matching in the high-dimensional space of a search engine like Qdrant.
+
+
+
+## Interpreting SPLADE
+
+
+
+A downside of dense vectors is that they are not interpretable, making it difficult to understand why a document is relevant to a query.
+
+
+
+SPLADE importance estimation can provide insights into the 'why' behind a document's relevance to a query. By shedding light on which tokens contribute most to the retrieval score, SPLADE offers some degree of interpretability alongside performance, a rare feat in the realm of neural IR systems. For engineers working on search, this transparency is invaluable.
+
+
+
+## Known limitations of SPLADE
+
+
+
+### Pooling strategy
+
+The switch to max pooling in SPLADE improved its performance on the MS MARCO and TREC datasets. However, this indicates a potential limitation of the baseline SPLADE pooling method, suggesting that SPLADE's performance is sensitive to the choice of pooling strategy.
+
+
+
+### Document and query encoder
+
+The SPLADE model variant that uses a document encoder with max pooling but no query encoder reaches the same performance level as the prior SPLADE model. This suggests a limitation in the necessity of a query encoder, potentially affecting the efficiency of the model.
+
+
+
+## Other sparse vector methods
+
+
+
+SPLADE is not the only method to create sparse vectors.
+
+
+
+Essentially, sparse vectors are a superset of TF-IDF and BM25, which are the most popular text retrieval methods.
+
+In other words, you can create a sparse vector using the term frequency and inverse document frequency (TF-IDF) to reproduce the BM25 score exactly.
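+
+As a side illustration (this is not SPLADE, and the numbers are made up), the sketch below shows how BM25-style term weights can be packed into the same indices + values format that any sparse vector uses:
+
+```python
+# Minimal sketch: BM25 term weights expressed as a sparse vector (indices + values).
+# The vocabulary ids, term frequencies and IDF values are illustrative assumptions.
+vocabulary = {""arthur"": 2001, ""ashe"": 2002, ""tennis"": 2010}
+
+def bm25_weight(tf, idf, doc_len, avg_len, k1=1.2, b=0.75):
+    return idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * doc_len / avg_len))
+
+doc_tf = {""arthur"": 3, ""ashe"": 2, ""tennis"": 5}     # term frequencies in one document
+idf = {""arthur"": 1.8, ""ashe"": 2.3, ""tennis"": 1.1}  # precomputed inverse document frequencies
+
+bm25_indices = [vocabulary[term] for term in doc_tf]
+bm25_values = [bm25_weight(doc_tf[term], idf[term], doc_len=120, avg_len=100) for term in doc_tf]
+# `bm25_indices` and `bm25_values` can be stored as a Qdrant sparse vector, just like SPLADE output.
+```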
+
+
+
+Additionally, attention weights from Sentence Transformers can be used to create sparse vectors.
+
+This method preserves the ability to query exact words and phrases but avoids the computational overhead of query expansion used in SPLADE.
+
+
+
+We will cover these methods in detail in a future article.
+
+
+
+## Leveraging sparse vectors in Qdrant for hybrid search
+
+
+
+Qdrant supports a separate index for Sparse Vectors.
+
+This enables you to use the same collection for both dense and sparse vectors.
+
+Each ""Point"" in Qdrant can have both dense and sparse vectors.
+
+
+
+But let's first take a look at how you can work with sparse vectors in Qdrant.
+
+
+
+## Practical implementation in Python
+
+
+
+Let's dive into how Qdrant handles sparse vectors with an example. Here is what we will cover:
+
+
+
+1. Setting Up Qdrant Client: Initially, we establish a connection with Qdrant using the QdrantClient. This setup is crucial for subsequent operations.
+
+
+
+2. Creating a Collection with Sparse Vector Support: In Qdrant, a collection is a container for your vectors. Here, we create a collection specifically designed to support sparse vectors. This is done using the create_collection method where we define the parameters for sparse vectors, such as setting the index configuration.
+
+
+
+3. Inserting Sparse Vectors: Once the collection is set up, we can insert sparse vectors into it. This involves defining the sparse vector with its indices and values, and then upserting this point into the collection.
+
+
+
+4. Querying with Sparse Vectors: To perform a search, we first prepare a query vector. This involves computing the vector from a query text and extracting its indices and values. We then use these details to construct a query against our collection.
+
+
+
+5. Retrieving and Interpreting Results: The search operation returns results that include the id of the matching document, its score, and other relevant details. The score is a crucial aspect, reflecting the similarity between the query and the documents in the collection.
+
+
+
+### 1. Set up
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+# Qdrant client setup (an in-memory instance for demonstration)
+client = QdrantClient("":memory:"")
+
+
+
+# Define collection name
+
+COLLECTION_NAME = ""example_collection""
+
+
+
+# Insert sparse vector into Qdrant collection
+
+point_id = 1 # Assign a unique ID for the point
+
+```
+
+
+
+### 2. Create a collection with sparse vector support
+
+
+
+```python
+
+client.create_collection(
+
+ collection_name=COLLECTION_NAME,
+
+ vectors_config={},
+
+ sparse_vectors_config={
+
+ ""text"": models.SparseVectorParams(
+
+ index=models.SparseIndexParams(
+
+ on_disk=False,
+
+ )
+
+ )
+
+ },
+
+)
+
+```
+
+
+
+
+
+### 3. Insert sparse vectors
+
+
+
+Here, we see the process of inserting a sparse vector into the Qdrant collection. This step is key to building a dataset that can be quickly retrieved in the first stage of the retrieval process, utilizing the efficiency of sparse vectors. Since this is for demonstration purposes, we insert only one point with a sparse vector and no dense vector.
+
+
+
+```python
+
+client.upsert(
+
+ collection_name=COLLECTION_NAME,
+
+ points=[
+
+ models.PointStruct(
+
+ id=point_id,
+
+ payload={}, # Add any additional payload if necessary
+
+ vector={
+
+ ""text"": models.SparseVector(
+
+ indices=indices.tolist(), values=values.tolist()
+
+ )
+
+ },
+
+ )
+
+ ],
+
+)
+
+```
+
+By upserting points with sparse vectors, we prepare our dataset for rapid first-stage retrieval, laying the groundwork for subsequent detailed analysis using dense vectors. Notice that we use ""text"" to denote the name of the sparse vector.
+
+
+
+Those familiar with the Qdrant API will notice the extra care taken to stay consistent with the existing named vectors API -- this is to make it easier to use sparse vectors in existing codebases. As always, you're able to **apply payload filters**, shard keys, and other advanced features you've come to expect from Qdrant. To make things easier for you, the indices and values don't have to be sorted before upsert; Qdrant will sort them when the index is persisted, e.g. on disk.
+
+
+
+### 4. Query with sparse vectors
+
+
+
+We use the same process to prepare a query vector as well. This involves computing the vector from a query text and extracting its indices and values. We then use these details to construct a query against our collection.
+
+
+
+```python
+
+# Preparing a query vector
+
+
+
+query_text = ""Who was Arthur Ashe?""
+
+query_vec, query_tokens = compute_vector(query_text)
+
+query_vec.shape
+
+
+
+query_indices = query_vec.nonzero().numpy().flatten()
+
+# Use the query's own non-zero indices to extract the corresponding weights
+query_values = query_vec.detach().numpy()[query_indices]
+
+```
+
+
+
+In this example, we use the same model for both document and query. This is not a requirement, but it's a simpler approach.
+
+
+
+### 5. Retrieve and interpret results
+
+
+
+After setting up the collection and inserting sparse vectors, the next critical step is retrieving and interpreting the results. This process involves executing a search query and then analyzing the returned results.
+
+
+
+```python
+
+# Searching for similar documents
+
+result = client.search(
+
+ collection_name=COLLECTION_NAME,
+
+ query_vector=models.NamedSparseVector(
+
+ name=""text"",
+
+ vector=models.SparseVector(
+
+ indices=query_indices,
+
+ values=query_values,
+
+ ),
+
+ ),
+
+ with_vectors=True,
+
+)
+
+
+
+result
+
+```
+
+
+
+In the above code, we execute a search against our collection using the prepared sparse vector query. The `client.search` method takes the collection name and the query vector as inputs. The query vector is constructed using the `models.NamedSparseVector`, which includes the indices and values derived from the query text. This is a crucial step in efficiently retrieving relevant documents.
+
+
+
+```python
+
+ScoredPoint(
+
+ id=1,
+
+ version=0,
+
+ score=3.4292831420898438,
+
+ payload={},
+
+ vector={
+
+ ""text"": SparseVector(
+
+ indices=[2001, 2002, 2010, 2018, 2032, ...],
+
+ values=[
+
+ 1.0660614967346191,
+
+ 1.391068458557129,
+
+ 0.8903818726539612,
+
+ 0.2502821087837219,
+
+ ...,
+
+ ],
+
+ )
+
+ },
+
+)
+
+```
+
+
+
+The result, as shown above, is a `ScoredPoint` object containing the ID of the retrieved document, its version, a similarity score, and the sparse vector. The score is a key element as it quantifies the similarity between the query and the document, based on their respective vectors.
+
+
+
+To understand how this scoring works, we use the familiar dot product method:
+
+
+
+$$\text{Similarity}(\text{Query}, \text{Document}) = \sum_{i \in I} \text{Query}_i \times \text{Document}_i$$
+
+
+
+This formula calculates the similarity score by multiplying corresponding elements of the query and document vectors and summing these products. This method is particularly effective with sparse vectors, where many elements are zero, leading to a computationally efficient process. The higher the score, the greater the similarity between the query and the document, making it a valuable metric for assessing the relevance of the retrieved documents.
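+
+For intuition, here is a minimal Python sketch of that dot product over sparse vectors; the token ids and weights are made up, and only indices present in both vectors contribute to the score:
+
+```python
+# Sparse dot product: only overlapping indices contribute to the similarity score.
+def sparse_dot(q_indices, q_values, d_indices, d_values):
+    doc = dict(zip(d_indices, d_values))
+    return sum(qv * doc[qi] for qi, qv in zip(q_indices, q_values) if qi in doc)
+
+score = sparse_dot(
+    [2001, 2002, 2050], [0.8, 1.2, 0.3],    # query indices / weights (illustrative)
+    [2001, 2002, 2010], [1.07, 1.39, 0.89], # document indices / weights (illustrative)
+)
+print(score)  # 0.8 * 1.07 + 1.2 * 1.39 = 2.524
+```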
+
+
+
+
+
+## Hybrid search: combining sparse and dense vectors
+
+
+
+By combining search results from both dense and sparse vectors, you can achieve a hybrid search that is both efficient and accurate.
+
+Results from sparse vectors will guarantee that all results with the required keywords are returned,
+
+while dense vectors will cover the semantically similar results.
+
+
+
+The mixture of dense and sparse results can be presented directly to the user, or used as a first stage of a two-stage retrieval process.
+
+
+
+Let's see how you can make a hybrid search query in Qdrant.
+
+
+
+First, you need to create a collection with both dense and sparse vectors:
+
+
+
+```python
+
+client.create_collection(
+
+ collection_name=COLLECTION_NAME,
+
+ vectors_config={
+
+ ""text-dense"": models.VectorParams(
+
+ size=1536, # OpenAI Embeddings
+
+ distance=models.Distance.COSINE,
+
+ )
+
+ },
+
+ sparse_vectors_config={
+
+ ""text-sparse"": models.SparseVectorParams(
+
+ index=models.SparseIndexParams(
+
+ on_disk=False,
+
+ )
+
+ )
+
+ },
+
+)
+
+```
+
+
+
+
+
+Then, assuming you have upserted both dense and sparse vectors, you can query them together:
+
+
+
+```python
+
+query_text = ""Who was Arthur Ashe?""
+
+
+
+# Compute sparse and dense vectors
+
+query_indices, query_values = compute_sparse_vector(query_text)
+
+query_dense_vector = compute_dense_vector(query_text)
+
+
+
+
+
+client.search_batch(
+
+ collection_name=COLLECTION_NAME,
+
+ requests=[
+
+ models.SearchRequest(
+
+ vector=models.NamedVector(
+
+ name=""text-dense"",
+
+ vector=query_dense_vector,
+
+ ),
+
+ limit=10,
+
+ ),
+
+ models.SearchRequest(
+
+ vector=models.NamedSparseVector(
+
+ name=""text-sparse"",
+
+ vector=models.SparseVector(
+
+ indices=query_indices,
+
+ values=query_values,
+
+ ),
+
+ ),
+
+ limit=10,
+
+ ),
+
+ ],
+
+)
+
+```
+
+
+
+The result will be a pair of result lists, one for dense and one for sparse vectors.
+
+
+
+Having those results, there are several ways to combine them:
+
+
+
+### Mixing or fusion
+
+
+
+You can mix the results from both dense and sparse vectors based purely on their relative scores. This is a simple and effective approach, but it doesn't take into account the semantic similarity between the results. Among the [popular mixing methods](https://medium.com/plain-simple-software/distribution-based-score-fusion-dbsf-a-new-approach-to-vector-search-ranking-f87c37488b18) are the following (a minimal RRF sketch is shown at the end of this subsection):
+
+
+
+ - Reciprocal Rank Fusion (RRF)
+
+ - Relative Score Fusion (RSF)
+
+ - Distribution-Based Score Fusion (DBSF)
+
+
+
+{{< figure src=/articles_data/sparse-vectors/mixture.png caption=""Relative Score Fusion"" width=80% >}}
+
+
+
+[Ranx](https://github.com/AmenRa/ranx) is a great library for mixing results from different sources.
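+
+As mentioned above, here is a minimal, hedged sketch of Reciprocal Rank Fusion. It only assumes two lists of point ids ordered by score; `k=60` is the constant commonly used in the literature:
+
+```python
+# Reciprocal Rank Fusion over ranked lists of point ids.
+def reciprocal_rank_fusion(result_lists, k=60):
+    scores = {}
+    for results in result_lists:
+        for rank, point_id in enumerate(results):
+            scores[point_id] = scores.get(point_id, 0.0) + 1.0 / (k + rank + 1)
+    return sorted(scores, key=scores.get, reverse=True)
+
+# Ids taken from the dense and sparse result lists returned by search_batch (illustrative)
+dense_ids = [17, 5, 9, 3]
+sparse_ids = [5, 17, 42]
+print(reciprocal_rank_fusion([dense_ids, sparse_ids]))
+```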
+
+
+
+
+
+### Re-ranking
+
+
+
+You can use obtained results as a first stage of a two-stage retrieval process. In the second stage, you can re-rank the results from the first stage using a more complex model, such as [Cross-Encoders](https://www.sbert.net/examples/applications/cross-encoder/README.html) or services like [Cohere Rerank](https://txt.cohere.com/rerank/).
+
+
+
+And that's it! You've successfully achieved hybrid search with Qdrant!
+
+
+
+## Additional resources
+
+For those who want to dive deeper, here are the top papers on the topic, most of which have code available:
+
+
+
+1. Problem Motivation: [Sparse Overcomplete Word Vector Representations](https://ar5iv.org/abs/1506.02004?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors)
+
+1. [SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval](https://ar5iv.org/abs/2109.10086?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors)
+
+1. [SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking](https://ar5iv.org/abs/2107.05720?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors)
+
+1. Late Interaction - [ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction](https://ar5iv.org/abs/2112.01488?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors)
+
+1. [SparseEmbed: Learning Sparse Lexical Representations with Contextual Embeddings for Retrieval](https://research.google/pubs/pub52289/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors)
+
+
+
+**Why just read when you can try it out?**
+
+
+
+We've packed an easy-to-use Colab for you on how to make a Sparse Vector: [Sparse Vectors Single Encoder Demo](https://colab.research.google.com/drive/1wa2Yr5BCOgV0MTOFFTude99BOXCLHXky?usp=sharing). Run it, tinker with it, and start seeing the magic unfold in your projects. We can't wait to hear how you use it!
+
+
+
+## Conclusion
+
+
+
+Alright, folks, let's wrap it up. Better search isn't a 'nice-to-have,' it's a game-changer, and Qdrant can get you there.
+
+
+
+Got questions? Our [Discord community](https://qdrant.to/discord?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) is teeming with answers.
+
+
+
+If you enjoyed reading this, why not sign up for our [newsletter](/subscribe/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) to stay ahead of the curve.
+
+
+
+And, of course, a big thanks to you, our readers, for pushing us to make ranking better for everyone.
+",articles/sparse-vectors.md
+"---
+
+title: Google Summer of Code 2023 - Polygon Geo Filter for Qdrant Vector Database
+
+short_description: Gsoc'23 Polygon Geo Filter for Qdrant Vector Database
+
+description: A Summary of my work and experience at Qdrant's Gsoc '23.
+
+preview_dir: /articles_data/geo-polygon-filter-gsoc/preview
+
+small_preview_image: /articles_data/geo-polygon-filter-gsoc/icon.svg
+
+social_preview_image: /articles_data/geo-polygon-filter-gsoc/preview/social_preview.jpg
+
+weight: -50
+
+author: Zein Wen
+
+author_link: https://www.linkedin.com/in/zishenwen/
+
+date: 2023-10-12T08:00:00+03:00
+
+draft: false
+
+keywords:
+
+
+
+ - payload filtering
+
+ - geo polygon
+
+ - search condition
+
+ - gsoc'23
+
+---
+
+
+
+
+
+
+
+## Introduction
+
+
+
+Greetings, I'm Zein Wen, and I was a Google Summer of Code 2023 participant at Qdrant. I got to work with an amazing mentor, Arnaud Gourlay, on enhancing the Qdrant Geo Polygon Filter. This new feature allows users to refine their query results using polygons. As the latest addition to the Geo Filter family of radius and rectangle filters, this enhancement promises greater flexibility in querying geo data, unlocking interesting new use cases.
+
+
+
+## Project Overview
+
+
+
+{{< figure src=""/articles_data/geo-polygon-filter-gsoc/geo-filter-example.png"" caption=""A Use Case of Geo Filter (https://traveltime.com/blog/map-postcode-data-catchment-area)"" alt=""A Use Case of Geo Filter"" >}}
+
+
+
+Because Qdrant is a powerful vector database, it presents immense potential for machine learning-driven applications, such as recommendation. However, vector queries alone may not always meet user requirements. Consider a scenario where you're seeking restaurant recommendations; it's not just about a list of restaurants, but those within your neighborhood. This is where the Geo Filter comes into play, enhancing queries by incorporating additional filtering criteria. Up until now, Qdrant's geographic filter options were confined to circular and rectangular shapes, which may not align with the diverse boundaries found in the real world. This scenario was exactly what led to a user feature request, and we decided it would be a good feature to tackle since it introduces greater capability for geo-related queries.
+
+
+
+## Technical Challenges
+
+
+
+**1. Geo Geometry Computation**
+
+
+
+{{< figure src=""/articles_data/geo-polygon-filter-gsoc/basic-concept.png"" caption=""Geo Space Basic Concept"" alt=""Geo Space Basic Concept"" >}}
+
+
+
+Internally, the Geo Filter doesn't start by testing each individual geo location as this would be computationally expensive. Instead, we create a geo hash layer that [divides the world](https://en.wikipedia.org/wiki/Grid_(spatial_index)#Grid-based_spatial_indexing) into rectangles. When a spatial index is created for Qdrant entries, it assigns each entry to the geohash for its location.
+
+
+
+During a query we first identify all potential geo hashes that satisfy the filters and subsequently check for location candidates within those hashes. Accomplishing this search involves two critical geometry computations:
+
+1. determining if a polygon intersects with a rectangle
+
+2. ascertaining if a point lies within a polygon.
+
+
+
+{{< figure src=/articles_data/geo-polygon-filter-gsoc/geo-computation-testing.png caption=""Geometry Computation Testing"" alt=""Geometry Computation Testing"" >}}
+
+
+
+While we have a geo crate (a Rust library) that provides APIs for these computations, we dug in deeper to understand the underlying algorithms and verify their accuracy. This led us to conduct extensive testing and visualization to determine correctness. In addition to assessing the current crate, we also discovered that there are multiple algorithms available for these computations. We invested time in exploring different approaches, such as the [winding number](https://en.wikipedia.org/wiki/Point_in_polygon#Winding%20number%20algorithm:~:text=of%20the%20algorithm.-,Winding%20number%20algorithm,-%5Bedit%5D) and [ray casting](https://en.wikipedia.org/wiki/Point_in_polygon#Winding%20number%20algorithm:~:text=.%5B2%5D-,Ray%20casting%20algorithm,-%5Bedit%5D) algorithms, to grasp their distinctions and pave the way for future improvements.
+
+
+
+Through this process, I enjoyed honing my ability to swiftly grasp unfamiliar concepts. In addition, I needed to develop analytical strategies to dissect and draw meaningful conclusions from them. This experience has been invaluable in expanding my problem-solving toolkit.
+
+
+
+**2. Proto and JSON format design**
+
+
+
+Considerable effort was devoted to designing the ProtoBuf and JSON interfaces for this new feature. This component is directly exposed to users, requiring a consistent and user-friendly interface, which in turn helps drive a positive user experience and fewer code modifications in the future.
+
+
+
+Initially, we contemplated aligning our interface with the [GeoJSON](https://geojson.org/) specification, given its prominence as a standard for many geo-related APIs. However, we soon realized that the way GeoJSON defines geometries significantly differs from our current JSON and ProtoBuf coordinate definitions for our point radius and rectangular filter. As a result, we prioritized API-level consistency and user experience, opting to align the new polygon definition with all our existing definitions.
+
+
+
+In addition, we planned to develop a separate multi-polygon filter alongside the polygon one. However, after careful consideration, we recognized that, for our use case, multiple polygon filters can achieve the same result as a multi-polygon filter. This relationship mirrors how we currently handle multiple circles or rectangles. Consequently, we deemed the multi-polygon filter redundant, as it would only introduce unnecessary complexity to the API.
+
+
+
+Doing this work illustrated to me the challenge of navigating real-world solutions that require striking a balance between adhering to established standards and prioritizing user experience. It also was key to understanding the wisdom of focusing on developing what's truly necessary for users, without overextending our efforts.
+
+
+
+## Outcomes
+
+
+
+**1. Capability of Deep Dive**
+
+Navigating unfamiliar code bases, concepts, APIs, and techniques is a common challenge for developers. Participating in GSoC was akin to me going from the safety of a swimming pool and right into the expanse of the ocean. Having my mentor’s support during this transition was invaluable. He provided me with numerous opportunities to independently delve into areas I had never explored before. I have grown into no longer fearing unknown technical areas, whether it's unfamiliar code, techniques, or concepts in specific domains. I've gained confidence in my ability to learn them step by step and use them to create the things I envision.
+
+
+
+**2. Always Put User in Minds**
+
+Another crucial lesson I learned is the importance of considering the user's experience and their specific use cases. While development may sometimes entail iterative processes, every aspect that directly impacts the user must be approached and executed with empathy. Neglecting this consideration can lead not only to functional errors but also erode the trust of users due to inconsistency and confusion, which then leads to them no longer using my work.
+
+
+
+**3. Speak Up and Effectively Communicate**
+
+Finally, in the course of development, encountering differing opinions is commonplace. It's essential to remain open to others' ideas, while also possessing the resolve to communicate one's own perspective clearly. This fosters productive discussions and ultimately elevates the quality of the development process.
+
+
+
+### Wrap up
+
+
+
+Being selected for Google Summer of Code 2023 and collaborating with Arnaud and the other Qdrant engineers, along with all the other community members, has been a true privilege. I'm deeply grateful to those who invested their time and effort in reviewing my code, engaging in discussions about alternatives and design choices, and offering assistance when needed. Through these interactions, I've experienced firsthand the essence of open source and the culture that encourages collaboration. This experience not only allowed me to write Rust code for a real-world product for the first time, but it also opened the door to the amazing world of open source.
+
+
+
+Without a doubt, I'm eager to continue growing alongside this community and contribute to new features and enhancements that elevate the product. I've also become an advocate for Qdrant, introducing this project to numerous coworkers and friends in the tech industry. I'm excited to witness new users and contributors emerge from within my own network!
+
+
+
+If you want to try out my work, read the [documentation](/documentation/concepts/filtering/#geo-polygon) and then, either sign up for a free [cloud account](https://cloud.qdrant.io) or download the [Docker image](https://hub.docker.com/r/qdrant/qdrant). I look forward to seeing how people are using my work in their own applications!
+",articles/geo-polygon-filter-gsoc.md
+"---
+
+title: ""Introducing Qdrant 1.3.0""
+
+short_description: ""New version is out! Our latest release brings about some exciting performance improvements and much-needed fixes.""
+
+description: ""New version is out! Our latest release brings about some exciting performance improvements and much-needed fixes.""
+
+social_preview_image: /articles_data/qdrant-1.3.x/social_preview.png
+
+small_preview_image: /articles_data/qdrant-1.3.x/icon.svg
+
+preview_dir: /articles_data/qdrant-1.3.x/preview
+
+weight: 2
+
+author: David Sertic
+
+author_link:
+
+date: 2023-06-26T00:00:00Z
+
+draft: false
+
+keywords:
+
+ - vector search
+
+ - new features
+
+ - oversampling
+
+ - grouping lookup
+
+ - io_uring
+
+ - oversampling
+
+ - group lookup
+
+---
+
+
+
+A brand-new [Qdrant 1.3.0 release](https://github.com/qdrant/qdrant/releases/tag/v1.3.0) comes packed with a plethora of new features, performance improvements and bug fixes:
+
+
+
+1. Asynchronous I/O interface: Reduce overhead by managing I/O operations asynchronously, thus minimizing context switches.
+
+2. Oversampling for Quantization: Improve the accuracy and performance of your queries while using Scalar or Product Quantization.
+
+3. Grouping API lookup: Storage optimization method that lets you look for points in another collection using group ids.
+
+4. Qdrant Web UI: A convenient dashboard to help you manage data stored in Qdrant.
+
+5. Temp directory for Snapshots: Set a separate storage directory for temporary snapshots on a faster disk.
+
+6. Other important changes
+
+
+
+Your feedback is valuable to us, and we are always trying to include some of your feature requests in our roadmap. Join [our Discord community](https://qdrant.to/discord) and help us build Qdrant!
+
+
+
+## New features
+
+
+
+### Asynchronous I/O interface
+
+
+
+Going forward, we will support the `io_uring` asynchronous interface for storage devices on Linux-based systems. Since its introduction, `io_uring` has been proven to speed up slow-disk deployments as it decouples kernel work from the IO process.
+
+
+
+
+
+
+
+This interface uses two ring buffers to queue and manage I/O operations asynchronously, avoiding costly context switches and reducing overhead. Unlike mmap, it frees the user threads to do computations instead of waiting for the kernel to complete.
+
+
+
+![io_uring](/articles_data/qdrant-1.3.x/io-uring.png)
+
+
+
+
+
+
+
+#### Enable the interface from your config file:
+
+
+
+```yaml
+
+storage:
+
+ # enable the async scorer which uses io_uring
+
+ async_scorer: true
+
+```
+
+You can return to the mmap based backend by either deleting the `async_scorer` entry or setting the value to `false`.
+
+
+
+This optimization will mainly benefit workloads with lots of disk IO (e.g. querying on-disk collections with rescoring).
+
+Please keep in mind that this feature is experimental and that the interface may change in further versions.
+
+
+
+### Oversampling for quantization
+
+
+
+We are introducing [oversampling](/documentation/guides/quantization/#oversampling) as a new way to help you improve the accuracy and performance of similarity search algorithms. With this method, you are able to significantly compress high-dimensional vectors in memory and then compensate for the accuracy loss by re-scoring additional points with the original vectors.
+
+
+
+You will experience much faster performance with quantization due to parallel disk usage when reading vectors. Much better IO means that you can keep quantized vectors in RAM, so the pre-selection will be even faster. Finally, once pre-selection is done, you can use parallel IO to retrieve original vectors, which is significantly faster than traversing HNSW on slow disks.
+
+
+
+#### Set the oversampling factor via query:
+
+
+
+Here is how you can configure the oversampling factor - define how many extra vectors should be pre-selected using the quantized index, and then re-scored using original vectors.
+
+
+
+```http
+
+POST /collections/{collection_name}/points/search
+
+{
+
+ ""params"": {
+
+ ""quantization"": {
+
+ ""ignore"": false,
+
+ ""rescore"": true,
+
+ ""oversampling"": 2.4
+
+ }
+
+ },
+
+ ""vector"": [0.2, 0.1, 0.9, 0.7],
+
+ ""limit"": 100
+
+}
+
+```
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+from qdrant_client.http import models
+
+
+
+client = QdrantClient(""localhost"", port=6333)
+
+
+
+client.search(
+
+ collection_name=""{collection_name}"",
+
+ query_vector=[0.2, 0.1, 0.9, 0.7],
+
+ search_params=models.SearchParams(
+
+ quantization=models.QuantizationSearchParams(
+
+ ignore=False,
+
+ rescore=True,
+
+ oversampling=2.4
+
+ )
+
+ )
+
+)
+
+```
+
+
+
+In this case, if `oversampling` is 2.4 and `limit` is 100, then 240 vectors will be pre-selected using the quantized index, and then the top 100 points will be returned after re-scoring with the unquantized vectors.
+
+
+
+As you can see from the example above, this parameter is set during the query. This is a flexible method that will let you tune query accuracy. While the index is not changed, you can decide how many points you want to retrieve using quantized vectors.
+
+
+
+### Grouping API lookup
+
+
+
+In version 1.2.0, we introduced a mechanism for requesting groups of points. Our new feature extends this functionality by giving you the option to look for points in another collection using the group ids. We wanted to add this feature, since having a single point for the shared data of the same item optimizes storage use, particularly if the payload is large.
+
+
+
+This has the extra benefit of having a single point to update when the information shared by the points in a group changes.
+
+
+
+![Group Lookup](/articles_data/qdrant-1.3.x/group-lookup.png)
+
+
+
+For example, if you have a collection of documents, you may want to chunk them and store the points for the chunks in a separate collection, making sure that you store the point id from the document it belongs in the payload of the chunk point.
+
+
+
+#### Adding the parameter to grouping API request:
+
+
+
+When using the grouping API, add the `with_lookup` parameter to bring the information from those points into each group:
+
+
+
+```http
+
+POST /collections/chunks/points/search/groups
+
+{
+
+ // Same as in the regular search API
+
+ ""vector"": [1.1],
+
+ ...,
+
+
+
+ // Grouping parameters
+
+ ""group_by"": ""document_id"",
+
+ ""limit"": 2,
+
+ ""group_size"": 2,
+
+
+
+ // Lookup parameters
+
+ ""with_lookup"": {
+
+ // Name of the collection to look up points in
+
+ ""collection_name"": ""documents"",
+
+
+
+ // Options for specifying what to bring from the payload
+
+ // of the looked up point, true by default
+
+ ""with_payload"": [""title"", ""text""],
+
+
+
+ // Options for specifying what to bring from the vector(s)
+
+ // of the looked up point, true by default
+
+        ""with_vectors"": false
+
+ }
+
+}
+
+```
+
+
+
+```python
+
+client.search_groups(
+
+ collection_name=""chunks"",
+
+
+
+ # Same as in the regular search() API
+
+ query_vector=[1.1],
+
+ ...,
+
+
+
+ # Grouping parameters
+
+ group_by=""document_id"", # Path of the field to group by
+
+ limit=2, # Max amount of groups
+
+ group_size=2, # Max amount of points per group
+
+
+
+ # Lookup parameters
+
+ with_lookup=models.WithLookup(
+
+ # Name of the collection to look up points in
+
+ collection_name=""documents"",
+
+
+
+ # Options for specifying what to bring from the payload
+
+ # of the looked up point, True by default
+
+        with_payload=[""title"", ""text""],
+
+
+
+ # Options for specifying what to bring from the vector(s)
+
+ # of the looked up point, True by default
+
+ with_vectors=False,
+
+ )
+
+)
+
+```
+
+
+
+### Qdrant web user interface
+
+
+
+We are excited to announce a more user-friendly way to organize and work with your collections inside of Qdrant. Our dashboard's design is simple, but very intuitive and easy to access.
+
+
+
+Try it out now! If you have Docker running, you can [quickstart Qdrant](/documentation/quick-start/) and access the Dashboard locally from [http://localhost:6333/dashboard](http://localhost:6333/dashboard). You should see this simple access point to Qdrant:
+
+
+
+![Qdrant Web UI](/articles_data/qdrant-1.3.x/web-ui.png)
+
+
+
+### Temporary directory for Snapshots
+
+
+
+Currently, temporary snapshot files are created inside the `/storage` directory. Oftentimes `/storage` is a network-mounted disk. Therefore, we found this method suboptimal because `/storage` is limited in disk size and also because writing data to it may affect disk performance as it consumes bandwidth. This new feature allows you to specify a different directory on another disk that is faster. We expect this feature to significantly optimize cloud performance.
+
+
+
+To change it, access `config.yaml` and set `storage.temp_path` to another directory location.
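+
+For example, a minimal sketch of such a configuration (the path itself is illustrative and should point at a fast local disk):
+
+```yaml
+storage:
+  # keep temporary snapshot files on a faster local disk instead of /storage
+  temp_path: /mnt/fast-disk/qdrant-temp
+```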
+
+
+
+## Important changes
+
+
+
+The latest release focuses not only on the new features but also introduces some changes making
+
+Qdrant even more reliable.
+
+
+
+### Optimizing group requests
+
+
+
+Internally, `is_empty` was not using the index when it was called, so it had to deserialize the whole payload to see if the key had values or not. Our new update makes sure to check the index first, before confirming with the payload if it is actually `empty`/`null`, so these changes improve performance only when the negated condition is true (e.g. it improves when the field is not empty). Going forward, this will improve the way grouping API requests are handled.
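+
+To make this concrete, here is a hedged sketch of a filter that uses the `is_empty` condition via the Python client (the ""reports"" field name is illustrative); it is exactly this kind of emptiness check that now consults the index first:
+
+```python
+from qdrant_client import models
+
+# Match only points whose ""reports"" payload field is missing or empty
+empty_reports_filter = models.Filter(
+    must=[
+        models.IsEmptyCondition(is_empty=models.PayloadField(key=""reports""))
+    ]
+)
+```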
+
+
+
+### Faster read access with mmap
+
+
+
+If you used mmap, you most likely found that segments were always created with cold caches. The first request to the database needed to hit the disk, which made startup slower despite plenty of RAM being available. We have implemented a way to ask the kernel to ""heat up"" the disk cache and make initialization much faster.
+
+
+
+The function is expected to be used on startup and after segment optimization and reloading of a newly indexed segment. So far this is only implemented for ""immutable"" memmaps.
+
+
+
+## Release notes
+
+
+
+As usual, [our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.3.0) describe all the changes
+
+introduced in the latest version.
+",articles/qdrant-1.3.x.md
+"---
+
+title: Vector Search in constant time
+
+short_description: Apply Quantum Computing to your search engine
+
+description: Quantum Quantization enables vector search in constant time. This article will discuss the concept of quantum quantization for ANN vector search.
+
+preview_dir: /articles_data/quantum-quantization/preview
+
+social_preview_image: /articles_data/quantum-quantization/social_preview.png
+
+small_preview_image: /articles_data/quantum-quantization/icon.svg
+
+weight: 1000
+
+author: Prankstorm Team
+
+draft: false
+
+author_link: https://www.youtube.com/watch?v=dQw4w9WgXcQ
+
+date: 2023-04-01T00:48:00.000Z
+
+---
+
+
+
+
+
+The advent of quantum computing has revolutionized many areas of science and technology, and one of the most intriguing developments has been its potential application to artificial neural networks (ANNs). One area where quantum computing can significantly improve performance is in vector search, a critical component of many machine learning tasks. In this article, we will discuss the concept of quantum quantization for ANN vector search, focusing on the conversion of float32 to qbit vectors and the ability to perform vector search on arbitrary-sized databases in constant time.
+
+
+
+
+
+## Quantum Quantization and Entanglement
+
+
+
+Quantum quantization is a novel approach that leverages the power of quantum computing to speed up the search process in ANNs. By converting traditional float32 vectors into qbit vectors, we can create quantum entanglement between the qbits. Quantum entanglement is a unique phenomenon in which the states of two or more particles become interdependent, regardless of the distance between them. This property of quantum systems can be harnessed to create highly efficient vector search algorithms.
+
+
+
+
+
+The conversion of float32 vectors to qbit vectors can be represented by the following formula:
+
+
+
+```text
+
+qbit_vector = Q( float32_vector )
+
+```
+
+
+
+where Q is the quantum quantization function that transforms the float32_vector into a quantum entangled qbit_vector.
+
+
+
+
+
+## Vector Search in Constant Time
+
+
+
+The primary advantage of using quantum quantization for ANN vector search is the ability to search through an arbitrary-sized database in constant time.
+
+
+
+The key to performing vector search in constant time with quantum quantization is to use a quantum algorithm called Grover's algorithm.
+
+Grover's algorithm is a quantum search algorithm that finds the location of a marked item in an unsorted database in O(√N) time, where N is the size of the database.
+
+This is a significant improvement over classical algorithms, which require O(N) time to solve the same problem.
+
+
+
+However, there is one more trick that improves Grover's algorithm performance dramatically.
+
+This trick is called transposition, and it reduces the number of Grover's iterations from O(√N) to O(√D), where D is the dimension of the vector space.
+
+
+
+And since the dimension of the vector space is much smaller than the number of vectors, and is usually a constant, this trick reduces the number of Grover's iterations from O(√N) to O(√D) = O(1).
+
+
+
+
+
+Check out our [Quantum Quantization PR](https://github.com/qdrant/qdrant/pull/1639) on GitHub.
+
+
+",articles/quantum-quantization.md
+"---
+
+title: ""Introducing Qdrant 1.2.x""
+
+short_description: ""Check out what Qdrant 1.2 brings to vector search""
+
+description: ""Check out what Qdrant 1.2 brings to vector search""
+
+social_preview_image: /articles_data/qdrant-1.2.x/social_preview.png
+
+small_preview_image: /articles_data/qdrant-1.2.x/icon.svg
+
+preview_dir: /articles_data/qdrant-1.2.x/preview
+
+weight: 8
+
+author: Kacper Łukawski
+
+author_link: https://medium.com/@lukawskikacper
+
+date: 2023-05-24T10:45:00+02:00
+
+draft: false
+
+keywords:
+
+ - vector search
+
+ - new features
+
+ - product quantization
+
+ - optional vectors
+
+ - nested filters
+
+ - appendable mmap
+
+ - group requests
+
+---
+
+
+
+A brand-new Qdrant 1.2 release comes packed with a plethora of new features, some of which
+
+were highly requested by our users. If you want to shape the development of the Qdrant vector
+
+database, please [join our Discord community](https://qdrant.to/discord) and let us know
+
+how you use it!
+
+
+
+## New features
+
+
+
+As usual, a minor version update of Qdrant brings some interesting new features. We love to see your
+
+feedback, and we tried to include the features most requested by our community.
+
+
+
+### Product Quantization
+
+
+
+The primary focus of Qdrant was always performance. That's why we built it in Rust, but we were
+
+always concerned about making vector search affordable. From the very beginning, Qdrant offered
+
+support for disk-stored collections, as storage space is way cheaper than memory. That's also
+
+why we have introduced the [Scalar Quantization](/articles/scalar-quantization/) mechanism recently,
+
+which makes it possible to reduce the memory requirements by up to four times.
+
+
+
+Today, we are bringing a new quantization mechanism to life. A separate article on [Product
+
+Quantization](/documentation/quantization/#product-quantization) will describe that feature in more
+
+detail. In a nutshell, you can **reduce the memory requirements by up to 64 times**!
+
+
+
+### Optional named vectors
+
+
+
+Qdrant has been supporting multiple named vectors per point for quite a long time. Those may have
+
+utterly different dimensionality and distance functions used to calculate similarity. Having multiple
+
+embeddings per item is an essential real-world scenario. For example, you might be encoding textual
+
+and visual data using different models. Or you might be experimenting with different models but
+
+don't want to make your payloads redundant by keeping them in separate collections.
+
+
+
+![Optional vectors](/articles_data/qdrant-1.2.x/optional-vectors.png)
+
+
+
+However, up to the previous version, we requested that you provide all the vectors for each point. There
+
+have been many requests to allow nullable vectors, as sometimes you cannot generate an embedding or
+
+simply don't want to for reasons we don't need to know.
+
+
+
+### Grouping requests
+
+
+
+Embeddings are great for capturing the semantics of the documents, but we rarely encode larger pieces
+
+of data into a single vector. Having a summary of a book may sound attractive, but in reality, we
+
+divide it into paragraphs or some different parts to have higher granularity. That pays off when we
+
+perform the semantic search, as we can return the relevant pieces only. That's also how modern tools
+
+like Langchain process the data. The typical way is to encode some smaller parts of the document and
+
+keep the document id as a payload attribute.
+
+
+
+![Query without grouping request](/articles_data/qdrant-1.2.x/without-grouping-request.png)
+
+
+
+There are cases where we want to find relevant parts, but only up to a specific number of results
+
+per document (for example, only a single one). Up till now, we had to implement such a mechanism
+
+on the client side and send several calls to the Qdrant engine. But that's no longer the case.
+
+Qdrant 1.2 provides a mechanism for [grouping requests](/documentation/search/#grouping-api), which
+
+can handle that server-side, within a single call to the database. This mechanism is similar to the
+
+SQL `GROUP BY` clause.
+
+
+
+![Query with grouping request](/articles_data/qdrant-1.2.x/with-grouping-request.png)
+
+
+
+You are not limited to a single result per document, and you can select how many entries will be
+
+returned.
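+
+For illustration, a minimal sketch of such a grouping request with the Python client might look like this (collection and field names are assumptions):
+
+```python
+client.search_groups(
+    collection_name=""chunks"",
+    query_vector=[0.2, 0.1, 0.9, 0.7],
+    group_by=""document_id"",  # payload field shared by all chunks of the same document
+    limit=3,                  # maximum number of groups to return
+    group_size=1,             # maximum number of points per group
+)
+```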
+
+
+
+### Nested filters
+
+
+
+Unlike some other vector databases, Qdrant accepts any arbitrary JSON payload, including
+
+arrays, objects, and arrays of objects. You can also [filter the search results using nested
+
+keys](/documentation/filtering/#nested-key), even through arrays (using the `[]` syntax).
+
+
+
+Before Qdrant 1.2 it was impossible to express some more complex conditions for the
+
+nested structures. For example, let's assume we have the following payload:
+
+
+
+```json
+
+{
+
+ ""country"": ""Japan"",
+
+ ""cities"": [
+
+ {
+
+ ""name"": ""Tokyo"",
+
+ ""population"": 9.3,
+
+ ""area"": 2194
+
+ },
+
+ {
+
+ ""name"": ""Osaka"",
+
+ ""population"": 2.7,
+
+ ""area"": 223
+
+ },
+
+ {
+
+ ""name"": ""Kyoto"",
+
+ ""population"": 1.5,
+
+ ""area"": 827.8
+
+ }
+
+ ]
+
+}
+
+```
+
+
+
+We want to filter the results to include only the countries with a city that has over 2 million citizens
+
+and an area bigger than 500 square kilometers but no more than 1000. There is no such city in
+
+Japan, looking at our data, but if we wrote the following filter, it would be returned:
+
+
+
+```json
+
+{
+
+ ""filter"": {
+
+ ""must"": [
+
+ {
+
+ ""key"": ""country.cities[].population"",
+
+ ""range"": {
+
+ ""gte"": 2
+
+ }
+
+ },
+
+ {
+
+ ""key"": ""country.cities[].area"",
+
+ ""range"": {
+
+ ""gt"": 500,
+
+ ""lte"": 1000
+
+ }
+
+ }
+
+ ]
+
+ },
+
+ ""limit"": 3
+
+}
+
+```
+
+
+
+Japan would be returned because Tokyo and Osaka match the first criterion, while Kyoto fulfills
+
+the second. But that's not what we wanted to achieve. That's the motivation behind introducing
+
+a new type of nested filter.
+
+
+
+```json
+
+{
+
+ ""filter"": {
+
+ ""must"": [
+
+ {
+
+ ""nested"": {
+
+ ""key"": ""country.cities"",
+
+ ""filter"": {
+
+ ""must"": [
+
+ {
+
+ ""key"": ""population"",
+
+ ""range"": {
+
+ ""gte"": 2
+
+ }
+
+ },
+
+ {
+
+ ""key"": ""area"",
+
+ ""range"": {
+
+ ""gt"": 500,
+
+ ""lte"": 1000
+
+ }
+
+ }
+
+ ]
+
+ }
+
+ }
+
+ }
+
+ ]
+
+ },
+
+ ""limit"": 3
+
+}
+
+```
+
+
+
+The syntax is consistent with all the other supported filters and enables new possibilities. In
+
+our case, it allows us to express the joined condition on a nested structure and make the results
+
+list empty but correct.
+
+
+
+## Important changes
+
+
+
+The latest release focuses not only on the new features but also introduces some changes making
+
+Qdrant even more reliable.
+
+
+
+### Recovery mode
+
+
+
+There has been an issue in memory-constrained environments, such as cloud, happening when users were
+
+pushing massive amounts of data into the service using `wait=false`. This data influx resulted in an
+
+overreaching of disk or RAM limits before the Write-Ahead Logging (WAL) was fully applied. This
+
+situation was causing Qdrant to attempt a restart and reapplication of WAL, failing recurrently due
+
+to the same memory constraints and pushing the service into a frustrating crash loop with many
+
+Out-of-Memory errors.
+
+
+
+Qdrant 1.2 enters recovery mode, if enabled, when it detects a failure on startup.
+
+That makes the service halt the loading of collection data and commence operations in a partial state.
+
+This state allows for removing collections but doesn't support search or update functions.
+
+**Recovery mode [has to be enabled by user](/documentation/administration/#recovery-mode).**
+
+
+
+### Appendable mmap
+
+
+
+For a long time, segments using mmap storage were `non-appendable` and could only be constructed by
+
+the optimizer. Dynamically adding vectors to the mmap file is fairly complicated and thus not
+
+implemented in Qdrant, but we did our best to implement it in the recent release. If you want
+
+to read more about segments, check out our docs on [vector storage](/documentation/storage/#vector-storage).
+
+
+
+## Security
+
+
+
+There are two major changes in terms of [security](/documentation/security/):
+
+
+
+1. **API-key support** - basic authentication with a static API key to prevent unwanted access. Previously
+
+ API keys were only supported in [Qdrant Cloud](https://cloud.qdrant.io/).
+
+2. **TLS support** - to use encrypted connections and prevent sniffing/MitM attacks.
+
+
+
+## Release notes
+
+
+
+As usual, [our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.2.0) describe all the changes
+
+introduced in the latest version.
+",articles/qdrant-1.2.x.md
+"---
+
+title: ""Qdrant under the hood: io_uring""
+
+short_description: ""The Linux io_uring API offers great performance in certain cases. Here's how Qdrant uses it!""
+
+description: ""Slow disk decelerating your Qdrant deployment? Get on top of IO overhead with this one trick!""
+
+social_preview_image: /articles_data/io_uring/social_preview.png
+
+small_preview_image: /articles_data/io_uring/io_uring-icon.svg
+
+preview_dir: /articles_data/io_uring/preview
+
+weight: 3
+
+author: Andre Bogus
+
+author_link: https://llogiq.github.io
+
+date: 2023-06-21T09:45:00+02:00
+
+draft: false
+
+keywords:
+
+ - vector search
+
+ - linux
+
+ - optimization
+
+aliases: [ /articles/io-uring/ ]
+
+---
+
+
+
+With Qdrant [version 1.3.0](https://github.com/qdrant/qdrant/releases/tag/v1.3.0) we
+
+introduce the alternative io\_uring based *async uring* storage backend on
+
+Linux-based systems. Since its introduction, io\_uring has been known to improve
+
+async throughput wherever the OS syscall overhead gets too high, which tends to
+
+occur in situations where software becomes *IO bound* (that is, mostly waiting
+
+on disk).
+
+
+
+## Input+Output
+
+
+
+Around the mid-90s, the internet took off. The first servers used a process-
+
+per-request setup, which was good for serving hundreds if not thousands of
+
+concurrent requests. The POSIX Input + Output (IO) was modeled in a strictly
+
+synchronous way. The overhead of starting a new process for each request made
+
+this model unsustainable. So servers started forgoing process separation, opting
+
+for the thread-per-request model. But even that ran into limitations.
+
+
+
+I distinctly remember when someone asked the question whether a server could
+
+serve 10k concurrent connections, which at the time exhausted the memory of
+
+most systems (because every thread had to have its own stack and some other
+
+metadata, which quickly filled up available memory). As a result, the
+
+synchronous IO was replaced by asynchronous IO during the 2.5 kernel update,
+
+either via `select` or `epoll` (the latter being Linux-only, but a small bit
+
+more efficient, so most servers of the time used it).
+
+
+
+However, even this crude form of asynchronous IO carries the overhead of at
+
+least one system call per operation. Each system call incurs a context switch,
+
+and while this operation is itself not that slow, the switch disturbs the
+
+caches. Today's CPUs are much faster than memory, but if their caches start to
+
+miss data, the memory accesses required led to longer and longer wait times for
+
+the CPU.
+
+
+
+### Memory-mapped IO
+
+
+
+Another way of dealing with file IO (which unlike network IO doesn't have a hard
+
+time requirement) is to map parts of files into memory - the system fakes having
+
+that chunk of the file in memory, so when you read from a location there, the
+
+kernel interrupts your process to load the needed data from disk, and resumes
+
+your process once done, whereas writing to the memory will also notify the
+
+kernel. Also the kernel can prefetch data while the program is running, thus
+
+reducing the likelihood of interrupts.
+
+
+
+Thus there is still some overhead, but (especially in asynchronous
+
+applications) it's far less than with `epoll`. The reason this API is rarely
+
+used in web servers is that these usually have a large variety of files to
+
+access, unlike a database, which can map its own backing store into memory
+
+once.
+
+
+
+### Combating the Poll-ution
+
+
+
+There were multiple experiments to improve matters, some even going so far as
+
+moving a HTTP server into the kernel, which of course brought its own share of
+
+problems. Others like Intel added their own APIs that ignored the kernel and
+
+worked directly on the hardware.
+
+
+
+Finally, Jens Axboe took matters into his own hands and proposed a ring buffer
+
+based interface called *io\_uring*. The buffers are not directly for data, but
+
+for operations. User processes can setup a Submission Queue (SQ) and a
+
+Completion Queue (CQ), both of which are shared between the process and the
+
+kernel, so there's no copying overhead.
+
+
+
+![io_uring diagram](/articles_data/io_uring/io-uring.png)
+
+
+
+Apart from avoiding copying overhead, the queue-based architecture lends
+
+itself to multithreading as item insertion/extraction can be made lockless,
+
+and once the queues are set up, there is no further syscall that would stop
+
+any user thread.
+
+
+
+Servers that use this can easily get to over 100k concurrent requests. Today
+
+Linux allows asynchronous IO via io\_uring for network, disk and accessing
+
+other ports, e.g. for printing or recording video.
+
+
+
+## And what about Qdrant?
+
+
+
+Qdrant can store everything in memory, but not all data sets may fit, which can
+
+require storing on disk. Before io\_uring, Qdrant used mmap to do its IO. This
+
+led to some modest overhead in case of disk latency. The kernel may
+
+stop a user thread trying to access a mapped region, which incurs some context
+
+switching overhead plus the wait time until the disk IO is finished. Ultimately,
+
+this works very well with the asynchronous nature of Qdrant's core.
+
+
+
+One of the great optimizations Qdrant offers is quantization (either
+
+[scalar](/articles/scalar-quantization/) or
+
+[product](/articles/product-quantization/)-based).
+
+However unless the collection resides fully in memory, this optimization
+
+method generates significant disk IO, so it is a prime candidate for possible
+
+improvements.
+
+
+
+If you run Qdrant on Linux, you can enable io\_uring with the following in your
+
+configuration:
+
+
+
+```yaml
+
+# within the storage config
+
+storage:
+
+ # enable the async scorer which uses io_uring
+
+ async_scorer: true
+
+```
+
+
+
+You can return to the mmap based backend by either deleting the `async_scorer`
+
+entry or setting the value to `false`.
+
+
+
+## Benchmarks
+
+
+
+To run the benchmark, use a test instance of Qdrant. If necessary spin up a
+
+docker container and load a snapshot of the collection you want to benchmark
+
+with. You can copy and edit our [benchmark script](/articles_data/io_uring/rescore-benchmark.sh)
+
+to run the benchmark. Run the script once with `storage.async_scorer` enabled and
+
+once without. You can measure IO usage with `iostat` from
+
+another console.
+
+
+
+For our benchmark, we chose the laion dataset picking 5 million 768d entries.
+
+We enabled scalar quantization + HNSW with m=16 and ef_construct=512.
+
+We do the quantization in RAM, HNSW in RAM but keep the original vectors on
+
+disk (which was a network drive rented from Hetzner for the benchmark).
+
+
+
+If you want to reproduce the benchmarks, you can get snapshots containing the
+
+datasets:
+
+
+
+* [mmap only](https://storage.googleapis.com/common-datasets-snapshots/laion-768-6m-mmap.snapshot)
+
+* [with scalar quantization](https://storage.googleapis.com/common-datasets-snapshots/laion-768-6m-sq-m16-mmap.shapshot)
+
+
+
+Running the benchmark, we get the following IOPS, CPU loads and wall clock times:
+
+
+
+| | oversampling | parallel | ~max IOPS | CPU% (of 4 cores) | time (s) (avg of 3) |
+
+|----------|--------------|----------|-----------|-------------------|---------------------|
+
+| io_uring | 1 | 4 | 4000 | 200 | 12 |
+
+| mmap | 1 | 4 | 2000 | 93 | 43 |
+
+| io_uring | 1 | 8 | 4000 | 200 | 12 |
+
+| mmap | 1 | 8 | 2000 | 90 | 43 |
+
+| io_uring | 4 | 8 | 7000 | 100 | 30 |
+
+| mmap | 4 | 8 | 2300 | 50 | 145 |
+
+
+
+
+
+Note that in this case, the IO operations have relatively high latency due to
+
+using a network disk. Thus, the kernel takes more time to fulfil the mmap
+
+requests, and application threads need to wait, which is reflected in the CPU
+
+percentage. On the other hand, with the io\_uring backend, the application
+
+threads can better use available cores for the rescore operation without any
+
+IO-induced delays.
+
+
+
+Oversampling is a new feature to improve accuracy at the cost of some
+
+performance. It allows setting a factor, which is multiplied with the `limit`
+
+while doing the search. The results are then re-scored using the original vector
+
+and only then the top results up to the limit are selected.
+
+
+
+## Discussion
+
+
+
+Looking back, disk IO used to be very serialized; re-positioning read-write
+
+heads on a moving platter was a slow and messy business. So the system overhead
+
+didn't matter as much, but nowadays with SSDs that can often even parallelize
+
+operations while offering near-perfect random access, the overhead starts to
+
+become quite visible. While memory-mapped IO gives us a fair deal in terms of
+
+ease of use and performance, we can improve on the latter in exchange for
+
+some modest complexity increase.
+
+
+
+io\_uring is still quite young, having only been introduced in 2019 with kernel
+
+5.1, so some administrators will be wary of introducing it. Of course, as with
+
+performance, the right answer is usually ""it depends"", so please review your
+
+personal risk profile and act accordingly.
+
+
+
+## Best Practices
+
+
+
+If your on-disk collection's query performance is of sufficiently high
+
+priority to you, enable the io\_uring-based async\_scorer to greatly reduce
+
+operating system overhead from disk IO. On the other hand, if your
+
+collections are in memory only, activating it will be ineffective. Also note
+
+that many queries are not IO bound, so the overhead may or may not become
+
+measurable in your workload. Finally, on-device disks typically carry lower
+
+latency than network drives, which may also affect mmap overhead.
+
+
+
+Therefore before you roll out io\_uring, perform the above or a similar
+
+benchmark with both mmap and io\_uring, and measure both wall time and IOPS.
+
+Benchmarks are always highly use-case dependent, so your mileage may vary.
+
+Still, doing that benchmark once is a small price for the possible performance
+
+wins. Also please
+
+[tell us](https://discord.com/channels/907569970500743200/907569971079569410)
+
+about your benchmark results!
+",articles/io_uring.md
+"---
+
+title: ""Hybrid Search Revamped - Building with Qdrant's Query API""
+
+short_description: ""Merging different search methods to improve the search quality was never easier""
+
+description: ""Our new Query API allows you to build a hybrid search system that uses different search methods to improve search quality & experience. Learn more here.""
+
+preview_dir: /articles_data/hybrid-search/preview
+
+social_preview_image: /articles_data/hybrid-search/social-preview.png
+
+weight: -150
+
+author: Kacper Łukawski
+
+author_link: https://kacperlukawski.com
+
+date: 2024-07-25T00:00:00.000Z
+
+---
+
+
+
+It's been over a year since we published the original article on how to build a hybrid
+
+search system with Qdrant. The idea was straightforward: combine the results from different search methods to improve
+
+retrieval quality. Back in 2023, you still needed to use an additional service to bring lexical search
+
+capabilities and combine all the intermediate results. Things have changed since then. Once we introduced support for
+
+sparse vectors, [the additional search service became obsolete](/articles/sparse-vectors/), but you were still
+
+required to combine the results from different methods on your end.
+
+
+
+**Qdrant 1.10 introduces a new Query API that lets you build a search system by combining different search methods
+
+to improve retrieval quality**. Everything is now done on the server side, and you can focus on building the best search
+
+experience for your users. In this article, we will show you how to utilize the new [Query
+
+API](/documentation/concepts/search/#query-api) to build a hybrid search system.
+
+
+
+## Introducing the new Query API
+
+
+
+At Qdrant, we believe that vector search capabilities go well beyond a simple search for nearest neighbors.
+
+That's why we provided separate methods for different search use cases, such as `search`, `recommend`, or `discover`.
+
+With the latest release, we are happy to introduce the new Query API, which combines all of these methods into a single
+
+endpoint and also supports creating nested multistage queries that can be used to build complex search pipelines.
+
+
+
+If you are an existing Qdrant user, you probably have a running search mechanism that you want to improve, whether sparse
+
+or dense. Any changes should be preceded by a proper evaluation of the system's effectiveness.
+
+
+
+## How effective is your search system?
+
+
+
+None of the experiments makes sense if you don't measure the quality. How else would you compare which method works
+
+better for your use case? The most common way of doing that is by using the standard metrics, such as `precision@k`,
+
+`MRR`, or `NDCG`. There are existing libraries, such as [ranx](https://amenra.github.io/ranx/), that can help you with
+
+that. We need to have the ground truth dataset to calculate any of these, but curating it is a separate task.
+
+
+
+```python
+
+from ranx import Qrels, Run, evaluate
+
+
+
+# Qrels, or query relevance judgments, keep the ground truth data
+
+qrels_dict = { ""q_1"": { ""d_12"": 5, ""d_25"": 3 },
+
+ ""q_2"": { ""d_11"": 6, ""d_22"": 1 } }
+
+
+
+# Runs are built from the search results
+
+run_dict = { ""q_1"": { ""d_12"": 0.9, ""d_23"": 0.8, ""d_25"": 0.7,
+
+ ""d_36"": 0.6, ""d_32"": 0.5, ""d_35"": 0.4 },
+
+ ""q_2"": { ""d_12"": 0.9, ""d_11"": 0.8, ""d_25"": 0.7,
+
+ ""d_36"": 0.6, ""d_22"": 0.5, ""d_35"": 0.4 } }
+
+
+
+# We need to create both objects, and then we can evaluate the run against the qrels
+
+qrels = Qrels(qrels_dict)
+
+run = Run(run_dict)
+
+
+
+# Calculating the NDCG@5 metric is as simple as that
+
+evaluate(qrels, run, ""ndcg@5"")
+
+```
+
+
+
+## Available embedding options with Query API
+
+
+
+Support for multiple vectors per point is nothing new in Qdrant, but introducing the Query API makes it even
+
+more powerful. The 1.10 release adds support for multivectors, allowing you to treat lists of embeddings
+
+as a single entity. There are many possible ways of utilizing this feature, and the most prominent one is the support
+
+for late interaction models, such as [ColBERT](https://qdrant.tech/documentation/fastembed/fastembed-colbert/). Instead of having a single embedding for each document or query, this
+
+family of models creates a separate one for each token of text. In the search process, the final score is calculated
+
+based on the interaction between the tokens of the query and the document. Contrary to cross-encoders, document
+
+embedding might be precomputed and stored in the database, which makes the search process much faster. If you are
+
+curious about the details, please check out [the article about ColBERT, written by our friends from Jina
+
+AI](https://jina.ai/news/what-is-colbert-and-late-interaction-and-why-they-matter-in-search/).
+
+
+
+![Late interaction](/articles_data/hybrid-search/late-interaction.png)
+
+
+
+Besides multivectors, you can use regular dense and sparse vectors, and experiment with smaller data types to reduce
+
+memory use. Named vectors can help you store different dimensionalities of the embeddings, which is useful if you
+
+use multiple models to represent your data, or want to utilize the Matryoshka embeddings.
+
+
+
+![Multiple vectors per point](/articles_data/hybrid-search/multiple-vectors.png)
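+
+
+
+For illustration, here is a sketch of how a collection holding all of these vector types could be created with the Python client. The vector names match the ones used in the queries later in this article, while the sizes and distance functions are just example values:
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(""http://localhost:6333"")
+
+client.create_collection(
+
+    collection_name=""my-collection"",
+
+    vectors_config={
+
+        # Matryoshka embeddings truncated to different dimensionalities
+
+        ""matryoshka-64dim"": models.VectorParams(size=64, distance=models.Distance.COSINE),
+
+        ""matryoshka-128dim"": models.VectorParams(size=128, distance=models.Distance.COSINE),
+
+        ""matryoshka-256dim"": models.VectorParams(size=256, distance=models.Distance.COSINE),
+
+        # The same dense model stored as uint8 and as float32
+
+        ""dense-uint8"": models.VectorParams(
+
+            size=1024,
+
+            distance=models.Distance.COSINE,
+
+            datatype=models.Datatype.UINT8,
+
+        ),
+
+        ""dense"": models.VectorParams(size=1024, distance=models.Distance.COSINE),
+
+        # A late interaction model produces a list of vectors per document
+
+        ""late-interaction"": models.VectorParams(
+
+            size=128,
+
+            distance=models.Distance.COSINE,
+
+            multivector_config=models.MultiVectorConfig(
+
+                comparator=models.MultiVectorComparator.MAX_SIM,
+
+            ),
+
+        ),
+
+    },
+
+    sparse_vectors_config={
+
+        ""sparse"": models.SparseVectorParams(),
+
+    },
+
+)
+
+```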
+
+
+
+There is no single way of building a hybrid search. The process of designing it is an exploratory exercise, where you
+
+need to test various setups and measure their effectiveness. Building a proper search experience is a
+
+complex task, and it's better to keep it data-driven rather than rely on intuition alone.
+
+
+
+## Fusion vs reranking
+
+
+
+We can distinguish two main approaches to building a hybrid search system: fusion and reranking. The former is about
+
+combining the results from different search methods, based solely on the scores returned by each method. That usually
+
+involves some normalization, as the scores returned by different methods might be in different ranges. After that, there
+
+is a formula that takes the relevancy measures and calculates the final score that we use later on to reorder the
+
+documents. Qdrant has built-in support for the Reciprocal Rank Fusion method, which is the de facto standard in the
+
+field.
+
+
+
+![Fusion](/articles_data/hybrid-search/fusion.png)
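+
+
+
+To build some intuition about what fusion does, here is a minimal, standalone sketch of Reciprocal Rank Fusion over two ranked lists of document ids. Qdrant performs this on the server side for you; the constant `k=60` below is the value commonly used in the literature:
+
+
+
+```python
+
+def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
+
+    # Each inner list contains document ids, best match first
+
+    scores: dict[str, float] = {}
+
+    for ranked_ids in rankings:
+
+        for rank, doc_id in enumerate(ranked_ids):
+
+            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
+
+    # Sort the document ids by their fused score, highest first
+
+    return sorted(scores, key=scores.get, reverse=True)
+
+dense_hits = [""doc-3"", ""doc-1"", ""doc-7""]
+
+sparse_hits = [""doc-1"", ""doc-9"", ""doc-3""]
+
+print(reciprocal_rank_fusion([dense_hits, sparse_hits]))
+
+# ['doc-1', 'doc-3', 'doc-9', 'doc-7']
+
+```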
+
+
+
+Reranking, on the other hand, is about taking the results from different search methods and reordering them based on
+
+some additional processing using the content of the documents, not just the scores. This processing may rely on an
+
+additional neural model, such as a cross-encoder, which would be too inefficient to run on the whole dataset.
+
+These methods are practically applicable only when used on a smaller subset of candidates returned by the faster search
+
+methods. Late interaction models, such as ColBERT, are way more efficient in this case, as they can be used to rerank
+
+the candidates without the need to access all the documents in the collection.
+
+
+
+![Reranking](/articles_data/hybrid-search/reranking.png)
+
+
+
+### Why not a linear combination?
+
+
+
+It's often proposed to use full-text and vector search scores to form a linear combination formula to rerank
+
+the results. So it goes like this:
+
+
+
+```final_score = 0.7 * vector_score + 0.3 * full_text_score```
+
+
+
+However, we didn't even consider such a setup. Why? Those scores don't make the problem linearly separable. We used
+
+the BM25 score along with the cosine vector similarity and used them as point coordinates in a 2-dimensional space. The
+
+chart shows how those points are distributed:
+
+
+
+![A distribution of both Qdrant and BM25 scores mapped into 2D space.](/articles_data/hybrid-search/linear-combination.png)
+
+
+
+*A distribution of both Qdrant and BM25 scores mapped into 2D space. It clearly shows relevant and non-relevant
+
+objects are not linearly separable in that space, so using a linear combination of both scores won't give us
+
+a proper hybrid search.*
+
+
+
+Both relevant and non-relevant items are mixed. **None of the linear formulas would be able to distinguish
+
+between them.** Thus, that's not the way to solve it.
+
+
+
+## Building a hybrid search system in Qdrant
+
+
+
+Ultimately, **any search mechanism might also be a reranking mechanism**. You can prefetch results with sparse vectors
+
+and then rerank them with the dense ones, or the other way around. Or, if you have Matryoshka embeddings, you can start
+
+with oversampling the candidates with the dense vectors of the lowest dimensionality and then gradually reduce the
+
+number of candidates by reranking them with the higher-dimensional embeddings. Nothing stops you from
+
+combining both fusion and reranking.
+
+
+
+Let's go a step further and build a hybrid search mechanism that combines the results from the
+
+Matryoshka embeddings, dense vectors, and sparse vectors and then reranks them with the late interaction model. In the
+
+meantime, we will introduce additional reranking and fusion steps.
+
+
+
+![Complex search pipeline](/articles_data/hybrid-search/complex-search-pipeline.png)
+
+
+
+Our search pipeline consists of two branches, each of them responsible for retrieving a subset of documents that
+
+we eventually want to rerank with the late interaction model. Let's connect to Qdrant first and then build the search
+
+pipeline.
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""http://localhost:6333"")
+
+```
+
+
+
+All the steps utilizing Matryoshka embeddings might be specified in the Query API as a nested structure:
+
+
+
+```python
+
+# The first branch of our search pipeline retrieves 25 documents
+
+# using the Matryoshka embeddings with multistep retrieval.
+
+matryoshka_prefetch = models.Prefetch(
+
+ prefetch=[
+
+ models.Prefetch(
+
+ prefetch=[
+
+ # The first prefetch operation retrieves 100 documents
+
+ # using the Matryoshka embeddings with the lowest
+
+ # dimensionality of 64.
+
+ models.Prefetch(
+
+ query=[0.456, -0.789, ..., 0.239],
+
+ using=""matryoshka-64dim"",
+
+ limit=100,
+
+ ),
+
+ ],
+
+ # Then, the retrieved documents are re-ranked using the
+
+ # Matryoshka embeddings with the dimensionality of 128.
+
+ query=[0.456, -0.789, ..., -0.789],
+
+ using=""matryoshka-128dim"",
+
+ limit=50,
+
+ )
+
+ ],
+
+ # Finally, the results are re-ranked using the Matryoshka
+
+ # embeddings with the dimensionality of 256.
+
+ query=[0.456, -0.789, ..., 0.123],
+
+ using=""matryoshka-256dim"",
+
+ limit=25,
+
+)
+
+```
+
+
+
+Similarly, we can build the second branch of our search pipeline, which retrieves the documents using the dense and
+
+sparse vectors and performs the fusion of them using the Reciprocal Rank Fusion method:
+
+
+
+```python
+
+# The second branch of our search pipeline also retrieves 25 documents,
+
+# but uses the dense and sparse vectors, with their results combined
+
+# using the Reciprocal Rank Fusion.
+
+sparse_dense_rrf_prefetch = models.Prefetch(
+
+ prefetch=[
+
+ models.Prefetch(
+
+ prefetch=[
+
+ # The first prefetch operation retrieves 100 documents
+
+ # using dense vectors using integer data type. Retrieval
+
+ # is faster, but quality is lower.
+
+ models.Prefetch(
+
+ query=[7, 63, ..., 92],
+
+ using=""dense-uint8"",
+
+ limit=100,
+
+ )
+
+ ],
+
+ # Integer-based embeddings are then re-ranked using the
+
+ # float-based embeddings. Here we just want to retrieve
+
+ # 25 documents.
+
+ query=[-1.234, 0.762, ..., 1.532],
+
+ using=""dense"",
+
+ limit=25,
+
+ ),
+
+ # Here we just add another 25 documents using the sparse
+
+ # vectors only.
+
+ models.Prefetch(
+
+ query=models.SparseVector(
+
+ indices=[125, 9325, 58214],
+
+ values=[-0.164, 0.229, 0.731],
+
+ ),
+
+ using=""sparse"",
+
+ limit=25,
+
+ ),
+
+ ],
+
+ # RRF is activated below, so there is no need to specify the
+
+ # query vector here, as fusion is done on the scores of the
+
+ # retrieved documents.
+
+ query=models.FusionQuery(
+
+ fusion=models.Fusion.RRF,
+
+ ),
+
+)
+
+```
+
+
+
+The second branch could have already been called hybrid, as it combines the results from the dense and sparse vectors
+
+with fusion. However, nothing stops us from building even more complex search pipelines.
+
+
+
+Here is how the target call to the Query API looks in Python:
+
+
+
+
+
+```python
+
+client.query_points(
+
+ ""my-collection"",
+
+ prefetch=[
+
+ matryoshka_prefetch,
+
+ sparse_dense_rrf_prefetch,
+
+ ],
+
+ # Finally rerank the results with the late interaction model. It only
+
+ # considers the documents retrieved by all the prefetch operations above.
+
+ # Return 10 final results.
+
+ query=[
+
+ [1.928, -0.654, ..., 0.213],
+
+ [-1.197, 0.583, ..., 1.901],
+
+ ...,
+
+ [0.112, -1.473, ..., 1.786],
+
+ ],
+
+ using=""late-interaction"",
+
+ with_payload=False,
+
+ limit=10,
+
+)
+
+```
+
+
+
+The options are endless: the new Query API gives you the flexibility to experiment with different setups. **You
+
+rarely need to build such a complex search pipeline**, but it's good to know that you can do that if needed.
+
+
+
+## Some anecdotal observations
+
+
+
+Neither of the algorithms performs best in all cases. In some cases, keyword-based search
+
+will be the winner and vice-versa. The following table shows some interesting examples we could find in the
+
+[WANDS](https://github.com/wayfair/WANDS) dataset during experimentation:
+
+
+
+| Query | BM25 Search | Vector Search |
+
+|---|---|---|
+
+| cybersport desk | desk ❌ | gaming desk ✅ |
+
+| plates for icecream | ""eat"" plates on wood wall décor ❌ | alicyn 8.5 '' melamine dessert plate ✅ |
+
+| kitchen table with a thick board | craft kitchen acacia wood cutting board ❌ | industrial solid wood dining table ✅ |
+
+| wooden bedside table | 30 '' bedside table lamp ❌ | portable bedside end table ✅ |
+
+
+
+And here are some examples where keyword-based search did better:
+
+
+
+| Query | BM25 Search | Vector Search |
+
+|---|---|---|
+
+| computer chair | vibrant computer task chair ✅ | office chair ❌ |
+
+| 64.2 inch console table | cervantez 64.2 '' console table ✅ | 69.5 '' console table ❌ |
+
+
+
+## Try the New Query API in Qdrant 1.10
+
+
+
+The new Query API introduced in Qdrant 1.10 is a game-changer for building hybrid search systems. You don't need any
+
+additional services to combine the results from different search methods, and you can even create more complex pipelines
+
+and serve them directly from Qdrant.
+
+
+
+Our webinar on *Building the Ultimate Hybrid Search* takes you through the process of building a hybrid search system
+
+with Qdrant Query API. If you missed it, you can [watch the recording](https://www.youtube.com/watch?v=LAZOxqzceEU), or
+
+[check the notebooks](https://github.com/qdrant/workshop-ultimate-hybrid-search).
+
+
+
+
+
+
+
+If you have any questions or need help with building your hybrid search system, don't hesitate to reach out to us on
+
+[Discord](https://qdrant.to/discord).
+",articles/hybrid-search.md
+"---
+
+title: ""Neural Search 101: A Complete Guide and Step-by-Step Tutorial""
+
+short_description: Step-by-step guide on how to build a neural search service.
+
+description: Discover the power of neural search. Learn what neural search is and follow our tutorial to build a neural search service using BERT, Qdrant, and FastAPI.
+
+# external_link: https://blog.qdrant.tech/neural-search-tutorial-3f034ab13adc
+
+social_preview_image: /articles_data/neural-search-tutorial/social_preview.jpg
+
+preview_dir: /articles_data/neural-search-tutorial/preview
+
+small_preview_image: /articles_data/neural-search-tutorial/tutorial.svg
+
+weight: 50
+
+author: Andrey Vasnetsov
+
+author_link: https://blog.vasnetsov.com/
+
+date: 2021-06-10T10:18:00.000Z
+
+# aliases: [ /articles/neural-search-tutorial/ ]
+
+---
+
+# Neural Search 101: A Comprehensive Guide and Step-by-Step Tutorial
+
+
+
+Information retrieval technology is one of the main technologies that enabled the modern Internet to exist.
+
+These days, search technology is at the heart of a variety of applications, from web-page search to product recommendations.
+
+For many years, this technology didn't change much, until neural networks came into play.
+
+
+
+In this guide we are going to find answers to these questions:
+
+
+
+* What is the difference between regular and neural search?
+
+* What neural networks could be used for search?
+
+* In what tasks is neural network search useful?
+
+* How to build and deploy your own neural search service step-by-step?
+
+
+
+## What is neural search?
+
+
+
+A regular full-text search, such as Google's, consists of searching for keywords inside a document.
+
+For this reason, the algorithm can not take into account the real meaning of the query and documents.
+
+Many documents that might be of interest to the user are not found because they use different wording.
+
+
+
+Neural search tries to solve exactly this problem - it attempts to enable searches not by keywords but by meaning.
+
+To achieve this, the search works in 2 steps.
+
+In the first step, a specially trained neural network encoder converts the query and the searched objects into a vector representation called embeddings.
+
+The encoder must be trained so that similar objects, such as texts with the same meaning or similar-looking pictures, get a close vector representation.
+
+
+
+![Encoders and embedding space](https://gist.githubusercontent.com/generall/c229cc94be8c15095286b0c55a3f19d7/raw/e52e3f1a320cd985ebc96f48955d7f355de8876c/encoders.png)
+
+
+
+Having this vector representation, it is easy to understand what the second step should be.
+
+To find documents similar to the query you now just need to find the nearest vectors.
+
+The most convenient way to determine the distance between two vectors is to calculate the cosine distance.
+
+The usual Euclidean distance can also be used, but it is not so efficient due to [the curse of dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality).
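+
+
+
+As a tiny illustration of this second step, here is how the cosine similarity between a query vector and a couple of document vectors could be computed with NumPy (the vectors here are made up):
+
+
+
+```python
+
+import numpy as np
+
+query = np.array([0.2, 0.1, 0.9])
+
+documents = np.array([
+
+    [0.3, 0.2, 0.8],   # close in meaning to the query
+
+    [0.9, 0.1, 0.05],  # unrelated document
+
+])
+
+# Cosine similarity is the dot product of L2-normalized vectors
+
+query = query / np.linalg.norm(query)
+
+documents = documents / np.linalg.norm(documents, axis=1, keepdims=True)
+
+similarities = documents @ query
+
+print(similarities)  # the first document gets the higher score
+
+```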
+
+
+
+## Which model could be used?
+
+
+
+It is ideal to use a model specially trained to determine the closeness of meanings.
+
+For example, models trained on Semantic Textual Similarity (STS) datasets.
+
+Current state-of-the-art models can be found on this [leaderboard](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts-benchmark?p=roberta-a-robustly-optimized-bert-pretraining).
+
+
+
+However, not only specially trained models can be used.
+
+If the model is trained on a large enough dataset, its internal features can work as embeddings too.
+
+So, for instance, you can take any model pre-trained on ImageNet and cut off the last layer from it.
+
+As a rule, the penultimate layer of a neural network forms the highest-level features, which, however, do not correspond to specific classes.
+
+The output of this layer can be used as an embedding.
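+
+
+
+For example, here is a sketch of how a torchvision ResNet-50 pre-trained on ImageNet could be turned into an image encoder by replacing its classification head with an identity layer (any similar backbone would work):
+
+
+
+```python
+
+import torch
+
+import torchvision
+
+# Load a model pre-trained on ImageNet and drop its classification head,
+
+# so the penultimate-layer features become our embeddings
+
+model = torchvision.models.resnet50(weights=torchvision.models.ResNet50_Weights.DEFAULT)
+
+model.fc = torch.nn.Identity()
+
+model.eval()
+
+with torch.no_grad():
+
+    image_batch = torch.rand(1, 3, 224, 224)  # a stand-in for a real, preprocessed image
+
+    embedding = model(image_batch)            # shape: (1, 2048)
+
+```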
+
+
+
+## What tasks is neural search good for?
+
+
+
+Neural search has the greatest advantage in areas where the query cannot be formulated precisely.
+
+Querying a table in an SQL database is not the best place for neural search.
+
+
+
+On the contrary, if the query itself is fuzzy, or it cannot be formulated as a set of conditions - neural search can help you.
+
+If the search query is a picture, sound file or long text, neural network search is almost the only option.
+
+
+
+If you want to build a recommendation system, the neural approach can also be useful.
+
+The user's actions can be encoded in vector space in the same way as a picture or text.
+
+And having those vectors, it is possible to find semantically similar users and determine the next probable user actions.
+
+
+
+## Step-by-step neural search tutorial using Qdrant
+
+
+
+With all that said, let's make our neural network search.
+
+As an example, I decided to make a search for startups by their description.
+
+In this demo, we will see the cases when text search works better and the cases when neural network search works better.
+
+
+
+
+
+I will use data from [startups-list.com](https://www.startups-list.com/).
+
+Each record contains the name, a paragraph describing the company, the location and a picture.
+
+Raw parsed data can be found at [this link](https://storage.googleapis.com/generall-shared-data/startups_demo.json).
+
+
+
+### Step 1: Prepare data for neural search
+
+
+
+To be able to search for our descriptions in vector space, we must get vectors first.
+
+We need to encode the descriptions into a vector representation.
+
+As the descriptions are textual data, we can use a pre-trained language model.
+
+As mentioned above, for the task of text search there is a whole set of pre-trained models specifically tuned for semantic similarity.
+
+
+
+One of the easiest libraries to work with pre-trained language models, in my opinion, is the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) by UKPLab.
+
+It provides a way to conveniently download and use many pre-trained models, mostly based on transformer architecture.
+
+Transformers is not the only architecture suitable for neural search, but for our task, it is quite enough.
+
+
+
+We will use a model called `all-MiniLM-L6-v2`.
+
+This is an all-round model tuned for many use cases, trained on a large and diverse dataset of over 1 billion training pairs.
+
+It is optimized for low memory consumption and fast inference.
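+
+
+
+Here is a condensed sketch of what that preparation step looks like. The `description` field name is an assumption about the raw JSON; check the actual structure of the file when you run it:
+
+
+
+```python
+
+import json
+
+import numpy as np
+
+from sentence_transformers import SentenceTransformer
+
+model = SentenceTransformer('all-MiniLM-L6-v2')
+
+# The raw file contains one JSON object per line
+
+with open('startups_demo.json') as fd:
+
+    startups = [json.loads(line) for line in fd]
+
+# Encode every description into a 384-dimensional vector
+
+vectors = model.encode(
+
+    [startup['description'] for startup in startups],
+
+    show_progress_bar=True,
+
+)
+
+np.save('startup_vectors.npy', vectors, allow_pickle=False)
+
+```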
+
+
+
+The complete code for data preparation with detailed comments can be found and run in [Colab Notebook](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing).
+
+
+
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing)
+
+
+
+### Step 2: Incorporate a Vector search engine
+
+
+
+Now as we have a vector representation for all our records, we need to store them somewhere.
+
+In addition to storing vectors, we may also need to add or delete them, and save additional information alongside them.
+
+And most importantly, we need a way to search for the nearest vectors.
+
+
+
+The vector search engine can take care of all these tasks.
+
+It provides a convenient API for searching and managing vectors.
+
+In our tutorial, we will use the [Qdrant](https://github.com/qdrant/qdrant) vector search engine.
+
+It not only supports all necessary operations with vectors but also allows you to store additional payload along with vectors and use it to perform filtering of the search result.
+
+Qdrant has a client for Python and also defines the API schema if you need to use it from other languages.
+
+
+
+The easiest way to use Qdrant is to run a pre-built image.
+
+So make sure you have Docker installed on your system.
+
+
+
+To start Qdrant, use the instructions on its [homepage](https://github.com/qdrant/qdrant).
+
+
+
+Download image from [DockerHub](https://hub.docker.com/r/qdrant/qdrant):
+
+
+
+```bash
+
+docker pull qdrant/qdrant
+
+```
+
+
+
+And run the service inside the docker:
+
+
+
+```bash
+
+docker run -p 6333:6333 \
+
+ -v $(pwd)/qdrant_storage:/qdrant/storage \
+
+ qdrant/qdrant
+
+```
+
+You should see output like this:
+
+
+
+```text
+
+...
+
+[2021-02-05T00:08:51Z INFO actix_server::builder] Starting 12 workers
+
+[2021-02-05T00:08:51Z INFO actix_server::builder] Starting ""actix-web-service-0.0.0.0:6333"" service on 0.0.0.0:6333
+
+```
+
+
+
+This means that the service is successfully launched and listening on port 6333.
+
+To make sure, you can open [http://localhost:6333/](http://localhost:6333/) in your browser and check the Qdrant version info.
+
+
+
+All data uploaded to Qdrant is saved into the `./qdrant_storage` directory and will persist even if you recreate the container.
+
+
+
+### Step 3: Upload data to Qdrant
+
+
+
+Now once we have the vectors prepared and the search engine running, we can start uploading the data.
+
+To interact with Qdrant from Python, I recommend using the out-of-the-box client library.
+
+
+
+To install it, use the following command
+
+
+
+```bash
+
+pip install qdrant-client
+
+```
+
+
+
+At this point, we should have startup records in file `startups.json`, encoded vectors in file `startup_vectors.npy`, and running Qdrant on a local machine.
+
+Let's write a script to upload all startup data and vectors into the search engine.
+
+
+
+First, let's create a client object for Qdrant.
+
+
+
+```python
+
+# Import client library
+
+from qdrant_client import QdrantClient
+
+from qdrant_client.models import VectorParams, Distance
+
+
+
+qdrant_client = QdrantClient(host='localhost', port=6333)
+
+```
+
+
+
+Qdrant allows you to combine vectors of the same purpose into collections.
+
+Many independent vector collections can exist on one service at the same time.
+
+
+
+Let's create a new collection for our startup vectors.
+
+
+
+```python
+
+if not qdrant_client.collection_exists('startups'):
+
+ qdrant_client.create_collection(
+
+ collection_name='startups',
+
+ vectors_config=VectorParams(size=384, distance=Distance.COSINE),
+
+ )
+
+```
+
+
+
+The `size` parameter of `VectorParams` is very important.
+
+It tells the service the size of the vectors in that collection.
+
+All vectors in a collection must have the same size, otherwise, it is impossible to calculate the distance between them.
+
+`384` is the output dimensionality of the encoder we are using.
+
+
+
+The `distance` parameter allows specifying the function used to measure the distance between two points.
+
+
+
+The Qdrant client library defines a special function that allows you to load datasets into the service.
+
+However, since there may be too much data to fit into a single computer's memory, the function takes an iterator over the data as input.
+
+
+
+Let's create an iterator over the startup data and vectors.
+
+
+
+```python
+
+import numpy as np
+
+import json
+
+
+
+fd = open('./startups.json')
+
+
+
+# payload is now an iterator over startup data
+
+payload = map(json.loads, fd)
+
+
+
+# Here we load all vectors into memory, numpy array works as iterable for itself.
+
+# Other option would be to use Mmap, if we don't want to load all data into RAM
+
+vectors = np.load('./startup_vectors.npy')
+
+```
+
+
+
+And the final step - data uploading
+
+
+
+```python
+
+qdrant_client.upload_collection(
+
+ collection_name='startups',
+
+ vectors=vectors,
+
+ payload=payload,
+
+ ids=None, # Vector ids will be assigned automatically
+
+ batch_size=256 # How many vectors will be uploaded in a single request?
+
+)
+
+```
+
+
+
+Now we have vectors uploaded to the vector search engine.
+
+In the next step, we will learn how to actually search for the closest vectors.
+
+
+
+The full code for this step can be found [here](https://github.com/qdrant/qdrant_demo/blob/master/qdrant_demo/init_collection_startups.py).
+
+
+
+### Step 4: Make a search API
+
+
+
+Now that all the preparations are complete, let's start building a neural search class.
+
+
+
+First, install all the requirements:
+
+```bash
+
+pip install sentence-transformers numpy
+
+```
+
+
+
+In order to process incoming requests, the neural searcher will need two things:
+
+a model to convert the query into a vector, and a Qdrant client to perform the search queries.
+
+
+
+```python
+
+# File: neural_searcher.py
+
+
+
+from qdrant_client import QdrantClient
+
+from sentence_transformers import SentenceTransformer
+
+
+
+
+
+class NeuralSearcher:
+
+
+
+ def __init__(self, collection_name):
+
+ self.collection_name = collection_name
+
+ # Initialize encoder model
+
+ self.model = SentenceTransformer('all-MiniLM-L6-v2', device='cpu')
+
+ # initialize Qdrant client
+
+ self.qdrant_client = QdrantClient(host='localhost', port=6333)
+
+```
+
+
+
+The search function looks as simple as possible:
+
+
+
+```python
+
+ def search(self, text: str):
+
+ # Convert text query into vector
+
+ vector = self.model.encode(text).tolist()
+
+
+
+ # Use `vector` for search for closest vectors in the collection
+
+ search_result = self.qdrant_client.search(
+
+ collection_name=self.collection_name,
+
+ query_vector=vector,
+
+ query_filter=None, # We don't want any filters for now
+
+        limit=5  # the 5 closest results are enough
+
+ )
+
+ # `search_result` contains found vector ids with similarity scores along with the stored payload
+
+ # In this function we are interested in payload only
+
+ payloads = [hit.payload for hit in search_result]
+
+ return payloads
+
+```
+
+
+
+With Qdrant it is also feasible to add some conditions to the search.
+
+For example, if we wanted to search for startups in a certain city, the search query could look like this:
+
+
+
+```python
+
+from qdrant_client.models import Filter
+
+
+
+ ...
+
+
+
+ city_of_interest = ""Berlin""
+
+
+
+ # Define a filter for cities
+
+ city_filter = Filter(**{
+
+ ""must"": [{
+
+ ""key"": ""city"", # We store city information in a field of the same name
+
+ ""match"": { # This condition checks if payload field have requested value
+
+ ""keyword"": city_of_interest
+
+ }
+
+ }]
+
+ })
+
+
+
+ search_result = self.qdrant_client.search(
+
+ collection_name=self.collection_name,
+
+ query_vector=vector,
+
+ query_filter=city_filter,
+
+        limit=5
+
+ )
+
+ ...
+
+
+
+```
+
+
+
+We now have a class for making neural search queries. Let's wrap it up into a service.
+
+
+
+
+
+### Step 5: Deploy as a service
+
+
+
+To build the service we will use the FastAPI framework.
+
+It is super easy to use and requires minimal code writing.
+
+
+
+To install it, use the command
+
+
+
+```bash
+
+pip install fastapi uvicorn
+
+```
+
+
+
+Our service will have only one API endpoint and will look like this:
+
+
+
+```python
+
+# File: service.py
+
+
+
+from fastapi import FastAPI
+
+
+
+# That is the file where NeuralSearcher is stored
+
+from neural_searcher import NeuralSearcher
+
+
+
+app = FastAPI()
+
+
+
+# Create an instance of the neural searcher
+
+neural_searcher = NeuralSearcher(collection_name='startups')
+
+
+
+@app.get(""/api/search"")
+
+def search_startup(q: str):
+
+ return {
+
+ ""result"": neural_searcher.search(text=q)
+
+ }
+
+
+
+
+
+if __name__ == ""__main__"":
+
+ import uvicorn
+
+ uvicorn.run(app, host=""0.0.0.0"", port=8000)
+
+
+
+```
+
+
+
+Now, if you run the service with
+
+
+
+```bash
+
+python service.py
+
+```
+
+
+
+and open your browser at [http://localhost:8000/docs](http://localhost:8000/docs), you should see a debug interface for your service.
+
+
+
+![FastAPI Swagger interface](https://gist.githubusercontent.com/generall/c229cc94be8c15095286b0c55a3f19d7/raw/d866e37a60036ebe65508bd736faff817a5d27e9/fastapi_neural_search.png)
+
+
+
+Feel free to play around with it, make queries and check out the results.
+
+This concludes the tutorial.
+
+
+
+
+
+### Experience Neural Search With Qdrant’s Free Demo
+
+Excited to see neural search in action? Take the next step and book a [free demo](https://qdrant.to/semantic-search-demo) with Qdrant! Experience firsthand how this cutting-edge technology can transform your search capabilities.
+
+
+
+Our demo will help you build intuition for the cases where neural search is useful. It contains a switch that selects between neural and full-text search, so you can turn neural search on and off and compare the results with a regular full-text search.
+
+Try to use a startup description to find similar ones.
+
+
+
+Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, and publish other examples of neural networks and neural search applications.
+",articles/neural-search-tutorial.md
+"---
+
+title: Serverless Semantic Search
+
+short_description: ""Need to setup a server to offer semantic search? Think again!""
+
+description: ""Create a serverless semantic search engine using nothing but Qdrant and free cloud services.""
+
+social_preview_image: /articles_data/serverless/social_preview.png
+
+small_preview_image: /articles_data/serverless/icon.svg
+
+preview_dir: /articles_data/serverless/preview
+
+weight: 1
+
+author: Andre Bogus
+
+author_link: https://llogiq.github.io
+
+date: 2023-07-12T10:00:00+01:00
+
+draft: false
+
+keywords: rust, serverless, lambda, semantic, search
+
+---
+
+
+
+Do you want to insert a semantic search function into your website or online app? Now you can do so - without spending any money! In this example, you will learn how to create a free prototype search engine for your own non-commercial purposes.
+
+
+
+You may find all of the assets for this tutorial on [GitHub](https://github.com/qdrant/examples/tree/master/lambda-search).
+
+
+
+## Ingredients
+
+
+
+* A [Rust](https://rust-lang.org) toolchain
+
+* [cargo lambda](https://cargo-lambda.info) (install via package manager, [download](https://github.com/cargo-lambda/cargo-lambda/releases) binary or `cargo install cargo-lambda`)
+
+* The [AWS CLI](https://aws.amazon.com/cli)
+
+* Qdrant instance ([free tier](https://cloud.qdrant.io) available)
+
+* An embedding provider service of your choice (see our [Embeddings docs](/documentation/embeddings/). You may be able to get credits from [AI Grant](https://aigrant.org), also Cohere has a [rate-limited non-commercial free tier](https://cohere.com/pricing))
+
+* AWS Lambda account (12-month free tier available)
+
+
+
+## What you're going to build
+
+
+
+You'll combine the embedding provider and the Qdrant instance into a neat semantic search, calling both services from a small Lambda function.
+
+
+
+![lambda integration diagram](/articles_data/serverless/lambda_integration.png)
+
+
+
+Now let's look at how to work with each ingredient before connecting them.
+
+
+
+## Rust and cargo-lambda
+
+
+
+You want your function to be quick, lean and safe, so using Rust is a no-brainer. To compile Rust code for use within Lambda functions, the `cargo-lambda` subcommand has been built. `cargo-lambda` can put your Rust code in a zip file that AWS Lambda can then deploy on a no-frills `provided.al2` runtime.
+
+
+
+To interface with AWS Lambda, you will need a Rust project with the following dependencies in your `Cargo.toml`:
+
+
+
+```toml
+
+[dependencies]
+
+tokio = { version = ""1"", features = [""macros""] }
+
+lambda_http = { version = ""0.8"", default-features = false, features = [""apigw_http""] }
+
+lambda_runtime = ""0.8""
+
+```
+
+
+
+This gives you an interface consisting of an entry point to start the Lambda runtime and a way to register your handler for HTTP calls. Put the following snippet into `src/helloworld.rs`:
+
+
+
+```rust
+
+use lambda_http::{run, service_fn, Body, Error, Request, RequestExt, Response};
+
+
+
+/// This is your callback function for responding to requests at your URL
+
+async fn function_handler(_req: Request) -> Result<Response<Body>, Error> {
+
+    Ok(Response::new(Body::from(""Hello, Lambda!"")))
+
+}
+
+
+
+#[tokio::main]
+
+async fn main() -> Result<(), Error> {
+
+ run(service_fn(function_handler)).await
+
+}
+
+```
+
+
+
+You can also use a closure to bind other arguments to your function handler (the `service_fn` call then becomes `service_fn(|req| function_handler(req, ...))`). Also if you want to extract parameters from the request, you can do so using the [Request](https://docs.rs/lambda_http/latest/lambda_http/type.Request.html) methods (e.g. `query_string_parameters` or `query_string_parameters_ref`).
+
+
+
+Add the following to your `Cargo.toml` to define the binary:
+
+
+
+```toml
+
+[[bin]]
+
+name = ""helloworld""
+
+path = ""src/helloworld.rs""
+
+```
+
+
+
+On the AWS side, you need to set up a Lambda function and an IAM role to use with it.
+
+
+
+![create lambda web page](/articles_data/serverless/create_lambda.png)
+
+
+
+Choose your function name and select ""Provide your own bootstrap on Amazon Linux 2"". As the architecture, use `arm64`. You will also activate a function URL. It is up to you whether you want to protect it via IAM or leave it open, but be aware that open endpoints can be accessed by anyone, potentially costing money if there is too much traffic.
+
+
+
+By default, this will also create a basic role. To look up the role, you can go into the Function overview:
+
+
+
+![function overview](/articles_data/serverless/lambda_overview.png)
+
+
+
+Click on the ""Info"" link near the ""▸ Function overview"" heading, and select the ""Permissions"" tab on the left.
+
+
+
+You will find the ""Role name"" directly under *Execution role*. Note it down for later.
+
+
+
+![function overview](/articles_data/serverless/lambda_role.png)
+
+
+
+To test that your ""Hello, Lambda"" service works, you can compile and upload the function:
+
+
+
+```bash
+
+$ export LAMBDA_FUNCTION_NAME=hello
+
+$ export LAMBDA_ROLE=
+
+$ export LAMBDA_REGION=us-east-1
+
+$ cargo lambda build --release --arm --bin helloworld --output-format zip
+
+ Downloaded libc v0.2.137
+
+# [..] output omitted for brevity
+
+ Finished release [optimized] target(s) in 1m 27s
+
+$ # Delete the old empty definition
+
+$ aws lambda delete-function-url-config --region $LAMBDA_REGION --function-name $LAMBDA_FUNCTION_NAME
+
+$ aws lambda delete-function --region $LAMBDA_REGION --function-name $LAMBDA_FUNCTION_NAME
+
+$ # Upload the function
+
+$ aws lambda create-function --function-name $LAMBDA_FUNCTION_NAME \
+
+ --handler bootstrap \
+
+ --architectures arm64 \
+
+ --zip-file fileb://./target/lambda/helloworld/bootstrap.zip \
+
+ --runtime provided.al2 \
+
+ --region $LAMBDA_REGION \
+
+ --role $LAMBDA_ROLE \
+
+ --tracing-config Mode=Active
+
+$ # Add the function URL
+
+$ aws lambda add-permission \
+
+ --function-name $LAMBDA_FUNCTION_NAME \
+
+ --action lambda:InvokeFunctionUrl \
+
+ --principal ""*"" \
+
+ --function-url-auth-type ""NONE"" \
+
+ --region $LAMBDA_REGION \
+
+ --statement-id url
+
+$ # Here for simplicity unauthenticated URL access. Beware!
+
+$ aws lambda create-function-url-config \
+
+ --function-name $LAMBDA_FUNCTION_NAME \
+
+ --region $LAMBDA_REGION \
+
+ --cors ""AllowOrigins=*,AllowMethods=*,AllowHeaders=*"" \
+
+ --auth-type NONE
+
+```
+
+
+
+Now you can go to your *Function Overview* and click on the Function URL. You should see something like this:
+
+
+
+```text
+
+Hello, Lambda!
+
+```
+
+
+
+Well done! You have set up a Lambda function in Rust. On to the next ingredient:
+
+
+
+## Embedding
+
+
+
+Most providers supply a simple https GET or POST interface you can use with an API key, which you have to supply in an authentication header. If you are using this for non-commercial purposes, the rate limited trial key from Cohere is just a few clicks away. Go to [their welcome page](https://dashboard.cohere.ai/welcome/register), register and you'll be able to get to the dashboard, which has an ""API keys"" menu entry which will bring you to the following page:
+
+![cohere dashboard](/articles_data/serverless/cohere-dashboard.png)
+
+
+
+From there you can click on the ⎘ symbol next to your API key to copy it to the clipboard. *Don't put your API key in the code!* Instead read it from an env variable you can set in the lambda environment. This avoids accidentally putting your key into a public repo. Now all you need to get embeddings is a bit of code. First you need to extend your dependencies with `reqwest` and also add `anyhow` for easier error handling:
+
+
+
+```toml
+
+anyhow = ""1.0""
+
+reqwest = { version = ""0.11.18"", default-features = false, features = [""json"", ""rustls-tls""] }
+
+serde = { version = ""1.0"", features = [""derive""] }
+
+```
+
+
+
+Now given the API key from above, you can make a call to get the embedding vectors:
+
+
+
+```rust
+
+use anyhow::Result;
+
+use serde::Deserialize;
+
+use reqwest::Client;
+
+
+
+#[derive(Deserialize)]
+
+struct CohereResponse { outputs: Vec<Vec<f32>> }
+
+
+
+pub async fn embed(client: &Client, text: &str, api_key: &str) -> Result<Vec<Vec<f32>>> {
+
+ let CohereResponse { outputs } = client
+
+ .post(""https://api.cohere.ai/embed"")
+
+ .header(""Authorization"", &format!(""Bearer {api_key}""))
+
+ .header(""Content-Type"", ""application/json"")
+
+ .header(""Cohere-Version"", ""2021-11-08"")
+
+ .body(format!(""{{\""text\"":[\""{text}\""],\""model\"":\""small\""}}""))
+
+ .send()
+
+ .await?
+
+ .json()
+
+ .await?;
+
+ Ok(outputs)
+
+}
+
+```
+
+
+
+Note that this may return multiple vectors if the text overflows the input dimensions.
+
+Cohere's `small` model has 1024 output dimensions.
+
+
+
+Other providers have similar interfaces. Consult our [Embeddings docs](/documentation/embeddings/) for further information. See how little code it took to get the embedding?
+
+
+
+While you're at it, it's a good idea to write a small test to check if embedding works and the vectors are of the expected size:
+
+
+
+```rust
+
+#[tokio::test]
+
+async fn check_embedding() {
+
+ // ignore this test if API_KEY isn't set
+
+    let Ok(api_key) = std::env::var(""API_KEY"") else { return; };
+
+    let client = reqwest::Client::new();
+
+    let embedding = crate::embed(&client, ""What is semantic search?"", &api_key).await.unwrap()[0].clone();
+
+ // Cohere's `small` model has 1024 output dimensions.
+
+ assert_eq!(1024, embedding.len());
+
+}
+
+```
+
+
+
+Run this while setting the `API_KEY` environment variable to check if the embedding works.
+
+
+
+## Qdrant search
+
+
+
+Now that you have embeddings, it's time to put them into your Qdrant. You could of course use `curl` or `python` to set up your collection and upload the points, but as you already have Rust including some code to obtain the embeddings, you can stay in Rust, adding `qdrant-client` to the mix.
+
+
+
+```rust
+
+use anyhow::Result;
+
+use qdrant_client::prelude::*;
+
+use qdrant_client::qdrant::{VectorsConfig, VectorParams};
+
+use qdrant_client::qdrant::vectors_config::Config;
+
+use std::collections::HashMap;
+
+
+
+async fn setup<'i>(
+
+ embed_client: &reqwest::Client,
+
+ embed_api_key: &str,
+
+ qdrant_url: &str,
+
+ api_key: Option<&str>,
+
+ collection_name: &str,
+
+    data: impl Iterator<Item = (&'i str, HashMap<String, Value>)>,
+
+) -> Result<()> {
+
+ let mut config = QdrantClientConfig::from_url(qdrant_url);
+
+ config.api_key = api_key;
+
+ let client = QdrantClient::new(Some(config))?;
+
+
+
+ // create the collections
+
+ if !client.has_collection(collection_name).await? {
+
+ client
+
+ .create_collection(&CreateCollection {
+
+ collection_name: collection_name.into(),
+
+ vectors_config: Some(VectorsConfig {
+
+ config: Some(Config::Params(VectorParams {
+
+ size: 1024, // output dimensions from above
+
+ distance: Distance::Cosine as i32,
+
+ ..Default::default()
+
+ })),
+
+ }),
+
+ ..Default::default()
+
+ })
+
+ .await?;
+
+ }
+
+ let mut id_counter = 0_u64;
+
+ let points = data.map(|(text, payload)| {
+
+        let id = std::mem::replace(&mut id_counter, id_counter + 1);
+
+ let vectors = Some(embed(embed_client, text, embed_api_key).unwrap());
+
+ PointStruct { id, vectors, payload }
+
+ }).collect();
+
+ client.upsert_points(collection_name, points, None).await?;
+
+ Ok(())
+
+}
+
+```
+
+
+
+Depending on whether you want to efficiently filter the data, you can also add some indexes. I'm leaving this out for brevity, but you can look at the [example code](https://github.com/qdrant/examples/tree/master/lambda-search) containing this operation. Also this does not implement chunking (splitting the data to upsert in multiple requests, which avoids timeout errors).
+
+
+
+Add a suitable `main` method and you can run this code to insert the points (or just use the binary from the example). Be sure to include the port in the `qdrant_url`.
+
+
+
+Now that you have the points inserted, you can search them by embedding:
+
+
+
+```rust
+
+use anyhow::Result;
+
+use qdrant_client::prelude::*;
+
+pub async fn search(
+
+ text: &str,
+
+ collection_name: String,
+
+ client: &Client,
+
+ api_key: &str,
+
+ qdrant: &QdrantClient,
+
+) -> Result<Vec<ScoredPoint>> {
+
+ Ok(qdrant.search_points(&SearchPoints {
+
+ collection_name,
+
+ limit: 5, // use what fits your use case here
+
+ with_payload: Some(true.into()),
+
+        vector: embed(client, text, api_key).await?[0].clone(),
+
+ ..Default::default()
+
+ }).await?.result)
+
+}
+
+```
+
+
+
+You can also filter by adding a `filter: ...` field to the `SearchPoints`, and you will likely want to process the result further, but the example code already does that, so feel free to start from there in case you need this functionality.
+
+
+
+## Putting it all together
+
+
+
+Now that you have all the parts, it's time to join them up. Now copying and wiring up the snippets above is left as an exercise to the reader. Impatient minds can peruse the [example repo](https://github.com/qdrant/examples/tree/master/lambda-search) instead.
+
+
+
+You'll want to extend the `main` method a bit to connect with the Client once at the start, and also get the API keys from the environment so you don't need to compile them into the code. To do that, you can read them with `std::env::var(_)` from the Rust code and set the environment variables from the AWS console.
+
+
+
+```bash
+
+$ export QDRANT_URI=
+
+$ export QDRANT_API_KEY=
+
+$ export COHERE_API_KEY=
+
+$ export COLLECTION_NAME=site-cohere
+
+$ aws lambda update-function-configuration \
+
+ --function-name $LAMBDA_FUNCTION_NAME \
+
+ --environment ""Variables={QDRANT_URI=$QDRANT_URI,\
+
+ QDRANT_API_KEY=$QDRANT_API_KEY,COHERE_API_KEY=${COHERE_API_KEY},\
+
+  COLLECTION_NAME=${COLLECTION_NAME}}""
+
+```
+
+
+
+In any event, you will arrive at one command line program to insert your data and one Lambda function. The former can just be `cargo run` to set up the collection. For the latter, you can again call `cargo lambda` and the AWS console:
+
+
+
+```bash
+
+$ export LAMBDA_FUNCTION_NAME=search
+
+$ export LAMBDA_REGION=us-east-1
+
+$ cargo lambda build --release --arm --output-format zip
+
+ Downloaded libc v0.2.137
+
+# [..] output omitted for brevity
+
+ Finished release [optimized] target(s) in 1m 27s
+
+$ # Update the function
+
+$ aws lambda update-function-code --function-name $LAMBDA_FUNCTION_NAME \
+
+ --zip-file fileb://./target/lambda/page-search/bootstrap.zip \
+
+ --region $LAMBDA_REGION
+
+```
+
+
+
+## Discussion
+
+
+
+Lambda works by spinning up your function once the URL is called, so AWS doesn't need to keep the compute on hand unless it is actually used. This means that the first call will be burdened by some 1-2 seconds of latency for loading the function, while later calls will resolve faster. Of course, there is also the latency for calling the embeddings provider and Qdrant. On the other hand, the free tier doesn't cost a thing, so you certainly get what you pay for. And for many use cases, a result within one or two seconds is acceptable.
+
+
+
+Rust minimizes the overhead for the function, both in terms of file size and runtime. Using an embedding service means you don't need to care about the details. Knowing the URL, API key and embedding size is sufficient. Finally, with free tiers for both Lambda and Qdrant as well as free credits for the embedding provider, the only cost is your time to set everything up. Who could argue with free?
+",articles/serverless.md
+"---
+
+title: Filtrable HNSW
+
+short_description: How to make ANN search with custom filtering?
+
+description: How to make ANN search with custom filtering? Search in selected subsets without losing the results.
+
+# external_link: https://blog.vasnetsov.com/posts/categorical-hnsw/
+
+social_preview_image: /articles_data/filtrable-hnsw/social_preview.jpg
+
+preview_dir: /articles_data/filtrable-hnsw/preview
+
+small_preview_image: /articles_data/filtrable-hnsw/global-network.svg
+
+weight: 60
+
+date: 2019-11-24T22:44:08+03:00
+
+author: Andrei Vasnetsov
+
+author_link: https://blog.vasnetsov.com/
+
+# aliases: [ /articles/filtrable-hnsw/ ]
+
+---
+
+
+
+If you need to find some similar objects in vector space, provided e.g. by embeddings or matching NN, you can choose among a variety of libraries: Annoy, FAISS or NMSLib.
+
+All of them will give you a fast approximate nearest neighbor search within almost any space.
+
+
+
+But what if you need to introduce some constraints in your search?
+
+For example, you want to search only for products in some category, or select the most similar customer of a particular brand.
+
+I did not find any simple solutions for this.
+
+There are several discussions like [this](https://github.com/spotify/annoy/issues/263), but they only suggest iterating over the top search results and applying the conditions after the search.
+
+
+
+Let's see if we could somehow modify any of the ANN algorithms to apply constraints during the search itself.
+
+
+
+Annoy builds tree index over random projections.
+
+A tree index implies that we will meet the same problem that appears in relational databases:
+
+if field indexes were built independently, then it is possible to use only one of them at a time.
+
+Since nobody solved this problem before, it seems that there is no easy approach.
+
+
+
+There is another algorithm which shows top results on the [benchmark](https://github.com/erikbern/ann-benchmarks).
+
+It is called HNSW which stands for Hierarchical Navigable Small World.
+
+
+
+The [original paper](https://arxiv.org/abs/1603.09320) is well written and very easy to read, so I will only give the main idea here.
+
+We need to build a navigation graph among all indexed points so that the greedy search on this graph will lead us to the nearest point.
+
+This graph is constructed by sequentially adding points that are connected by a fixed number of edges to previously added points.
+
+In the resulting graph, the number of edges at each point does not exceed a given threshold $m$, and the edges always connect to the nearest points considered so far.
+
+
+
+![NSW](/articles_data/filtrable-hnsw/NSW.png)
+
+
+
+### How can we modify it?
+
+
+
+What if we simply apply the filter criteria to the nodes of this graph and use in the greedy search only those that meet these criteria?
+
+It turns out that even with this naive modification, the algorithm can cover some use cases.
+
+
+
+One such case is if your criteria do not correlate with vector semantics.
+
+For example, you use a vector search for clothing names and want to filter out some sizes.
+
+In this case, the nodes will be uniformly filtered out from the entire cluster structure.
+
+Therefore, the theoretical conclusions obtained in the [Percolation theory](https://en.wikipedia.org/wiki/Percolation_theory) become applicable:
+
+
+
+
+
+> Percolation is related to the robustness of the graph (called also network). Given a random graph of $n$ nodes and an average degree $\langle k\rangle$ . Next we remove randomly a fraction $1-p$ of nodes and leave only a fraction $p$. There exists a critical percolation threshold $ pc = \frac{1}{\langle k\rangle} $ below which the network becomes fragmented while above $pc$ a giant connected component exists.
+
+
+
+
+
+This statement is also confirmed by experiments:
+
+
+
+{{< figure src=/articles_data/filtrable-hnsw/exp_connectivity_glove_m0.png caption=""Dependency of connectivity to the number of edges"" >}}
+
+
+
+{{< figure src=/articles_data/filtrable-hnsw/exp_connectivity_glove_num_elements.png caption=""Dependency of connectivity to the number of point (no dependency)."" >}}
+
+
+
+
+
+There is a clear threshold when the search begins to fail.
+
+This threshold is due to the decomposition of the graph into small connected components.
+
+The graphs also show that this threshold can be shifted by increasing the $m$ parameter of the algorithm, which is responsible for the degree of nodes.
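+
+
+
+For a rough, back-of-the-envelope estimate, assume the average degree is close to the construction parameter, $\langle k \rangle \approx m = 16$: then $p_c = \frac{1}{\langle k \rangle} \approx 0.06$, so the greedy search can only be expected to keep working while more than roughly 6% of the points satisfy the filter.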
+
+
+
+Let's consider some other filtering conditions we might want to apply in the search:
+
+
+
+* Categorical filtering
+
+ * Select only points in a specific category
+
+ * Select points which belong to a specific subset of categories
+
+ * Select points with a specific set of labels
+
+* Numerical range
+
+* Selection within some geographical region
+
+
+
+In the first case, we can guarantee that the HNSW graph will be connected simply by creating additional edges
+
+inside each category separately, using the same graph construction algorithm, and then combining them into the original graph.
+
+In this case, the total number of edges will increase by no more than 2 times, regardless of the number of categories.
+
+
+
+The second case is a little harder. A connection may be lost between two categories if they lie in different clusters.
+
+
+
+![category clusters](/articles_data/filtrable-hnsw/hnsw_graph_category.png)
+
+
+
+The idea here is to build the same kind of navigation graph, but between categories rather than individual nodes.
+
+The distance between two categories might be defined as the distance between their entry points (or, for higher precision, as the average distance between random samples of each). Now we can estimate the expected graph connectivity by the number of excluded categories, not nodes.
+
+It still does not guarantee that two random categories will be connected, but it allows us to switch to multiple searches, one per category, once the connectivity threshold is passed. In some cases, multiple searches can even be faster if you take advantage of parallel processing.
+
+
+
+{{< figure src=/articles_data/filtrable-hnsw/exp_random_groups.png caption=""Dependency of connectivity to the random categories included in search"" >}}
+
+
+
+The third case might be resolved the same way it is resolved in classical databases.
+
+Depending on the size ratio of the labeled subsets, we can go for one of the following scenarios:
+
+
+
+* if at least one subset is small: perform the search over the label with the smallest subset and then filter the points afterwards.
+
+* if large subsets give a large intersection: perform a regular search with constraints, expecting that the intersection size fits the connectivity threshold.
+
+* if large subsets give a small intersection: perform a linear search over the intersection, expecting that it is small enough to fit the time frame.
+
+
+
+The numerical range case can be reduced to the previous one if we split the numerical range into buckets containing an equal amount of points.
+
+Next, we also connect neighboring buckets to achieve graph connectivity. We still need to filter out some results which are present in the border buckets but do not fulfill the actual constraints, but their amount can be regulated by the size of the buckets.
+
+
+
+The geographical case is a lot like the numerical one.
+
+A usual geographical search involves a [geohash](https://en.wikipedia.org/wiki/Geohash), which maps any geo-point to a fixed-length identifier.
+
+
+
+![Geohash example](/articles_data/filtrable-hnsw/geohash.png)
+
+
+
+We can use these identifiers as categories and additionally make connections between neighboring geohashes.
+
+This will ensure that any selected geographical region also contains a connected HNSW graph.
+
+
+
+## Conclusion
+
+
+
+It is possible to enhance the HNSW algorithm so that it supports filtering points during the first search phase.
+
+Filtering can be carried out on the basis of belonging to categories,
+
+which in turn is generalized to such popular cases as numerical ranges and geo.
+
+
+
+Experiments were carried out with a modified [python implementation](https://github.com/generall/hnsw-python) of the algorithm,
+
+but real production systems require a much faster version, like [NMSLib](https://github.com/nmslib/nmslib).
+",articles/filtrable-hnsw.md
+"---
+
+title: Food Discovery Demo
+
+short_description: Feeling hungry? Find the perfect meal with Qdrant's multimodal semantic search.
+
+description: Feeling hungry? Find the perfect meal with Qdrant's multimodal semantic search.
+
+preview_dir: /articles_data/food-discovery-demo/preview
+
+social_preview_image: /articles_data/food-discovery-demo/preview/social_preview.png
+
+small_preview_image: /articles_data/food-discovery-demo/icon.svg
+
+weight: -30
+
+author: Kacper Łukawski
+
+author_link: https://medium.com/@lukawskikacper
+
+date: 2023-09-05T11:32:00.000Z
+
+---
+
+
+
+Not every search journey begins with a specific destination in mind. Sometimes, you just want to explore and see what’s out there and what you might like.
+
+This is especially true when it comes to food. You might be craving something sweet, but you don’t know what. You might be also looking for a new dish to try,
+
+and you just want to see the options available. In these cases, it's impossible to express your needs in a textual query, as the thing you are looking for is not
+
+yet defined. Qdrant's semantic search for images is useful when you have a hard time expressing your tastes in words.
+
+
+
+## General architecture
+
+
+
+We are happy to announce a refreshed version of our [Food Discovery Demo](https://food-discovery.qdrant.tech/). This time available as an open source project,
+
+so you can easily deploy it on your own and play with it. If you prefer to dive into the source code directly, then feel free to check out the [GitHub repository
+
+](https://github.com/qdrant/demo-food-discovery/).
+
+Otherwise, read on to learn more about the demo and how it works!
+
+
+
+In general, our application consists of three parts: a [FastAPI](https://fastapi.tiangolo.com/) backend, a [React](https://react.dev/) frontend, and
+
+a [Qdrant](/) instance. The architecture diagram below shows how these components interact with each other:
+
+
+
+![Architecture diagram](/articles_data/food-discovery-demo/architecture-diagram.png)
+
+
+
+## Why did we use a CLIP model?
+
+
+
+CLIP is a neural network that can be used to encode both images and texts into vectors. And more importantly, both images and texts are vectorized into the same
+
+latent space, so we can compare them directly. This lets you perform semantic search on images using text queries and the other way around. For example, if
+
+you search for “flat bread with toppings”, you will get images of pizza. Or if you search for “pizza”, you will get images of some flat bread with toppings, even
+
+if they were not labeled as “pizza”. This is because CLIP embeddings capture the semantics of the images and texts and can find the similarities between them
+
+no matter the wording.
+
+
+
+![CLIP model](/articles_data/food-discovery-demo/clip-model.png)
+
+
+
+CLIP is available in many different ways. We used the pretrained `clip-ViT-B-32` model available in the [Sentence-Transformers](https://www.sbert.net/examples/applications/image-search/README.html)
+
+library, as this is the easiest way to get started.
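+
+
+
+Here is a small sketch of how both modalities end up in the same vector space with that model (the image path is just a placeholder):
+
+
+
+```python
+
+from PIL import Image
+
+from sentence_transformers import SentenceTransformer
+
+model = SentenceTransformer(""clip-ViT-B-32"")
+
+# Both encodings live in the same 512-dimensional latent space
+
+text_vector = model.encode(""flat bread with toppings"")
+
+image_vector = model.encode(Image.open(""pizza.jpeg""))
+
+print(text_vector.shape, image_vector.shape)  # (512,) (512,)
+
+```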
+
+
+
+## The dataset
+
+
+
+The demo is based on the [Wolt](https://wolt.com/) dataset. It contains over 2M images of dishes from different restaurants along with some additional metadata.
+
+This is how the payload for a single dish looks:
+
+
+
+```json
+
+{
+
+ ""cafe"": {
+
+ ""address"": ""VGX7+6R2 Vecchia Napoli, Valletta"",
+
+ ""categories"": [""italian"", ""pasta"", ""pizza"", ""burgers"", ""mediterranean""],
+
+ ""location"": {""lat"": 35.8980154, ""lon"": 14.5145106},
+
+ ""menu_id"": ""610936a4ee8ea7a56f4a372a"",
+
+ ""name"": ""Vecchia Napoli Is-Suq Tal-Belt"",
+
+ ""rating"": 9,
+
+ ""slug"": ""vecchia-napoli-skyparks-suq-tal-belt""
+
+ },
+
+ ""description"": ""Tomato sauce, mozzarella fior di latte, crispy guanciale, Pecorino Romano cheese and a hint of chilli"",
+
+ ""image"": ""https://wolt-menu-images-cdn.wolt.com/menu-images/610936a4ee8ea7a56f4a372a/005dfeb2-e734-11ec-b667-ced7a78a5abd_l_amatriciana_pizza_joel_gueller1.jpeg"",
+
+ ""name"": ""L'Amatriciana""
+
+}
+
+```
+
+
+
+Processing this many records takes some time, so we precomputed the CLIP embeddings, stored them in a Qdrant collection and exported the collection as
+
+a snapshot. You may [download it here](https://storage.googleapis.com/common-datasets-snapshots/wolt-clip-ViT-B-32.snapshot).
+
+
+
+## Different search modes
+
+
+
+The FastAPI backend [exposes just a single endpoint](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/main.py#L37),
+
+but it handles multiple scenarios. Let's dive into them one by one and understand why they are needed.
+
+
+
+### Cold start
+
+
+
+Recommendation systems struggle with a cold start problem. When a new user joins the system, there is no data about their preferences, so it’s hard to recommend
+
+anything. The same applies to our demo. When you open it, you will see a random selection of dishes, and it changes every time you refresh the page. Internally,
+
+the demo [chooses some random points](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/discovery.py#L70) in the
+
+vector space.
+
+
+
+![Random points selection](/articles_data/food-discovery-demo/random-results.png)
+
+
+
+This procedure should return diverse results, so we have a higher chance of showing something interesting to the user.
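+
+
+
+The exact sampling logic lives in the backend, but one simple way to approximate the idea is to query the collection with a random unit vector; the collection and group field names below are hypothetical:
+
+
+
+```python
+
+import numpy as np
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(""localhost"", port=6333)
+
+
+
+# Draw a random direction in the 512-dimensional CLIP space and use it as a query,
+
+# grouping by restaurant to keep the selection diverse
+
+random_vector = np.random.uniform(-1.0, 1.0, size=512)
+
+random_vector /= np.linalg.norm(random_vector)
+
+
+
+response = client.search_groups(
+
+    ""food"",  # hypothetical collection name
+
+    query_vector=random_vector.tolist(),
+
+    group_by=""cafe.slug"",  # hypothetical payload field to group by
+
+    limit=12,
+
+)
+
+```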
+
+
+
+### Textual search
+
+
+
+Since the demo suffers from the cold start problem, we implemented a textual search mode that is useful to start exploring the data. You can type in any text query
+
+by clicking a search icon in the top right corner. The demo will use the CLIP model to encode the query into a vector and then search for the nearest neighbors
+
+in the vector space.
+
+
+
+![Textual search](/articles_data/food-discovery-demo/textual-search.png)
+
+
+
+This is implemented as [a group search query to Qdrant](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/discovery.py#L44).
+
+We didn't use a simple search, but performed grouping by the restaurant to get more diverse results. [Search groups](/documentation/concepts/search/#search-groups)
+
+is a mechanism similar to the `GROUP BY` clause in SQL, and it's useful when you want to get a specific number of results per group (in our case, just one).
+
+
+
+```python
+
+import settings
+
+
+
+# Encode query into a vector, model is an instance of
+
+# sentence_transformers.SentenceTransformer that loaded CLIP model
+
+query_vector = model.encode(query).tolist()
+
+
+
+# Search for nearest neighbors, client is an instance of
+
+# qdrant_client.QdrantClient that has to be initialized before
+
+response = client.search_groups(
+
+ settings.QDRANT_COLLECTION,
+
+ query_vector=query_vector,
+
+ group_by=settings.GROUP_BY_FIELD,
+
+ limit=search_query.limit,
+
+)
+
+```
+
+
+
+### Exploring the results
+
+
+
+The main feature of the demo is the ability to explore the space of dishes. You can click on any of them to see more details, but most importantly, you can like or dislike it,
+
+and the demo will update the search results accordingly.
+
+
+
+![Recommendation results](/articles_data/food-discovery-demo/recommendation-results.png)
+
+
+
+#### Negative feedback only
+
+
+
+Qdrant [Recommendation API](/documentation/concepts/search/#recommendation-api) needs at least one positive example to work. However, in our demo
+
+we want to be able to provide only negative examples. This is because we want to be able to say “I don’t like this dish” without having to like anything first.
+
+To achieve this, we use a trick. We negate the vectors of the disliked dishes and use their mean as a query. This way, the disliked dishes will be pushed away
+
+from the search results. **This works because the cosine distance is based on the angle between two vectors, and the angle between a vector and its negation is 180 degrees.**
+
+
+
+![Negated vector](/articles_data/food-discovery-demo/negated-vector.png)
+
+
+
+Food Discovery Demo [implements that trick](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/discovery.py#L122)
+
+by calling Qdrant twice. Initially, we use the [Scroll API](/documentation/concepts/points/#scroll-points) to find disliked items,
+
+and then calculate a negated mean of all their vectors. That allows using the [Search Groups API](/documentation/concepts/search/#search-groups)
+
+to find the nearest neighbors of the negated mean vector.
+
+
+
+```python
+
+import numpy as np
+
+
+
+# Retrieve the disliked points based on their ids
+
+disliked_points, _ = client.scroll(
+
+ settings.QDRANT_COLLECTION,
+
+ scroll_filter=models.Filter(
+
+ must=[
+
+ models.HasIdCondition(has_id=search_query.negative),
+
+ ]
+
+ ),
+
+ with_vectors=True,
+
+)
+
+
+
+# Calculate a mean vector of disliked points
+
+disliked_vectors = np.array([point.vector for point in disliked_points])
+
+mean_vector = np.mean(disliked_vectors, axis=0)
+
+negated_vector = -mean_vector
+
+
+
+# Search for nearest neighbors of the negated mean vector
+
+response = client.search_groups(
+
+ settings.QDRANT_COLLECTION,
+
+ query_vector=negated_vector.tolist(),
+
+ group_by=settings.GROUP_BY_FIELD,
+
+ limit=search_query.limit,
+
+)
+
+```
+
+
+
+#### Positive and negative feedback
+
+
+
+Since the [Recommendation API](/documentation/concepts/search/#recommendation-api) requires at least one positive example, we can use it only when
+
+the user has liked at least one dish. We could theoretically use the same trick as above and negate the disliked dishes, but there is no need to, as Qdrant has
+
+that feature already built-in, and we can call it just once to do the job. It's always better to perform the search server-side. Thus, in this case [we just call
+
+the Qdrant server with a list of positive and negative examples](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/discovery.py#L166),
+
+so it can find some points which are close to the positive examples and far from the negative ones.
+
+
+
+```python
+
+response = client.recommend_groups(
+
+ settings.QDRANT_COLLECTION,
+
+ positive=search_query.positive,
+
+ negative=search_query.negative,
+
+ group_by=settings.GROUP_BY_FIELD,
+
+ limit=search_query.limit,
+
+)
+
+```
+
+
+
+From the user's perspective, nothing changes compared to the previous case.
+
+
+
+### Location-based search
+
+
+
+Last but not least, location plays an important role in the food discovery process. You are definitely looking for something you can find nearby, not on the other
+
+side of the globe. Therefore, your current location can be toggled as a filtering condition. You can enable it by clicking the “Find near me” icon
+
+in the top right. This way you can find the best pizza in your neighborhood, not in the whole world. Qdrant [geo radius filter](/documentation/concepts/filtering/#geo-radius) is a perfect choice for this. It lets you
+
+filter the results by distance from a given point.
+
+
+
+```python
+
+from qdrant_client import models
+
+
+
+# Create a geo radius filter
+
+query_filter = models.Filter(
+
+ must=[
+
+ models.FieldCondition(
+
+ key=""cafe.location"",
+
+ geo_radius=models.GeoRadius(
+
+ center=models.GeoPoint(
+
+ lon=location.longitude,
+
+ lat=location.latitude,
+
+ ),
+
+ radius=location.radius_km * 1000,
+
+ ),
+
+ )
+
+ ]
+
+)
+
+```
+
+
+
+Such a filter needs [a payload index](/documentation/concepts/indexing/#payload-index) to work efficiently, and it was created on the collection
+
+we used to create the snapshot. When you import the snapshot into your instance, the index will already be there.
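+
+
+
+If you build the collection yourself instead of restoring the snapshot, creating such a geo index with the Python client might look like this (the collection name is hypothetical):
+
+
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+
+
+client = QdrantClient(""localhost"", port=6333)
+
+
+
+# Index the nested geo field so that geo radius filters are evaluated efficiently
+
+client.create_payload_index(
+
+    collection_name=""food"",  # hypothetical collection name
+
+    field_name=""cafe.location"",
+
+    field_schema=models.PayloadSchemaType.GEO,
+
+)
+
+```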
+
+
+
+## Using the demo
+
+
+
+The Food Discovery Demo [is available online](https://food-discovery.qdrant.tech/), but if you prefer to run it locally, you can do it with Docker. The
+
+[README](https://github.com/qdrant/demo-food-discovery/blob/main/README.md) describes all the steps in more detail, but here is a quick start:
+
+
+
+```bash
+
+git clone git@github.com:qdrant/demo-food-discovery.git
+
+cd demo-food-discovery
+
+# Create .env file based on .env.example
+
+docker-compose up -d
+
+```
+
+
+
+The demo will be available at `http://localhost:8001`, but you won't be able to search anything until you [import the snapshot into your Qdrant
+
+instance](/documentation/concepts/snapshots/#recover-via-api). If you don't want to bother with hosting a local one, you can use the [Qdrant
+
+Cloud](https://cloud.qdrant.io/) cluster. 4 GB of RAM is enough to load all 2 million entries.
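+
+
+
+As a rough sketch, recovering the snapshot with the Python client could look like this (the collection name is an assumption, so adjust it to your setup):
+
+
+
+```python
+
+from qdrant_client import QdrantClient
+
+
+
+client = QdrantClient(""localhost"", port=6333)
+
+
+
+# Restore the collection from the published snapshot
+
+client.recover_snapshot(
+
+    collection_name=""food"",  # hypothetical collection name
+
+    location=""https://storage.googleapis.com/common-datasets-snapshots/wolt-clip-ViT-B-32.snapshot"",
+
+)
+
+```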
+
+
+
+## Fork and reuse
+
+
+
+Our demo is completely open-source. Feel free to fork it, update it with your own dataset, or adapt the application to your use case. Whether you’re looking to understand the mechanics
+
+of semantic search or to have a foundation to build a larger project, this demo can serve as a starting point. Check out the [Food Discovery Demo repository
+
+](https://github.com/qdrant/demo-food-discovery/) to get started. If you have any questions, feel free to reach out [through Discord](https://qdrant.to/discord).
+",articles/food-discovery-demo.md
+"---
+
+title: Google Summer of Code 2023 - Web UI for Visualization and Exploration
+
+short_description: Gsoc'23 Web UI for Visualization and Exploration
+
+description: My journey as a Google Summer of Code 2023 student working on the ""Web UI for Visualization and Exploration"" project for Qdrant.
+
+preview_dir: /articles_data/web-ui-gsoc/preview
+
+small_preview_image: /articles_data/web-ui-gsoc/icon.svg
+
+social_preview_image: /articles_data/web-ui-gsoc/preview/social_preview.jpg
+
+weight: -20
+
+author: Kartik Gupta
+
+author_link: https://kartik-gupta-ij.vercel.app/
+
+date: 2023-08-28T08:00:00+03:00
+
+draft: false
+
+keywords:
+
+
+
+ - vector reduction
+
+ - console
+
+ - gsoc'23
+
+ - vector similarity
+
+ - exploration
+
+ - recommendation
+
+---
+
+
+
+
+
+
+
+## Introduction
+
+
+
+Hello everyone! My name is Kartik Gupta, and I am thrilled to share my coding journey as part of the Google Summer of Code 2023 program. This summer, I had the incredible opportunity to work on an exciting project titled ""Web UI for Visualization and Exploration"" for Qdrant, a vector search engine. In this article, I will take you through my experience, challenges, and achievements during this enriching coding journey.
+
+
+
+## Project Overview
+
+
+
+Qdrant is a powerful vector search engine widely used for similarity search and clustering. However, it lacked a user-friendly web-based UI for data visualization and exploration. My project aimed to bridge this gap by developing a web-based user interface that allows users to easily interact with and explore their vector data.
+
+
+
+## Milestones and Achievements
+
+
+
+The project was divided into six milestones, each focusing on a specific aspect of the web UI development. Let's go through each of them and my achievements during the coding period.
+
+
+
+**1. Designing a friendly UI on Figma**
+
+
+
+I started by designing the user interface on Figma, ensuring it was easy to use, visually appealing, and responsive on different devices. I focused on usability and accessibility to create a seamless user experience. ( [Figma Design](https://www.figma.com/file/z54cAcOErNjlVBsZ1DrXyD/Qdant?type=design&node-id=0-1&mode=design&t=Pu22zO2AMFuGhklG-0))
+
+
+
+**2. Building the layout**
+
+
+
+The layout route served as a landing page with an overview of the application's features and navigation links to other routes.
+
+
+
+**3. Creating a view collection route**
+
+
+
+This route enabled users to view a list of collections available in the application. Users could click on a collection to see more details, including the data and vectors associated with it.
+
+
+
+{{< figure src=/articles_data/web-ui-gsoc/collections-page.png caption=""Collection Page"" alt=""Collection Page"" >}}
+
+
+
+**4. Developing a data page with ""find similar"" functionality**
+
+
+
+I implemented a data page where users could search for data and find similar data using a recommendation API. The recommendation API suggested similar data based on the selected point's ID, providing valuable insights.
+
+
+
+{{< figure src=/articles_data/web-ui-gsoc/points-page.png caption=""Points Page"" alt=""Points Page"" >}}
+
+
+
+**5. Developing query editor page libraries**
+
+
+
+This milestone involved creating a query editor page that allowed users to write queries in a custom language. The editor provided syntax highlighting, autocomplete, and error-checking features for a seamless query writing experience.
+
+
+
+{{< figure src=/articles_data/web-ui-gsoc/console-page.png caption=""Query Editor Page"" alt=""Query Editor Page"" >}}
+
+
+
+**6. Developing a route for visualizing vector data points**
+
+
+
+This is done by reducing the n-dimensional vectors to 2-D points, which are then displayed along with their respective payloads.
+
+
+
+{{< figure src=/articles_data/web-ui-gsoc/visualization-page.png caption=""Vector Visualization Page"" alt=""visualization-page"" >}}
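+
+
+
+To illustrate the idea (not the exact method used in the web UI), reducing high-dimensional vectors to 2-D points can be sketched with scikit-learn:
+
+
+
+```python
+
+import numpy as np
+
+from sklearn.manifold import TSNE
+
+
+
+vectors = np.random.rand(200, 128)  # stand-in for vectors fetched from a collection
+
+points_2d = TSNE(n_components=2).fit_transform(vectors)  # one (x, y) pair per vector
+
+```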
+
+
+
+## Challenges and Learning
+
+
+
+Throughout the project, I encountered a series of challenges that stretched my engineering capabilities and provided unique growth opportunities. From mastering new libraries and technologies to ensuring the user interface (UI) was both visually appealing and user-friendly, every obstacle became a stepping stone toward enhancing my skills as a developer. Along the way, I acquired valuable experience in vector search and dimension reduction techniques.
+
+
+
+The most significant learning for me was the importance of effective project management. Setting realistic timelines, collaborating with mentors, and staying proactive with feedback allowed me to complete the milestones efficiently.
+
+
+
+### Technical Learning and Skill Development
+
+
+
+One of the most significant aspects of this journey was diving into the intricate world of vector search and dimension reduction techniques. These areas, previously unfamiliar to me, required rigorous study and exploration. Learning how to process vast amounts of data efficiently and extract meaningful insights through these techniques was both challenging and rewarding.
+
+
+
+### Effective Project Management
+
+
+
+Undoubtedly, the most impactful lesson was the art of effective project management. I quickly grasped the importance of setting realistic timelines and goals. Collaborating closely with mentors and maintaining proactive communication proved indispensable. This approach enabled me to navigate the complex development process and successfully achieve the project's milestones.
+
+
+
+### Overcoming Technical Challenges
+
+
+
+#### Autocomplete Feature in Console
+
+
+
+One particularly intriguing challenge emerged while working on the autocomplete feature within the console. Finding a solution was proving elusive until a breakthrough came from an unexpected direction. My mentor, Andrey, proposed creating a separate module that could support autocomplete based on OpenAPI for our custom language. This ingenious approach not only resolved the issue but also showcased the power of collaborative problem-solving.
+
+
+
+#### Optimization with Web Workers
+
+
+
+The high-processing demands of vector reduction posed another significant challenge. Initially, this task was straining browsers and causing performance issues. The solution materialized in the form of web workers—an independent processing instance that alleviated the strain on browsers. However, a new question arose: how to terminate these workers effectively? With invaluable insights from my mentor, I gained a deeper understanding of web worker dynamics and successfully tackled this challenge.
+
+
+
+#### Console Integration Complexity
+
+
+
+Integrating the console interaction into the application presented multifaceted challenges. Crafting a custom language in Monaco, parsing text to make API requests, and synchronizing the entire process demanded meticulous attention to detail. Overcoming these hurdles was a testament to the complexity of real-world engineering endeavours.
+
+
+
+#### Codelens Multiplicity Issue
+
+
+
+An unexpected issue cropped up during the development process: the codelens (run button) registered multiple times, leading to undesired behaviour. This hiccup underscored the importance of thorough testing and debugging, even in seemingly straightforward features.
+
+
+
+### Key Learning Points
+
+
+
+Amidst these challenges, I garnered valuable insights that have significantly enriched my engineering prowess:
+
+
+
+**Vector Reduction Techniques**: Navigating the realm of vector reduction techniques provided a deep understanding of how to process and interpret data efficiently. This knowledge opens up new avenues for developing data-driven applications in the future.
+
+
+
+**Web Workers Efficiency**: Mastering the intricacies of web workers not only resolved performance concerns but also expanded my repertoire of optimization strategies. This newfound proficiency will undoubtedly find relevance in various future projects.
+
+
+
+**Monaco Editor and UI Frameworks**: Working extensively with the Monaco Editor, Material-UI (MUI), and Vite enriched my familiarity with these essential tools. I honed my skills in integrating complex UI components seamlessly into applications.
+
+
+
+## Areas for Improvement and Future Enhancements
+
+
+
+While reflecting on this transformative journey, I recognize several areas that offer room for improvement and future enhancements:
+
+
+
+1. Enhanced Autocomplete: Further refining the autocomplete feature to support key-value suggestions in JSON structures could greatly enhance the user experience.
+
+
+
+2. Error Detection in Console: Integrating the console's error checker with OpenAPI could enhance its accuracy in identifying errors and offering precise suggestions for improvement.
+
+
+
+3. Expanded Vector Visualization: Exploring additional visualization methods and optimizing their performance could elevate the utility of the vector visualization route.
+
+
+
+
+
+## Conclusion
+
+
+
+Participating in the Google Summer of Code 2023 and working on the ""Web UI for Visualization and Exploration"" project has been an immensely rewarding experience. I am grateful for the opportunity to contribute to Qdrant and develop a user-friendly interface for vector data exploration.
+
+
+
+I want to express my gratitude to my mentors and the entire Qdrant community for their support and guidance throughout this journey. This experience has not only improved my coding skills but also instilled a deeper passion for web development and data analysis.
+
+
+
+As my coding journey continues beyond this project, I look forward to applying the knowledge and experience gained here to future endeavours. I am excited to see how Qdrant evolves with the newly developed web UI and how it positively impacts users worldwide.
+
+
+
+Thank you for joining me on this coding adventure, and I hope to share more exciting projects in the future! Happy coding!",articles/web-ui-gsoc.md
+"---
+
+title: Metric Learning for Anomaly Detection
+
+short_description: ""How to use metric learning to detect anomalies: quality assessment of coffee beans with just 200 labelled samples""
+
+description: Practical use of metric learning for anomaly detection. A way to match the results of a classification-based approach with only ~0.6% of the labeled data.
+
+social_preview_image: /articles_data/detecting-coffee-anomalies/preview/social_preview.jpg
+
+preview_dir: /articles_data/detecting-coffee-anomalies/preview
+
+small_preview_image: /articles_data/detecting-coffee-anomalies/anomalies_icon.svg
+
+weight: 30
+
+author: Yusuf Sarıgöz
+
+author_link: https://medium.com/@yusufsarigoz
+
+date: 2022-05-04T13:00:00+03:00
+
+draft: false
+
+# aliases: [ /articles/detecting-coffee-anomalies/ ]
+
+---
+
+
+
+Anomaly detection is an enticing yet challenging task that has numerous use cases across various industries.
+
+The complexity results mainly from the fact that the task is data-scarce by definition.
+
+
+
+Similarly, anomalies are, again by definition, subject to frequent change, and they may take unexpected forms.
+
+For that reason, supervised classification-based approaches are:
+
+
+
+* Data-hungry - requiring quite a number of labeled data;
+
+* Expensive - data labeling is an expensive task itself;
+
+* Time-consuming - you would try to obtain what is necessarily scarce;
+
+* Hard to maintain - you would need to re-train the model repeatedly in response to changes in the data distribution.
+
+
+
+These are not desirable features if you want to put your model into production in a rapidly-changing environment.
+
+And, despite all the mentioned difficulties, they do not necessarily offer superior performance compared to the alternatives.
+
+In this post, we will detail the lessons learned from such a use case.
+
+
+
+## Coffee Beans
+
+
+
+[Agrivero.ai](https://agrivero.ai/) is a company making AI-enabled solutions for quality control & traceability of green coffee for producers, traders, and roasters.
+
+They have collected and labeled more than **30 thousand** images of coffee beans with various defects - wet, broken, chipped, or bug-infested samples.
+
+This data is used to train a classifier that evaluates crop quality and highlights possible problems.
+
+
+
+{{< figure src=/articles_data/detecting-coffee-anomalies/detection.gif caption=""Anomalies in coffee"" width=""400px"" >}}
+
+
+
+We should note that anomalies are very diverse, so the enumeration of all possible anomalies is a challenging task on its own.
+
+In the course of work, new types of defects appear, and shooting conditions change. Thus, a one-time labeled dataset becomes insufficient.
+
+
+
+Let's find out how metric learning might help to address this challenge.
+
+
+
+## Metric Learning Approach
+
+
+
+In this approach, we aimed to encode images in an n-dimensional vector space and then use learned similarities to label images during the inference.
+
+
+
+The simplest way to do this is KNN classification.
+
+The algorithm retrieves K-nearest neighbors to a given query vector and assigns a label based on the majority vote.
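+
+
+
+Here is a minimal NumPy sketch of that idea, assuming the embeddings of the labeled samples are already computed:
+
+
+
+```python
+
+import numpy as np
+
+
+
+def knn_predict(query: np.ndarray, vectors: np.ndarray, labels: np.ndarray, k: int = 5) -> int:
+
+    # Normalize so that the dot product equals cosine similarity
+
+    vectors = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
+
+    query = query / np.linalg.norm(query)
+
+
+
+    # Take the k most similar labeled samples and return the majority label
+
+    top_k = np.argsort(vectors @ query)[-k:]
+
+    return int(np.bincount(labels[top_k]).argmax())
+
+```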
+
+
+
+In a production environment, the KNN classifier can easily be replaced with the [Qdrant](https://github.com/qdrant/qdrant) vector search engine.
+
+
+
+{{< figure src=/articles_data/detecting-coffee-anomalies/anomalies_detection.png caption=""Production deployment"" >}}
+
+
+
+This approach has the following advantages:
+
+
+
+* We can benefit from unlabeled data, considering labeling is time-consuming and expensive.
+
+* The relevant metric, e.g., precision or recall, can be tuned according to changing requirements during the inference without re-training.
+
+* Queries labeled with a high score can be added to the KNN classifier on the fly as new data points.
+
+
+
+To apply metric learning, we need to have a neural encoder, a model capable of transforming an image into a vector.
+
+
+
+Training such an encoder from scratch may require a significant amount of data we might not have. Therefore, we will divide the training into two steps:
+
+
+
+* The first step is to train the autoencoder, with which we will prepare a model capable of representing the target domain.
+
+
+
+* The second step is finetuning. Its purpose is to train the model to distinguish the required types of anomalies.
+
+
+
+{{< figure src=/articles_data/detecting-coffee-anomalies/anomaly_detection_training.png caption=""Model training architecture"" >}}
+
+
+
+
+
+### Step 1 - Autoencoder for Unlabeled Data
+
+
+
+First, we pretrained a Resnet18-like model in a vanilla autoencoder architecture by leaving the labels aside.
+
+An autoencoder is a model architecture composed of an encoder and a decoder, with the latter trying to recreate the original input from the low-dimensional bottleneck output of the former.
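+
+
+
+As a toy illustration of this structure (not the Resnet18-like model used in the project), a fully-connected autoencoder trained with a reconstruction loss could look like this:
+
+
+
+```python
+
+import torch
+
+from torch import nn
+
+
+
+class TinyAutoencoder(nn.Module):
+
+    def __init__(self, input_dim: int = 3 * 64 * 64, bottleneck_dim: int = 128):
+
+        super().__init__()
+
+        # The encoder compresses the input into a low-dimensional bottleneck vector
+
+        self.encoder = nn.Sequential(
+
+            nn.Flatten(), nn.Linear(input_dim, 512), nn.ReLU(), nn.Linear(512, bottleneck_dim)
+
+        )
+
+        # The decoder tries to recreate the original input from that vector
+
+        self.decoder = nn.Sequential(
+
+            nn.Linear(bottleneck_dim, 512), nn.ReLU(), nn.Linear(512, input_dim)
+
+        )
+
+
+
+    def forward(self, x: torch.Tensor) -> torch.Tensor:
+
+        return self.decoder(self.encoder(x)).view(x.shape)
+
+
+
+model = TinyAutoencoder()
+
+images = torch.rand(8, 3, 64, 64)  # a dummy batch of unlabeled images
+
+loss = nn.functional.mse_loss(model(images), images)  # reconstruction objective
+
+```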
+
+
+
+There is no intuitive evaluation metric to indicate the performance in this setup, but we can evaluate the success by examining the recreated samples visually.
+
+
+
+{{< figure src=/articles_data/detecting-coffee-anomalies/image_reconstruction.png caption=""Example of image reconstruction with Autoencoder"" >}}
+
+
+
+Then we encoded a subset of the data into 128-dimensional vectors by using the encoder,
+
+and created a KNN classifier on top of these embeddings and associated labels.
+
+
+
+Although the results are promising, we can do even better by finetuning with metric learning.
+
+
+
+### Step 2 - Finetuning with Metric Learning
+
+
+
+We started by selecting 200 labeled samples randomly without replacement.
+
+
+
+In this step, the model was composed of the encoder part of the autoencoder with a randomly initialized projection layer stacked on top of it.
+
+We applied transfer learning from the frozen encoder and trained only the projection layer with Triplet Loss and an online batch-all triplet mining strategy.
+
+
+
+Unfortunately, the model overfitted quickly in this attempt.
+
+In the next experiment, we used an online batch-hard strategy with a trick to prevent vector space from collapsing.
+
+We will describe our approach in further articles.
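+
+
+
+For reference, online batch-hard triplet mining can be sketched with the pytorch-metric-learning library as follows; this is not the exact code we used, and the collapse-prevention trick is not shown:
+
+
+
+```python
+
+import torch
+
+from pytorch_metric_learning import losses, miners
+
+
+
+embeddings = torch.randn(32, 128)  # output of the encoder + projection layer for one batch
+
+labels = torch.randint(0, 4, (32,))  # defect labels within the batch
+
+
+
+miner = miners.BatchHardMiner()  # picks the hardest positive and negative per anchor
+
+loss_func = losses.TripletMarginLoss(margin=0.5)
+
+
+
+hard_triplets = miner(embeddings, labels)
+
+loss = loss_func(embeddings, labels, hard_triplets)
+
+```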
+
+
+
+This time it converged smoothly, and our evaluation metrics also improved considerably to match the supervised classification approach.
+
+
+
+{{< figure src=/articles_data/detecting-coffee-anomalies/ae_report_knn.png caption=""Metrics for the autoencoder model with KNN classifier"" >}}
+
+
+
+{{< figure src=/articles_data/detecting-coffee-anomalies/ft_report_knn.png caption=""Metrics for the finetuned model with KNN classifier"" >}}
+
+
+
+We repeated this experiment with 500 and 2000 samples, but it showed only a slight improvement.
+
+Thus we decided to stick to 200 samples - see below for why.
+
+
+
+## Supervised Classification Approach
+
+We also wanted to compare our results with the metrics of a traditional supervised classification model.
+
+For this purpose, a Resnet50 model was finetuned with ~30k labeled images, made available for training.
+
+Surprisingly, the F1 score was around 0.86.
+
+
+
+Please note that we used only 200 labeled samples in the metric learning approach instead of ~30k in the supervised classification approach.
+
+These numbers indicate a huge saving with no considerable compromise in the performance.
+
+
+
+## Conclusion
+
+We obtained results comparable to those of the supervised classification method by using **only 0.66%** of the labeled data with metric learning.
+
+This approach is time-saving and resource-efficient, and that may be improved further. Possible next steps might be:
+
+
+
+- Collect more unlabeled data and pretrain a larger autoencoder.
+
+- Obtain high-quality labels for a small number of images instead of tens of thousands for finetuning.
+
+- Use hyperparameter optimization and possibly gradual unfreezing in the finetuning step.
+
+- Use [vector search engine](https://github.com/qdrant/qdrant) to serve Metric Learning in production.
+
+
+
+We are actively looking into these, and we will continue to publish our findings in this challenge and other use cases of metric learning.
+",articles/detecting-coffee-anomalies.md
+"---
+
+title: Fine Tuning Similar Cars Search
+
+short_description: ""How to use similarity learning to search for similar cars""
+
+description: Learn how to train a similarity model that can retrieve similar car images in novel categories.
+
+social_preview_image: /articles_data/cars-recognition/preview/social_preview.jpg
+
+small_preview_image: /articles_data/cars-recognition/icon.svg
+
+preview_dir: /articles_data/cars-recognition/preview
+
+weight: 10
+
+author: Yusuf Sarıgöz
+
+author_link: https://medium.com/@yusufsarigoz
+
+date: 2022-06-28T13:00:00+03:00
+
+draft: false
+
+# aliases: [ /articles/cars-recognition/ ]
+
+---
+
+
+
+Supervised classification is one of the most widely used training objectives in machine learning,
+
+but not every task can be defined as such. For example,
+
+
+
+1. Your classes may change quickly, e.g., new classes may be added over time,
+
+2. You may not have samples from every possible category,
+
+3. It may be impossible to enumerate all the possible classes during the training time,
+
+4. You may have an essentially different task, e.g., search or retrieval.
+
+
+
+All such problems may be efficiently solved with similarity learning.
+
+
+
+N.B.: If you are new to the similarity learning concept, check out the [awesome-metric-learning](https://github.com/qdrant/awesome-metric-learning) repo for great resources and use case examples.
+
+
+
+However, similarity learning comes with its own difficulties such as:
+
+
+
+1. Need for larger batch sizes usually,
+
+2. More sophisticated loss functions,
+
+3. Changing architectures between training and inference.
+
+
+
+Quaterion is a fine tuning framework built to tackle such problems in similarity learning.
+
+It uses [PyTorch Lightning](https://www.pytorchlightning.ai/)
+
+as a backend, which is advertised with the motto, ""spend more time on research, less on engineering.""
+
+This is also true for Quaterion, and it includes:
+
+
+
+1. Trainable and servable model classes,
+
+2. Annotated built-in loss functions, and a wrapper over [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/) when you need even more,
+
+3. Sample, dataset and data loader classes to make it easier to work with similarity learning data,
+
+4. A caching mechanism for faster iterations and less memory footprint.
+
+
+
+## A closer look at Quaterion
+
+
+
+Let's break down some important modules:
+
+
+
+- `TrainableModel`: A subclass of `pl.LightningModule` that has additional hook methods such as `configure_encoders`, `configure_head`, `configure_metrics` and others
+
+to define objects needed for training and evaluation —see below to learn more on these.
+
+- `SimilarityModel`: An inference-only export of the model that simplifies code transfer and reduces dependencies at inference time.
+
+In fact, Quaterion is composed of two packages:
+
+ 1. `quaterion_models`: package that you need for inference.
+
+ 2. `quaterion`: package that defines objects needed for training and also depends on `quaterion_models`.
+
+- `Encoder` and `EncoderHead`: Two objects that form a `SimilarityModel`.
+
+In most of the cases, you may use a frozen pretrained encoder, e.g., ResNets from `torchvision`, or language modelling
+
+models from `transformers`, with a trainable `EncoderHead` stacked on top of it.
+
+`quaterion_models` offers several ready-to-use `EncoderHead` implementations,
+
+but you may also create your own by subclassing a parent class or easily listing PyTorch modules in a `SequentialHead`.
+
+
+
+Quaterion has other objects such as distance functions, evaluation metrics, evaluators, convenient dataset and data loader classes, but these are mostly self-explanatory.
+
+Thus, they will not be explained in detail in this article for brevity.
+
+However, you can always go check out the [documentation](https://quaterion.qdrant.tech) to learn more about them.
+
+
+
+The focus of this tutorial is a step-by-step solution to a similarity learning problem with Quaterion.
+
+This will also help us better understand how the abovementioned objects fit together in a real project.
+
+Let's start walking through some of the important parts of the code.
+
+
+
+If you are looking for the complete source code instead, you can find it under the [examples](https://github.com/qdrant/quaterion/tree/master/examples/cars)
+
+directory in the Quaterion repo.
+
+
+
+## Dataset
+
+In this tutorial, we will use the [Stanford Cars](https://pytorch.org/vision/main/generated/torchvision.datasets.StanfordCars.html)
+
+dataset.
+
+
+
+{{< figure src=https://storage.googleapis.com/quaterion/docs/class_montage.jpg caption=""Stanford Cars Dataset"" >}}
+
+
+
+
+
+It has 16185 images of cars from 196 classes,
+
+and it is split into training and testing subsets with an almost 50-50 split.
+
+To make things even more interesting, however, we will first merge training and testing subsets,
+
+then we will split it into two again in such a way that half of the 196 classes will be put into the training set and the other half into the testing set.
+
+This will let us test our model with samples from novel classes that it has never seen in the training phase,
+
+which is what supervised classification cannot achieve but similarity learning can.
+
+
+
+In the following code borrowed from [`data.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/data.py):
+
+- `get_datasets()` function performs the splitting task described above.
+
+- `get_dataloaders()` function creates `GroupSimilarityDataLoader` instances from training and testing datasets.
+
+- Datasets are regular PyTorch datasets that emit `SimilarityGroupSample` instances.
+
+
+
+N.B.: Currently, Quaterion has two data types to represent samples in a dataset. To learn more about `SimilarityPairSample`, check out the [NLP tutorial](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html)
+
+
+
+```python
+
+import numpy as np
+
+import os
+
+import tqdm
+
+from torch.utils.data import Dataset, Subset
+
+from torchvision import datasets, transforms
+
+from typing import Callable
+
+from pytorch_lightning import seed_everything
+
+
+
+from quaterion.dataset import (
+
+ GroupSimilarityDataLoader,
+
+ SimilarityGroupSample,
+
+)
+
+
+
+# set seed to deterministically sample train and test categories later on
+
+seed_everything(seed=42)
+
+
+
+# dataset will be downloaded to this directory under local directory
+
+dataset_path = os.path.join(""."", ""torchvision"", ""datasets"")
+
+
+
+
+
+def get_datasets(input_size: int):
+
+ # Use Mean and std values for the ImageNet dataset as the base model was pretrained on it.
+
+ # taken from https://www.geeksforgeeks.org/how-to-normalize-images-in-pytorch/
+
+ mean = [0.485, 0.456, 0.406]
+
+ std = [0.229, 0.224, 0.225]
+
+
+
+ # create train and test transforms
+
+ transform = transforms.Compose(
+
+ [
+
+ transforms.Resize((input_size, input_size)),
+
+ transforms.ToTensor(),
+
+ transforms.Normalize(mean, std),
+
+ ]
+
+ )
+
+
+
+ # we need to merge train and test splits into a full dataset first,
+
+ # and then we will split it to two subsets again with each one composed of distinct labels.
+
+ full_dataset = datasets.StanfordCars(
+
+ root=dataset_path, split=""train"", download=True
+
+ ) + datasets.StanfordCars(root=dataset_path, split=""test"", download=True)
+
+
+
+ # full_dataset contains examples from 196 categories labeled with an integer from 0 to 195
+
+ # randomly sample half of it to be used for training
+
+ train_categories = np.random.choice(a=196, size=196 // 2, replace=False)
+
+
+
+ # get a list of labels for all samples in the dataset
+
+ labels_list = np.array([label for _, label in tqdm.tqdm(full_dataset)])
+
+
+
+ # get a mask for indices where label is included in train_categories
+
+ labels_mask = np.isin(labels_list, train_categories)
+
+
+
+ # get a list of indices to be used as train samples
+
+ train_indices = np.argwhere(labels_mask).squeeze()
+
+
+
+ # others will be used as test samples
+
+ test_indices = np.argwhere(np.logical_not(labels_mask)).squeeze()
+
+
+
+ # now that we have distinct indices for train and test sets, we can use `Subset` to create new datasets
+
+ # from `full_dataset`, which contain only the samples at given indices.
+
+ # finally, we apply transformations created above.
+
+ train_dataset = CarsDataset(
+
+ Subset(full_dataset, train_indices), transform=transform
+
+ )
+
+
+
+ test_dataset = CarsDataset(
+
+ Subset(full_dataset, test_indices), transform=transform
+
+ )
+
+
+
+ return train_dataset, test_dataset
+
+
+
+
+
+def get_dataloaders(
+
+ batch_size: int,
+
+ input_size: int,
+
+ shuffle: bool = False,
+
+):
+
+ train_dataset, test_dataset = get_datasets(input_size)
+
+
+
+ train_dataloader = GroupSimilarityDataLoader(
+
+ train_dataset, batch_size=batch_size, shuffle=shuffle
+
+ )
+
+
+
+ test_dataloader = GroupSimilarityDataLoader(
+
+ test_dataset, batch_size=batch_size, shuffle=False
+
+ )
+
+
+
+ return train_dataloader, test_dataloader
+
+
+
+
+
+class CarsDataset(Dataset):
+
+ def __init__(self, dataset: Dataset, transform: Callable):
+
+ self._dataset = dataset
+
+ self._transform = transform
+
+
+
+ def __len__(self) -> int:
+
+ return len(self._dataset)
+
+
+
+ def __getitem__(self, index) -> SimilarityGroupSample:
+
+ image, label = self._dataset[index]
+
+ image = self._transform(image)
+
+
+
+ return SimilarityGroupSample(obj=image, group=label)
+
+```
+
+
+
+## Trainable Model
+
+
+
+Now it's time to review one of the most exciting building blocks of Quaterion: [TrainableModel](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#module-quaterion.train.trainable_model).
+
+It is the base class for models you would like to configure for training,
+
+and it provides several hook methods starting with `configure_` to set up every aspect of the training phase
+
+just like [`pl.LightningModule`](https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.core.LightningModule.html), its own base class.
+
+It is central to fine tuning with Quaterion, so we will break down this essential code in [`models.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/models.py)
+
+and review each method separately. Let's begin with the imports:
+
+
+
+```python
+
+import torch
+
+import torchvision
+
+from quaterion_models.encoders import Encoder
+
+from quaterion_models.heads import EncoderHead, SkipConnectionHead
+
+from torch import nn
+
+from typing import Dict, Union, Optional, List
+
+
+
+from quaterion import TrainableModel
+
+from quaterion.eval.attached_metric import AttachedMetric
+
+from quaterion.eval.group import RetrievalRPrecision
+
+from quaterion.loss import SimilarityLoss, TripletLoss
+
+from quaterion.train.cache import CacheConfig, CacheType
+
+
+
+from .encoders import CarsEncoder
+
+```
+
+
+
+In the following code snippet, we subclass `TrainableModel`.
+
+You may use `__init__()` to store some attributes to be used in various `configure_*` methods later on.
+
+The more interesting part is, however, in the [`configure_encoders()`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.configure_encoders) method.
+
+We need to return an instance of [`Encoder`](https://quaterion-models.qdrant.tech/quaterion_models.encoders.encoder.html#quaterion_models.encoders.encoder.Encoder) (or a dictionary with `Encoder` instances as values) from this method.
+
+In our case, it is an instance of `CarsEncoder`, which we will review soon.
+
+Notice now how it is created with a pretrained ResNet152 model whose classification layer is replaced by an identity function.
+
+
+
+
+
+```python
+
+class Model(TrainableModel):
+
+ def __init__(self, lr: float, mining: str):
+
+ self._lr = lr
+
+ self._mining = mining
+
+ super().__init__()
+
+
+
+ def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]:
+
+ pre_trained_encoder = torchvision.models.resnet152(pretrained=True)
+
+ pre_trained_encoder.fc = nn.Identity()
+
+ return CarsEncoder(pre_trained_encoder)
+
+```
+
+
+
+In Quaterion, a [`SimilarityModel`](https://quaterion-models.qdrant.tech/quaterion_models.model.html#quaterion_models.model.SimilarityModel) is composed of one or more `Encoder`s
+
+and an [`EncoderHead`](https://quaterion-models.qdrant.tech/quaterion_models.heads.encoder_head.html#quaterion_models.heads.encoder_head.EncoderHead).
+
+`quaterion_models` has [several `EncoderHead` implementations](https://quaterion-models.qdrant.tech/quaterion_models.heads.html#module-quaterion_models.heads)
+
+with a unified API such as a configurable dropout value.
+
+You may use one of them or create your own subclass of `EncoderHead`.
+
+In either case, you need to return an instance of it from [`configure_head`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.configure_head)
+
+In this example, we will use a `SkipConnectionHead`, which is lightweight and more resistant to overfitting.
+
+
+
+```python
+
+ def configure_head(self, input_embedding_size) -> EncoderHead:
+
+ return SkipConnectionHead(input_embedding_size, dropout=0.1)
+
+```
+
+
+
+Quaterion has implementations of [some popular loss functions](https://quaterion.qdrant.tech/quaterion.loss.html) for similarity learning, all of which subclass either [`GroupLoss`](https://quaterion.qdrant.tech/quaterion.loss.group_loss.html#quaterion.loss.group_loss.GroupLoss)
+
+or [`PairwiseLoss`](https://quaterion.qdrant.tech/quaterion.loss.pairwise_loss.html#quaterion.loss.pairwise_loss.PairwiseLoss).
+
+In this example, we will use [`TripletLoss`](https://quaterion.qdrant.tech/quaterion.loss.triplet_loss.html#quaterion.loss.triplet_loss.TripletLoss),
+
+which is a subclass of `GroupLoss`. In general, subclasses of `GroupLoss` are used with
+
+datasets in which samples are assigned to some group (or label). In our example, the label is the make of the car.
+
+Those datasets should emit `SimilarityGroupSample`.
+
+Other alternatives are implementations of `PairwiseLoss`, which consume `SimilarityPairSample` - pair of objects for which similarity is specified individually.
+
+To see an example of the latter, you may need to check out the [NLP Tutorial](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html)
+
+
+
+```python
+
+ def configure_loss(self) -> SimilarityLoss:
+
+ return TripletLoss(mining=self._mining, margin=0.5)
+
+```
+
+
+
+
+
+`configure_optimizers()` may be familiar to PyTorch Lightning users,
+
+but there is a novel `self.model` used inside that method.
+
+It is an instance of `SimilarityModel` and is automatically created by Quaterion from the return values of `configure_encoders()` and `configure_head()`.
+
+
+
+```python
+
+ def configure_optimizers(self):
+
+ optimizer = torch.optim.Adam(self.model.parameters(), self._lr)
+
+ return optimizer
+
+```
+
+
+
+Caching in Quaterion is used for avoiding calculation of outputs of a frozen pretrained `Encoder` in every epoch.
+
+When it is configured, outputs will be computed once and cached in the preferred device for direct usage later on.
+
+It provides both a considerable speedup and less memory footprint.
+
+However, it is quite versatile and has several knobs to tune.
+
+To get the most out of its potential, it's recommended that you check out the [cache tutorial](https://quaterion.qdrant.tech/tutorials/cache_tutorial.html).
+
+In short, you need to return a [`CacheConfig`](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheConfig)
+
+instance from [`configure_caches()`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.configure_caches)
+
+to specify cache-related preferences such as:
+
+- [`CacheType`](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheType), i.e., whether to store caches on CPU or GPU,
+
+- `save_dir`, i.e., where to persist caches for subsequent runs,
+
+- `batch_size`, i.e., batch size to be used only when creating caches - the batch size to be used during the actual training might be different.
+
+
+
+```python
+
+ def configure_caches(self) -> Optional[CacheConfig]:
+
+ return CacheConfig(
+
+ cache_type=CacheType.AUTO, save_dir=""./cache_dir"", batch_size=32
+
+ )
+
+```
+
+
+
+We have just configured the training-related settings of a `TrainableModel`.
+
+However, evaluation is an integral part of experimentation in machine learning,
+
+and you may configure evaluation metrics by returning one or more [`AttachedMetric`](https://quaterion.qdrant.tech/quaterion.eval.attached_metric.html#quaterion.eval.attached_metric.AttachedMetric)
+
+instances from `configure_metrics()`. Quaterion has several built-in [group](https://quaterion.qdrant.tech/quaterion.eval.group.html)
+
+and [pairwise](https://quaterion.qdrant.tech/quaterion.eval.pair.html)
+
+evaluation metrics.
+
+
+
+```python
+
+ def configure_metrics(self) -> Union[AttachedMetric, List[AttachedMetric]]:
+
+ return AttachedMetric(
+
+ ""rrp"",
+
+ metric=RetrievalRPrecision(),
+
+ prog_bar=True,
+
+ on_epoch=True,
+
+ on_step=False,
+
+ )
+
+```
+
+
+
+## Encoder
+
+
+
+As previously stated, a `SimilarityModel` is composed of one or more `Encoder`s and an `EncoderHead`.
+
+Even if we freeze pretrained `Encoder` instances,
+
+`EncoderHead` is still trainable and has enough parameters to adapt to the new task at hand.
+
+It is recommended that you set the `trainable` property to `False` whenever possible,
+
+as it lets you benefit from the caching mechanism described above.
+
+Another important property is `embedding_size`, which will be passed to `TrainableModel.configure_head()` as `input_embedding_size`
+
+to let you properly initialize the head layer.
+
+Let's see how an `Encoder` is implemented in the following code borrowed from [`encoders.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/encoders.py):
+
+
+
+```python
+
+import os
+
+
+
+import torch
+
+import torch.nn as nn
+
+from quaterion_models.encoders import Encoder
+
+
+
+
+
+class CarsEncoder(Encoder):
+
+ def __init__(self, encoder_model: nn.Module):
+
+ super().__init__()
+
+ self._encoder = encoder_model
+
+ self._embedding_size = 2048 # last dimension from the ResNet model
+
+
+
+ @property
+
+ def trainable(self) -> bool:
+
+ return False
+
+
+
+ @property
+
+ def embedding_size(self) -> int:
+
+ return self._embedding_size
+
+```
+
+
+
+An `Encoder` is a regular `torch.nn.Module` subclass,
+
+and we need to implement the forward pass logic in the `forward` method.
+
+Depending on how you create your submodules, this method may be more complex;
+
+however, we simply pass the input through a pretrained ResNet152 backbone in this example:
+
+
+
+```python
+
+ def forward(self, images):
+
+ embeddings = self._encoder.forward(images)
+
+ return embeddings
+
+```
+
+
+
+An important step of machine learning development is proper saving and loading of models.
+
+Quaterion lets you save your `SimilarityModel` with [`TrainableModel.save_servable()`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.save_servable)
+
+and restore it with [`SimilarityModel.load()`](https://quaterion-models.qdrant.tech/quaterion_models.model.html#quaterion_models.model.SimilarityModel.load).
+
+To be able to use these two methods, you need to implement `save()` and `load()` methods in your `Encoder`.
+
+Additionally, it is also important that you define your subclass of `Encoder` outside the `__main__` namespace,
+
+i.e., in a separate file from your main entry point.
+
+It may not be restored properly otherwise.
+
+
+
+```python
+
+ def save(self, output_path: str):
+
+ os.makedirs(output_path, exist_ok=True)
+
+ torch.save(self._encoder, os.path.join(output_path, ""encoder.pth""))
+
+
+
+ @classmethod
+
+ def load(cls, input_path):
+
+ encoder_model = torch.load(os.path.join(input_path, ""encoder.pth""))
+
+ return CarsEncoder(encoder_model)
+
+```
+
+
+
+## Training
+
+
+
+With all essential objects implemented, it is easy to bring them all together and run a training loop with the [`Quaterion.fit()`](https://quaterion.qdrant.tech/quaterion.main.html#quaterion.main.Quaterion.fit)
+
+method. It expects:
+
+- A `TrainableModel`,
+
+- A [`pl.Trainer`](https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html),
+
+- A [`SimilarityDataLoader`](https://quaterion.qdrant.tech/quaterion.dataset.similarity_data_loader.html#quaterion.dataset.similarity_data_loader.SimilarityDataLoader) for training data,
+
+- And optionally, another `SimilarityDataLoader` for evaluation data.
+
+
+
+We need to import a few objects to prepare all of these:
+
+
+
+```python
+
+import os
+
+import pytorch_lightning as pl
+
+import torch
+
+from pytorch_lightning.callbacks import EarlyStopping, ModelSummary
+
+
+
+from quaterion import Quaterion
+
+from .data import get_dataloaders
+
+from .models import Model
+
+```
+
+
+
+The `train()` function in the following code snippet expects several hyperparameter values as arguments.
+
+They can be defined in a `config.py` or passed from the command line.
+
+However, that part of the code is omitted for brevity.
+
+Instead let's focus on how all the building blocks are initialized and passed to `Quaterion.fit()`,
+
+which is responsible for running the whole loop.
+
+When the training loop is complete, you can simply call `TrainableModel.save_servable()`
+
+to save the current state of the `SimilarityModel` instance:
+
+
+
+```python
+
+def train(
+
+ lr: float,
+
+ mining: str,
+
+ batch_size: int,
+
+ epochs: int,
+
+ input_size: int,
+
+ shuffle: bool,
+
+ save_dir: str,
+
+):
+
+ model = Model(
+
+ lr=lr,
+
+ mining=mining,
+
+ )
+
+
+
+ train_dataloader, val_dataloader = get_dataloaders(
+
+ batch_size=batch_size, input_size=input_size, shuffle=shuffle
+
+ )
+
+
+
+ early_stopping = EarlyStopping(
+
+ monitor=""validation_loss"",
+
+ patience=50,
+
+ )
+
+
+
+ trainer = pl.Trainer(
+
+ gpus=1 if torch.cuda.is_available() else 0,
+
+ max_epochs=epochs,
+
+ callbacks=[early_stopping, ModelSummary(max_depth=3)],
+
+ enable_checkpointing=False,
+
+ log_every_n_steps=1,
+
+ )
+
+
+
+ Quaterion.fit(
+
+ trainable_model=model,
+
+ trainer=trainer,
+
+ train_dataloader=train_dataloader,
+
+ val_dataloader=val_dataloader,
+
+ )
+
+
+
+ model.save_servable(save_dir)
+
+```
+
+
+
+## Evaluation
+
+
+
+Let's see what we have achieved with these simple steps.
+
+[`evaluate.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/evaluate.py) has two functions to evaluate both the baseline model and the tuned similarity model.
+
+We will review only the latter for brevity.
+
+In addition to the ease of restoring a `SimilarityModel`, this code snippet also shows
+
+how to use [`Evaluator`](https://quaterion.qdrant.tech/quaterion.eval.evaluator.html#quaterion.eval.evaluator.Evaluator)
+
+to evaluate the performance of a `SimilarityModel` on a given dataset
+
+by given evaluation metrics.
+
+
+
+
+
+{{< figure src=https://storage.googleapis.com/quaterion/docs/original_vs_tuned_cars.png caption=""Comparison of original and tuned models for retrieval"" >}}
+
+
+
+
+
+Full evaluation of a dataset quickly becomes expensive as the dataset grows,
+
+and thus you may want to perform a partial evaluation on a sampled subset.
+
+In this case, you may use [samplers](https://quaterion.qdrant.tech/quaterion.eval.samplers.html)
+
+to limit the evaluation.
+
+Similar to `Quaterion.fit()` used for training, [`Quaterion.evaluate()`](https://quaterion.qdrant.tech/quaterion.main.html#quaterion.main.Quaterion.evaluate)
+
+runs a complete evaluation loop. It takes the following as arguments:
+
+- An `Evaluator` instance created with given evaluation metrics and a `Sampler`,
+
+- The `SimilarityModel` to be evaluated,
+
+- And the evaluation dataset.
+
+
+
+```python
+
+def eval_tuned_encoder(dataset, device):
+
+ print(""Evaluating tuned encoder..."")
+
+ tuned_cars_model = SimilarityModel.load(
+
+ os.path.join(os.path.dirname(__file__), ""cars_encoders"")
+
+ ).to(device)
+
+ tuned_cars_model.eval()
+
+
+
+ result = Quaterion.evaluate(
+
+ evaluator=Evaluator(
+
+ metrics=RetrievalRPrecision(),
+
+ sampler=GroupSampler(sample_size=1000, device=device, log_progress=True),
+
+ ),
+
+ model=tuned_cars_model,
+
+ dataset=dataset,
+
+ )
+
+
+
+ print(result)
+
+```
+
+
+
+## Conclusion
+
+
+
+In this tutorial, we trained a similarity model to search for similar cars from novel categories unseen in the training phase.
+
+Then, we evaluated it on a test dataset by the Retrieval R-Precision metric.
+
+The base model scored 0.1207,
+
+and our tuned model hit 0.2540, more than twice as high.
+
+These scores can be seen in the following figure:
+
+
+
+{{< figure src=/articles_data/cars-recognition/cars_metrics.png caption=""Metrics for the base and tuned models"" >}}
+",articles/cars-recognition.md
+"---
+
+title: ""How to Optimize RAM Requirements for 1 Million Vectors: A Case Study""
+
+short_description: Master RAM measurement and memory optimization for optimal performance and resource use.
+
+description: Unlock the secrets of efficient RAM measurement and memory optimization with this comprehensive guide, ensuring peak performance and resource utilization.
+
+social_preview_image: /articles_data/memory-consumption/preview/social_preview.jpg
+
+preview_dir: /articles_data/memory-consumption/preview
+
+small_preview_image: /articles_data/memory-consumption/icon.svg
+
+weight: 7
+
+author: Andrei Vasnetsov
+
+author_link: https://blog.vasnetsov.com/
+
+date: 2022-12-07T10:18:00.000Z
+
+# aliases: [ /articles/memory-consumption/ ]
+
+---
+
+
+
+
+
+
+
+
+
+
+
+
+
+# Mastering RAM Measurement and Memory Optimization in Qdrant: A Comprehensive Guide
+
+
+
+When it comes to measuring the memory consumption of our processes, we often rely on tools such as `htop` to give us an indication of how much RAM is being used. However, this method can be misleading and doesn't always accurately reflect the true memory usage of a process.
+
+
+
+There are many different ways in which `htop` may not be a reliable indicator of memory usage.
+
+For instance, a process may allocate memory in advance but not use it, or it may not return freed memory to the operating system, leading to overstated memory consumption.
+
+A process may be forked, which means that it will have a separate memory space, but it will share the same code and data with the parent process.
+
+This means that the memory consumption of the child process will be counted twice.
+
+Additionally, a process may utilize disk cache, which is also accounted as resident memory in the `htop` measurements.
+
+
+
+As a result, even if `htop` shows that a process is using 10GB of memory, it doesn't necessarily mean that the process actually requires 10GB of RAM to operate efficiently.
+
+In this article, we will explore how to properly measure RAM usage and optimize [Qdrant](https://qdrant.tech/) for optimal memory consumption.
+
+
+
+## How to measure actual RAM requirements
+
+
+
+
+
+
+
+We need to know memory consumption in order to estimate how much RAM is required to run the program.
+
+So in order to determine that, we can conduct a simple experiment.
+
+Let's limit the allowed memory of the process and observe at which point it stops functioning.
+
+In this way we can determine the minimum amount of RAM the program needs to operate.
+
+
+
+One way to do this is by conducting a grid search, but a more efficient method is to use binary search to quickly find the minimum required amount of RAM.
+
+We can use docker to limit the memory usage of the process.
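+
+
+
+As an illustration of the binary search idea, with a toy stand-in for the real benchmark run, the search for the minimal limit can be sketched like this:
+
+
+
+```python
+
+def minimal_memory_mb(benchmark_passes, low_mb: int = 128, high_mb: int = 4096) -> int:
+
+    # benchmark_passes(limit_mb) should start Qdrant under the given docker memory
+
+    # limit, run the search benchmark and return True if it finishes successfully
+
+    while low_mb < high_mb:
+
+        mid = (low_mb + high_mb) // 2
+
+        if benchmark_passes(mid):
+
+            high_mb = mid  # it fits - try a smaller limit
+
+        else:
+
+            low_mb = mid + 1  # out of memory - we need more RAM
+
+    return low_mb
+
+
+
+# Toy stand-in: pretend every limit above 1200mb succeeds
+
+print(minimal_memory_mb(lambda limit_mb: limit_mb >= 1200))  # -> 1200
+
+```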
+
+
+
+Before running each benchmark, it is important to clear the page cache with the following command:
+
+
+
+```bash
+
+sudo bash -c 'sync; echo 1 > /proc/sys/vm/drop_caches'
+
+```
+
+
+
+This ensures that the process doesn't utilize any data from previous runs, providing more accurate and consistent results.
+
+
+
+We can use the following command to run Qdrant with a memory limit of 1GB:
+
+
+
+```bash
+
+docker run -it --rm \
+
+ --memory 1024mb \
+
+ --network=host \
+
+ -v ""$(pwd)/data/storage:/qdrant/storage"" \
+
+ qdrant/qdrant:latest
+
+```
+
+
+
+## Let's run some benchmarks
+
+
+
+Let's run some benchmarks to see how much RAM Qdrant needs to serve 1 million vectors.
+
+
+
+We can use the `glove-100-angular` dataset and scripts from the [vector-db-benchmark](https://github.com/qdrant/vector-db-benchmark) project to upload and query the vectors.
+
+With the first run we will use the default configuration of Qdrant with all data stored in RAM.
+
+
+
+```bash
+
+# Upload vectors
+
+python run.py --engines qdrant-all-in-ram --datasets glove-100-angular
+
+```
+
+
+
+After uploading vectors, we will repeat the same experiment with different RAM limits to see how they affect the memory consumption and search speed.
+
+
+
+```bash
+
+# Search vectors
+
+python run.py --engines qdrant-all-in-ram --datasets glove-100-angular --skip-upload
+
+```
+
+
+
+
+
+
+
+### All in Memory
+
+
+
+In the first experiment, we tested how well our system performs when all vectors are stored in memory.
+
+We tried using different amounts of memory, ranging from 1512mb to 1024mb, and measured the number of requests per second (rps) that our system was able to handle.
+
+
+
+| Memory | Requests/s |
+
+|--------|---------------|
+
+| 1512mb | 774.38 |
+
+| 1256mb | 760.63 |
+
+| 1200mb | 794.72 |
+
+| 1152mb | out of memory |
+
+| 1024mb | out of memory |
+
+
+
+
+
+We found that a 1152mb memory limit caused the system to run out of memory, while limits of 1200mb, 1256mb, and 1512mb all sustained roughly 780 RPS.
+
+This suggests that about 1.2GB of memory is enough to serve around 1 million vectors, and that there is no speed degradation as long as the memory limit stays above that threshold.
+
+
+
+### Vectors stored using MMAP
+
+
+
+Let's go a bit further!
+
+In the second experiment, we tested how well our system performs when **vectors are stored in memory-mapped files** (mmap).
+
+Create collection with:
+
+
+
+```http
+
+PUT /collections/benchmark
+
+{
+
+ ""vectors"": {
+
+ ...
+
+ ""on_disk"": true
+
+ }
+
+}
+
+
+
+```
+
+With `""on_disk"": true`, Qdrant stores the original vectors in a memory-mapped file on disk instead of keeping them in RAM.
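+
+For reference, a roughly equivalent setup with the Python client could look like the sketch below (the collection name and connection URL are illustrative; `glove-100-angular` vectors are 100-dimensional with angular, i.e. cosine, distance):
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+client.create_collection(
+
+    collection_name=""benchmark"",
+
+    vectors_config=models.VectorParams(
+
+        size=100,                         # glove-100-angular is 100-dimensional
+
+        distance=models.Distance.COSINE,  # angular distance maps to cosine
+
+        on_disk=True,                     # keep original vectors in a memory-mapped file
+
+    ),
+
+)
+
+```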
+
+
+
+Now the out-of-memory error only occurs when we limit the process to **600mb** of RAM.
+
+
+
+
+
+Experiment details:
+
+
+
+| Memory | Requests/s |
+
+|--------|---------------|
+
+| 1200mb | 759.94 |
+
+| 1100mb | 687.00 |
+
+| 1000mb | 10 |
+
+
+
+*The same experiment, repeated on a slightly faster disk:*
+
+
+
+| Memory | Requests/s |
+
+|--------|---------------|
+
+| 1000mb | 25 |
+
+| 750mb | 5 |
+
+| 625mb | 2.5 |
+
+| 600mb | out of memory |
+
+
+
+
+
+
+
+
+
+At this point we have to switch from network-mounted storage to a faster disk, as the network-based storage is too slow to handle the amount of random reads that our system needs to serve the queries.
+
+
+
+But let's first see how much RAM we need to serve 1 million vectors and then we will discuss the speed optimization as well.
+
+
+
+
+
+### Vectors and HNSW graph stored using MMAP
+
+
+
+In the third experiment, we tested how well our system performs when both the vectors and the [HNSW](https://qdrant.tech/articles/filtrable-hnsw/) graph are stored in memory-mapped files.
+
+Create collection with:
+
+
+
+```http
+
+PUT /collections/benchmark
+
+{
+
+ ""vectors"": {
+
+ ...
+
+ ""on_disk"": true
+
+ },
+
+ ""hnsw_config"": {
+
+ ""on_disk"": true
+
+ },
+
+ ...
+
+}
+
+```
+
+
+
+With this configuration we are able to serve 1 million vectors with **only 135mb of RAM**!
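+
+Continuing the Python client sketch from the previous experiment, the same setup extends naturally to also keep the HNSW graph on disk (names remain illustrative):
+
+```python
+
+# Recreate the collection for this experiment with both vectors and HNSW graph on disk
+
+client.create_collection(
+
+    collection_name=""benchmark"",
+
+    vectors_config=models.VectorParams(
+
+        size=100,
+
+        distance=models.Distance.COSINE,
+
+        on_disk=True,                   # original vectors in a memory-mapped file
+
+    ),
+
+    hnsw_config=models.HnswConfigDiff(
+
+        on_disk=True,                   # HNSW graph in a memory-mapped file as well
+
+    ),
+
+)
+
+```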
+
+
+
+
+
+
+
+Experiment details:
+
+
+
+
+
+| Memory | Requests/s |
+
+|--------|---------------|
+
+| 600mb | 5 rps |
+
+| 300mb | 0.9 rps / 1.1 sec per query |
+
+| 150mb | 0.4 rps / 2.5 sec per query |
+
+| 135mb | 0.33 rps / 3 sec per query |
+
+| 125mb | out of memory |
+
+
+
+
+
+
+
+At this point the importance of the disk speed becomes critical.
+
+We can serve the search requests with 135mb of RAM, but the speed of the requests makes it impossible to use the system in production.
+
+
+
+Let's see how we can improve the speed.
+
+
+
+
+
+## How to speed up the search
+
+
+
+
+
+
+
+
+
+To measure the impact of disk parameters on search speed, we used the `fio` tool to test the speed of different types of disks.
+
+
+
+```bash
+
+# Install fio
+
+sudo apt-get install fio
+
+
+
+# Run fio to check the random reads speed
+
+fio --randrepeat=1 \
+
+ --ioengine=libaio \
+
+ --direct=1 \
+
+ --gtod_reduce=1 \
+
+ --name=fiotest \
+
+ --filename=testfio \
+
+ --bs=4k \
+
+ --iodepth=64 \
+
+ --size=8G \
+
+ --readwrite=randread
+
+```
+
+
+
+
+
+Initially, we tested on a network-mounted disk, but its performance was too slow, with a read IOPS of 6366 and a bandwidth of 24.9 MiB/s:
+
+
+
+```text
+
+read: IOPS=6366, BW=24.9MiB/s (26.1MB/s)(8192MiB/329424msec)
+
+```
+
+
+
+To improve performance, we switched to a local disk, which showed much faster results, with a read IOPS of 63.2k and a bandwidth of 247 MiB/s:
+
+
+
+```text
+
+read: IOPS=63.2k, BW=247MiB/s (259MB/s)(8192MiB/33207msec)
+
+```
+
+
+
+That gave us a significant speed boost, but we wanted to see if we could improve performance even further.
+
+To do that, we switched to a machine with a local SSD, which showed even better results, with a read IOPS of 183k and a bandwidth of 716 MiB/s:
+
+
+
+```text
+
+read: IOPS=183k, BW=716MiB/s (751MB/s)(8192MiB/11438msec)
+
+```
+
+
+
+Let's see how these results translate into search speed:
+
+
+
+| Memory | RPS with IOPS=63.2k | RPS with IOPS=183k |
+
+|--------|---------------------|--------------------|
+
+| 600mb | 5 | 50 |
+
+| 300mb | 0.9 | 13 |
+
+| 200mb | 0.5 | 8 |
+
+| 150mb | 0.4 | 7 |
+
+
+
+
+
+As you can see, the speed of the disk has a significant impact on the search speed.
+
+With a local SSD, we were able to increase the search speed by 10x!
+
+
+
+With a production-grade disk, the search speed could be even higher.
+
+Some SSD configurations can reach 1M IOPS and more.
+
+
+
+Such disks might be an interesting option for serving large datasets with low search latency in Qdrant.
+
+
+
+
+
+## Conclusion
+
+
+
+In this article, we showed that Qdrant has flexibility in terms of RAM usage and can be used to serve large datasets. It provides configurable trade-offs between RAM usage and search speed. If you’re interested in learning more about Qdrant, [book a demo today](https://qdrant.tech/contact-us/)!
+
+
+
+We are eager to learn more about how you use Qdrant in your projects, what challenges you face, and how we can help you solve them.
+
+Please feel free to join our [Discord](https://qdrant.to/discord) and share your experience with us!
+
+
+",articles/memory-consumption.md
+"---
+
+title: ""Vector Search as a dedicated service""
+
+short_description: ""Why vector search requires to be a dedicated service.""
+
+description: ""Why vector search requires a dedicated service.""
+
+social_preview_image: /articles_data/dedicated-service/social-preview.png
+
+small_preview_image: /articles_data/dedicated-service/preview/icon.svg
+
+preview_dir: /articles_data/dedicated-service/preview
+
+weight: -70
+
+author: Andrey Vasnetsov
+
+author_link: https://vasnetsov.com/
+
+date: 2023-11-30T10:00:00+03:00
+
+draft: false
+
+keywords:
+
+ - system architecture
+
+ - vector search
+
+ - best practices
+
+ - anti-patterns
+
+---
+
+
+
+
+
+Ever since the data science community discovered that vector search significantly improves LLM answers,
+
+various vendors and enthusiasts have been arguing over the proper solutions to store embeddings.
+
+
+
+Some say storing them in a specialized engine (aka vector database) is better. Others say that it's enough to use plugins for existing databases.
+
+
+
+Here are [just](https://nextword.substack.com/p/vector-database-is-not-a-separate) a [few](https://stackoverflow.blog/2023/09/20/do-you-need-a-specialized-vector-database-to-implement-vector-search-well/) of [them](https://www.singlestore.com/blog/why-your-vector-database-should-not-be-a-vector-database/).
+
+
+
+
+
+This article presents our vision and arguments on the topic.
+
+We will:
+
+
+
+1. Explain why and when you actually need a dedicated vector solution
+
+2. Debunk some ungrounded claims and anti-patterns to be avoided when building a vector search system.
+
+
+
+A table of contents:
+
+
+
+* *Each database vendor will sooner or later introduce vector capabilities...* [[click](#each-database-vendor-will-sooner-or-later-introduce-vector-capabilities-that-will-make-every-database-a-vector-database)]
+
+* *Having a dedicated vector database requires duplication of data.* [[click](#having-a-dedicated-vector-database-requires-duplication-of-data)]
+
+* *Having a dedicated vector database requires complex data synchronization.* [[click](#having-a-dedicated-vector-database-requires-complex-data-synchronization)]
+
+* *You have to pay for a vector service uptime and data transfer.* [[click](#you-have-to-pay-for-a-vector-service-uptime-and-data-transfer-of-both-solutions)]
+
+* *What is more seamless than your current database adding vector search capability?* [[click](#what-is-more-seamless-than-your-current-database-adding-vector-search-capability)]
+
+* *Databases can support RAG use-case end-to-end.* [[click](#databases-can-support-rag-use-case-end-to-end)]
+
+
+
+
+
+## Responding to claims
+
+
+
+###### Each database vendor will sooner or later introduce vector capabilities. That will make every database a Vector Database.
+
+
+
+The origins of this misconception lie in the careless use of the term Vector *Database*.
+
+When we think of a *database*, we subconsciously envision a relational database like Postgres or MySQL.
+
+Or, more scientifically, a service built on ACID principles that provides transactions, strong consistency guarantees, and atomicity.
+
+
+
+The majority of vector databases are not *databases* in this sense.
+
+It is more accurate to call them *search engines*, but unfortunately, the marketing term *vector database* has already stuck, and it is unlikely to change.
+
+
+
+
+
+*What makes search engines different, and why are vector DBs built as search engines?*
+
+
+
+First of all, search engines assume different patterns of workloads and prioritize different properties of the system. The core architecture of such solutions is built around those priorities.
+
+
+
+What types of properties do search engines prioritize?
+
+
+
+* **Scalability**. Search engines are built to handle large amounts of data and queries. They are designed to be horizontally scalable and operate with more data than can fit into a single machine.
+
+* **Search speed**. Search engines should guarantee low latency for queries, while the atomicity of updates is less important.
+
+* **Availability**. Search engines must stay available if the majority of the nodes in a cluster are down. At the same time, they can tolerate the eventual consistency of updates.
+
+
+
+{{< figure src=/articles_data/dedicated-service/compass.png caption=""Database guarantees compass"" width=80% >}}
+
+
+
+
+
+Those priorities lead to different architectural decisions that are not reproducible in a general-purpose database, even if it has vector index support.
+
+
+
+
+
+###### Having a dedicated vector database requires duplication of data.
+
+
+
+By their very nature, vector embeddings are derivatives of the primary source data.
+
+
+
+In the vast majority of cases, embeddings are derived from some other data, such as text, images, or additional information stored in your system. So, in fact, all embeddings you have in your system can be considered transformations of some original source.
+
+
+
+And the distinguishing feature of derivative data is that it will change when the transformation pipeline changes.
+
+In the case of vector embeddings, the scenario of those changes is quite simple: every time you update the encoder model, all the embeddings will change.
+
+
+
+In systems where vector embeddings are fused with the primary data source, it is impossible to perform such migrations without significantly affecting the production system.
+
+
+
+As a result, even if you want to use a single database for storing all kinds of data, you would still need to duplicate data internally.
+
+
+
+###### Having a dedicated vector database requires complex data synchronization.
+
+
+
+Most production systems prefer to isolate different types of workloads into separate services.
+
+In many cases, those isolated services are not even related to search use cases.
+
+
+
+For example, a database for analytics and one for serving can be updated from the same source.
+
+Yet they can store and organize the data in a way that is optimal for their typical workloads.
+
+
+
+Search engines are usually isolated for the same reason: you want to avoid creating a noisy neighbor problem and compromise the performance of your main database.
+
+
+
+*To give you some intuition, let's consider a practical example:*
+
+
+
+Assume we have a database with 1 million records.
+
+This is a small database by modern standards of any relational database.
+
+You can probably use the smallest free tier of any cloud provider to host it.
+
+
+
+But if we want to use this database for vector search, 1 million OpenAI `text-embedding-ada-002` embeddings will take **~6GB of RAM** (sic!).
+
+As you can see, the vector search use case completely overwhelmed the main database resource requirements.
+
+In practice, this means that your main database becomes burdened with high memory requirements and can not scale efficiently, limited by the size of a single machine.
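+
+A quick back-of-the-envelope check of that figure (assuming plain float32 storage and ignoring any index overhead):
+
+```python
+
+vectors = 1_000_000
+
+dims = 1536            # output dimensionality of text-embedding-ada-002
+
+bytes_per_float = 4    # float32
+
+print(f""{vectors * dims * bytes_per_float / 1024**3:.1f} GiB"")  # ~5.7 GiB of raw vectors alone
+
+```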
+
+
+
+Fortunately, the data synchronization problem is not new and definitely not unique to vector search.
+
+There are many well-known solutions, starting with message queues and ending with specialized ETL tools.
+
+
+
+For example, we recently released our [integration with Airbyte](/documentation/integrations/airbyte/), allowing you to synchronize data from various sources into Qdrant incrementally.
+
+
+
+###### You have to pay for a vector service uptime and data transfer of both solutions.
+
+
+
+In the open-source world, you pay for the resources you use, not the number of different databases you run.
+
+The amount of resources you need depends on how well the solution fits each use case.
+
+As a result, running a dedicated vector search engine can be even cheaper, as it allows optimization specifically for vector search use cases.
+
+
+
+For instance, Qdrant implements a number of [quantization techniques](/documentation/guides/quantization/) that can significantly reduce the memory footprint of embeddings.
+
+
+
+In terms of data transfer costs, on most cloud providers, network use within a region is usually free. As long as you put the original source data and the vector store in the same region, there are no added data transfer costs.
+
+
+
+###### What is more seamless than your current database adding vector search capability?
+
+
+
+In contrast to the short-term attractiveness of integrated solutions, dedicated search engines offer flexibility and a modular approach.
+
+You don't need to update the whole production database each time some of the vector plugins are updated.
+
+Maintenance of a dedicated search engine is as isolated from the main database as the data itself.
+
+
+
+In fact, integration of more complex scenarios, such as read/write segregation, is much easier with a dedicated vector solution.
+
+You can easily build cross-region replication to ensure low latency for your users.
+
+
+
+{{< figure src=/articles_data/dedicated-service/region-based-deploy.png caption=""Read/Write segregation + cross-regional deployment"" width=80% >}}
+
+
+
+It is especially important in large enterprise organizations, where the responsibility for different parts of the system is distributed among different teams.
+
+In those situations, it is much easier to maintain a dedicated search engine for the AI team than to convince the core team to update the whole primary database.
+
+
+
+Finally, the vector capabilities of the all-in-one database are tied to the development and release cycle of the entire stack.
+
+Their long history of use also means that they need to pay a high price for backward compatibility.
+
+
+
+###### Databases can support RAG use-case end-to-end.
+
+
+
+Putting aside performance and scalability questions, the whole discussion about implementing RAG in the DBs assumes that the only detail missing in traditional databases is the vector index and the ability to make fast ANN queries.
+
+
+
+In fact, the current capabilities of vector search have only scratched the surface of what is possible.
+
+For example, in our recent article, we discuss the possibility of building an [exploration API](/articles/vector-similarity-beyond-search/) to fuel the discovery process - an alternative to kNN search, where you don’t even know what exactly you are looking for.
+
+
+
+## Summary
+
+Ultimately, you do not need a vector database if all you are looking for is simple vector search functionality over a small amount of data. We genuinely recommend starting with whatever you already have in your stack to prototype. But you do need one if you want to get more out of vector search and it is a central piece of your application. It is the difference between using a multi-tool for a quick job and using a dedicated instrument highly optimized for the use case.
+
+
+
+Large-scale production systems usually consist of different specialized services and storage types for good reason: it is one of the best practices of modern software architecture, comparable to the orchestration of independent building blocks in a microservice architecture.
+
+
+
+When you stuff the database with a vector index, you compromise both the performance and scalability of the main database and the vector search capabilities.
+
+There is no one-size-fits-all approach that would not compromise on performance or flexibility.
+
+So if your use case utilizes vector search in any significant way, it is worth investing in a dedicated vector search engine, aka vector database.
+",articles/dedicated-service.md
+"---
+
+title: Triplet Loss - Advanced Intro
+
+short_description: ""What are the advantages of Triplet Loss and how to efficiently implement it?""
+
+description: ""What are the advantages of Triplet Loss over Contrastive loss and how to efficiently implement it?""
+
+social_preview_image: /articles_data/triplet-loss/social_preview.jpg
+
+preview_dir: /articles_data/triplet-loss/preview
+
+small_preview_image: /articles_data/triplet-loss/icon.svg
+
+weight: 30
+
+author: Yusuf Sarıgöz
+
+author_link: https://medium.com/@yusufsarigoz
+
+date: 2022-03-24T15:12:00+03:00
+
+# aliases: [ /articles/triplet-loss/ ]
+
+---
+
+
+
+## What is Triplet Loss?
+
+
+
+Triplet Loss was first introduced in [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/abs/1503.03832) in 2015,
+
+and it has been one of the most popular loss functions for supervised similarity or metric learning ever since.
+
+In its simplest explanation, Triplet Loss encourages that dissimilar pairs be distant from any similar pairs by at least a certain margin value.
+
+Mathematically, the loss value can be calculated as
+
+$L=max(d(a,p) - d(a,n) + m, 0)$, where:
+
+
+
+- $p$, i.e., positive, is a sample that has the same label as $a$, i.e., anchor,
+
+- $n$, i.e., negative, is another sample that has a label different from $a$,
+
+- $d$ is a distance function used to measure distances between pairs of these samples,
+
+- and $m$ is a margin value to keep negative samples far apart.
+
+
+
+The paper uses Euclidean distance, but it is equally valid to use any other distance metric, e.g., cosine distance.
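+
+To make the formula concrete, here is a tiny sketch on toy numbers (PyTorch ships the same formulation out of the box as `nn.TripletMarginLoss`):
+
+```python
+
+import torch
+
+import torch.nn.functional as F
+
+def triplet_loss(anchor, positive, negative, margin=1.0):
+
+    # L = max(d(a, p) - d(a, n) + m, 0), with d = Euclidean distance
+
+    d_ap = F.pairwise_distance(anchor, positive)
+
+    d_an = F.pairwise_distance(anchor, negative)
+
+    return F.relu(d_ap - d_an + margin)
+
+a = torch.tensor([[0.0, 0.0]])
+
+p = torch.tensor([[0.0, 0.5]])  # close to the anchor
+
+n = torch.tensor([[0.0, 1.0]])  # not far enough away, so the loss stays positive
+
+print(triplet_loss(a, p, n))  # tensor([0.5000])
+
+```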
+
+
+
+The function has a learning objective that can be visualized as in the following:
+
+
+
+{{< figure src=/articles_data/triplet-loss/loss_objective.png caption=""Triplet Loss learning objective"" >}}
+
+
+
+Notice that, unlike Contrastive Loss, Triplet Loss does not have the side effect of pushing the anchor and positive samples to be encoded into the exact same point
+
+in the vector space.
+
+This lets Triplet Loss tolerate some intra-class variance, unlike Contrastive Loss,
+
+as the latter forces the distance between an anchor and any positive essentially to $0$.
+
+In other terms, Triplet Loss allows clusters to be stretched in such a way as to include outliers
+
+while still ensuring a margin between samples from different clusters, e.g., negative pairs.
+
+
+
+Additionally, Triplet Loss is less greedy. Unlike Contrastive Loss,
+
+it is already satisfied when different samples are easily distinguishable from similar ones. It does not change the distances in a positive cluster if
+
+there is no interference from negative examples.
+
+This is due to the fact that Triplet Loss tries to ensure a margin between distances of negative pairs and distances of positive pairs.
+
+However, Contrastive Loss takes into account the margin value only when comparing dissimilar pairs,
+
+and it does not care at all where similar pairs are at that moment.
+
+This means that Contrastive Loss may reach a local minimum earlier,
+
+while Triplet Loss may continue to organize the vector space in a better state.
+
+
+
+Let's demonstrate how two loss functions organize the vector space by animations.
+
+For simpler visualization, the vectors are represented by points in a 2-dimensional space,
+
+and they are selected randomly from a normal distribution.
+
+
+
+{{< figure src=/articles_data/triplet-loss/contrastive.gif caption=""Animation that shows how Contrastive Loss moves points in the course of training."" >}}
+
+
+
+{{< figure src=/articles_data/triplet-loss/triplet.gif caption=""Animation that shows how Triplet Loss moves points in the course of training."" >}}
+
+
+
+
+
+From the mathematical interpretations of the two loss functions, it is clear that Triplet Loss is theoretically stronger,
+
+but Triplet Loss has additional tricks that help it work better.
+
+Most importantly, Triplet Loss introduces online triplet mining strategies, i.e., automatically forming the most useful triplets during training.
+
+
+
+## Why does triplet mining matter?
+
+
+
+The formulation of Triplet Loss demonstrates that it works on three objects at a time:
+
+
+
+- `anchor`,
+
+- `positive` - a sample that has the same label as the anchor,
+
+- and `negative` - a sample with a different label from the anchor and the positive.
+
+
+
+In a naive implementation, we could form such triplets of samples at the beginning of each epoch
+
+and then feed batches of such triplets to the model throughout that epoch. This is called ""offline strategy.""
+
+However, this would not be so efficient for several reasons:
+
+- It needs to pass $3n$ samples through the model to get loss values for $n$ triplets.
+
+- Not all these triplets will be useful for the model to learn anything, e.g., yielding a positive loss value.
+
+- Even if we form ""useful"" triplets at the beginning of each epoch with one of the methods that I will be implementing in this series,
+
+they may become ""useless"" at some point in the epoch as the model weights will be constantly updated.
+
+
+
+Instead, we can get a batch of $n$ samples and their associated labels,
+
+and form triplets on the fly. That is called ""online strategy."" Normally, this gives
+
+$n^3$ possible triplets, but only a subset of such possible triplets will be actually valid. Even in this case,
+
+we will have a loss value calculated from much more triplets than the offline strategy.
+
+
+
+Given a triplet of `(a, p, n)`, it is valid only if:
+
+
+
+- `a` and `p` have the same label,
+
+- `a` and `p` are distinct samples,
+
+- and `n` has a different label from `a` and `p`.
+
+
+
+These constraints may seem to require expensive computation with nested loops,
+
+but it can be efficiently implemented with tricks such as distance matrix, masking, and broadcasting.
+
+The rest of this series will focus on the implementation of these tricks.
+
+
+
+
+
+## Distance matrix
+
+
+
+A distance matrix is a matrix of shape $(n, n)$ to hold distance values between all possible
+
+pairs made from items in two $n$-sized collections.
+
+This matrix can be used to vectorize calculations that would need inefficient loops otherwise.
+
+Its calculation can be optimized as well, and we will implement [Euclidean Distance Matrix Trick (PDF)](https://www.robots.ox.ac.uk/~albanie/notes/Euclidean_distance_trick.pdf)
+
+explained by Samuel Albanie. You may want to read this three-page document for
+
+the full intuition of the trick, but a brief explanation is as follows:
+
+
+
+- Calculate the dot product of two collections of vectors, e.g., embeddings in our case.
+
+- Extract the diagonal from this matrix that holds the squared Euclidean norm of each embedding.
+
+- Calculate the squared Euclidean distance matrix based on the following equation: $||a - b||^2 = ||a||^2 - 2 ⟨a, b⟩ + ||b||^2$
+
+- Get the square root of this matrix for non-squared distances.
+
+
+
+We will implement it in PyTorch, so let's start with imports.
+
+
+
+
+
+```python
+
+import torch
+
+import torch.nn as nn
+
+import torch.nn.functional as F
+
+
+
+eps = 1e-8 # an arbitrary small value to be used for numerical stability tricks
+
+```
+
+
+
+---
+
+
+
+```python
+
+def euclidean_distance_matrix(x):
+
+ """"""Efficient computation of Euclidean distance matrix
+
+
+
+ Args:
+
+ x: Input tensor of shape (batch_size, embedding_dim)
+
+
+
+ Returns:
+
+ Distance matrix of shape (batch_size, batch_size)
+
+ """"""
+
+ # step 1 - compute the dot product
+
+
+
+ # shape: (batch_size, batch_size)
+
+ dot_product = torch.mm(x, x.t())
+
+
+
+ # step 2 - extract the squared Euclidean norm from the diagonal
+
+
+
+ # shape: (batch_size,)
+
+ squared_norm = torch.diag(dot_product)
+
+
+
+ # step 3 - compute squared Euclidean distances
+
+
+
+ # shape: (batch_size, batch_size)
+
+ distance_matrix = squared_norm.unsqueeze(0) - 2 * dot_product + squared_norm.unsqueeze(1)
+
+
+
+ # get rid of negative distances due to numerical instabilities
+
+ distance_matrix = F.relu(distance_matrix)
+
+
+
+ # step 4 - compute the non-squared distances
+
+
+
+ # handle numerical stability
+
+ # derivative of the square root operation applied to 0 is infinite
+
+ # we need to handle by setting any 0 to eps
+
+ mask = (distance_matrix == 0.0).float()
+
+
+
+ # use this mask to set indices with a value of 0 to eps
+
+ distance_matrix += mask * eps
+
+
+
+ # now it is safe to get the square root
+
+ distance_matrix = torch.sqrt(distance_matrix)
+
+
+
+ # undo the trick for numerical stability
+
+ distance_matrix *= (1.0 - mask)
+
+
+
+ return distance_matrix
+
+```
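+
+As a quick sanity check, we can compare the output against `torch.cdist`, which computes the same pairwise Euclidean distances directly (reusing the imports and the function defined above):
+
+```python
+
+embeddings = torch.randn(8, 16)
+
+ours = euclidean_distance_matrix(embeddings)
+
+reference = torch.cdist(embeddings, embeddings)
+
+print(torch.allclose(ours, reference, atol=1e-4))  # True
+
+```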
+
+
+
+## Invalid triplet masking
+
+
+
+Now that we can compute a distance matrix for all possible pairs of embeddings in a batch,
+
+we can apply broadcasting to enumerate distance differences for all possible triplets and represent them in a tensor of shape `(batch_size, batch_size, batch_size)`.
+
+However, only a subset of these $n^3$ triplets are actually valid as I mentioned earlier,
+
+and we need a corresponding mask to compute the loss value correctly.
+
+We will implement such a helper function in three steps:
+
+
+
+- Compute a mask for distinct indices, e.g., `(i != j and j != k)`.
+
+- Compute a mask for valid anchor-positive-negative triplets, e.g., `labels[i] == labels[j] and labels[j] != labels[k]`.
+
+- Combine two masks.
+
+
+
+
+
+```python
+
+def get_triplet_mask(labels):
+
+ """"""compute a mask for valid triplets
+
+
+
+ Args:
+
+ labels: Batch of integer labels. shape: (batch_size,)
+
+
+
+ Returns:
+
+ Mask tensor to indicate which triplets are actually valid. Shape: (batch_size, batch_size, batch_size)
+
+ A triplet is valid if:
+
+ `labels[i] == labels[j] and labels[i] != labels[k]`
+
+ and `i`, `j`, `k` are different.
+
+ """"""
+
+ # step 1 - get a mask for distinct indices
+
+
+
+ # shape: (batch_size, batch_size)
+
+ indices_equal = torch.eye(labels.size()[0], dtype=torch.bool, device=labels.device)
+
+ indices_not_equal = torch.logical_not(indices_equal)
+
+ # shape: (batch_size, batch_size, 1)
+
+ i_not_equal_j = indices_not_equal.unsqueeze(2)
+
+ # shape: (batch_size, 1, batch_size)
+
+ i_not_equal_k = indices_not_equal.unsqueeze(1)
+
+ # shape: (1, batch_size, batch_size)
+
+ j_not_equal_k = indices_not_equal.unsqueeze(0)
+
+ # Shape: (batch_size, batch_size, batch_size)
+
+ distinct_indices = torch.logical_and(torch.logical_and(i_not_equal_j, i_not_equal_k), j_not_equal_k)
+
+
+
+ # step 2 - get a mask for valid anchor-positive-negative triplets
+
+
+
+ # shape: (batch_size, batch_size)
+
+ labels_equal = labels.unsqueeze(0) == labels.unsqueeze(1)
+
+ # shape: (batch_size, batch_size, 1)
+
+ i_equal_j = labels_equal.unsqueeze(2)
+
+ # shape: (batch_size, 1, batch_size)
+
+ i_equal_k = labels_equal.unsqueeze(1)
+
+ # shape: (batch_size, batch_size, batch_size)
+
+ valid_indices = torch.logical_and(i_equal_j, torch.logical_not(i_equal_k))
+
+
+
+ # step 3 - combine two masks
+
+ mask = torch.logical_and(distinct_indices, valid_indices)
+
+
+
+ return mask
+
+```
+
+
+
+## Batch-all strategy for online triplet mining
+
+
+
+Now we are ready for actually implementing Triplet Loss itself.
+
+Triplet Loss involves several strategies to form or select triplets, and the simplest one is
+
+to use all valid triplets that can be formed from samples in a batch.
+
+This can be achieved in four easy steps thanks to utility functions we've already implemented:
+
+
+
+- Get a distance matrix of all possible pairs that can be formed from embeddings in a batch.
+
+- Apply broadcasting to this matrix to compute loss values for all possible triplets.
+
+- Set loss values of invalid or easy triplets to $0$.
+
+- Average the remaining positive values to return a scalar loss.
+
+
+
+I will start by implementing this strategy, and more complex ones will follow as separate posts.
+
+
+
+
+
+```python
+
+class BatchAllTripletLoss(nn.Module):
+
+ """"""Uses all valid triplets to compute Triplet loss
+
+
+
+ Args:
+
+ margin: Margin value in the Triplet Loss equation
+
+ """"""
+
+ def __init__(self, margin=1.):
+
+ super().__init__()
+
+ self.margin = margin
+
+
+
+ def forward(self, embeddings, labels):
+
+ """"""computes loss value.
+
+
+
+ Args:
+
+ embeddings: Batch of embeddings, e.g., output of the encoder. shape: (batch_size, embedding_dim)
+
+ labels: Batch of integer labels associated with embeddings. shape: (batch_size,)
+
+
+
+ Returns:
+
+ Scalar loss value.
+
+ """"""
+
+ # step 1 - get distance matrix
+
+ # shape: (batch_size, batch_size)
+
+ distance_matrix = euclidean_distance_matrix(embeddings)
+
+
+
+ # step 2 - compute loss values for all triplets by applying broadcasting to distance matrix
+
+
+
+ # shape: (batch_size, batch_size, 1)
+
+ anchor_positive_dists = distance_matrix.unsqueeze(2)
+
+ # shape: (batch_size, 1, batch_size)
+
+ anchor_negative_dists = distance_matrix.unsqueeze(1)
+
+ # get loss values for all possible n^3 triplets
+
+ # shape: (batch_size, batch_size, batch_size)
+
+ triplet_loss = anchor_positive_dists - anchor_negative_dists + self.margin
+
+
+
+ # step 3 - filter out invalid or easy triplets by setting their loss values to 0
+
+
+
+ # shape: (batch_size, batch_size, batch_size)
+
+ mask = get_triplet_mask(labels)
+
+ triplet_loss *= mask
+
+ # easy triplets have negative loss values
+
+ triplet_loss = F.relu(triplet_loss)
+
+
+
+ # step 4 - compute scalar loss value by averaging positive losses
+
+ num_positive_losses = (triplet_loss > eps).float().sum()
+
+ triplet_loss = triplet_loss.sum() / (num_positive_losses + eps)
+
+
+
+ return triplet_loss
+
+```
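+
+A minimal usage sketch, with random embeddings and labels standing in for real encoder outputs (shapes and values are purely illustrative):
+
+```python
+
+embeddings = torch.randn(32, 128, requires_grad=True)  # batch of 32 embeddings, 128-d
+
+labels = torch.randint(0, 4, (32,))                    # 4 arbitrary classes
+
+criterion = BatchAllTripletLoss(margin=1.0)
+
+loss = criterion(embeddings, labels)
+
+loss.backward()  # usable inside a regular training loop
+
+print(loss)
+
+```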
+
+
+
+## Conclusion
+
+
+
+I mentioned that Triplet Loss is different from Contrastive Loss not only mathematically but also in its sample selection strategies, and I implemented the batch-all strategy for online triplet mining in this post
+
+efficiently by using several tricks.
+
+
+
+There are other more complicated strategies such as batch-hard and batch-semihard mining,
+
+but their implementations, and discussions of the tricks I used for efficiency in this post,
+
+are worth separate posts of their own.
+
+
+
+The future posts will cover such topics and additional discussions on some tricks
+
+to avoid vector collapsing and control intra-class and inter-class variance.",articles/triplet-loss.md
+"---
+
+title: ""Qdrant Internals: Immutable Data Structures""
+
+short_description: ""Learn how immutable data structures improve vector search performance in Qdrant.""
+
+description: ""Learn how immutable data structures improve vector search performance in Qdrant.""
+
+social_preview_image: /articles_data/immutable-data-structures/social_preview.png
+
+preview_dir: /articles_data/immutable-data-structures/preview
+
+weight: -200
+
+author: Andrey Vasnetsov
+
+date: 2024-08-20T10:45:00+02:00
+
+draft: false
+
+keywords:
+
+ - data structures
+
+ - optimization
+
+ - immutable data structures
+
+ - perfect hashing
+
+ - defragmentation
+
+---
+
+
+
+## Data Structures 101
+
+
+
+Those who took programming courses might remember that there is no such thing as a universal data structure.
+
+Some structures are good at accessing elements by index (like arrays), while others shine in terms of insertion efficiency (like linked lists).
+
+
+
+{{< figure src=""/articles_data/immutable-data-structures/hardware-optimized.png"" alt=""Hardware-optimized data structure"" caption=""Hardware-optimized data structure"" width=""80%"" >}}
+
+
+
+However, when we move from theoretical data structures to real-world systems, and particularly in performance-critical areas such as [vector search](/use-cases/), things become more complex. [Big-O notation](https://en.wikipedia.org/wiki/Big_O_notation) provides a good abstraction, but it doesn’t account for the realities of modern hardware: cache misses, memory layout, disk I/O, and other low-level considerations that influence actual performance.
+
+
+
+> From the perspective of hardware efficiency, the ideal data structure is a contiguous array of bytes that can be read sequentially in a single thread. This scenario allows hardware optimizations like prefetching, caching, and branch prediction to operate at their best.
+
+
+
+However, real-world use cases require more complex structures to perform various operations like insertion, deletion, and search.
+
+These requirements increase complexity and introduce performance trade-offs.
+
+
+
+### Mutability
+
+
+
+One of the most significant challenges when working with data structures is ensuring **mutability — the ability to change the data structure after it’s created**, particularly with fast update operations.
+
+
+
+Let’s consider a simple example: we want to iterate over items in sorted order.
+
+Without a mutability requirement, we can use a simple array and sort it once.
+
+This is very close to our ideal scenario. We can even put the structure on disk - which is trivial for an array.
+
+
+
+However, if we need to insert an item into this array, **things get more complicated**.
+
+Inserting into a sorted array requires shifting all elements after the insertion point, leading to linear time complexity for each insertion, which is not acceptable for many applications.
+
+
+
+To handle such cases, more complex structures like [B-trees](https://en.wikipedia.org/wiki/B-tree) come into play. B-trees are specifically designed to optimize both insertion and read operations for large data sets. However, they sacrifice the raw speed of array reads for better insertion performance.
+
+
+
+Here’s a benchmark that illustrates the difference between iterating over a plain array and a BTreeSet in Rust:
+
+
+
+```rust
+
+use std::collections::BTreeSet;
+
+use rand::Rng;
+
+
+
+fn main() {
+
+ // Benchmark plain vector VS btree in a task of iteration over all elements
+
+ let mut rand = rand::thread_rng();
+
+    let vector: Vec<_> = (0..1000000).map(|_| rand.gen::<u64>()).collect();
+
+ let btree: BTreeSet<_> = vector.iter().copied().collect();
+
+
+
+ {
+
+ let mut sum = 0;
+
+ for el in vector {
+
+ sum += el;
+
+ }
+
+ } // Elapsed: 850.924µs
+
+
+
+ {
+
+ let mut sum = 0;
+
+ for el in btree {
+
+ sum += el;
+
+ }
+
+ } // Elapsed: 5.213025ms, ~6x slower
+
+
+
+}
+
+```
+
+
+
+[Vector databases](https://qdrant.tech/), like Qdrant, have to deal with a large variety of data structures.
+
+If we could make them immutable, it would significantly improve performance and optimize memory usage.
+
+
+
+## How Does Immutability Help?
+
+
+
+A large part of the immutable advantage comes from the fact that we know the exact data we need to put into the structure even before we start building it.
+
+The simplest example is a sorted array: we would know exactly how many elements we have to put into the array so we can allocate the exact amount of memory once.
+
+
+
+More complex data structures might require additional statistics to be collected before the structure is built.
+
+A Qdrant-related example of this is [Scalar Quantization](/articles/scalar-quantization/#conversion-to-integers): in order to select proper quantization levels, we have to know the distribution of the data.
+
+
+
+{{< figure src=""/articles_data/immutable-data-structures/quantization-quantile.png"" alt=""Scalar Quantization Quantile"" caption=""Scalar Quantization Quantile"" width=""70%"" >}}
+
+
+
+
+
+Computing this distribution requires knowing all the data in advance, but once we have it, applying scalar quantization is a simple operation.
+
+
+
+Let's take a look at a non-exhaustive list of data structures and potential improvements we can get from making them immutable:
+
+
+
+|Function| Mutable Data Structure | Immutable Alternative | Potential improvements |
+
+|----|------|------|------------------------|
+
+| Read by index | Array | Fixed chunk of memory | Allocate exact amount of memory |
+
+| Vector Storage | Array of Arrays | Memory-mapped file | Offload data to disk |
+
+| Read sorted ranges| B-Tree | Sorted Array | Store all data close, avoid cache misses |
+
+| Read by key | Hash Map | Hash Map with Perfect Hashing | Avoid hash collisions |
+
+| Get documents by keyword | Inverted Index | Inverted Index with Sorted and BitPacked Postings | Less memory usage, faster search |
+
+| Vector Search | HNSW graph | HNSW graph with payload-aware connections | Better precision with filters |
+
+| Tenant Isolation | Vector Storage | Defragmented Vector Storage | Faster access to on-disk data |
+
+
+
+
+
+For more info on payload-aware connections in HNSW, read our [previous article](/articles/filtrable-hnsw/).
+
+
+
+This time around, we will focus on the latest additions to Qdrant:
+
+- **the immutable hash map with perfect hashing**
+
+- **defragmented vector storage**.
+
+
+
+### Perfect Hashing
+
+
+
+A hash table is one of the most commonly used data structures implemented in almost every programming language, including Rust.
+
+It provides fast access to elements by key, with an average time complexity of O(1) for read and write operations.
+
+
+
+There is, however, an assumption that must be satisfied for the hash table to work efficiently: *hash collisions should not cause too much overhead*.
+
+In a hash table, each key is mapped to a ""bucket,"" a slot where the value is stored.
+
+When different keys map to the same bucket, a collision occurs.
+
+
+
+In regular mutable hash tables, minimization of collisions is achieved by:
+
+
+
+* making the number of buckets bigger so the probability of collision is lower
+
+* using a linked list or a tree to store multiple elements with the same hash
+
+
+
+However, these strategies have overheads, which become more significant if we consider using high-latency storage like disk.
+
+
+
+Indeed, every read operation from disk is several orders of magnitude slower than reading from RAM, so we want to know the correct location of the data from the first attempt.
+
+
+
+In order to achieve this, we can use a so-called minimal perfect hash function (MPHF).
+
+This special type of hash function is constructed specifically for a given set of keys, and it guarantees no collisions while using a minimal number of buckets.
+
+
+
+In Qdrant, we decided to use *fingerprint-based minimal perfect hash function* implemented in the [ph crate 🦀](https://crates.io/crates/ph) by [Piotr Beling](https://dl.acm.org/doi/10.1145/3596453).
+
+According to our benchmarks, using the perfect hash function does introduce some overhead in terms of hashing time, but it significantly reduces the time for the whole operation:
+
+
+
+| Volume | `ph::Function` | `std::hash::Hash` | `HashMap::get`|
+
+|--------|----------------|-------------------|---------------|
+
+| 1000 | 60ns | ~20ns | 34ns |
+
+| 100k | 90ns | ~20ns | 220ns |
+
+| 10M | 238ns | ~20ns | 500ns |
+
+
+
+Even though the absolute time for hashing is higher, the time for the whole operation is lower, because PHF guarantees no collisions.
+
+The difference is even more significant when we consider disk read time, which
+
+might take up to several milliseconds (10^6 ns).
+
+
+
+PHF RAM size scales linearly for `ph::Function`: 3.46 kB for 10k elements, 119MB for 350M elements.
+
+The construction time required to build the hash function is surprisingly low, and we only need to do it once:
+
+
+
+| Volume | `ph::Function` (construct) | PHF size | Size of int64 keys (for reference) |
+
+|--------|----------------------------|----------|------------------------------------|
+
+| 1M | 52ms | 0.34Mb | 7.62Mb |
+
+| 100M | 7.4s | 33.7Mb | 762.9Mb |
+
+
+
+The usage of PHF in Qdrant lets us minimize the latency of cold reads, which is especially important for large-scale multi-tenant systems. With PHF, it is enough to read a single page from a disk to get the exact location of the data.
+
+
+
+### Defragmentation
+
+
+
+When you read data from a disk, you almost never read a single byte. Instead, you read a page, which is a fixed-size chunk of data.
+
+On many systems, the page size is 4KB, which means that every read operation will read 4KB of data, even if you only need a single byte.
+
+
+
+Vector search, on the other hand, requires reading a lot of small vectors, which might create a large overhead.
+
+It is especially noticeable if we use binary quantization, where the size of even large OpenAI 1536d vectors is compressed down to **192 bytes**.
+
+
+
+{{< figure src=""/articles_data/immutable-data-structures/page-vector.png"" alt=""Overhead when reading a single vector"" caption=""Overhead when reading single vector"" width=""80%"" >}}
+
+
+
+That means if the vectors we access during the search are randomly scattered across the disk, we will have to read 4KB for each vector, which is 20 times more than the actual data size.
+
+
+
+There is, however, a simple way to avoid this overhead: **defragmentation**.
+
+If we knew some additional information about the data, we could combine all relevant vectors into a single page.
+
+
+
+{{< figure src=""/articles_data/immutable-data-structures/defragmentation.png"" alt=""Defragmentation"" caption=""Defragmentation"" width=""70%"" >}}
+
+
+
+This additional information is available to Qdrant via the [payload index](/documentation/concepts/indexing/#payload-index).
+
+
+
+By specifying the payload index, which is going to be used for filtering most of the time, we can put all vectors with the same payload together.
+
+This way, reading a single page will also read nearby vectors, which will be used in the search.
+
+
+
+This approach is especially efficient for [multi-tenant systems](/documentation/guides/multiple-partitions/), where only a small subset of vectors is actively used for search.
+
+The capacity of such a deployment is typically defined by the size of the hot subset, which is much smaller than the total number of vectors.
+
+
+
+> Grouping relevant vectors together allows us to optimize the size of the hot subset by avoiding caching of irrelevant data.
+
+The following benchmark data compares RPS for defragmented and non-defragmented storage:
+
+
+
+| % of hot subset | Tenant Size (vectors) | RPS, Non-defragmented | RPS, Defragmented |
+
+|-----------------|-----------------------|-----------------------|-------------------|
+
+| 2.5% | 50k | 1.5 | 304 |
+
+| 12.5% | 50k | 0.47 | 279 |
+
+| 25% | 50k | 0.4 | 63 |
+
+| 50% | 50k | 0.3 | 8 |
+
+| 2.5% | 5k | 56 | 490 |
+
+| 12.5% | 5k | 5.8 | 488 |
+
+| 25% | 5k | 3.3 | 490 |
+
+| 50% | 5k | 3.1 | 480 |
+
+| 75% | 5k | 2.9 | 130 |
+
+| 100% | 5k | 2.7 | 95 |
+
+
+
+
+
+**Dataset size:** 2M 768d vectors (~6Gb Raw data), binary quantization, 650Mb of RAM limit.
+
+All benchmarks are made with minimal RAM allocation to demonstrate disk cache efficiency.
+
+
+
+As you can see, the biggest impact is on the small tenant size, where defragmentation allows us to achieve **100x more RPS**.
+
+Of course, the real-world impact of defragmentation depends on the specific workload and the size of the hot subset, but enabling this feature can significantly improve the performance of Qdrant.
+
+
+
+Please find more details on how to enable defragmentation in the [indexing documentation](/documentation/concepts/indexing/#tenant-index).
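+
+As a rough sketch of how this looks from the client side (recent versions of the Python client expose an `is_tenant` flag on the keyword index schema; the collection and field names are illustrative):
+
+```python
+
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Mark the field used in most filters as a tenant key, so that vectors belonging
+
+# to the same tenant end up grouped (defragmented) together on disk.
+
+client.create_payload_index(
+
+    collection_name=""my_collection"",
+
+    field_name=""tenant_id"",
+
+    field_schema=models.KeywordIndexParams(
+
+        type=""keyword"",
+
+        is_tenant=True,
+
+    ),
+
+)
+
+```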
+
+
+
+
+
+## Updating Immutable Data Structures
+
+
+
+One may wonder how Qdrant allows updating collection data if everything is immutable.
+
+Indeed, [Qdrant API](https://api.qdrant.tech) allows the change of any vector or payload at any time, so from the user's perspective, the whole collection is mutable at any time.
+
+
+
+As it usually happens with every decent magic trick, the secret is disappointingly simple: not all data in Qdrant is immutable.
+
+In Qdrant, storage is divided into segments, which might be either mutable or immutable.
+
+New data is always written to the mutable segment, which is later converted to the immutable one by the optimization process.
+
+
+
+{{< figure src=""/articles_data/immutable-data-structures/optimization.png"" alt=""Optimization process"" caption=""Optimization process"" width=""80%"" >}}
+
+
+
+If we need to update the data in an immutable or currently optimized segment, instead of changing the data in place, we perform a copy-on-write operation, move the data to the mutable segment, and update it there.
+
+
+
+Data in the original segment is marked as deleted, and later vacuumed by the optimization process.
+
+
+
+## Downsides and How to Compensate
+
+
+
+While immutable data structures are great for read-heavy operations, they come with trade-offs:
+
+
+
+- **Higher update costs:** Immutable structures are less efficient for updates. The amortized time complexity might be the same as mutable structures, but the constant factor is higher.
+
+- **Rebuilding overhead:** In some cases, we may need to rebuild indices or structures for the same data more than once.
+
+- **Read-heavy workloads:** Immutability assumes a search-heavy workload, which is typical for search engines but not for all applications.
+
+
+
+In Qdrant, we mitigate these downsides by allowing the user to adapt the system to their specific workload.
+
+For example, changing the default size of the segment might help to reduce the overhead of rebuilding indices.
+
+
+
+In extreme cases, multi-segment storage can act as a single segment, falling back to the mutable data structure when needed.
+
+
+
+## Conclusion
+
+
+
+Immutable data structures, while tricky to implement correctly, offer significant performance gains, especially for read-heavy systems like search engines. They allow us to take full advantage of hardware optimizations, reduce memory overhead, and improve cache performance.
+
+
+
+In Qdrant, the combination of techniques like perfect hashing and defragmentation brings further benefits, making our vector search operations faster and more efficient. While there are trade-offs, the flexibility of Qdrant’s architecture — including segment-based storage — allows us to balance the best of both worlds.
+
+
+",articles/immutable-data-structures.md
+"---
+
+title: ""Qdrant 1.8.0: Enhanced Search Capabilities for Better Results""
+
+draft: false
+
+slug: qdrant-1.8.x
+
+short_description: ""Faster sparse vectors.Optimized indexation. Optional CPU resource management.""
+
+description: ""Explore the latest in search technology with Qdrant 1.8.0! Discover faster performance, smarter indexing, and enhanced search capabilities.""
+
+social_preview_image: /articles_data/qdrant-1.8.x/social_preview.png
+
+small_preview_image: /articles_data/qdrant-1.8.x/icon.svg
+
+preview_dir: /articles_data/qdrant-1.8.x/preview
+
+weight: -140
+
+date: 2024-03-06T00:00:00-08:00
+
+author: David Myriel, Mike Jang
+
+featured: false
+
+tags:
+
+ - vector search
+
+ - new features
+
+ - sparse vectors
+
+ - hybrid search
+
+ - CPU resource management
+
+ - text field index
+
+---
+
+
+
+# Unlocking Next-Level Search: Exploring Qdrant 1.8.0's Advanced Search Capabilities
+
+
+
+[Qdrant 1.8.0 is out!](https://github.com/qdrant/qdrant/releases/tag/v1.8.0).
+
+This time around, we have focused on Qdrant's internals. Our goal was to optimize performance so that your existing setup can run faster and save on compute. Here is what we've been up to:
+
+
+
+- **Faster [sparse vectors](https://qdrant.tech/articles/sparse-vectors/):** [Hybrid search](https://qdrant.tech/articles/hybrid-search/) is up to 16x faster now!
+
+- **CPU resource management:** You can allocate CPU threads for faster indexing.
+
+- **Better indexing performance:** We optimized text [indexing](https://qdrant.tech/documentation/concepts/indexing/) on the backend.
+
+
+
+## Faster search with sparse vectors
+
+
+
+Search throughput is now up to 16 times faster for sparse vectors. If you are [using Qdrant for hybrid search](/articles/sparse-vectors/), this means that you can now handle up to sixteen times as many queries. This improvement comes from extensive backend optimizations aimed at increasing efficiency and capacity.
+
+
+
+What this means for your setup:
+
+
+
+- **Query speed:** The time it takes to run a search query has been significantly reduced.
+
+- **Search capacity:** Qdrant can now handle a much larger volume of search requests.
+
+- **User experience:** Results will appear faster, leading to a smoother experience for the user.
+
+- **Scalability:** You can easily accommodate rapidly growing users or an expanding dataset.
+
+
+
+### Sparse vectors benchmark
+
+
+
+Performance results are publicly available for you to test. Qdrant's R&D developed a dedicated [open-source benchmarking tool](https://github.com/qdrant/sparse-vectors-benchmark) just to test sparse vector performance.
+
+
+
+A real-life simulation of sparse vector queries was run against the [NeurIPS 2023 dataset](https://big-ann-benchmarks.com/neurips23.html). All tests were done on an 8 CPU machine on Azure.
+
+
+
+Latency (y-axis) has dropped significantly for queries. You can see the before/after here:
+
+
+
+![dropping latency](/articles_data/qdrant-1.8.x/benchmark.png)
+
+**Figure 1:** Dropping latency in sparse vector search queries across versions 1.7-1.8.
+
+
+
+The colors within both scatter plots show the frequency of results. The red dots show that the highest concentration is around 2200ms (before) and 135ms (after). This tells us that latency for sparse vector queries dropped by about a factor of 16. Therefore, the time it takes to retrieve an answer with Qdrant is that much shorter.
+
+
+
+This performance increase can have a dramatic effect on hybrid search implementations. [Read more about how to set this up.](/articles/sparse-vectors/)
+
+
+
+FYI, sparse vectors were released in [Qdrant v.1.7.0](/articles/qdrant-1.7.x/#sparse-vectors). They are stored using a different index, so first [check out the documentation](/documentation/concepts/indexing/#sparse-vector-index) if you want to try an implementation.
+
+
+
+## CPU resource management
+
+
+
+Indexing is Qdrant’s most resource-intensive process. Now you can account for this by allocating compute specifically to indexing. You can assign a number of CPU resources to indexing and leave the rest for search. As a result, indexes will build faster, and search quality will remain unaffected.
+
+
+
+This isn't mandatory, as Qdrant is by default tuned to strike the right balance between indexing and search. However, if you wish to define specific CPU usage, you will need to do so from `config.yaml`.
+
+
+
+This version introduces an `optimizer_cpu_budget` parameter to control the maximum number of CPUs used for indexing.
+
+
+
+> Read more about `config.yaml` in the [configuration file](/documentation/guides/configuration/).
+
+
+
+```yaml
+
+# CPU budget, how many CPUs (threads) to allocate for an optimization job.
+
+optimizer_cpu_budget: 0
+
+```
+
+
+
+- If left at 0, Qdrant will keep 1 or more CPUs unallocated - depending on CPU size.
+
+- If the setting is positive, Qdrant will use this exact number of CPUs for indexing.
+
+- If the setting is negative, Qdrant will subtract this number of CPUs from the available CPUs for indexing.
+
+
+
+For most users, the default `optimizer_cpu_budget` setting will work well. We only recommend you use this if your indexing load is significant.
+
+
+
+Our backend leverages dynamic CPU saturation to increase indexing speed. For that reason, the impact on search query performance ends up being minimal. Ultimately, you will be able to strike the best possible balance between indexing times and search performance.
+
+
+
+This configuration can be done at any time, but it requires a restart of Qdrant. Changing it affects both existing and new collections.
+
+
+
+> **Note:** This feature is not configurable on [Qdrant Cloud](https://qdrant.to/cloud).
+
+
+
+## Better indexing for text data
+
+
+
+In order to [minimize your RAM expenditure](https://qdrant.tech/articles/memory-consumption/), we have developed a new way to index specific types of data. Please keep in mind that this is a backend improvement, and you won't need to configure anything.
+
+
+
+> Going forward, if you are indexing immutable text fields, we estimate a 10% reduction in RAM loads. Our benchmark result is based on a system that uses 64GB of RAM. If you are using less RAM, this reduction might be higher than 10%.
+
+
+
+Immutable text fields are static and do not change once they are added to Qdrant. These entries usually represent some type of attribute, description or tag. Vectors associated with them can be indexed more efficiently, since you don’t need to re-index them anymore. Conversely, mutable fields are dynamic and can be modified after their initial creation. Please keep in mind that they will continue to require additional RAM.
+
+
+
+This approach ensures stability in the [vector search](https://qdrant.tech/documentation/overview/vector-search/) index, with faster and more consistent operations. We achieved this by setting up a field index which helps minimize what is stored. To improve search performance we have also optimized the way we load documents for searches with a text field index. Now our backend loads documents mostly sequentially and in increasing order.
+
+
+
+
+
+## Minor improvements and new features
+
+
+
+Beyond these enhancements, [Qdrant v1.8.0](https://github.com/qdrant/qdrant/releases/tag/v1.8.0) adds and improves on several smaller features:
+
+
+
+1. **Order points by payload:** In addition to searching for semantic results, you might want to retrieve results by specific metadata (such as price). You can now use the Scroll API to [order points by payload key](/documentation/concepts/points/#order-points-by-payload-key); see the sketch after this list.
+
+2. **Datetime support:** We have implemented [datetime support for the payload index](/documentation/concepts/filtering/#datetime-range). Prior to this, if you wanted to search for a specific datetime range, you would have had to convert dates to UNIX timestamps. ([PR#3320](https://github.com/qdrant/qdrant/issues/3320))
+
+3. **Check collection existence:** You can check whether a collection exists via the `/collections/{collection_name}/exists` endpoint. You will get a true/false response. ([PR#3472](https://github.com/qdrant/qdrant/pull/3472)).
+
+4. **Find points** whose payloads match at least a minimum number of conditions. We added the `min_should` match feature, which requires a minimum number of sub-conditions to be `true` ([PR#3331](https://github.com/qdrant/qdrant/pull/3466/)).
+
+5. **Modify nested fields:** We have improved the `set_payload` API, adding the ability to update nested fields ([PR#3548](https://github.com/qdrant/qdrant/pull/3548)).
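+
+For the first item, a rough sketch with the Python client might look like the following (the field name is illustrative, and ordering by a payload key assumes that key has a suitable payload index):
+
+```python
+
+from qdrant_client import QdrantClient
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Scroll through points ordered by the price payload field instead of by point id
+
+points, next_offset = client.scroll(
+
+    collection_name=""products"",
+
+    limit=10,
+
+    with_payload=True,
+
+    order_by=""price"",  # newer clients also accept a structured OrderBy object with a direction
+
+)
+
+```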
+
+
+
+## Experience the Power of Qdrant 1.8.0
+
+
+
+Ready to experience the enhanced performance of Qdrant 1.8.0? Upgrade now and explore the major improvements, from faster sparse vectors to optimized CPU resource management and better indexing for text data. Take your search capabilities to the next level with Qdrant's latest version. [Try a demo today](https://qdrant.tech/demo/) and see the difference firsthand!
+
+
+
+## Release notes
+
+
+
+For more information, see [our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.8.0).
+
+Qdrant is an open-source project. We welcome your contributions; raise [issues](https://github.com/qdrant/qdrant/issues), or contribute via [pull requests](https://github.com/qdrant/qdrant/pulls)!
+",articles/qdrant-1.8.x.md
+"---
+
+title: On Unstructured Data, Vector Databases, New AI Age, and Our Seed Round.
+
+short_description: On Unstructured Data, Vector Databases, New AI Age, and Our Seed Round.
+
+description: We announce Qdrant seed round investment and share our thoughts on Vector Databases and New AI Age.
+
+preview_dir: /articles_data/seed-round/preview
+
+social_preview_image: /articles_data/seed-round/seed-social.png
+
+small_preview_image: /articles_data/quantum-quantization/icon.svg
+
+weight: 6
+
+author: Andre Zayarni
+
+draft: false
+
+author_link: https://www.linkedin.com/in/zayarni
+
+date: 2023-04-19T00:42:00.000Z
+
+---
+
+
+
+
+
+> Vector databases are here to stay. The New Age of AI is powered by vector embeddings, and vector databases are a foundational part of the stack. At Qdrant, we are working on cutting-edge open-source vector similarity search solutions to power fantastic AI applications with the best possible performance and excellent developer experience.
+
+>
+
+> Our 7.5M seed funding – led by [Unusual Ventures](https://www.unusual.vc/), awesome angels, and existing investors – will help us bring these innovations to engineers and empower them to make the most of their unstructured data and the awesome power of LLMs at any scale.
+
+
+
+We are thrilled to announce that we just raised our seed round from the best possible investor we could imagine for this stage. Let’s talk about fundraising later – it is a story itself that I could probably write a bestselling book about. First, let's dive into a bit of background about our project, our progress, and future plans.
+
+
+
+## A need for vector databases.
+
+
+
+Unstructured data is growing exponentially, and we are all part of a huge unstructured data workforce. This blog post is unstructured data; your visit here produces unstructured and semi-structured data with every web interaction, as does every photo you take or email you send. The global datasphere will grow to [165 zettabytes by 2025](https://github.com/qdrant/qdrant/pull/1639), and about 80% of that will be unstructured. At the same time, the rising demand for AI is vastly outpacing existing infrastructure. Around 90% of machine learning research results fail to reach production because of a lack of tools.
+
+
+
+
+
+{{< figure src=/articles_data/seed-round/demand.png caption=""Demand for AI tools"" alt=""Vector Databases Demand"" >}}
+
+
+
+Thankfully, there’s a new generation of tools that lets developers work with unstructured data in the form of vector embeddings, which are deep representations of objects obtained from a neural network model. A vector database, also known as a vector similarity search engine or approximate nearest neighbour (ANN) search database, is a database designed to store, manage, and search high-dimensional data with an additional payload. Vector databases turn research prototypes into commercial AI products. Vector search solutions are industry-agnostic and serve a wide range of use cases, from classic ones like semantic search, matching engines, and recommender systems to more novel applications like anomaly detection and working with time series or biomedical data. The biggest limitation is the need for a neural network encoder for the data type you are working with.
+
+
+
+
+
+{{< figure src=/articles_data/seed-round/use-cases.png caption=""Vector Search Use Cases"" alt=""Vector Search Use Cases"" >}}
+
+
+
+With the rise of large language models (LLMs), Vector Databases have become the fundamental building block of the new AI Stack. They let developers build even more advanced applications by extending the “knowledge base” of LLMs-based applications like ChatGPT with real-time and real-world data.
+
+
+
+A new AI product category, “Co-Pilot for X,” was born and is already affecting how we work, from producing content to developing software. And this is just the beginning: even more types of novel applications are being developed on top of this stack.
+
+
+
+{{< figure src=/articles_data/seed-round/ai-stack.png caption=""New AI Stack"" alt=""New AI Stack"" >}}
+
+
+
+## Enter Qdrant. ##
+
+
+
+At the same time, adoption has only begun. Vector search databases are replacing VSS libraries like FAISS, which, despite their disadvantages, are still used by ~90% of projects out there. They are hard-coupled to the application code, lack production-ready features like basic CRUD operations or advanced filtering, are a nightmare to maintain and scale, and have many other shortcomings that make life hard for developers.
+
+
+
+The current Qdrant ecosystem consists of excellent products to work with vector embeddings. We launched our managed vector database solution, Qdrant Cloud, early this year, and it is already serving more than 1,000 Qdrant clusters. We are extending our offering now with managed on-premise solutions for enterprise customers.
+
+
+
+{{< figure src=/articles_data/seed-round/ecosystem.png caption=""Qdrant Ecosystem"" alt=""Qdrant Vector Database Ecosystem"" >}}
+
+
+
+
+
+Our plan for the current [open-source roadmap](https://github.com/qdrant/qdrant/blob/master/docs/roadmap/README.md) is to make billion-scale vector search affordable. Our recent release of [Scalar Quantization](/articles/scalar-quantization/) improves both memory usage (4x less) and speed (2x faster). The upcoming [Product Quantization](https://www.irisa.fr/texmex/people/jegou/papers/jegou_searching_with_quantization.pdf) will introduce yet another option with even greater memory savings. Stay tuned.
+
+
+
+Qdrant started more than two years ago with the mission of building a vector database powered by a well-thought-out tech stack. Choosing Rust as the systems programming language, along with the technical architecture decisions made during the development of the engine, has made Qdrant one of the leading and most popular vector database solutions.
+
+
+
+Our unique custom modification of the [HNSW algorithm](/articles/filtrable-hnsw/) for Approximate Nearest Neighbor Search (ANN) allows querying results at state-of-the-art speed and applying filters without compromising on quality. Cloud-native support for distributed deployment and replication makes the engine suitable for high-throughput applications with real-time latency requirements. Rust brings stability, efficiency, and the possibility to optimize at a very low level. In general, we always aim for the best possible results in [performance](/benchmarks/), code quality, and feature set.
+
+
+
+Most importantly, we want to say a big thank you to our [open-source community](https://qdrant.to/discord), our adopters, our contributors, and our customers. Your active participation in the development of our products has helped make Qdrant the best vector database on the market. I cannot imagine how we could do what we’re doing without the community or without being open-source and having the TRUST of the engineers. Thanks to all of you!
+
+
+
+I also want to thank our team. Thank you for your patience and trust. Together we are strong. Let’s continue doing great things together.
+
+
+
+## Fundraising ##
+
+The whole process took only a couple of days; we got several offers, and most probably we would have gotten more with different conditions. We decided to go with Unusual Ventures because they truly understand how things work in the open-source space. They just did it right.
+
+
+
+Here is a big piece of advice for all investors interested in open-source: Dive into the community, and see and feel the traction and product feedback instead of looking at glossy pitch decks. With Unusual on our side, we have an active operational partner instead of one who simply writes a check. That help is much more important than overpriced valuations and big shiny names.
+
+
+
+Ultimately, the community and adopters will decide what products win and lose, not VCs. Companies don’t need crazy valuations to create products that customers love. You do not need a Ph.D. to innovate. You do not need to over-engineer to build a scalable solution. You do not need ex-FANG people to have a great team. You need clear focus, a passion for what you’re building, and the know-how to do it well.
+
+
+
+We know how.
+
+
+
+PS: This text is written by me in an old-school way without any ChatGPT help. Sometimes you just need inspiration instead of AI ;-)
+
+
+",articles/seed-round.md
+"---
+
+title: ""Optimizing RAG Through an Evaluation-Based Methodology""
+
+short_description: Learn how Qdrant-powered RAG applications can be tested and iteratively improved using LLM evaluation tools like Quotient.
+
+description: Learn how Qdrant-powered RAG applications can be tested and iteratively improved using LLM evaluation tools like Quotient.
+
+social_preview_image: /articles_data/rapid-rag-optimization-with-qdrant-and-quotient/preview/social_preview.jpg
+
+small_preview_image: /articles_data/rapid-rag-optimization-with-qdrant-and-quotient/icon.svg
+
+preview_dir: /articles_data/rapid-rag-optimization-with-qdrant-and-quotient/preview
+
+weight: -131
+
+author: Atita Arora
+
+author_link: https://github.com/atarora
+
+date: 2024-06-12T00:00:00.000Z
+
+draft: false
+
+keywords:
+
+- vector database
+
+- vector search
+
+- retrieval augmented generation
+
+- quotient
+
+- optimization
+
+- rag
+
+---
+
+
+
+In today's fast-paced, information-rich world, AI is revolutionizing knowledge management. The systematic process of capturing, distributing, and effectively using knowledge within an organization is one of the fields in which AI provides exceptional value today.
+
+
+
+> The potential for AI-powered knowledge management increases when leveraging Retrieval Augmented Generation (RAG), a methodology that enables LLMs to access a vast, diverse repository of factual information from knowledge stores, such as vector databases.
+
+
+
+This process enhances the accuracy, relevance, and reliability of generated text, thereby mitigating the risk of faulty, incorrect, or nonsensical results sometimes associated with traditional LLMs. This method not only ensures that the answers are contextually relevant but also up-to-date, reflecting the latest insights and data available.
+
+
+
+While RAG enhances the accuracy, relevance, and reliability of traditional LLM solutions, **an evaluation strategy can further help teams ensure their AI products meet these benchmarks of success.**
+
+
+
+## Relevant tools for this experiment
+
+
+
+In this article, we’ll break down a RAG Optimization workflow experiment that demonstrates that evaluation is essential to build a successful RAG strategy. We will use Qdrant and Quotient for this experiment.
+
+
+
+[Qdrant](https://qdrant.tech/) is a vector database and vector similarity search engine designed for efficient storage and retrieval of high-dimensional vectors. Because Qdrant offers efficient indexing and searching capabilities, it is ideal for implementing RAG solutions, where quickly and accurately retrieving relevant information from extremely large datasets is crucial. Qdrant also offers a wealth of additional features, such as quantization, multivector support and multi-tenancy.
+
+
+
+Alongside Qdrant we will use Quotient, which provides a seamless way to evaluate your RAG implementation, accelerating and improving the experimentation process.
+
+
+
+[Quotient](https://www.quotientai.co/) is a platform that provides tooling for AI developers to build evaluation frameworks and conduct experiments on their products. Evaluation is how teams surface the shortcomings of their applications and improve performance in key benchmarks such as faithfulness and semantic similarity. Iteration is key to building innovative AI products that will deliver value to end users.
+
+
+
+> 💡 The [accompanying notebook](https://github.com/qdrant/qdrant-rag-eval/tree/master/workshop-rag-eval-qdrant-quotient) for this exercise can be found on GitHub for future reference.
+
+
+
+## Summary of key findings
+
+
+
+1. **Irrelevance and Hallucinations**: When the documents retrieved are irrelevant, evidenced by low scores in both Chunk Relevance and Context Relevance, the model is prone to generating inaccurate or fabricated information.
+
+2. **Optimizing Document Retrieval**: By retrieving a greater number of documents and reducing the chunk size, we observed improved outcomes in the model's performance.
+
+3. **Adaptive Retrieval Needs**: Certain queries may benefit from accessing more documents. Implementing a dynamic retrieval strategy that adjusts based on the query could enhance accuracy.
+
+4. **Influence of Model and Prompt Variations**: Alterations in language models or the prompts used can significantly impact the quality of the generated responses, suggesting that fine-tuning these elements could optimize performance.
+
+
+
+Let us walk you through how we arrived at these findings!
+
+
+
+## Building a RAG pipeline
+
+
+
+To evaluate a RAG pipeline, we will have to build a RAG pipeline first. In the interest of simplicity, we are building a naive RAG in this article. There are certainly other versions of RAG:
+
+
+
+![shades_of_rag.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/shades_of_rag.png)
+
+
+
+The illustration below depicts how we can leverage a RAG evaluation framework to assess the quality of a RAG application.
+
+
+
+![qdrant_and_quotient.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/qdrant_and_quotient.png)
+
+
+
+We are going to build a RAG application using Qdrant’s documentation and the prepared [Hugging Face dataset](https://huggingface.co/datasets/atitaarora/qdrant_doc).
+
+We will then assess our RAG application’s ability to answer questions about Qdrant.
+
+
+
+To prepare our knowledge store we will use Qdrant, which can be leveraged in three different ways, as shown below:
+
+
+
+```python
+
+import os
+
+import qdrant_client
+
+
+
+## Uncomment to initialise the Qdrant client in memory
+
+#client = qdrant_client.QdrantClient(
+
+# location="":memory:"",
+
+#)
+
+
+
+##Uncomment below to connect to Qdrant Cloud
+
+client = qdrant_client.QdrantClient(
+
+ os.environ.get(""QDRANT_URL""),
+
+ api_key=os.environ.get(""QDRANT_API_KEY""),
+
+)
+
+
+
+## Uncomment below to connect to local Qdrant
+
+#client = qdrant_client.QdrantClient(""http://localhost:6333"")
+
+```
+
+
+
+We will be using [Qdrant Cloud](https://cloud.qdrant.io/login) so it is a good idea to provide the `QDRANT_URL` and `QDRANT_API_KEY` as environment variables for easier access.
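+
+
+
+For instance, a minimal sketch of providing them from within the notebook (the values here are placeholders, not real credentials):
+
+
+
+```python
+
+import os
+
+
+
+## Placeholder values - substitute your own Qdrant Cloud cluster URL and API key
+
+os.environ[""QDRANT_URL""] = ""https://YOUR-CLUSTER-ID.cloud.qdrant.io:6333""
+
+os.environ[""QDRANT_API_KEY""] = ""YOUR_API_KEY""
+
+```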
+
+
+
+Moving on, we will need to define the collection name as:
+
+
+
+```python
+
+COLLECTION_NAME = ""qdrant-docs-quotient""
+
+```
+
+
+
+In this case, we may need to create different collections based on the experiments we conduct.
+
+
+
+To help us create embeddings seamlessly throughout the experiment, we will use Qdrant’s native embedding provider [Fastembed](https://qdrant.github.io/fastembed/), which supports [many different models](https://qdrant.github.io/fastembed/examples/Supported_Models/), including dense as well as sparse vector models.
+
+
+
+We can initialize and switch the embedding model of our choice as below:
+
+
+
+```python
+
+## Declaring the intended Embedding Model with Fastembed
+
+from fastembed.embedding import TextEmbedding
+
+
+
+## General Fastembed specific operations
+
+## Initialising the embedding model
+
+## Using Default Model - BAAI/bge-small-en-v1.5
+
+embedding_model = TextEmbedding()
+
+
+
+## For custom model supported by Fastembed
+
+#embedding_model = TextEmbedding(model_name=""BAAI/bge-small-en"", max_length=512)
+
+#embedding_model = TextEmbedding(model_name=""sentence-transformers/all-MiniLM-L6-v2"", max_length=384)
+
+
+
+## Verify the chosen Embedding model
+
+embedding_model.model_name
+
+```
+
+
+
+Before implementing RAG, we need to prepare and index our data in Qdrant.
+
+
+
+This involves converting textual data into vectors using a suitable encoder (e.g., sentence transformers), and storing these vectors in Qdrant for retrieval.
+
+
+
+```python
+
+from datasets import load_dataset
+
+from langchain.text_splitter import RecursiveCharacterTextSplitter
+
+from langchain.docstore.document import Document as LangchainDocument
+
+
+
+## Load the dataset with qdrant documentation
+
+dataset = load_dataset(""atitaarora/qdrant_doc"", split=""train"")
+
+
+
+## Dataset to langchain document
+
+langchain_docs = [
+
+ LangchainDocument(page_content=doc[""text""], metadata={""source"": doc[""source""]})
+
+ for doc in dataset
+
+]
+
+
+
+len(langchain_docs)
+
+
+
+#Outputs
+
+#240
+
+```
+
+
+
+You can preview documents in the dataset as below:
+
+
+
+```python
+
+## Here's an example of what a document in our dataset looks like
+
+print(dataset[100]['text'])
+
+
+
+```
+
+
+
+## Evaluation dataset
+
+
+
+To measure the quality of our RAG setup, we will need a representative evaluation dataset. This dataset should contain realistic questions and the expected answers.
+
+
+
+Additionally, including the expected contexts for which your RAG pipeline is designed to retrieve information would be beneficial.
+
+
+
+We will be using a [prebuilt evaluation dataset](https://huggingface.co/datasets/atitaarora/qdrant_doc_qna).
+
+
+
+If you are struggling to make an evaluation dataset for your use case, you can use your documents and some of the techniques described in this [notebook](https://github.com/qdrant/qdrant-rag-eval/blob/master/synthetic_qna/notebook/Synthetic_question_generation.ipynb).
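+
+
+
+For reference, here is a minimal sketch of loading the prebuilt evaluation dataset into the `eval_df` dataframe used in the rest of this walkthrough (it assumes the Hugging Face `datasets` library and that the dataset ships a single train split):
+
+
+
+```python
+
+from datasets import load_dataset
+
+
+
+## Load the prebuilt evaluation dataset with questions and ground truth answers
+
+eval_dataset = load_dataset(""atitaarora/qdrant_doc_qna"", split=""train"")
+
+
+
+## Convert to a pandas dataframe; the question column is referenced later as 'input_text'
+
+eval_df = eval_dataset.to_pandas()
+
+
+
+eval_df.head()
+
+```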
+
+
+
+### Building the RAG pipeline
+
+
+
+We establish the data preprocessing parameters essential for the RAG pipeline and configure the Qdrant vector database according to the specified criteria.
+
+
+
+Key parameters under consideration are:
+
+
+
+- **Chunk size**
+
+- **Chunk overlap**
+
+- **Embedding model**
+
+- **Number of documents retrieved (retrieval window)**
+
+
+
+Following the ingestion of data in Qdrant, we proceed to retrieve pertinent documents corresponding to each query. These documents are then seamlessly integrated into our evaluation dataset, enriching the contextual information within the designated **`context`** column to fulfil the evaluation aspect.
+
+
+
+Next, we define methods to handle the logistics of adding documents to Qdrant
+
+
+
+```python
+
+def add_documents(client, collection_name, chunk_size, chunk_overlap, embedding_model_name):
+
+ """"""
+
+ This function adds documents to the desired Qdrant collection given the specified RAG parameters.
+
+ """"""
+
+
+
+ ## Processing each document with desired TEXT_SPLITTER_ALGO, CHUNK_SIZE, CHUNK_OVERLAP
+
+ text_splitter = RecursiveCharacterTextSplitter(
+
+ chunk_size=chunk_size,
+
+ chunk_overlap=chunk_overlap,
+
+ add_start_index=True,
+
+ separators=[""\n\n"", ""\n"", ""."", "" "", """"],
+
+ )
+
+
+
+ docs_processed = []
+
+ for doc in langchain_docs:
+
+ docs_processed += text_splitter.split_documents([doc])
+
+
+
+ ## Processing documents to be encoded by Fastembed
+
+ docs_contents = []
+
+ docs_metadatas = []
+
+
+
+ for doc in docs_processed:
+
+ if hasattr(doc, 'page_content') and hasattr(doc, 'metadata'):
+
+ docs_contents.append(doc.page_content)
+
+ docs_metadatas.append(doc.metadata)
+
+ else:
+
+ # Handle the case where attributes are missing
+
+ print(""Warning: Some documents do not have 'page_content' or 'metadata' attributes."")
+
+
+
+ print(""processed: "", len(docs_processed))
+
+ print(""content: "", len(docs_contents))
+
+ print(""metadata: "", len(docs_metadatas))
+
+
+
+ ## Adding documents to Qdrant using desired embedding model
+
+ client.set_model(embedding_model_name=embedding_model_name)
+
+ client.add(collection_name=collection_name, metadata=docs_metadatas, documents=docs_contents)
+
+```
+
+
+
+and retrieving documents from Qdrant during our RAG pipeline assessment:
+
+
+
+```python
+
+def get_documents(collection_name, query, num_documents=3):
+
+ """"""
+
+ This function retrieves the desired number of documents from the Qdrant collection given a query.
+
+ It returns a list of the retrieved documents.
+
+ """"""
+
+ search_results = client.query(
+
+ collection_name=collection_name,
+
+ query_text=query,
+
+ limit=num_documents,
+
+ )
+
+ results = [r.metadata[""document""] for r in search_results]
+
+ return results
+
+```
+
+
+
+### Setting up Quotient
+
+
+
+You will need an account login, which you can get by requesting access on [Quotient's website](https://www.quotientai.co/). Once you have an account, you can create an API key by running the `quotient authenticate` CLI command.
+
+
+
+
+
+
+
+**Once you have your API key, make sure to set it as an environment variable called `QUOTIENT_API_KEY`**
+
+
+
+```python
+
+# Import QuotientAI client and connect to QuotientAI
+
+from quotientai.client import QuotientClient
+
+from quotientai.utils import show_job_progress
+
+
+
+# IMPORTANT: be sure to set your API key as an environment variable called QUOTIENT_API_KEY
+
+# You will need this set before running the code below. You may also uncomment the following line and insert your API key:
+
+# os.environ['QUOTIENT_API_KEY'] = ""YOUR_API_KEY""
+
+
+
+quotient = QuotientClient()
+
+```
+
+
+
+**QuotientAI** provides a seamless way to integrate *RAG evaluation* into your applications. Here, we'll see how to use it to evaluate text generated from an LLM, based on retrieved knowledge from the Qdrant vector database.
+
+
+
+After retrieving the top similar documents and populating the `context` column, we can submit the evaluation dataset to Quotient and execute an evaluation job. To run a job, all you need is your evaluation dataset and a `recipe`.
+
+
+
+***A recipe is a combination of a prompt template and a specified LLM.***
+
+
+
+**Quotient** orchestrates the evaluation run and handles version control and asset management throughout the experimentation process.
+
+
+
+***Prior to assessing our RAG solution, it's crucial to outline our optimization goals.***
+
+
+
+In the context of *question-answering on Qdrant documentation*, our focus extends beyond merely providing helpful responses. Ensuring the absence of any *inaccurate or misleading information* is paramount.
+
+
+
+In other words, **we want to minimize hallucinations** in the LLM outputs.
+
+
+
+For our evaluation, we will be considering the following metrics, with a focus on **Faithfulness**:
+
+
+
+- **Context Relevance**
+
+- **Chunk Relevance**
+
+- **Faithfulness**
+
+- **ROUGE-L**
+
+- **BERT Sentence Similarity**
+
+- **BERTScore**
+
+
+
+### Evaluation in action
+
+
+
+The function below takes an evaluation dataset as input, which in this case contains questions and their corresponding answers. It retrieves relevant documents based on the questions in the dataset and populates the context field with this information from Qdrant. The prepared dataset is then submitted to QuotientAI for evaluation for the chosen metrics. After the evaluation is complete, the function displays aggregated statistics on the evaluation metrics followed by the summarized evaluation results.
+
+
+
+```python
+
+import pandas as pd
+
+
+
+def run_eval(eval_df, collection_name, recipe_id, num_docs=3, path=""eval_dataset_qdrant_questions.csv""):
+
+ """"""
+
+ This function evaluates the performance of a complete RAG pipeline on a given evaluation dataset.
+
+
+
+ Given an evaluation dataset (containing questions and ground truth answers),
+
+ this function retrieves relevant documents, populates the context field, and submits the dataset to QuotientAI for evaluation.
+
+ Once the evaluation is complete, aggregated statistics on the evaluation metrics are displayed.
+
+
+
+ The evaluation results are returned as a pandas dataframe.
+
+ """"""
+
+
+
+ # Add context to each question by retrieving relevant documents
+
+ eval_df['documents'] = eval_df.apply(lambda x: get_documents(collection_name=collection_name,
+
+ query=x['input_text'],
+
+ num_documents=num_docs), axis=1)
+
+ eval_df['context'] = eval_df.apply(lambda x: ""\n"".join(x['documents']), axis=1)
+
+
+
+ # Now we'll save the eval_df to a CSV
+
+ eval_df.to_csv(path, index=False)
+
+
+
+ # Upload the eval dataset to QuotientAI
+
+ dataset = quotient.create_dataset(
+
+ file_path=path,
+
+ name=""qdrant-questions-eval-v1"",
+
+ )
+
+
+
+ # Create a new task for the dataset
+
+ task = quotient.create_task(
+
+ dataset_id=dataset['id'],
+
+ name='qdrant-questions-qa-v1',
+
+ task_type='question_answering'
+
+ )
+
+
+
+ # Run a job to evaluate the model
+
+ job = quotient.create_job(
+
+ task_id=task['id'],
+
+ recipe_id=recipe_id,
+
+ num_fewshot_examples=0,
+
+ limit=500,
+
+ metric_ids=[5, 7, 8, 11, 12, 13, 50],
+
+ )
+
+
+
+ # Show the progress of the job
+
+ show_job_progress(quotient, job['id'])
+
+
+
+ # Once the job is complete, we can get our results
+
+ data = quotient.get_eval_results(job_id=job['id'])
+
+
+
+ # Add the results to a pandas dataframe to get statistics on performance
+
+ df = pd.json_normalize(data, ""results"")
+
+ df_stats = df[df.columns[df.columns.str.contains(""metric|completion_time"")]]
+
+
+
+ df.columns = df.columns.str.replace(""metric."", """")
+
+ df_stats.columns = df_stats.columns.str.replace(""metric."", """")
+
+
+
+ metrics = {
+
+ 'completion_time_ms':'Completion Time (ms)',
+
+ 'chunk_relevance': 'Chunk Relevance',
+
+ 'selfcheckgpt_nli_relevance':""Context Relevance"",
+
+ 'selfcheckgpt_nli':""Faithfulness"",
+
+ 'rougeL_fmeasure':""ROUGE-L"",
+
+ 'bert_score_f1':""BERTScore"",
+
+ 'bert_sentence_similarity': ""BERT Sentence Similarity"",
+
+ 'completion_verbosity':""Completion Verbosity"",
+
+ 'verbosity_ratio':""Verbosity Ratio"",}
+
+
+
+ df = df.rename(columns=metrics)
+
+ df_stats = df_stats.rename(columns=metrics)
+
+
+
+ display(df_stats[metrics.values()].describe())
+
+
+
+ return df
+
+
+
+main_metrics = [
+
+ 'Context Relevance',
+
+ 'Chunk Relevance',
+
+ 'Faithfulness',
+
+ 'ROUGE-L',
+
+ 'BERT Sentence Similarity',
+
+ 'BERTScore',
+
+ ]
+
+```
+
+
+
+## Experimentation
+
+
+
+Our approach is rooted in the belief that improvement thrives in an environment of exploration and discovery. By systematically testing and tweaking various components of the RAG pipeline, we aim to incrementally enhance its capabilities and performance.
+
+
+
+In the following section, we dive into the details of our experimentation process, outlining the specific experiments conducted and the insights gained.
+
+
+
+### Experiment 1 - Baseline
+
+
+
+Parameters
+
+
+
+- **Embedding Model: `bge-small-en`**
+
+- **Chunk size: `512`**
+
+- **Chunk overlap: `64`**
+
+- **Number of docs retrieved (Retrieval Window): `3`**
+
+- **LLM: `Mistral-7B-Instruct`**
+
+
+
+We’ll process our documents based on the configuration above and ingest them into Qdrant using the `add_documents` method introduced earlier:
+
+
+
+```python
+
+#experiment1 - base config
+
+chunk_size = 512
+
+chunk_overlap = 64
+
+embedding_model_name = ""BAAI/bge-small-en""
+
+num_docs = 3
+
+
+
+COLLECTION_NAME = f""experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}""
+
+
+
+add_documents(client,
+
+ collection_name=COLLECTION_NAME,
+
+ chunk_size=chunk_size,
+
+ chunk_overlap=chunk_overlap,
+
+ embedding_model_name=embedding_model_name)
+
+
+
+#Outputs
+
+#processed: 4504
+
+#content: 4504
+
+#metadata: 4504
+
+```
+
+
+
+Notice the `COLLECTION_NAME` which helps us segregate and identify our collections based on the experiments conducted.
+
+
+
+To proceed with the evaluation, let’s create the `evaluation recipe` next:
+
+
+
+```python
+
+# Create a recipe for the generator model and prompt template
+
+recipe_mistral = quotient.create_recipe(
+
+ model_id=10,
+
+ prompt_template_id=1,
+
+ name='mistral-7b-instruct-qa-with-rag',
+
+ description='Mistral-7b-instruct using a prompt template that includes context.'
+
+)
+
+recipe_mistral
+
+
+
+#Outputs recipe JSON with the used prompt template
+
+#'prompt_template': {'id': 1,
+
+# 'name': 'Default Question Answering Template',
+
+# 'variables': '[""input_text"",""context""]',
+
+# 'created_at': '2023-12-21T22:01:54.632367',
+
+# 'template_string': 'Question: {input_text}\\n\\nContext: {context}\\n\\nAnswer:',
+
+# 'owner_profile_id': None}
+
+```
+
+
+
+To get a list of your existing recipes, you can simply run:
+
+
+
+```python
+
+quotient.list_recipes()
+
+```
+
+
+
+Notice that the recipe template is the simplest possible prompt, combining the `Question` from the evaluation dataset, the `Context` from the document chunks retrieved from Qdrant, and the `Answer` generated by the pipeline.
+
+
+
+To kick off the evaluation:
+
+
+
+```python
+
+# Kick off an evaluation job
+
+experiment_1 = run_eval(eval_df,
+
+ collection_name=COLLECTION_NAME,
+
+ recipe_id=recipe_mistral['id'],
+
+ num_docs=num_docs,
+
+ path=f""{COLLECTION_NAME}_{num_docs}_mistral.csv"")
+
+```
+
+
+
+This may take a few minutes (depending on the size of the evaluation dataset!).
+
+
+
+We can look at the results from our first (baseline) experiment below:
+
+
+
+![experiment1_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment1_eval.png)
+
+
+
+Notice that we have a pretty **low average Chunk Relevance** and **very large standard deviations for both Chunk Relevance and Context Relevance**.
+
+
+
+Let's take a look at some of the lower performing datapoints with **poor Faithfulness**:
+
+
+
+```python
+
+with pd.option_context('display.max_colwidth', 0):
+
+ display(experiment_1[['content.input_text', 'content.answer','content.documents','Chunk Relevance','Context Relevance','Faithfulness']
+
+ ].sort_values(by='Faithfulness').head(2))
+
+```
+
+
+
+![experiment1_bad_examples.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment1_bad_examples.png)
+
+
+
+In instances where the retrieved documents are **irrelevant (where both Chunk Relevance and Context Relevance are low)**, the model also shows **tendencies to hallucinate** and **produce poor quality responses**.
+
+
+
+The quality of the retrieved text directly impacts the quality of the LLM-generated answer. Therefore, our focus will be on enhancing the RAG setup by **adjusting the chunking parameters**.
+
+
+
+### Experiment 2 - Adjusting the chunk parameter
+
+
+
+Keeping all other parameters constant, we changed the `chunk size` and `chunk overlap` to see if we can improve our results.
+
+
+
+Parameters :
+
+
+
+- **Embedding Model : `bge-small-en`**
+
+- **Chunk size: `1024`**
+
+- **Chunk overlap: `128`**
+
+- **Number of docs retrieved (Retrieval Window): `3`**
+
+- **LLM: `Mistral-7B-Instruct`**
+
+
+
+We will reprocess the data with the updated parameters above:
+
+
+
+```python
+
+## For iteration 2 - let's modify the chunk configuration
+
+## We will start by creating a separate collection to store the vectors
+
+
+
+chunk_size = 1024
+
+chunk_overlap = 128
+
+embedding_model_name = ""BAAI/bge-small-en""
+
+num_docs = 3
+
+
+
+COLLECTION_NAME = f""experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}""
+
+
+
+add_documents(client,
+
+ collection_name=COLLECTION_NAME,
+
+ chunk_size=chunk_size,
+
+ chunk_overlap=chunk_overlap,
+
+ embedding_model_name=embedding_model_name)
+
+
+
+#Outputs
+
+#processed: 2152
+
+#content: 2152
+
+#metadata: 2152
+
+```
+
+
+
+This is followed by running the evaluation:
+
+
+
+![experiment2_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment2_eval.png)
+
+
+
+and **comparing it with the results from Experiment 1:**
+
+
+
+![graph_exp1_vs_exp2.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/graph_exp1_vs_exp2.png)
+
+
+
+We observed slight enhancements in our LLM completion metrics (including BERT Sentence Similarity, BERTScore, ROUGE-L, and Knowledge F1) with the increase in *chunk size*. However, it's noteworthy that there was a significant decrease in *Faithfulness*, which is the primary metric we are aiming to optimize.
+
+
+
+Moreover, *Context Relevance* demonstrated an increase, indicating that the RAG pipeline retrieved more relevant information required to address the query. Nonetheless, there was a considerable drop in *Chunk Relevance*, implying that a smaller portion of the retrieved documents contained pertinent information for answering the question.
+
+
+
+**The correlation between the rise in Context Relevance and the decline in Chunk Relevance suggests that retrieving more documents using the smaller chunk size might yield improved results.**
+
+
+
+### Experiment 3 - Increasing the number of documents retrieved (retrieval window)
+
+
+
+This time, we are using the same RAG setup as `Experiment 1`, but increasing the number of retrieved documents from **3** to **5**.
+
+
+
+Parameters :
+
+
+
+- **Embedding Model : `bge-small-en`**
+
+- **Chunk size: `512`**
+
+- **Chunk overlap: `64`**
+
+- **Number of docs retrieved (Retrieval Window): `5`**
+
+- **LLM: `Mistral-7B-Instruct`**
+
+
+
+We can use the collection from Experiment 1 and run the evaluation with the modified `num_docs` parameter:
+
+
+
+```python
+
+#collection name from Experiment 1 - re-set the Experiment 1 chunking parameters and the new retrieval window
+
+chunk_size = 512
+
+chunk_overlap = 64
+
+embedding_model_name = ""BAAI/bge-small-en""
+
+num_docs = 5
+
+
+
+COLLECTION_NAME = f""experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}""
+
+
+
+#running eval for experiment 3
+
+experiment_3 = run_eval(eval_df,
+
+ collection_name=COLLECTION_NAME,
+
+ recipe_id=recipe_mistral['id'],
+
+ num_docs=num_docs,
+
+ path=f""{COLLECTION_NAME}_{num_docs}_mistral.csv"")
+
+```
+
+
+
+Observe the results below:
+
+
+
+![experiment_3_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment_3_eval.png)
+
+
+
+Comparing the results with Experiments 1 and 2:
+
+
+
+![graph_exp1_exp2_exp3.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/graph_exp1_exp2_exp3.png)
+
+
+
+As anticipated, employing the smaller chunk size while retrieving a larger number of documents resulted in achieving the highest levels of both *Context Relevance* and *Chunk Relevance.* Additionally, it yielded the **best** (albeit marginal) *Faithfulness* score, indicating a *reduced occurrence of inaccuracies or hallucinations*.
+
+
+
+It looks like we have a good handle on our chunking parameters, but it is worth testing another embedding model to see if we can get better results.
+
+
+
+### Experiment 4 - Changing the embedding model
+
+
+
+Let us try using **MiniLM** for this experiment.
+
+Parameters :
+
+
+
+- **Embedding Model : `MiniLM-L6-v2`**
+
+- **Chunk size: `512`**
+
+- **Chunk overlap: `64`**
+
+- **Number of docs retrieved (Retrieval Window): `5`**
+
+- **LLM: `Mistral-7B-Instruct`**
+
+
+
+We will have to create another collection for this experiment:
+
+
+
+```python
+
+#experiment-4
+
+chunk_size=512
+
+chunk_overlap=64
+
+embedding_model_name=""sentence-transformers/all-MiniLM-L6-v2""
+
+num_docs=5
+
+
+
+COLLECTION_NAME = f""experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}""
+
+
+
+add_documents(client,
+
+ collection_name=COLLECTION_NAME,
+
+ chunk_size=chunk_size,
+
+ chunk_overlap=chunk_overlap,
+
+ embedding_model_name=embedding_model_name)
+
+
+
+#Outputs
+
+#processed: 4504
+
+#content: 4504
+
+#metadata: 4504
+
+```
+
+
+
+We can observe our evaluation results below:
+
+
+
+![experiment4_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment4_eval.png)
+
+
+
+Comparing these with our previous experiments:
+
+
+
+![graph_exp1_exp2_exp3_exp4.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/graph_exp1_exp2_exp3_exp4.png)
+
+
+
+It appears that `bge-small` was more proficient in capturing the semantic nuances of the Qdrant Documentation.
+
+
+
+Up to this point, our experimentation has focused solely on the *retrieval aspect* of our RAG pipeline. Now, let's explore altering the *generation aspect* or LLM while retaining the optimal parameters identified in Experiment 3.
+
+
+
+### Experiment 5 - Changing the LLM
+
+
+
+Parameters :
+
+
+
+- **Embedding Model : `bge-small-en`**
+
+- **Chunk size: `512`**
+
+- **Chunk overlap: `64`**
+
+- **Number of docs retrieved (Retrieval Window): `5`**
+
+- **LLM: `GPT-3.5-turbo`**
+
+
+
+For this, we can repurpose our collection from Experiment 3, while the evaluation uses a new recipe with the **GPT-3.5-turbo** model.
+
+
+
+```python
+
+#collection name from Experiment 3
+
+COLLECTION_NAME = f""experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}""
+
+
+
+# We have to create a recipe using the same prompt template and GPT-3.5-turbo
+
+recipe_gpt = quotient.create_recipe(
+
+ model_id=5,
+
+ prompt_template_id=1,
+
+ name='gpt3.5-qa-with-rag-recipe-v1',
+
+ description='GPT-3.5 using a prompt template that includes context.'
+
+)
+
+
+
+recipe_gpt
+
+
+
+#Outputs
+
+#{'id': 495,
+
+# 'name': 'gpt3.5-qa-with-rag-recipe-v1',
+
+# 'description': 'GPT-3.5 using a prompt template that includes context.',
+
+# 'model_id': 5,
+
+# 'prompt_template_id': 1,
+
+# 'created_at': '2024-05-03T12:14:58.779585',
+
+# 'owner_profile_id': 34,
+
+# 'system_prompt_id': None,
+
+# 'prompt_template': {'id': 1,
+
+# 'name': 'Default Question Answering Template',
+
+# 'variables': '[""input_text"",""context""]',
+
+# 'created_at': '2023-12-21T22:01:54.632367',
+
+# 'template_string': 'Question: {input_text}\\n\\nContext: {context}\\n\\nAnswer:',
+
+# 'owner_profile_id': None},
+
+# 'model': {'id': 5,
+
+# 'name': 'gpt-3.5-turbo',
+
+# 'endpoint': 'https://api.openai.com/v1/chat/completions',
+
+# 'revision': 'placeholder',
+
+# 'created_at': '2024-02-06T17:01:21.408454',
+
+# 'model_type': 'OpenAI',
+
+# 'description': 'Returns a maximum of 4K output tokens.',
+
+# 'owner_profile_id': None,
+
+# 'external_model_config_id': None,
+
+# 'instruction_template_cls': 'NoneType'}}
+
+```
+
+
+
+Running the evaluation:
+
+
+
+```python
+
+experiment_5 = run_eval(eval_df,
+
+ collection_name=COLLECTION_NAME,
+
+ recipe_id=recipe_gpt['id'],
+
+ num_docs=num_docs,
+
+ path=f""{COLLECTION_NAME}_{num_docs}_gpt.csv"")
+
+```
+
+
+
+We observe:
+
+
+
+![experiment5_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment5_eval.png)
+
+
+
+and comparing all 5 experiments below:
+
+
+
+![graph_exp1_exp2_exp3_exp4_exp5.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/graph_exp1_exp2_exp3_exp4_exp5.png)
+
+
+
+**GPT-3.5 surpassed Mistral-7B in all metrics**! Notably, Experiment 5 exhibited the **lowest occurrence of hallucination**.
+
+
+
+## Conclusions
+
+
+
+Let’s take a look at our results from all 5 experiments above
+
+
+
+![overall_eval_results.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/overall_eval_results.png)
+
+
+
+We still have a long way to go in improving the retrieval performance of RAG, as indicated by our generally poor results thus far. It might be beneficial to **explore alternative embedding models** or **different retrieval strategies** to address this issue.
+
+
+
+The significant variations in *Context Relevance* suggest that **certain questions may necessitate retrieving more documents than others**. Therefore, investigating a **dynamic retrieval strategy** could be worthwhile.
+
+
+
+Furthermore, there's ongoing **exploration required on the generative aspect** of RAG.
+
+Modifying LLMs or prompts can substantially impact the overall quality of responses.
+
+
+
+This iterative process demonstrates how, starting from scratch, continual evaluation and adjustments throughout experimentation can lead to the development of an enhanced RAG system.
+
+
+
+## Watch this workshop on YouTube
+
+
+
+> A workshop version of this article is [available on YouTube](https://www.youtube.com/watch?v=3MEMPZR1aZA). Follow along using our [GitHub notebook](https://github.com/qdrant/qdrant-rag-eval/tree/master/workshop-rag-eval-qdrant-quotient).
+
+
+
+",articles/rapid-rag-optimization-with-qdrant-and-quotient.md
+"---
+
+title: Qdrant Articles
+
+page_title: Articles about Vector Search
+
+description: Articles about vector search and similarity learning related topics. Latest updates on the Qdrant vector search engine.
+
+section_title: Check out our latest publications
+
+subtitle: Check out our latest publications
+
+img: /articles_data/title-img.png
+
+---
+",articles/_index.md
+"---
+
+title: Why Rust?
+
+short_description: ""A short history on how we chose rust and what it has brought us""
+
+description: Qdrant could be built in any language. But it's written in Rust. Here's why.
+
+social_preview_image: /articles_data/why-rust/preview/social_preview.jpg
+
+preview_dir: /articles_data/why-rust/preview
+
+weight: 10
+
+author: Andre Bogus
+
+author_link: https://llogiq.github.io
+
+date: 2023-05-11T10:00:00+01:00
+
+draft: false
+
+keywords: rust, programming, development
+
+aliases: [ /articles/why_rust/ ]
+
+---
+
+
+
+# Building Qdrant in Rust
+
+
+
+Looking at the [GitHub repository](https://github.com/qdrant/qdrant), you can see that Qdrant is built in [Rust](https://rust-lang.org). Other offerings may be written in C++, Go, Java or even Python. So why did Qdrant choose Rust? Our founder Andrey had built the first prototype in C++, but didn’t trust his command of the language to scale to a production system (to be frank, he likened it to cutting his leg off). He was well versed in Java and Scala and also knew some Python. However, he considered neither a good fit:
+
+
+
+**Java** is also almost 30 years old now. With a throughput-optimized VM it can often at least play in the same ballpark as native services, and the tooling is phenomenal. Portability is surprisingly good as well, although the GC is not suited for low-memory applications and will generally take a good amount of RAM to deliver good performance. That said, the focus on throughput led to the dreaded GC pauses that cause latency spikes. Also, the fat runtime incurs high start-up delays, which need to be worked around.
+
+
+
+**Scala** also builds on the JVM; although there is a native compiler, there was the question of compatibility. So Scala shared the limitations of Java, and although it has some nice high-level amenities (of which Java only recently copied a subset), it still doesn’t offer the same level of control over memory layout as, say, C++, so it is similarly disqualified.
+
+
+
+**Python**, being just a bit younger than Java, is ubiquitous in ML projects, mostly owing to its tooling (notably jupyter notebooks), being easy to learn and integration in most ML stacks. It doesn’t have a traditional garbage collector, opting for ubiquitous reference counting instead, which somewhat helps memory consumption. With that said, unless you only use it as glue code over high-perf modules, you may find yourself waiting for results. Also getting complex python services to perform stably under load is a serious technical challenge.
+
+
+
+## Into the Unknown
+
+
+
+So Andrey looked around at what younger languages would fit the challenge. After some searching, two contenders emerged: Go and Rust. Knowing neither, Andrey consulted the docs, and found himself intrigued by Rust with its promise of systems programming without pervasive memory unsafety.
+
+
+
+This early decision has been validated time and again. When first learning Rust, the compiler’s error messages are very helpful (and have only improved in the meantime). It’s easy to keep memory profile low when one doesn’t have to wrestle a garbage collector and has complete control over stack and heap. Apart from the much advertised memory safety, many footguns one can run into when writing C++ have been meticulously designed out. And it’s much easier to parallelize a task if one doesn’t have to fear data races.
+
+
+
+With Qdrant written in Rust, we can offer cloud services that don’t keep us awake at night, thanks to Rust’s famed robustness. A current qdrant docker container comes in at just a bit over 50MB — try that for size. As for performance… have some [benchmarks](/benchmarks/).
+
+
+
+And we don’t have to compromise on ergonomics either, not for us nor for our users. Of course, there are downsides: Rust compile times are usually similar to C++’s, and though the learning curve has been considerably softened in the last years, it’s still no match for easy-entry languages like Python or Go. But learning it is a one-time cost. Contrast this with Go, where you may find [the apparent simplicity is only skin-deep](https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-ride).
+
+
+
+## Smooth is Fast
+
+
+
+The complexity of the type system pays large dividends in bugs that didn’t even make it to a commit. The ecosystem for web services is also already quite advanced, perhaps not at the same point as Java, but certainly matching or outcompeting Go.
+
+
+
+Some people may think that the strict nature of Rust will slow down development, which is true only insofar as it won’t let you cut any corners. However, experience has conclusively shown that this is a net win. In fact, Rust lets us [ride the wall](https://the-race.com/nascar/bizarre-wall-riding-move-puts-chastain-into-nascar-folklore/), which makes us faster, not slower.
+
+
+
+The job market for Rust programmers is certainly not as big as that for Java or Python programmers, but the language has finally reached the mainstream, and we don’t have any problems getting and retaining top talent. And being an open source project, when we get contributions, we don’t have to check for a wide variety of errors that Rust already rules out.
+
+
+
+## In Rust We Trust
+
+
+
+Finally, the Rust community is a very friendly bunch, and we are delighted to be part of that. And we don’t seem to be alone. Most large IT companies (notably Amazon, Google, Huawei, Meta and Microsoft) have already started investing in Rust. It’s in the Windows font system already and in the process of coming to the Linux kernel (build support has already been included). In machine learning applications, Rust has been tried and proven by the likes of Aleph Alpha and Huggingface, among many others.
+
+
+
+To sum up, choosing Rust was a lucky guess that has brought huge benefits to Qdrant. Rust continues to be our not-so-secret weapon.
+
+
+
+### Key Takeaways:
+
+
+
+- **Rust's Advantages for Qdrant:** Rust provides memory safety and control without a garbage collector, which is crucial for Qdrant's high-performance cloud services.
+
+
+
+- **Low Overhead:** Qdrant's Rust-based system offers efficiency, with small Docker container sizes and robust performance benchmarks.
+
+
+
+- **Complexity vs. Simplicity:** Rust's strict type system reduces bugs early in development, making it faster in the long run despite initial learning curves.
+
+
+
+- **Adoption by Major Players:** Large tech companies like Amazon, Google, and Microsoft are embracing Rust, further validating Qdrant's choice.
+
+
+
+- **Community and Talent:** The supportive Rust community and increasing availability of Rust developers make it easier for Qdrant to grow and innovate.",articles/why-rust.md
+"---
+
+title: ""Qdrant x.y.0 - #required; update version and headline""
+
+draft: true # Change to false to publish the article at /articles/
+
 
+slug: qdrant-x.y.z # required; substitute version number
+
+short_description: ""Headline-like description.""
+
+description: ""Headline with more detail. Suggested limit: 140 characters. ""
+
+# Follow instructions in https://github.com/qdrant/landing_page?tab=readme-ov-file#articles to create preview images
+
+# social_preview_image: /articles_data//social_preview.jpg # This image will be used in social media previews, should be 1200x600px. Required.
+
+# small_preview_image: /articles_data//icon.svg # This image will be used in the list of articles at the footer, should be 40x40px
+
+# preview_dir: /articles_data//preview # This directory contains images that will be used in the article preview. They can be generated from one image. Read more below. Required.
+
+weight: 10 # This is the order of the article in the list of articles at the footer. The lower the number, the higher the article will be in the list. Negative numbers OK.
+
+author: # Author of the article. Required.
+
+author_link: https://medium.com/@yusufsarigoz # Link to the author's page. Not required.
+
+date: 2022-06-28T13:00:00+03:00 # Date of the article. Required. If the date is in the future it does not appear in the build
+
+tags: # Keywords for SEO
+
+ - vector databases comparative benchmark
+
+ - benchmark
+
+ - performance
+
+ - latency
+
+---
+
+
+
+[Qdrant x.y.0 is out!](https://github.com/qdrant/qdrant/releases/tag/vx.y.0).
+
+
+
+Include headlines:
+
+
+
+- **Headline 1:** Description
+
+- **Headline 2:** Description
+
+- **Headline 3:** Description
+
+
+
+## Related to headline 1
+
+
+
+Description
+
+
+
+Highlights:
+
+
+
+- **Detail 1:** Description
+
+- **Detail 2:** Description
+
+- **Detail 3:** Description
+
+
+
+Include before / after information, ideally with graphs and/or numbers
+
+Include links to documentation
+
+Note limits, such as availability on Qdrant Cloud
+
+
+
+## Minor improvements and new features
+
+
+
+Beyond these enhancements, [Qdrant vx.y.0](https://github.com/qdrant/qdrant/releases/tag/vx.y.0) adds and improves on several smaller features:
+
+
+
+1.
+
+1.
+
+
+
+## Release notes
+
+
+
+For more information, see [our release notes](https://github.com/qdrant/qdrant/releases/tag/vx.y.0).
+
+Qdrant is an open source project. We welcome your contributions; raise [issues](https://github.com/qdrant/qdrant/issues), or contribute via [pull requests](https://github.com/qdrant/qdrant/pulls)!
+",articles/templates/release-post-template.md
+"---
+
+review: “With the landscape of AI being complex for most customers, Qdrant's ease of use provides an easy approach for customers' implementation of RAG patterns for Generative AI solutions and additional choices in selecting AI components on Azure.”
+
+names: Tara Walker
+
+positions: Principal Software Engineer at Microsoft
+
+avatar:
+
+ src: /img/customers/tara-walker.svg
+
+ alt: Tara Walker Avatar
+
+logo:
+
+ src: /img/brands/microsoft-gray.svg
+
+ alt: Logo
+
+sitemapExclude: true
+
+---
+",qdrant-for-startups/qdrant-for-startups-testimonial.md
+"---
+
+title: Apply Now
+
+form:
+
+ id: startup-program-form
+
+ title: Join our Startup Program
+
+ firstNameLabel: First Name
+
+ lastNameLabel: Last Name
+
+ businessEmailLabel: Business Email
+
+ companyNameLabel: Company Name
+
+ companyUrlLabel: Company URL
+
+ cloudProviderLabel: Cloud Provider
+
+ productDescriptionLabel: Product Description
+
+ latestFundingRoundLabel: Latest Funding Round
+
+ numberOfEmployeesLabel: Number of Employees
+
+ info: By submitting, I confirm that I have read and understood the
+
+ link:
+
+ url: /
+
+ text: Terms and Conditions.
+
+ button: Send Message
+
+ hubspotFormOptions: '{
+
+ ""region"": ""eu1"",
+
+ ""portalId"": ""139603372"",
+
+ ""formId"": ""59eb058b-0145-4ab0-b49a-c37708d20a60"",
+
+ ""submitButtonClass"": ""button button_contained"",
+
+ }'
+
+sitemapExclude: true
+
+---
+
+
+",qdrant-for-startups/qdrant-for-startups-form.md
+"---
+
+title: Program FAQ
+
+questions:
+
+- id: 0
+
+ question: Who is eligible?
+
+ answer: |
+
+
+
+
Pre-seed, Seed or Series A startups (under five years old)
+
+
Has not previously participated in the Qdrant for Startups program
+
+
Must be building an AI-driven product or services (agencies or devshops are not eligible)
+
+
A live, functional website is a must for all applicants
+
+
Billing must be done directly with Qdrant (not through a marketplace)
+
+
+
+- id: 1
+
+ question: When will I get notified about my application?
+
+ answer: Upon submitting your application, we will review it and notify you of your status within 7 business days.
+
+- id: 2
+
+ question: What is the price?
+
+ answer: It is free to apply to the program. As part of the program, you will receive up to a 20% discount on Qdrant Cloud, valid for 12 months. For detailed cloud pricing, please visit qdrant.tech/pricing.
+
+- id: 3
+
+ question: How can my startup join the program?
+
+ answer: Your startup can join the program by simply submitting the application on this page. Once submitted, we will review your application and notify you of your status within 7 business days.
+
+sitemapExclude: true
+
+---
+",qdrant-for-startups/qdrant-for-startups-faq.md
+"---
+
+title: Why join Qdrant for Startups?
+
+mainCard:
+
+ title: Discount for Qdrant Cloud
+
+ description: Receive up to 20% discount on Qdrant Cloud for the first year and start building now.
+
+ image:
+
+ src: /img/qdrant-for-startups-benefits/card1.png
+
+ alt: Qdrant Discount for Startups
+
+cards:
+
+- id: 0
+
+ title: Expert Technical Advice
+
+ description: Get access to one-on-one sessions with experts for personalized technical advice.
+
+ image:
+
+ src: /img/qdrant-for-startups-benefits/card2.svg
+
+ alt: Expert Technical Advice
+
+- id: 1
+
+ title: Co-Marketing Opportunities
+
+ description: We’d love to share your work with our community. Exclusive access to our Vector Space Talks, joint blog posts, and more.
+
+ image:
+
+ src: /img/qdrant-for-startups-benefits/card3.svg
+
+ alt: Co-Marketing Opportunities
+
+description: Qdrant is the leading open source vector database and similarity search engine designed to handle high-dimensional vectors for performance and massive-scale AI applications.
+
+link:
+
+ url: /documentation/overview/
+
+ text: Learn More
+
+sitemapExclude: true
+
+---
+",qdrant-for-startups/qdrant-for-startups-benefits.md
+"---
+
+title: Qdrant For Startups
+
+description: Qdrant For Startups
+
+cascade:
+
+- _target:
+
+ environment: production
+
+ build:
+
+ list: never
+
+ render: never
+
+ publishResources: false
+
+sitemapExclude: true
+
+# todo: remove sitemapExclude and change building options after the page is ready to be published
+
+---
+",qdrant-for-startups/_index.md
+"---
+
+title: Qdrant for Startups
+
+description: Powering The Next Wave of AI Innovators, Qdrant for Startups is committed to being the catalyst for the next generation of AI pioneers. Our program is specifically designed to provide AI-focused startups with the right resources to scale. If AI is at the heart of your startup, you're in the right place.
+
+button:
+
+ text: Apply Now
+
+ url: ""#form""
+
+image:
+
+ src: /img/qdrant-for-startups-hero.svg
+
+ srcMobile: /img/mobile/qdrant-for-startups-hero.svg
+
+ alt: Qdrant for Startups
+
+sitemapExclude: true
+
+---
+
+
+",qdrant-for-startups/qdrant-for-startups-hero.md
+"---
+
+title: Distributed
+
+icon:
+
+ - url: /features/cloud.svg
+
+ - url: /features/cluster.svg
+
+weight: 50
+
+sitemapExclude: True
+
+---
+
+
+
+Cloud-native and scales horizontally. \
+
+No matter how much data you need to serve - Qdrant can always be used with just the right amount of computational resources.
+",features/distributed.md
+"---
+
+title: Rich data types
+
+icon:
+
+ - url: /features/data.svg
+
+weight: 40
+
+sitemapExclude: True
+
+---
+
+
+
+Vector payload supports a large variety of data types and query conditions, including string matching, numerical ranges, geo-locations, and more.
+
+Payload filtering conditions allow you to build almost any custom business logic that should work on top of similarity matching.",features/rich-data-types.md
+"---
+
+title: Efficient
+
+icon:
+
+ - url: /features/sight.svg
+
+weight: 60
+
+sitemapExclude: True
+
+---
+
+
+
+Effectively utilizes your resources.
+
+Developed entirely in Rust language, Qdrant implements dynamic query planning and payload data indexing.
+
+Hardware-aware builds are also available for Enterprises.
+",features/optimized.md
+"---
+
+title: Easy to Use API
+
+icon:
+
+ - url: /features/settings.svg
+
+ - url: /features/microchip.svg
+
+weight: 10
+
+sitemapExclude: True
+
+---
+
+
+
+Provides the [OpenAPI v3 specification](https://api.qdrant.tech/api-reference) to generate a client library in almost any programming language.
+
+Alternatively, utilise the [ready-made client for Python](https://github.com/qdrant/qdrant-client) or clients for other programming languages with additional functionality.",features/easy-to-use.md
+"---
+
+title: Filterable
+
+icon:
+
+ - url: /features/filter.svg
+
+weight: 30
+
+sitemapExclude: True
+
+---
+
+
+
+Supports additional payload associated with vectors.
+
+Not only stores payload but also allows filtering results based on payload values. \
+
+Unlike Elasticsearch post-filtering, Qdrant guarantees all relevant vectors are retrieved.
+",features/filterable.md
+"---
+
+title: Fast and Accurate
+
+icon:
+
+ - url: /features/speed.svg
+
+ - url: /features/target.svg
+
+weight: 20
+
+sitemapExclude: True
+
+---
+
+
+
+Implement a unique custom modification of the [HNSW algorithm](https://arxiv.org/abs/1603.09320) for Approximate Nearest Neighbor Search.
+
+Search with a [State-of-the-Art speed](https://github.com/qdrant/benchmark/tree/master/search_benchmark) and apply search filters without [compromising on results](https://blog.vasnetsov.com/posts/categorical-hnsw/).
+",features/fast-and-accurate.md
+"---
+
+title: ""Make the most of your Unstructured Data""
+
+icon:
+
+sitemapExclude: True
+
+_build:
+
+ render: never
+
+ list: never
+
+ publishResources: false
+
+cascade:
+
+ _build:
+
+ render: never
+
+ list: never
+
+ publishResources: false
+
+---
+
+
+
+Qdrant is a vector database & vector similarity search engine.
+
+It deploys as an API service providing search for the nearest high-dimensional vectors. With Qdrant, embeddings or neural network encoders can be turned into full-fledged applications for matching, searching, recommending, and much more!
+",features/_index.md
+"---
+
+title: Are you contributing to our code, content, or community?
+
+button:
+
+ url: https://forms.gle/q4fkwudDsy16xAZk8
+
+ text: Become a Star
+
+image:
+
+ src: /img/stars.svg
+
+ alt: Stars
+
+sitemapExclude: true
+
+---
+",stars/stars-get-started.md
+"---
+
+title: Meet our Stars
+
+cards:
+
+ - id: 0
+
+ image:
+
+ src: /img/stars/robert-caulk.jpg
+
+ alt: Robert Caulk Photo
+
+ name: Robert Caulk
+
+ position: Founder of Emergent Methods
+
+ description: Robert is working with a team on AskNews.app to adaptively enrich, index, and report on over 1 million news articles per day
+
+ - id: 1
+
+ image:
+
+ src: /img/stars/joshua-mo.jpg
+
+ alt: Joshua Mo Photo
+
+ name: Joshua Mo
+
+ position: DevRel at Shuttle.rs
+
+ description: Hey there! I primarily use Rust and am looking forward to contributing to the Qdrant community!
+
+ - id: 2
+
+ image:
+
+ src: /img/stars/nick-khami.jpg
+
+ alt: Nick Khami Photo
+
+ name: Nick Khami
+
+ position: Founder & Product Engineer
+
+ description: Founder and product engineer at Trieve and has been using Qdrant since late 2022
+
+ - id: 3
+
+ image:
+
+ src: /img/stars/owen-colegrove.jpg
+
+ alt: Owen Colegrove Photo
+
+ name: Owen Colegrove
+
+ position: Founder of SciPhi
+
+ description: Physics PhD, Quant @ Citadel and Founder at SciPhi
+
+ - id: 4
+
+ image:
+
+ src: /img/stars/m-k-pavan-kumar.jpg
+
+ alt: M K Pavan Kumar Photo
+
+ name: M K Pavan Kumar
+
+ position: Data Scientist and Lead GenAI
+
+ description: A seasoned technology expert with 14 years of experience in full stack development, cloud solutions, & artificial intelligence
+
+ - id: 5
+
+ image:
+
+ src: /img/stars/niranjan-akella.jpg
+
+ alt: Niranjan Akella Photo
+
+ name: Niranjan Akella
+
+ position: Scientist by Heart & AI Engineer
+
+ description: I build & deploy AI models like LLMs, Diffusion Models & Vision Models at scale
+
+ - id: 6
+
+ image:
+
+ src: /img/stars/bojan-jakimovski.jpg
+
+ alt: Bojan Jakimovski Photo
+
+ name: Bojan Jakimovski
+
+ position: Machine Learning Engineer
+
+ description: I'm really excited to show the power of the Qdrant as vector database
+
+ - id: 7
+
+ image:
+
+ src: /img/stars/haydar-kulekci.jpg
+
+ alt: Haydar KULEKCI Photo
+
+ name: Haydar KULEKCI
+
+ position: Senior Software Engineer
+
+ description: I am a senior software engineer and consultant with over 10 years of experience in data management, processing, and software development.
+
+ - id: 8
+
+ image:
+
+ src: /img/stars/nicola-procopio.jpg
+
+ alt: Nicola Procopio Photo
+
+ name: Nicola Procopio
+
+ position: Senior Data Scientist @ Fincons Group
+
+ description: Nicola, a data scientist and open-source enthusiast since 2009, has used Qdrant since 2023. He developed fastembed for Haystack, vector search for Cheshire Cat A.I., and shares his expertise through articles, tutorials, and talks.
+
+ - id: 9
+
+ image:
+
+ src: /img/stars/eduardo-vasquez.jpg
+
+ alt: Eduardo Vasquez Photo
+
+ name: Eduardo Vasquez
+
+ position: Data Scientist and MLOps Engineer
+
+ description: I am a Data Scientist and MLOps Engineer exploring generative AI and LLMs, creating YouTube content on RAG workflows and fine-tuning LLMs. I hold an MSc in Statistics and Data Science.
+
+ - id: 10
+
+ image:
+
+ src: /img/stars/benito-martin.jpg
+
+ alt: Benito Martin Photo
+
+ name: Benito Martin
+
+ position: Independent Consultant | Data Science, ML and AI Project Implementation | Teacher and Course Content Developer
+
+ description: Over the past year, Benito developed MLOps and LLM projects. Based in Switzerland, Benito continues to advance his skills.
+
+ - id: 11
+
+ image:
+
+ src: /img/stars/nirant-kasliwal.jpg
+
+ alt: Nirant Kasliwal Photo
+
+ name: Nirant Kasliwal
+
+ position: FastEmbed Creator
+
+ description: I'm a Machine Learning consultant specializing in NLP and Vision systems for early-stage products. I've authored an NLP book recommended by Dr. Andrew Ng to Stanford's CS230 students and maintain FastEmbed at Qdrant for speed.
+
+ - id: 12
+
+ image:
+
+ src: /img/stars/denzell-ford.jpg
+
+ alt: Denzell Ford Photo
+
+ name: Denzell Ford
+
+ position: Founder at Trieve, has been using Qdrant since late 2022.
+
+ description: Denzell Ford, the founder of Trieve, has been using Qdrant since late 2022. He's passionate about helping people in the community.
+
+ - id: 13
+
+ image:
+
+ src: /img/stars/pavan-nagula.jpg
+
+ alt: Pavan Nagula Photo
+
+ name: Pavan Nagula
+
+ position: Data Scientist | Machine Learning and Generative AI
+
+ description: I'm Pavan, a data scientist specializing in AI, ML, and big data analytics. I love experimenting with new technologies in the AI and ML space, and Qdrant is a place where I've seen such innovative implementations recently.
+
+sitemapExclude: true
+
+---
+
+
+",stars/stars-list.md
+"---
+
+title: Everything you need to extend your current reach to be the voice of the developer community and represent Qdrant
+
+benefits:
+
+- id: 0
+
+ icon:
+
+ src: /icons/outline/training-blue.svg
+
+ alt: Training
+
+ title: Training
+
+ description: You will be equipped with the assets and knowledge to organize and execute successful talks and events. Get access to our content library with slide decks, templates, and more.
+
+- id: 1
+
+ icon:
+
+ src: /icons/outline/award-blue.svg
+
+ alt: Award
+
+ title: Recognition
+
+ description: Win a certificate and be featured on our website page. Plus, enjoy the distinction of receiving exclusive Qdrant swag.
+
+- id: 2
+
+ icon:
+
+ src: /icons/outline/travel-blue.svg
+
+ alt: Travel
+
+ title: Travel
+
+ description: Benefit from a dedicated travel fund for speaking engagements at developer conferences.
+
+- id: 3
+
+ icon:
+
+ src: /icons/outline/star-ticket-blue.svg
+
+ alt: Star ticket
+
+ title: Beta-tests
+
+ description: Get a front-row seat to the future of Qdrant with opportunities to beta-test new releases and access our detailed product roadmap.
+
+sitemapExclude: true
+
+---
+
+
+",stars/stars-benefits.md
+"---
+
+title: Join our growing community
+
+cards:
+
+- id: 0
+
+ icon:
+
+ src: /img/stars-marketplaces/github.svg
+
+ alt: Github icon
+
+ title: Stars
+
+ statsToUse: githubStars
+
+ description: Join our GitHub community and contribute to the future of vector databases.
+
+ link:
+
+ text: Start Contributing
+
+ url: https://github.com/qdrant/qdrant
+
+- id: 1
+
+ icon:
+
+ src: /img/stars-marketplaces/discord.svg
+
+ alt: Discord icon
+
+ title: Members
+
+ statsToUse: discordMembers
+
+ description: Discover and chat on a vibrant community of developers working on the future of AI.
+
+ link:
+
+ text: Join our Conversations
+
+ url: https://qdrant.to/discord
+
+- id: 2
+
+ icon:
+
+ src: /img/stars-marketplaces/twitter.svg
+
+ alt: Twitter icon
+
+ title: Followers
+
+ statsToUse: twitterFollowers
+
+ description: Join us on X, participate and find out about our updates and releases before anyone else.
+
+ link:
+
+ text: Spread the Word
+
+ url: https://qdrant.to/twitter
+
+sitemapExclude: true
+
+---
+
+
+",stars/stars-marketplaces.md
+"---
+
+title: About Qdrant Stars
+
+descriptionFirstPart: Qdrant Stars is an exclusive program for the top contributors and evangelists inside the Qdrant community.
+
+descriptionSecondPart: These are the experts responsible for leading community discussions, creating high-quality content, and participating in Qdrant’s events and meetups.
+
+image:
+
+ src: /img/stars-about.png
+
+ alt: Stars program
+
+sitemapExclude: true
+
+---
+
+
+",stars/stars-about.md
+"---
+
+title: You are already a star in our community!
+
+description: The Qdrant Stars program is here to take that one step further.
+
+button:
+
+ text: Become a Star
+
+ url: https://forms.gle/q4fkwudDsy16xAZk8
+
+image:
+
+ src: /img/stars-hero.svg
+
+ alt: Stars
+
+sitemapExclude: true
+
+---
+
+
+",stars/stars-hero.md
+"---
+
+title: Qdrant Stars
+
+description: Qdrant Stars - Our Ambassador Program
+
+build:
+
+ render: always
+
+cascade:
+
+- build:
+
+ list: local
+
+ publishResources: false
+
+ render: never
+
+---
+
+
+",stars/_index.md
+"---
+
+title: Qdrant Private Cloud. Run Qdrant On-Premise.
+
+description: Effortlessly deploy and manage your enterprise-ready vector database fully on-premise, enhancing security for AI-driven applications.
+
+contactUs:
+
+ text: Contact us
+
+ url: /contact-sales/
+
+sitemapExclude: true
+
+---
+
+
+",private-cloud/private-cloud-hero.md
+"---
+
+title: Qdrant Private Cloud offers a dedicated, on-premise solution that guarantees supreme data privacy and sovereignty.
+
+description: Designed for enterprise-grade demands, it provides a seamless management experience for your vector database, ensuring optimal performance and security for vector search and AI applications.
+
+image:
+
+ src: /img/private-cloud-data-privacy.svg
+
+ alt: Private cloud data privacy
+
+sitemapExclude: true
+
+---
+
+
+",private-cloud/private-cloud-about.md
+"---
+
+content: To learn more about Qdrant Private Cloud, please contact our team.
+
+contactUs:
+
+ text: Contact us
+
+ url: /contact-sales/
+
+sitemapExclude: true
+
+---
+
+
+",private-cloud/private-cloud-get-contacted.md
+"---
+
+title: private-cloud
+
+description: private-cloud
+
+build:
+
+ render: always
+
+cascade:
+
+- build:
+
+ list: local
+
+ publishResources: false
+
+ render: never
+
+---
+",private-cloud/_index.md
+"---
+
+draft: false
+
+title: Building a High-Performance Entity Matching Solution with Qdrant -
+
+ Rishabh Bhardwaj | Vector Space Talks
+
+slug: entity-matching-qdrant
+
+short_description: Rishabh Bhardwaj, a Data Engineer at HRS Group, discusses
+
+ building a high-performance hotel matching solution with Qdrant.
+
+description: Rishabh Bhardwaj, a Data Engineer at HRS Group, discusses building
+
+ a high-performance hotel matching solution with Qdrant, addressing data
+
+ inconsistency, duplication, and real-time processing challenges.
+
+preview_image: /blog/from_cms/rishabh-bhardwaj-cropped.png
+
+date: 2024-01-09T11:53:56.825Z
+
+author: Demetrios Brinkmann
+
+featured: false
+
+tags:
+
+ - Vector Space Talk
+
+ - Entity Matching Solution
+
+ - Real Time Processing
+
+---
+
+> *""When we were building proof of concept for this solution, we initially started with Postgres. But after some experimentation, we realized that it basically does not perform very well in terms of recall and speed... then we came to know that Qdrant performs a lot better as compared to other solutions that existed at the moment.”*\
+
+> -- Rishabh Bhardwaj
+
+>
+
+
+
+How does the HNSW (Hierarchical Navigable Small World) algorithm benefit the solution built by Rishabh?
+
+
+
+Rishabh, a Data Engineer at HRS Group, excels in designing, developing, and maintaining data pipelines and infrastructure crucial for data-driven decision-making processes. With extensive experience, Rishabh brings a profound understanding of data engineering principles and best practices to the role. Proficient in SQL, Python, Airflow, ETL tools, and cloud platforms like AWS and Azure, Rishabh has a proven track record of delivering high-quality data solutions that align with business needs. Collaborating closely with data analysts, scientists, and stakeholders at HRS Group, Rishabh ensures the provision of valuable data and insights for informed decision-making.
+
+
+
+***Listen to the episode on [Spotify](https://open.spotify.com/episode/3IMIZljXqgYBqt671eaR9b?si=HUV6iwzIRByLLyHmroWTFA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/tDWhMAOyrcE).***
+
+
+
+
+
+
+
+
+
+
+
+## **Top Takeaways:**
+
+
+
+Data inconsistency, duplication, and real-time processing challenges? Rishabh Bhardwaj, Data Engineer at HRS Group has the solution!
+
+
+
+In this episode, Rishabh dives into the nitty-gritty of creating a high-performance hotel matching solution with Qdrant, covering everything from data inconsistency challenges to the speed and accuracy enhancements achieved through the HNSW algorithm.
+
+
+
+5 Keys to Learning from the Episode:
+
+
+
+1. Discover the importance of data consistency and the challenges it poses when dealing with multiple sources and languages.
+
+2. Learn how Qdrant, an open-source vector database, outperformed other solutions and provided an efficient solution for high-speed matching.
+
+3. Explore the unique modification of the HNSW algorithm in Qdrant and how it optimized the performance of the solution.
+
+4. Dive into the crucial role of geofiltering and how it ensures accurate matching based on hotel locations.
+
+5. Gain insights into the considerations surrounding GDPR compliance and the secure handling of hotel data.
+
+
+
+> Fun Fact: Did you know that Rishabh and his team experimented with multiple transformer models to find the best fit for their entity resolution use case? Ultimately, they found that the Mini LM model struck the perfect balance between speed and accuracy. Talk about a winning combination!
+
+>
+
+
+
+## Show Notes:
+
+
+
+02:24 Data from different sources is inconsistent and complex.\
+
+05:03 Using Postgres for proof, switched to Qdrant for better results\
+
+09:16 Geofiltering is crucial for validating our matches.\
+
+11:46 Insights on performance metrics and benchmarks.\
+
+16:22 We experimented with different values and found the desired number.\
+
+19:54 We experimented with different models and found the best one.\
+
+21:01 API gateway connects multiple clients for entity resolution.\
+
+24:31 Multiple languages supported, using transcript API for accuracy.
+
+
+
+## More Quotes from Rishabh:
+
+
+
+*""One of the major challenges is the data inconsistency.”*\
+
+-- Rishabh Bhardwaj
+
+
+
+*""So the only thing of how to know that which model would work for us is to again experiment with the models on our own data sets. But after doing those experiments, we realized that this is the best model that offers the best balance between speed and accuracy cool of the embeddings.”*\
+
+-- Rishabh Bhardwaj
+
+
+
+*""Qdrant basically optimizes a lot using for the compute resources and this also helped us to scale the whole infrastructure in a really efficient manner.”*\
+
+-- Rishabh Bhardwaj
+
+
+
+## Transcript:
+
+Demetrios:
+
+Hello, fellow travelers in vector space. Dare, I call you astronauts? Today we've got an incredible conversation coming up with Rishabh, and I am happy that you all have joined us. Rishabh, it's great to have you here, man. How you doing?
+
+
+
+Rishabh Bhardwaj:
+
+Thanks for having me, Demetrios. I'm doing really great.
+
+
+
+Demetrios:
+
+Cool. I love hearing that. And I know you are in India. It is a little bit late there, so I appreciate you taking the time to come on the Vector space talks with us today. You've got a lot of stuff that you're going to be talking about. For anybody that does not know you, you are a data engineer at Hrs Group, and you're responsible for designing, developing, and maintaining data pipelines and infrastructure that supports the company. I am excited because today we're going to be talking about building a high performance hotel matching solution with Qdrant. Of course, there's a little kicker there.
+
+
+
+Demetrios:
+
+We want to get into how you did that and how you leveraged Qdrant. Let's talk about it, man. Let's get into it. I want to know give us a quick overview of what exactly this is. I gave the title, but I think you can tell us a little bit more about building this high performance hotel matching solution.
+
+
+
+Rishabh Bhardwaj:
+
+Definitely. So to start with, a brief description about the project. So we have some data in our internal databases, and we ingest a lot of data on a regular basis from different sources. So Hrs is basically a global tech company focused on business travel, and we have one of the most used hotel booking portals in Europe. So one of the major things that is important for customer satisfaction is the content that we provide them on our portals. Right. So the issue or the key challenges that we have is basically with the data itself that we ingest from different sources. One of the major challenges is the data inconsistency.
+
+
+
+Rishabh Bhardwaj:
+
+So different sources provide data in different formats, not only in different formats. It comes in multiple languages as well. So almost all the languages being used across Europe and also other parts of the world as well. So, Majorly, the data is coming across 20 different languages, and it makes it really difficult to consolidate and analyze this data. And this inconsistency in data often leads to many errors in data interpretation and decision making as well. Also, there is a challenge of data duplication, so the same piece of information can be represented differently across various sources, which could then again lead to data redundancy. And identifying and resolving these duplicates is again a significant challenge. Then the last challenge I can think about is that this data processing happens in real time.
+
+
+
+Rishabh Bhardwaj:
+
+So we have a constant influx of data from multiple sources, and processing and updating this information in real time is a really daunting task. Yeah.
+
+
+
+Demetrios:
+
+And when you are talking about this data duplication, are you saying things like, it's the same information in French and German? Or is it something like it's the same column, just a different way in like, a table?
+
+
+
+Rishabh Bhardwaj:
+
+Actually, it is both the cases, so the same entities can be coming in multiple languages. And then again, second thing also wow.
+
+
+
+Demetrios:
+
+All right, cool. Well, that sets the scene for us. Now, I feel like you brought some slides along. Feel free to share those whenever you want. I'm going to fire away the first question and ask about this. I'm going to go straight into Qdrant questions and ask you to elaborate on how the unique modification of Qdrant of the HNSW algorithm benefits your solution. So what are you doing there? How are you leveraging that? And how also to add another layer to this question, this ridiculously long question that I'm starting to get myself into, how do you handle geo filtering based on longitude and latitude? So, to summarize my lengthy question, let's just start with the HNSW algorithm. How does that benefit your solution?
+
+
+
+Rishabh Bhardwaj:
+
+Sure. So to begin with, I will give you a little backstory. So when we were building proof of concept for this solution, we initially started with Postgres, because we had some Postgres databases lying around in development environments, and we just wanted to try out and build a proof of concept. So we installed an extension called Pgvector. And at that point of time, it used to have IVF Flat indexing approach. But after some experimentation, we realized that it basically does not perform very well in terms of recall and speed. Basically, if we want to increase the speed, then we would suffer a lot on basis of recall. Then we started looking for native vector databases in the market, and then we saw some benchmarks and we came to know that Qdrant performs a lot better as compared to other solutions that existed at the moment.
+
+
+
+Rishabh Bhardwaj:
+
+And also, it was open source and really easy to host and use. We just needed to deploy a docker image in EC two instance and we can really start using it.
+
+
+
+Demetrios:
+
+Did you guys do your own benchmarks too? Or was that just like, you looked, you saw, you were like, all right, let's give this thing a spin.
+
+
+
+Rishabh Bhardwaj:
+
+So while deciding initially we just looked at the publicly available benchmarks, but later on, when we started using Qdrant, we did our own benchmarks internally. Nice.
+
+
+
+Demetrios:
+
+All right.
+
+
+
+Rishabh Bhardwaj:
+
+We just deployed a docker image of Qdrant in one of the EC Two instances and started experimenting with it. Very soon we realized that the HNSW indexing algorithm that it uses to build the indexing for the vectors, it was really efficient. We noticed that as compared to the PG Vector IVF Flat approach, it was around 16 times faster. And it didn't mean that it was not that accurate. It was actually 5% more accurate as compared to the previous results. So hold up.
+
+
+
+Demetrios:
+
+16 times faster and 5% more accurate. And just so everybody out there listening knows we're not paying you to say this, right?
+
+
+
+Rishabh Bhardwaj:
+
+No, not at all.
+
+
+
+Demetrios:
+
+All right, keep going. I like it.
+
+
+
+Rishabh Bhardwaj:
+
+Yeah. So initially, during the experimentations, we begin with the default values for the HNSW algorithm that Qdrant ships with. And these benchmarks that I just told you about, it was based on those parameters. But as our use cases evolved, we also experimented on multiple values of basically M and EF construct that Qdrant allow us to specify in the indexing algorithm.
+
+
+
+Demetrios:
+
+Right.
+
+
+
+Rishabh Bhardwaj:
+
+So also the other thing is, Qdrant also provides the functionality to specify those parameters while making the search as well. So it does not mean if we build the index initially, we only have to use those specifications. We can again specify them during the search as well.
+
+
+
+Demetrios:
+
+Okay.
+
+
+
+Rishabh Bhardwaj:
+
+Yeah. So some use cases we have requires 100% accuracy. It means we do not need to worry about speed at all in those use cases. But there are some use cases in which speed is really important when we need to match, like, a million scale data set. In those use cases, speed is really important, and we can adjust a little bit on the accuracy part. So, yeah, this configuration that Qdrant provides for indexing really benefited us in our approach.
+
+
+
+Demetrios:
+
+Okay, so then layer into that all the fun with how you're handling geofiltering.
+
+
+
+Rishabh Bhardwaj:
+
+So geofiltering is also a very important feature in our solution because the entities that we are dealing with in our data majorly consist of hotel entities. Right. And hotel entities often comes with the geocordinates. So even if we match it using one of the Embedding models, then we also need to make sure that whatever the model has matched with a certain cosine similarity is also true. So in order to validate that, we use geofiltering, which also comes in stacked with Qdrant. So we provide geocordinate data from our internal databases, and then we match it from what we get from multiple sources as well. And it also has a radius parameter, which we can provide to tune in. How much radius do we want to take in account in order for this to be filterable?
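+
+As a rough illustration (not code from the talk), a geo-radius condition in the Qdrant Python client could look like the sketch below; the collection name, payload field, coordinates, and vector size are assumptions made for the example:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url='http://localhost:6333')
+
+# Restrict neighbours to hotels whose stored coordinates fall within a radius
+# (in meters) around the reference hotel; 'location' is a hypothetical payload field.
+geo_filter = models.Filter(
+    must=[
+        models.FieldCondition(
+            key='location',
+            geo_radius=models.GeoRadius(
+                center=models.GeoPoint(lat=52.52, lon=13.405),
+                radius=1000.0,
+            ),
+        )
+    ]
+)
+
+hits = client.search(
+    collection_name='hotels',      # hypothetical collection name
+    query_vector=[0.0] * 384,      # embedding of the hotel description
+    query_filter=geo_filter,
+    limit=5,
+)
+```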
+
+
+
+Demetrios:
+
+Yeah. Makes sense. I would imagine that knowing where the hotel location is is probably a very big piece of the puzzle that you're serving up for people. So as you were doing this, what are some things that came up that were really important? I know you talked about working with Europe. There's a lot of GDPR concerns. Was there, like, privacy considerations that you had to address? Was there security considerations when it comes to handling hotel data? Vector, Embeddings, how did you manage all that stuff?
+
+
+
+Rishabh Bhardwaj:
+
+So GDPR compliance? Yes. It does play a very important role in this whole solution.
+
+
+
+Demetrios:
+
+That was meant to be a thumbs up. I don't know what happened there. Keep going. Sorry, I derailed that.
+
+
+
+Rishabh Bhardwaj:
+
+No worries. Yes. So GDPR compliance is also one of the key factors that we take in account while building this solution to make sure that nothing goes out of the compliance. We basically deployed Qdrant inside a private EC two instance, and it is also protected by an API key. And also we have built custom authentication workflows using Microsoft Azure SSO.
+
+
+
+Demetrios:
+
+I see. So there are a few things that I also want to ask, but I do want to open it up. There are people that are listening, watching live. If anyone wants to ask any questions in the chat, feel free to throw something in there and I will ask away. In the meantime, while people are typing in what they want to talk to you about, can you talk to us about any insights into the performance metrics? And really, these benchmarks that you did where you saw it was, I think you said, 16 times faster and then 5% more accurate. What did that look like? What benchmarks did you do? How did you benchmark it? All that fun stuff. And what are some things to keep in mind if others out there want to benchmark? And I guess you were just benchmarking it against Pgvector, right?
+
+
+
+Rishabh Bhardwaj:
+
+Yes, we did.
+
+
+
+Demetrios:
+
+Okay, cool.
+
+
+
+Rishabh Bhardwaj:
+
+So for benchmarking, we have some data sets that are already matched to some entities. This was done partially by humans and partially by other algorithms that we use for matching in the past. And it is already consolidated data sets, which we again used for benchmarking purposes. Then the benchmarks that I specified were only against PG vector, and we did not benchmark it any further because the speed and the accuracy that Qdrant provides, I think it is already covering our use case and it is way more faster than we thought the solution could be. So right now we did not benchmark against any other vector database or any other solution.
+
+
+
+Demetrios:
+
+Makes sense just to also get an idea in my head kind of jumping all over the place, so forgive me. The semantic components of the hotel, was it text descriptions or images or a little bit of both? Everything?
+
+
+
+Rishabh Bhardwaj:
+
+Yes. So semantic comes just from the descriptions of the hotels, and right now it does not include the images. But in future use cases, we are also considering using images as well to calculate the semantic similarity between two entities.
+
+
+
+Demetrios:
+
+Nice. Okay, cool. Good. I am a visual guy. You got slides for us too, right? If I'm not mistaken? Do you want to share those or do you want me to keep hitting you with questions? We have something from Brad in the chat and maybe before you share any slides, is there a map visualization as part of the application UI? Can you speak to what you used?
+
+
+
+Rishabh Bhardwaj:
+
+If so, not right now, but this is actually a great idea and we will try to build it as soon as possible.
+
+
+
+Demetrios:
+
+Yeah, it makes sense. Where you have the drag and you can see like within this area, you have X amount of hotels, and these are what they look like, et cetera, et cetera.
+
+
+
+Rishabh Bhardwaj:
+
+Yes, definitely.
+
+
+
+Demetrios:
+
+Awesome. All right, so, yeah, feel free to share any slides you have, otherwise I can hit you with another question in the meantime, which is I'm wondering about the configurations you used for the HNSW index in Qdrant and what were the number of edges per node and the number of neighbors to consider during the index building. All of that fun stuff that goes into the nitty gritty of it.
+
+
+
+Rishabh Bhardwaj:
+
+So should I go with the slide first or should I answer your question first?
+
+
+
+Demetrios:
+
+Probably answer the question so we don't get too far off track, and then we can hit up your slides. And the slides, I'm sure, will prompt many other questions from my side and the audience's side.
+
+
+
+Rishabh Bhardwaj:
+
+So, for HNSW configuration, we have specified the value of M, which is, I think, basically the layers as 64, and the value for EF construct is 256.
+
+
+
+Demetrios:
+
+And how did you go about that?
+
+
+
+Rishabh Bhardwaj:
+
+So we did some again, benchmarks based on the single model that we have selected, which is mini LM, L six, V two. I will talk about it later also. But we basically experimented with different values of M and EF construct, and we came to this number that this is the value that we want to go ahead with. And also when I said that in some cases, indexing is not required at all, speed is not required at all, we want to make sure that whatever we are matching is 100% accurate. In that case, the Python client for Qdrant also provides a parameter called exact, and if we specify it as true, then it basically does not use indexing and it makes a full search on the whole vector collection, basically.
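+
+For reference, here is a minimal sketch of how these settings can be expressed with the Qdrant Python client; the collection name and vector size are assumptions for the example (384 matches the MiniLM-L6-v2 output dimension):
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url='http://localhost:6333')
+
+# Build the HNSW index with the parameters mentioned above: m=64, ef_construct=256.
+client.create_collection(
+    collection_name='hotels',  # hypothetical collection name
+    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
+    hnsw_config=models.HnswConfigDiff(m=64, ef_construct=256),
+)
+
+# When accuracy matters more than speed, bypass the index with an exact search.
+hits = client.search(
+    collection_name='hotels',
+    query_vector=[0.0] * 384,  # embedding of the hotel description
+    search_params=models.SearchParams(exact=True),
+    limit=5,
+)
+```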
+
+
+
+Demetrios:
+
+Okay, so there's something for me that's pretty fascinating there on these different use cases. What else differs in the different ones? Because you have certain needs for speed or accuracy. It seems like those are the main trade offs that you're working with. What differs in the way that you set things up?
+
+
+
+Rishabh Bhardwaj:
+
+So in some cases so there are some internal databases that need to have hotel entities in a very sophisticated manner. It means it should not contain even a single duplicate entity. In those cases, accuracy is the most important thing we look at, and in some cases, for data analytics and consolidation purposes, we want speed more, but the accuracy should not be that much in value.
+
+
+
+Demetrios:
+
+So what does that look like in practice? Because you mentioned okay, when we are looking for the accuracy, we make sure that it comes through all of the different records. Right. Are there any other things in practice that you did differently?
+
+
+
+Rishabh Bhardwaj:
+
+Not really. Nothing I can think of right now.
+
+
+
+Demetrios:
+
+Okay, if anything comes up yeah, I'll remind you, but hit us with the slides, man. What do you got for the visual learners out there?
+
+
+
+Rishabh Bhardwaj:
+
+Sure. So I have an architecture diagram of what the solution looks like right now. So, this is the current architecture that we have in production. So, as I mentioned, we have deployed the Qdrant vector database in an EC Two, private EC Two instance hosted inside a VPC. And then we have some batch jobs running, which basically create Embeddings. And the source data basically first comes into S three buckets into a data lake. We do a little bit of preprocessing data cleaning and then it goes through a batch process of generating the Embeddings using the Mini LM model, mini LML six, V two. And this model is basically hosted in a SageMaker serverless inference endpoint, which allows us to not worry about servers and we can scale it as much as we want.
+
+
+
+Rishabh Bhardwaj:
+
+And it really helps us to build the Embeddings in a really fast manner.
+
+
+
+Demetrios:
+
+Why did you choose that model? Did you go through different models or was it just this one worked well enough and you went with it?
+
+
+
+Rishabh Bhardwaj:
+
+No, actually this was, I think the third or the fourth model that we tried out with. So what happens right now is if, let's say we want to perform a task such as sentence similarity and we go to the Internet and we try to find a model, it is really hard to see which model would perform best in our use case. So the only thing of how to know that which model would work for us is to again experiment with the models on our own data sets. So we did a lot of experiments. We used, I think, Mpnet model and a lot of multilingual models as well. But after doing those experiments, we realized that this is the best model that offers the best balance between speed and accuracy of the Embeddings. So we have deployed it in a serverless inference endpoint in SageMaker. And once we generate the Embeddings in a glue job, we then store them into the vector database Qdrant.
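+
+As a rough sketch of that storage step (illustrative code, not the actual Glue job), an embedding and its filterable metadata can be written to Qdrant like this; the collection name, payload fields, and values are assumptions:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url='http://localhost:6333')
+
+# Store each hotel embedding together with the metadata later used for filtering.
+client.upsert(
+    collection_name='hotels',  # hypothetical collection name
+    points=[
+        models.PointStruct(
+            id=1,
+            vector=[0.0] * 384,  # embedding produced by the sentence-transformer endpoint
+            payload={
+                'name': 'Example Hotel',
+                'location': {'lat': 52.52, 'lon': 13.405},  # enables geo filtering
+            },
+        )
+    ],
+)
+```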
+
+
+
+Rishabh Bhardwaj:
+
+Then this part here is what goes on in the real time scenario. So, we have multiple clients, basically multiple application that would connect to an API gateway. We have exposed this API gateway in such a way that multiple clients can connect to it and they can use this entity resolution service according to their use cases. And we take in different parameters. Some are mandatory, some are not mandatory, and then they can use it based on their use case. The API gateway is connected to a lambda function which basically performs search on Qdrant vector database using the same Embeddings that can be generated from the same model that we hosted in the serverless inference endpoint. So, yeah, this is how the diagram looks right now. It did not used to look like this sometime back, but we have evolved it, developed it, and now we have got to this point where it is really scalable because most of the infrastructure that we have used here is serverless and it can be scaled up to any number of requests that you want.
+
+
+
+Demetrios:
+
+What did you have before that was the MVP.
+
+
+
+Rishabh Bhardwaj:
+
+So instead of this one, we had a real time inference endpoint which basically limited us to some number of requests that we had preset earlier while deploying the model. So this was one of the bottlenecks and then lambda function was always there, I think this one and also I think in place of this Qdrant vector database, as I mentioned, we had Postgres. So yeah, that was also a limitation because it used to use a lot of compute capacity within the EC two instance as compared to Qdrant. Qdrant basically optimizes the usage of the compute resources a lot and this also helped us to scale the whole infrastructure in a really efficient manner.
+
+
+
+Demetrios:
+
+Awesome. Cool. This is fascinating. From my side, I love seeing what you've done and how you went about iterating on the architecture and starting off with something that you had up and running and then optimizing it. So this project has been how long has it been in the making and what has the time to market been like that first MVP from zero to one and now it feels like you're going to one to infinity by making it optimized. What's the time frames been here?
+
+
+
+Rishabh Bhardwaj:
+
+I think we started this in the month of May this year. Now it's like five to six months already. So the first working solution that we built was in around one and a half months and then from there onwards we have tried to iterate it to make it better and better.
+
+
+
+Demetrios:
+
+Cool. Very cool. Some great questions come through in the chat. Do you have multiple language support for hotel names? If so, did you see any issues with such mappings?
+
+
+
+Rishabh Bhardwaj:
+
+Yes, we do have support for multiple languages and we do not do it using currently using the multilingual models because what we realized is the multilingual models are built on journal sentences and not based it is not trained on entities like names, hotel names and traveler names, et cetera. So when we experimented with the multilingual models it did not provide much satisfactory results. So we used transcript API from Google and it is able to basically translate a lot of languages across that we have across the data and it really gives satisfactory results in terms of entity resolution.
+
+
+
+Demetrios:
+
+Awesome. What other transformers were considered for the evaluation?
+
+
+
+Rishabh Bhardwaj:
+
+The ones I remember from top of my head are Mpnet, then there is a Chinese model called Text to VEC, Shiba something and Bert uncased, if I remember correctly. Yeah, these were some of the models.
+
+
+
+Demetrios:
+
+That we considered and nothing stood out that worked that well or was it just that you had to make trade offs on all of them?
+
+
+
+Rishabh Bhardwaj:
+
+So in terms of accuracy, Mpnet was a little bit better than Mini LM but then again it was a lot slower than the Mini LM model. It was around five times slower than the Mini LM model, so it was not a big trade off to give up with. So we decided to go ahead with Mini LM.
+
+
+
+Demetrios:
+
+Awesome. Well, dude, this has been pretty enlightening. I really appreciate you coming on here and doing this. If anyone else has any questions for you, we'll leave all your information on where to get in touch in the chat. Rishabh, thank you so much. This is super cool. I appreciate you coming on here. Anyone that's listening, if you want to come onto the vector space talks, feel free to reach out to me and I'll make it happen.
+
+
+
+Demetrios:
+
+This is really cool to see the different work that people are doing and how you all are evolving the game, man. I really appreciate this.
+
+
+
+Rishabh Bhardwaj:
+
+Thank you, Demetrios. Thank you for inviting me and have a nice day.",blog/building-a-high-performance-entity-matching-solution-with-qdrant-rishabh-bhardwaj-vector-space-talks-005.md
+"---
+
+draft: false
+
+preview_image: /blog/from_cms/inception.png
+
+sitemapExclude: true
+
+title: Qdrant has joined NVIDIA Inception Program
+
+slug: qdrant-joined-nvidia-inception-program
+
+short_description: Recently Qdrant has become a member of the NVIDIA Inception.
+
+description: Along with the various opportunities it gives, we are the most
+
+ excited about GPU support since it is an essential feature in Qdrant's
+
+ roadmap. Stay tuned for our new updates.
+
+date: 2022-04-04T12:06:36.819Z
+
+author: Alyona Kavyerina
+
+featured: false
+
+author_link: https://www.linkedin.com/in/alyona-kavyerina/
+
+tags:
+
+ - Corporate news
+
+ - NVIDIA
+
+categories:
+
+ - News
+
+---
+
+Recently we've become a member of NVIDIA Inception, a program that helps boost the evolution of technology startups through access to their cutting-edge technology and experts, connects startups with venture capitalists, and provides marketing support.
+
+
+
+Along with the various opportunities it gives, we are the most excited about GPU support since it is an essential feature in Qdrant's roadmap.
+
+Stay tuned for our new updates.",blog/qdrant-has-joined-nvidia-inception-program.md
+"---
+
+draft: false
+
+title: ""Kairoswealth & Qdrant: Transforming Wealth Management with AI-Driven Insights and Scalable Vector Search""
+
+short_description: ""Transforming wealth management with AI-driven insights and scalable vector search.""
+
+description: ""Enhancing wealth management using AI-driven insights and efficient vector search for improved recommendations and scalability.""
+
+preview_image: /blog/case-study-kairoswealth/preview.png
+
+social_preview_image: /blog/case-study-kairoswealth/preview.png
+
+date: 2024-07-10T00:02:00Z
+
+author: Qdrant
+
+featured: false
+
+tags:
+
+ - Kairoswealth
+
+ - Vincent Teyssier
+
+ - AI-Driven Insights
+
+ - Performance Scalability
+
+ - Multi-Tenancy
+
+ - Financial Recommendations
+
+---
+
+
+
+![Kairoswealth overview](/blog/case-study-kairoswealth/image2.png)
+
+
+
+### **About Kairoswealth**
+
+
+
+[Kairoswealth](https://kairoswealth.com/) is a comprehensive wealth management platform designed to provide users with a holistic view of their financial portfolio. The platform offers access to unique financial products and automates back-office operations through its AI assistant, Gaia.
+
+
+
+![Dashboard Kairoswealth](/blog/case-study-kairoswealth/image3.png)
+
+
+
+### **Motivations for Adopting a Vector Database**
+
+
+
+“At Kairoswealth we encountered several use cases necessitating the ability to run similarity queries on large datasets. Key applications included product recommendations and retrieval-augmented generation (RAG),” says [Vincent Teyssier](https://www.linkedin.com/in/vincent-teyssier/), Chief Technology & AI Officer at Kairoswealth. These needs drove the search for a more robust and scalable vector database solution.
+
+
+
+### **Challenges with Previous Solutions**
+
+
+
+“We faced several critical showstoppers with our previous vector database solution, which led us to seek an alternative,” says Teyssier. These challenges included:
+
+
+
+- **Performance Scalability:** Significant performance degradation occurred as more data was added, despite various optimizations.
+
+- **Robust Multi-Tenancy:** The previous solution struggled with multi-tenancy, impacting performance.
+
+- **RAM Footprint:** High memory consumption was an issue.
+
+
+
+### **Qdrant Use Cases at Kairoswealth**
+
+
+
+Kairoswealth leverages Qdrant for several key use cases:
+
+
+
+- **Internal Data RAG:** Efficiently handling internal RAG use cases.
+
+- **Financial Regulatory Reports RAG:** Managing and generating financial reports.
+
+- **Recommendations:** Enhancing the accuracy and efficiency of recommendations with the Kairoswealth platform.
+
+
+
+![Stock recommendation](/blog/case-study-kairoswealth/image1.png)
+
+
+
+### **Why Kairoswealth Chose Qdrant**
+
+
+
+Some of the key reasons, why Kairoswealth landed on Qdrant as the vector database of choice are:
+
+
+
+1. **High Performance with 2.4M Vectors:** “Qdrant efficiently handled the indexing of 1.2 million vectors with 16 metadata fields each, maintaining high performance with no degradation. Similarity queries and scrolls run in less than 0.3 seconds. When we doubled the dataset to 2.4 million vectors, performance remained consistent. So we decided to double that to 2.4M vectors, and it's as if we were inserting our first vector!” says Teyssier.
+
+2. **8x Memory Efficiency:** The database storage size with Qdrant was eight times smaller than the previous solution, enabling the deployment of the entire dataset on smaller instances and saving significant infrastructure costs.
+
+3. **Embedded Capabilities:** “Beyond simple search and similarity, Qdrant hosts a bunch of very nice features around recommendation engines, adding positive and negative examples for better spacial narrowing, efficient multi-tenancy, and many more,” says Teyssier.
+
+4. **Support and Community:** “The Qdrant team, led by Andre Zayarni, provides exceptional support and has a strong passion for data engineering,” notes Teyssier, “the team's commitment to open-source and their active engagement in helping users, from beginners to veterans, is highly valued by Kairoswealth.”
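+
+As a rough sketch of the recommendation capability mentioned in point 3 (illustrative only, not Kairoswealth's code), Qdrant's Python client exposes a recommend call that takes positive and negative example points:
+
+```python
+from qdrant_client import QdrantClient
+
+client = QdrantClient(url='http://localhost:6333')
+
+# Recommend items similar to the positive examples while steering away from the negative one.
+recommendations = client.recommend(
+    collection_name='products',  # illustrative collection name
+    positive=[101, 205],         # IDs of items the user engaged with
+    negative=[42],               # ID of an item to move away from
+    limit=10,
+)
+```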
+
+
+
+### **Conclusion**
+
+
+
+Kairoswealth's transition to Qdrant has enabled them to overcome significant challenges related to performance, scalability, and memory efficiency, while also benefiting from advanced features and robust support. This partnership positions Kairoswealth to continue innovating in the wealth management sector, leveraging the power of AI to deliver superior services to their clients.
+
+
+
+### **Future Roadmap for Kairoswealth**
+
+
+
+Kairoswealth is seizing the opportunity to disrupt the wealth management sector, which has traditionally been underserved by technology. For example, they are developing the Kairos Terminal, a natural language interface that translates user queries into OpenBB commands (a set of tools for financial analysis and data visualization within the OpenBB Terminal). With regards to the future of the wealth management sector, Teyssier notes that “the integration of Generative AI will automate back-office tasks such as data collation, data reconciliation, and market research. This technology will also enable wealth managers to scale their services to broader segments, including affluent clients, by automating relationship management and interactions.”
+",blog/case-study-kairoswealth.md
+"---
+
+draft: false
+
+title: Vector Search for Content-Based Video Recommendation - Gladys and Samuel
+
+ from Dailymotion
+
+slug: vector-search-vector-recommendation
+
+short_description: Gladys Roch and Samuel Leonardo Gracio join us in this
+
+ episode to share their knowledge on content-based recommendation.
+
+description: Gladys Roch and Samuel Leonardo Gracio from Dailymotion, discussed
+
+ optimizing video recommendations using Qdrant's vector search alongside
+
+ challenges and solutions in content-based recommender systems.
+
+preview_image: /blog/from_cms/gladys-and-sam-bp-cropped.png
+
+date: 2024-03-19T14:08:00.190Z
+
+author: Demetrios Brinkmann
+
+featured: false
+
+tags:
+
+ - Vector Space Talks
+
+ - Vector Search
+
+ - Video Recommender
+
+ - content based recommendation
+
+---
+
+> ""*The vector search engine that we chose is Qdrant, but why did we choose it? Actually, it answers all the load constraints and the technical needs that we had. It allows us to do a fast neighbor search. It has a python API which matches the recommender tag that we have.*”\
+
+-- Gladys Roch
+
+>
+
+
+
+Gladys Roch is a French Machine Learning Engineer at Dailymotion working on recommender systems for video content.
+
+
+
+> ""*We don't have full control and at the end the cost of their solution is very high for a very low proposal. So after that we tried to benchmark other solutions and we found out that Qdrant was easier for us to implement.*”\
+
+-- Samuel Leonardo Gracio
+
+>
+
+
+
+Samuel Leonardo Gracio, a Senior Machine Learning Engineer at Dailymotion, mainly works on recommender systems and video classification.
+
+
+
+***Listen to the episode on [Spotify](https://open.spotify.com/episode/4YYASUZKcT5A90d6H2mOj9?si=a5GgBd4JTR6Yo3HBJfiejQ), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/z_0VjMZ2JY0).***
+
+
+
+
+
+
+
+
+
+
+
+## **Top takeaways:**
+
+
+
+Are you captivated by how video recommendations that are engineered to serve up your next binge-worthy content? We definitely are.
+
+
+
+Get ready to unwrap the secrets that keep millions engaged, as Demetrios chats with the brains behind the scenes of Dailymotion. This episode is packed with insights straight from ML Engineers at Dailymotion who are reshaping how we discover videos online.
+
+
+
+Here's what you’ll unbox from this episode:
+
+
+
+1. **The Mech Behind the Magic:** Understand how a robust video embedding process can change the game - from textual metadata to audio signals and beyond.
+
+2. **The Power of Multilingual Understanding:** Discover the tools that help recommend videos to a global audience, transcending language barriers.
+
+3. **Breaking the Echo Chamber:** Learn about Dailymotion's 'perspective' feature that's transforming the discovery experience for users.
+
+4. **Challenges & Triumphs:** Hear how Qdrant helps Dailymotion tackle a massive video catalog and ensure the freshest content pops on your feed.
+
+5. **Behind the Scenes with Qdrant:** Get an insider’s look at why Dailymotion entrusted their recommendation needs to Qdrant's capable hands (or should we say algorithms?).
+
+
+
+> Fun Fact: Did you know that Dailymotion juggles over 13 million recommendations daily? That's like serving up a personalized video playlist to the entire population of Greece. Every single day!
+
+>
+
+
+
+## Show notes:
+
+
+
+00:00 Vector Space Talks intro with Gladys and Samuel.\
+
+05:07 Recommender system needs vector search for recommendations.\
+
+09:29 Chose vector search engine for fast neighbor search.\
+
+13:23 Video transcript use for scalable multilingual embedding.\
+
+16:35 Transcripts prioritize over video title and tags.\
+
+17:46 Videos curated based on metadata for quality.\
+
+20:53 Qdrant setup overview for machine learning engineers.\
+
+25:25 Enhanced recommendation system improves user engagement.\
+
+29:36 Recommender system, A/B testing, collection aliases strategic.\
+
+33:03 Dailymotion's new feature diversifies video perspectives.\
+
+34:58 Exploring different perspectives and excluding certain topics.
+
+
+
+## More Quotes from Gladys and Sam:
+
+
+
+""*Basically, we're computing the embeddings and then we feed them into Qdrant, and we do that with a streaming pipeline, which means that every time, so everything is in streaming, every time a new video is uploaded or updated, if the description changes, for example, then the embedding will be computed and then it will be fed directly into Qdrant.*”\
+
+-- Gladys Roch
+
+
+
+*""We basically recommend videos to a user if other users watching the same video were watching other videos. But the problem with that is that it only works with videos where we have what we call here high signal. So videos that have at least thousands of views, some interactions, because for fresh and fresh or niche videos, we don't have enough interaction.”*\
+
+-- Samuel Leonardo Gracio
+
+
+
+*""But every time we add new videos to Dailymotion, then it's growing. So it can provide recommendation for videos with few interactions that we don't know well. So we're very happy because it led us to huge performances increase on the low signal. We did a threefold increase on the CTR, which means the number of clicks on the recommendation. So with Qdrant we were able to kind of fix our call start issues.”*\
+
+-- Gladys Roch
+
+
+
+*""The fact that you have a very cool team that helped us to implement some parts when it was difficult, I think it was definitely the thing that make us choose Qdrant instead of another solution.”*\
+
+-- Samuel Leonardo Gracio
+
+
+
+## Transcript:
+
+Demetrios:
+
+I don't know if you all realize what you got yourself into, but we are back for another edition of the Vector Space Talks. My stream is a little bit chunky and slow, so I think we're just to get into it with Gladys and Samuel from Daily motion. Thank you both for joining us. It is an honor to have you here. For everyone that is watching, please throw your questions and anything else that you want to remark about into the chat. We love chatting with you and I will jump on screen if there is something that we need to stop the presentation about and ask right away. But for now, I think you all got some screen shares you want to show us.
+
+
+
+Samuel Leonardo Gracio:
+
+Yes, exactly. So first of all, thank you for the invitation, of course. And yes, I will share my screen. We have a presentation. Excellent. Should be okay now.
+
+
+
+Demetrios:
+
+Brilliant.
+
+
+
+Samuel Leonardo Gracio:
+
+So can we start?
+
+
+
+Demetrios:
+
+I would love it. Yes, I'm excited. I think everybody else is excited too.
+
+
+
+Gladys Roch:
+
+So welcome, everybody, to our vector space talk. I'm Gladys Roch, machine learning engineer at Dailymotion.
+
+
+
+Samuel Leonardo Gracio:
+
+And I'm Samuel, senior machine learning engineer at Dailymotion.
+
+
+
+Gladys Roch:
+
+Today we're going to talk about Vector search in the context of recommendation and in particular how Qdrant. That's going to be a hard one. We actually got used to pronouncing Qdrant as a french way, so we're going to sleep a bit during this presentation, sorry, in advance, the Qdrant and how we use it for our content based recommender. So we are going to first present the context and why we needed a vector database and why we chose Qdrant, how we fit Qdrant, what we put in it, and we are quite open about the pipelines that we've set up and then we get into the results and how Qdrant helped us solve the issue that we had.
+
+
+
+Samuel Leonardo Gracio:
+
+Yeah. So first of all, I will talk about, globally, the recommendation at Dailymotion. So just a quick introduction about Dailymotion, because you're not all french, so you may not all know what Dailymotion is. So we are a video hosting platform as YouTube or TikTok, and we were founded in 2005. So it's a node company for videos and we have 400 million unique users per month. So that's a lot of users and videos and views. So that's why we think it's interesting. So Dailymotion is we can divide the product in three parts.
+
+
+
+Samuel Leonardo Gracio:
+
+So one part is the native app. As you can see, it's very similar from other apps like TikTok or Instagram reels. So you have vertical videos, you just scroll and that's it. We also have a website. So Dailymotion.com, that is our main product, historical product. So on this website you have a watching page like you can have for instance, on YouTube. And we are also a video player that you can find in most of the french websites and even in other countries. And so we have recommendation almost everywhere and different recommenders for each of these products.
+
+
+
+Gladys Roch:
+
+Okay, so that's Dailymotion. But today we're going to focus on one of our recommender systems. Actually, the machine learning engineer team handles multiple recommender systems. But the video to video recommendation is the oldest and the most used. And so it's what you can see on the screen, it's what you have the recommendation queue of videos that you can see on the side or below the videos that you're watching. And to compute these suggestions, we have multiple models running. So that's why it's a global system. This recommendation is quite important for Dailymotion.
+
+
+
+Gladys Roch:
+
+It's actually a key component. It's one of the main levers of audience generation. So for everybody who comes to the website from SEO or other ways, then that's how we generate more audience and more engagement. So it's very important in the revenue stream of the platform. So working on it is definitely a main topic of the team and that's why we are evolving on this topic all the time.
+
+
+
+Samuel Leonardo Gracio:
+
+Okay, so why would we need a vector search for this recommendation? I think we are here for that. So as many platforms and as many recommender systems, I think we have a very usual approach based on a collaborative model. So we basically recommend videos to a user if other users watching the same video were watching other videos. But the problem with that is that it only works with videos where we have what we call here high signal. So videos that have at least thousands of views, some interactions, because for fresh or niche videos, we don't have enough interaction. And we have a problem that I think all the recommender systems can have, which is a cold start issue. So this cold start issue is for new users and new videos, in fact. So if we don't have any information or interaction, it's difficult to recommend anything based on this collaborative approach.
+
+
+
+Samuel Leonardo Gracio:
+
+So the idea to solve that was to use a content based recommendation. It's also a classic solution. And the idea is when you have a very fresh video. So video, hey, in this case, a good thing to recommend when you don't have enough information is to recommend a very similar video and hope that the user will watch it also. So for that, of course, we use Qdrant and we will explain how. So yeah, the idea is to put everything in the vector space. So each video at Dailymotion will go through an embedding model. So for each video we'll get a video on embedding.
+
+
+
+Samuel Leonardo Gracio:
+
+We will describe how we do that just after and put it in a vector space. So after that we could use Qdrant to, sorry, Qdrant to query and get similar videos that we will recommend to our users.
+
+
+
+Gladys Roch:
+
+Okay, so if we have embeddings to represent our videos, then we have a vector space, but we need to be able to query this vector space and not only to query it, but to do it at scale and online because it's like a recommender facing users. So we have a few requirements. The first one is that we have a lot of videos in our catalog. So actually doing an exact neighbor search would be unreasonable, unrealistic. It's a combinatorial explosion issue, so we can't do an exact Knn. Plus we also have new videos being uploaded to Dailymotion every hour. So if we could somehow manage to do KNN and to pre compute it, it would never be up to date and it would be very expensive to recompute all the time to include all the new videos. So we need a solution that can integrate new videos all the time.
+
+
+
+Gladys Roch:
+
+And we're also at scale, we serve over 13 million recommendation each day. So it means that we need a big setup to retrieve the neighbors of many videos all day. And finally, we have users waiting for the recommendation. So it's not just pre computed and stored, and it's not just content knowledge. We are trying to provide the recommendation as fast as possible. So we have time constraints and we only have a few hundred milliseconds to compute the recommendation that we're going to show the user. So we need to be able to retrieve the close video that we'd like to propose to the user very fast. So we need to be able to navigate this vector space that we are building quite quickly.
+
+
+
+Gladys Roch:
+
+So of course we need vector search engine. That's the most easy way to do it, to be able to compute and approximate neighbor search and to do it at scale. So obviously, evidently the vector search engine that we chose is Qdrant, but why did we choose it? Actually, it answers all the load constraints and the technical needs that we had. It allows us to do a fast neighbor search. It has a python API which matches the recommender stack that we have. A very important issue for us was to be able to not only put the embeddings of the vectors in this space but also to put metadata with it to be able to get a bit more information and not just a mathematical representation of the video in this database. And actually doing that make it filterable, which means that we can retrieve neighbors of a video, but given some constraints, and it's very important for us typically for language constraints. Samuel will talk a bit more in details about that just after.
+
+
+
+Gladys Roch:
+
+But we have an embedding that is multilingual and we need to be able to filter all the videos on their language to offer more robust recommendations for our users. And also Qdrant is distributed, so it's scalable, and we needed that due to the load that I just talked about. So those are the main points that led us to choose Qdrant.
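+
+As a rough illustration of the filtered neighbor search described above, here is a minimal sketch with the Qdrant Python client. The collection name, payload field, and query vector are hypothetical placeholders, not Dailymotion's actual schema.
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url='http://localhost:6333')
+
+# Embedding of the video we want neighbors for (placeholder values).
+query_vector = [0.12, -0.07, 0.33]
+
+hits = client.search(
+    collection_name='video_embeddings',
+    query_vector=query_vector,
+    # Only keep neighbors whose payload marks them as French videos.
+    query_filter=models.Filter(
+        must=[
+            models.FieldCondition(
+                key='language',
+                match=models.MatchValue(value='fr'),
+            )
+        ]
+    ),
+    limit=10,
+)
+
+for hit in hits:
+    print(hit.id, hit.score, hit.payload)
+```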
+
+
+
+Samuel Leonardo Gracio:
+
+And also they have an amazing team.
+
+
+
+Gladys Roch:
+
+So that's another point, that would be our feedback from experience. The Qdrant team is really nice. You actually helped us put the cluster in place.
+
+
+
+Samuel Leonardo Gracio:
+
+Yeah. So what do we put in our Qdrant cluster? So how do we build our robust video embedding? I think it's really interesting. So the first point for us was to know what a video is about. It's a really tricky question, in fact. So of course, for each video uploaded on the platform, we have the video signal, so many frames representing the video, but we don't use that for our embeddings. And the reason we are not using them is that they contain a lot of information, right, but not what we want. For instance, here you have a video of an interview with LeBron James.
+
+
+
+Samuel Leonardo Gracio:
+
+But if you only use the frames, the video signal, you can't even know what he's saying, what the video is about, in fact. So we still try to use it. But in fact, the most interesting thing to represent our videos is the textual metadata. So the textual metadata, we have it for every video. So for every video uploaded on the platform, we have a video title and video description that are put by the person that uploads the video. But we also have automatically detected tags. So for instance, for this video, you could have LeBron James, and we also have subtitles that are automatically generated. So just to let you know, we do that using Whisper, which is an open source solution provided by OpenAI, and we do it at scale.
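+
+For reference, a minimal sketch of generating such a transcript with the open source Whisper package; the model size and file path are placeholder choices, and the production pipeline is certainly more involved.
+
+```python
+import whisper
+
+# Load a pre-trained Whisper model (model size is a placeholder choice).
+model = whisper.load_model('base')
+
+# Transcribe the audio track of an uploaded video (hypothetical file path).
+result = model.transcribe('uploaded_video_audio.mp3')
+
+transcript = result['text']  # full transcript, used as textual metadata
+print(transcript[:200])
+```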
+
+
+
+Samuel Leonardo Gracio:
+
+When a video is uploaded, we directly have the video transcript and we can use this information to represent our videos with just a textual embedding, which is far easier to process, and we need less compute than for frames, for instance. So the other issue for us was that we needed an embedding that could scale, that does not require too much time to compute, because we have a lot of videos, more than 400 million videos, and we have many videos uploaded every hour, so it needs to scale. We also have many languages on our platform, more than 300 languages in the videos. And even if we are a French video platform, in fact, only a third of our videos are actually in French. Most of the videos are in English or other languages such as Turkish, Spanish, Arabic, et cetera. So we needed something multilingual, which is not very easy to find. But we came up with this embedding, which is called Multilingual Universal Sentence Encoder. It's not the most famous embedding, so I think it's interesting to share it.
+
+
+
+Samuel Leonardo Gracio:
+
+It's open source, so everyone can use it. It's available on TensorFlow Hub, and I think that now it's also available on Hugging Face, so it's easy to implement and use. The good thing is that it's pre-trained, so you don't even have to fine-tune it on your data. You can, but I think it's not even required. And of course it's multilingual, although it doesn't work with every language. But still, we have the main languages that are used on our platform. It focuses on semantic similarity. And you have an example here with different video titles.
+
+
+
+Samuel Leonardo Gracio:
+
+So for instance, one about soccer, another one about movies. Even if you have another video title in another language, if it's talking about the same topic, they will have a high cosine similarity. So that's what we want. We want to be able to recommend every video that we have in our catalog, regardless of the language. And the good thing is that it's really fast. Actually, it's a few milliseconds on CPU, so it's really easy to scale. So that was a huge requirement for us.
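+
+To make the cross-language similarity concrete, here is a small sketch using the Multilingual Universal Sentence Encoder from TensorFlow Hub; the titles are invented examples.
+
+```python
+import numpy as np
+import tensorflow_hub as hub
+import tensorflow_text  # noqa: F401  # registers the SentencePiece ops the model needs
+
+# Load the pre-trained multilingual encoder from TensorFlow Hub.
+embed = hub.load('https://tfhub.dev/google/universal-sentence-encoder-multilingual/3')
+
+titles = [
+    'Lionel Messi scores twice in the final',   # English
+    'Lionel Messi marque un doublé en finale',  # French, same topic
+    'Best horror movies of 2023',               # different topic
+]
+vectors = np.asarray(embed(titles))
+
+def cosine(a, b):
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+print(cosine(vectors[0], vectors[1]))  # high: same topic across languages
+print(cosine(vectors[0], vectors[2]))  # lower: different topic
+```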
+
+
+
+Demetrios:
+
+Can we jump in here?
+
+
+
+Demetrios:
+
+There's a few questions coming through that I think are pretty worthwhile, and it's actually probably more suited to the last slide. Sameer is asking this one, actually, one more back. Sorry, the one with LeBron. Yeah, so it's really about how you understand the videos. And Sameer was wondering if you can, quote unquote, hack the understanding by putting some other tags in.
+
+
+
+Samuel Leonardo Gracio:
+
+Ah, you mean from a user perspective, like the person uploading the video, right?
+
+
+
+Demetrios:
+
+Yeah, exactly.
+
+
+
+Samuel Leonardo Gracio:
+
+You could do that before we were using transcripts, when we only used the title. But since we are mainly using transcripts now, and the tags are automatically generated on our side, it's harder. The title and description, you can put whatever you want in them. But since we have the transcript, we know the content of the video and we embed that. So the title and the description are not the priority in the embedding. So I think it's still possible, but we don't have such a use case. In fact, most of the people uploading videos are just trying to put the right title. So I think it's still possible, but with the transcript we don't have any examples like that.
+
+
+
+Samuel Leonardo Gracio:
+
+Yeah, hopefully.
+
+
+
+Demetrios:
+
+So that's awesome to think about too. It kind of leads into the next question, which is around, and this is from Juan Pablo. What do you do with videos that have no text and no meaningful audio, like TikTok or a reel?
+
+
+
+Samuel Leonardo Gracio:
+
+So for the moment, for these videos, we are only using the signal from the title, tags, description and other video metadata. And we also have a moderation team which watches the videos that end up among the most recommended videos. So we know that the videos that we recommend are mostly good videos. And for these videos that don't have an audio signal, we are forced to use the title, tags and description. So these are the videos where the risk is at the maximum for us currently. But we are also working at the moment on something using the audio signal and the frames, but not all the frames. For the moment, we don't have this solution. Right.
+
+
+
+Gladys Roch:
+
+Also, as I said, it's not just one model, we're talking about the content-based model. But if we don't have a similarity score that is high enough, or if we're just not confident about the videos that were the closest, then we will default to another model. So it's not just one model, it's a huge system.
+
+
+
+Samuel Leonardo Gracio:
+
+Yeah, and one point also, we are talking about videos with few interactions, so they are not videos at risk. I mean, they don't have a lot of views. When this content-based algo is called, they are important because they are very fresh videos, and fresh videos will have a lot of views in a few minutes. But when the collaborative model is retrained, it will be able to recommend videos based on other things than the content itself, because it will use the collaborative signal. So I'm not sure that it's a really important risk for us. But still, I think we could do some improvement on that aspect.
+
+
+
+Demetrios:
+
+So where do I apply to just watch videos all day for the content team? All right, I'll let you get back to it. Sorry to interrupt. And if anyone else has good questions.
+
+
+
+Samuel Leonardo Gracio:
+
+And I think it's good to ask your questions during the presentation, it's easier to answer. So, yeah, sorry, I was saying that we had this multilingual embedding, and just to present you our embedding pipeline. So, for each video that is uploaded or edited, because you can change the video title whenever you want, we have a Pub/Sub event that is sent to a Dataflow pipeline. So it's a streaming job: for every video we will retrieve the textual metadata, title, description, tags or transcript, preprocess it to remove some words, for instance, and then call the model to get this embedding. And then we put it in BigQuery, of course, but also in Qdrant.
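+
+As a rough sketch of the last step of such a pipeline, the write into Qdrant might look like this with the Python client; the collection name, payload fields, and IDs are hypothetical, and the real job runs inside Dataflow rather than a standalone script.
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url='http://localhost:6333')
+
+def index_video(video_id, embedding, metadata):
+    # Write (or overwrite) one video embedding and its payload into Qdrant.
+    client.upsert(
+        collection_name='video_embeddings',
+        points=[
+            models.PointStruct(
+                id=video_id,
+                vector=embedding,
+                payload=metadata,  # e.g. language, title, upload date
+            )
+        ],
+    )
+
+# Example call for a freshly uploaded (or edited) video.
+index_video(42, [0.12, -0.07, 0.33], {'language': 'fr', 'title': 'Example title'})
+```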
+
+
+
+Gladys Roch:
+
+So I'm going to present a bit our Qdrant setup. So actually all this was deployed by our DevOps team, not by us machine learning engineers. So it's an overview, and I won't go into the details because I'm not familiar with all of this, but basically, as Samuel said, we're computing the embeddings and then we feed them into Qdrant, and we do that with a streaming pipeline, which means that everything is in streaming: every time a new video is uploaded or updated, if the description changes, for example, then the embedding will be computed and then it will be fed directly into Qdrant. And on the other hand, our recommender queries the Qdrant vector space through a gRPC ingress. And actually Qdrant is running on six pods that are using ARM nodes. And you have the specificities of which type of nodes we're using there, if you're interested. But basically that's the setup. And what is interesting is that our recommendation stack, for now, is on premise, which means it's running on Dailymotion servers, not on Google Kubernetes Engine, whereas Qdrant is on GKE.
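+
+For completeness, connecting to a Qdrant cluster over gRPC with the Python client looks roughly like this; the hostname is a placeholder for whatever the ingress exposes.
+
+```python
+from qdrant_client import QdrantClient
+
+# prefer_grpc routes traffic over gRPC (the default gRPC port is 6334).
+client = QdrantClient(
+    host='qdrant.internal.example.com',  # hypothetical ingress hostname
+    grpc_port=6334,
+    prefer_grpc=True,
+)
+
+print(client.get_collections())
+```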
+
+
+
+Gladys Roch:
+
+So we are querying it from outside. And also, if you have more questions about this setup, we'll be happy to redirect you to the DevOps team that helped us put that in place. And so finally, the results. So we stated earlier that we had a cold start issue. So before Qdrant, we had a lot of difficulties with this challenge. We had a collaborative recommender that was trained and performed very well on high signal videos, which means videos with a lot of interactions, so we can see which videos users like to watch together. And we also had a metadata recommender.
+
+
+
+Gladys Roch:
+
+But first, this collaborative recommender was actually also used to compute cold start recommendations, which is not what it is trained on; we were using a default embedding to compute a default recommendation for cold start, which led to a lot of popularity issues. A popularity issue for a recommender system is when you always recommend the same video that is hugely popular, and it's like a feedback loop. A lot of people will default to this video because it might be clickbait, and then we will have a lot of interaction on it. So it will pollute the collaborative model all over again. So we had popularity issues with this, obviously. And we also had this metadata recommender that only focused on a very small scope of trusted owners and trusted video sources. So it was working. It was an autoencoder and it was fine, but the scope was too small.
+
+
+
+Gladys Roch:
+
+Too few videos could be recommended through this model. And also, those two models were trained very infrequently, only every 4 or 5 hours, which means that any fresh video on the platform could not be recommended properly for like 4 hours. So it was the main issue, because Dailymotion uses a lot of fresh videos and we have a lot of news, et cetera. So we need to be topical, and this couldn't be done with this huge delay. So we had overall bad performance on the low signal. And so with Qdrant we fixed that. We still have our collaborative recommender. It has evolved since then.
+
+
+
+Gladys Roch:
+
+It's actually computed much more often, but the collaborative model is only focused on high signal now, and it doesn't compute default recommendations for low signal videos that it doesn't know. And we have a content-based recommender, based on the MUSE embedding and Qdrant, that is able to recommend videos to users as soon as they are uploaded on the platform. And it has a growing scope, 20 million vectors at the moment. But every time we add new videos to Dailymotion, it grows. So it can provide recommendations for videos with few interactions that we don't know well. So we're very happy because it led us to a huge performance increase on the low signal. We did a threefold increase on the CTR, which means the number of clicks on the recommendation. So with Qdrant we were able to kind of fix our cold start issues.
+
+
+
+Gladys Roch:
+
+Everything I was talking about, fresh videos, popularity issues, low performance, we fixed that, and we were very happy with the setup. It's running smoothly. Yeah, I think that's it for the presentation, for the slides at least. So we are open to discussion, and if you have any questions to go into the details of the recommender system, go ahead, shoot.
+
+
+
+Demetrios:
+
+I've got some questions while people are typing out everything in the chat and the first one I think that we should probably get into is how did the evaluation process go for you when you were looking at different vector databases and vector search engines?
+
+
+
+Samuel Leonardo Gracio:
+
+So that's a good point. So first of all, you have to know that we are working with Google Cloud Platform. So the first thing that we did was to use their vector search engine, which is called Matching Engine.
+
+
+
+Gladys Roch:
+
+Right.
+
+
+
+Samuel Leonardo Gracio:
+
+But the issue with Matching Engine is that the API wasn't easy to use, first of all. The second thing was that we could not put metadata, as we do in Qdrant, and pre-filter before the query, as we are doing now in Qdrant. And the other thing is that their solution is managed. We don't have full control, and in the end the cost of their solution is very high compared to what it offers. So after that we tried to benchmark other solutions and we found out that Qdrant was easier for us to implement. The documentation was really good, so it was easy to test some things, and basically we couldn't find any drawbacks, for our use case at least.
+
+
+
+Samuel Leonardo Gracio:
+
+And moreover, the fact that you have a very cool team that helped us to implement some parts when it was difficult, I think that was definitely the thing that made us choose Qdrant instead of another solution, because we implemented Qdrant.
+
+
+
+Gladys Roch:
+
+Around February or even January 2023. So Qdrant was fairly new, so the documentation was still under construction. And so you helped us through Discord to set up the cluster. So it was really nice.
+
+
+
+Demetrios:
+
+Excellent. And what about least favorite parts of using Qdrant?
+
+
+
+Gladys Roch:
+
+Yeah, I have one. I discovered it was not actually a requirement at the beginning, but for recommender systems we tend to do a lot of A/B tests. And you might wonder what's the deal with Qdrant and A/B tests. It's not related, but actually we were able to A/B test our collections, so how we compute the embedding. First we had an embedding without the transcript, and now we have an embedding that includes the transcript. So we wanted to A/B test that. And in Qdrant you can have collection aliases, and this is super helpful, because you can have two collections that live on the cluster at the same time, and then in your code you can just call the production collection and then set the alias to the proper one. So for A/B testing and rollout it's very useful.
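+
+A minimal sketch of that alias mechanism with the Qdrant Python client; the collection and alias names are invented for the example.
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url='http://localhost:6333')
+
+# Point the production alias at the new collection (for example, embeddings
+# that include the transcript) without changing any application code.
+client.update_collection_aliases(
+    change_aliases_operations=[
+        models.CreateAliasOperation(
+            create_alias=models.CreateAlias(
+                collection_name='videos_with_transcript_v2',
+                alias_name='videos_production',
+            )
+        )
+    ]
+)
+
+# Application code keeps querying the alias name 'videos_production'.
+```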
+
+
+
+Gladys Roch:
+
+And I found it when I first wanted to do an A/B test. So I like this one; it already existed and I liked it. Also, the second thing I like is the API documentation, the one that is auto-generated with all the examples and how to query any info on Qdrant. It's really nice for someone who's not from DevOps. It helps us just debug our collections whenever we need. So it's very easy to get into.
+
+
+
+Samuel Leonardo Gracio:
+
+And the fact that the product is evolving so fast, like almost every week you have a new feature, I think it's really cool. There is also the community, and I think, yeah, it's really interesting, and it's amazing to have such people working on an open source project like this one.
+
+
+
+Gladys Roch:
+
+We had feedback from our DevOps team when preparing this presentation. We reached out to them for the small diagram that I tried to present. And yeah, they said that the open source community of Qdrant was really nice. It was easy to contribute, it was very open on Discord. I think we shared our experience at some point on how we set up the cluster at the beginning. And yeah, they were very hyped by the fact that it's coded in Rust. I don't know if you hear this a lot, but to them it's even more encouraging to contribute with this kind of new language.
+
+
+
+Demetrios:
+
+100% excellent. So last question from my end, and it is on if you're using Qdrant for anything else when it comes to products at Dailymotion, yes, actually we do.
+
+
+
+Samuel Leonardo Gracio:
+
+I have one slide about this.
+
+
+
+Gladys Roch:
+
+We have slides because we presented Qdrant at another talk a few weeks ago.
+
+
+
+Samuel Leonardo Gracio:
+
+So we didn't prepare this slide just for this presentation, it's from another presentation, but still, it's a good point because we're currently trying to use it in other projects. So as we said in this presentation, we're mostly using it for the watch page, so Dailymotion.com, but we also introduced it in the mobile app recently through a feature that is called Perspective. So the goal of the feature is to be able to break this vertical feed algorithm, to let the users have like a button to discover new videos. So when you go through your feed, sometimes you will get a video talking about, I don't know, a movie. You will get this button, which is called Perspective, and you will be able to have other videos talking about the same movie but giving you another point of view. So people liking the movie, people that didn't like the movie. And we use Qdrant for the candidate generation part, so to get the similar videos and to get the videos that are talking about the same subject.
+
+
+
+Samuel Leonardo Gracio:
+
+So I won't talk too much about this project because it will require another presentation of 20 minutes or more. But still we are using it in other projects and yeah, it's really interesting to see what we are able to do with that tool.
+
+
+
+Gladys Roch:
+
+Once we have the vector space set up, we can just query it from everywhere, in every recommendation project.
+
+
+
+Samuel Leonardo Gracio:
+
+We also tested some search. We are testing many things actually, but we haven't implemented it yet. For the moment we just have this Perspective feature and the content-based reco, but we still have a lot of ideas for using this vector search space.
+
+
+
+Demetrios:
+
+I love that idea of getting another perspective. So it's not like, as you were mentioning before, you don't get that echo chamber with just about everyone saying the same thing. You get to see whether there are other sides to this. And I can see how that could be very useful. Juan Pablo is back, asking questions in the chat about whether you are able to recommend videos with negative search queries, negative in the sense of, for example, as a user I want to see videos of a certain topic, but I want to exclude some topics from the video.
+
+
+
+Gladys Roch:
+
+Okay. We actually don't do that at the moment, but we know we can; with Qdrant we can set positive and negative points from which to query. So actually, for the moment, we only retrieve close positive neighbors and we apply some business filters on top of that recommendation. But that's it.
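+
+For reference, a rough sketch of what such a query with positive and negative points looks like with the Qdrant Python client; the point IDs and collection name are placeholders.
+
+```python
+from qdrant_client import QdrantClient
+
+client = QdrantClient(url='http://localhost:6333')
+
+# Recommend videos similar to the positive examples while steering away
+# from the negative ones (all IDs here are hypothetical).
+hits = client.recommend(
+    collection_name='video_embeddings',
+    positive=[42, 99],  # videos the user engaged with
+    negative=[7],       # a topic the user wants to exclude
+    limit=10,
+)
+
+for hit in hits:
+    print(hit.id, hit.score)
+```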
+
+
+
+Samuel Leonardo Gracio:
+
+And that's because we also have this collaborative model, which is our main recommender system. But I think we definitely need to check that, and maybe in the future we will implement it. We saw there is documentation about this, and I'm pretty sure that it would work very well on our use case.
+
+
+
+Demetrios:
+
+Excellent. Well folks, I think that's about it for today. I want to thank you so much for coming and chatting with us and teaching us about how you're using Qdrant, and for being very transparent about your use. I learned a ton. And for anybody out there doing recommender systems and interested in more, I think they can reach out to you on LinkedIn. I've got both of your profiles, we'll drop them in the chat right now and we'll let everybody enjoy. So don't get lost in vector space. We will see you all later.
+
+
+
+Demetrios:
+
+If anyone wants to give a talk next, reach out to me. We always are looking for incredible talks and so this has been great. Thank you all.
+
+
+
+Gladys Roch:
+
+Thank you.
+
+
+
+Samuel Leonardo Gracio:
+
+Thank you very much for the invitation and for everyone listening. Thank you.
+
+
+
+Gladys Roch:
+
+See you. Bye.
+",blog/vector-search-for-content-based-video-recommendation-gladys-and-sam-vector-space-talks.md
+"---
+
+draft: false
+
+title: Indexify Unveiled - Diptanu Gon Choudhury | Vector Space Talks
+
+slug: indexify-content-extraction-engine
+
+short_description: Diptanu Gon Choudhury discusses how Indexify is transforming
+
+ the AI-driven workflow in enterprises today.
+
+description: Diptanu Gon Choudhury shares insights on re-imaging Spark and data
+
+ infrastructure while discussing his work on Indexify to enhance AI-driven
+
+ workflows and knowledge bases.
+
+preview_image: /blog/from_cms/diptanu-choudhury-cropped.png
+
+date: 2024-01-26T16:40:55.469Z
+
+author: Demetrios Brinkmann
+
+featured: false
+
+tags:
+
+ - Vector Space Talks
+
+ - Indexify
+
+ - structured extraction engine
+
+ - rag-based applications
+
+---
+
+> *""We have something like Qdrant, which is very geared towards doing Vector search. And so we understand the shape of the storage system now.”*\
+
+— Diptanu Gon Choudhury
+
+>
+
+
+
+Diptanu Gon Choudhury is the founder of Tensorlake. They are building Indexify - an open-source scalable structured extraction engine for unstructured data to build near-real-time knowledge bases for AI/agent-driven workflows and query engines. Before building Indexify, Diptanu created the Nomad cluster scheduler at HashiCorp, invented the Titan/Titus cluster scheduler at Netflix, led the FBLearner machine learning platform, and built the real-time speech inference engine at Facebook.
+
+
+
+***Listen to the episode on [Spotify](https://open.spotify.com/episode/6MSwo7urQAWE7EOxO7WTns?si=_s53wC0wR9C4uF8ngGYQlg), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/RoOgTxHkViA).***
+
+
+
+
+
+
+
+
+
+
+
+## **Top takeaways:**
+
+
+
+Discover how reimagined data infrastructures revolutionize AI-agent workflows as Diptanu delves into Indexify, transforming raw data into real-time knowledge bases, and shares expert insights on optimizing RAG-based applications, all amidst the ever-evolving landscape of Spark.
+
+
+
+Here's What You'll Discover:
+
+
+
+1. **Innovative Data Infrastructure**: Diptanu dives deep into how Indexify is revolutionizing the enterprise world by providing a sharper focus on data infrastructure and a refined abstraction for generative AI this year.
+
+2. **AI-Copilot for Call Centers**: Learn how Indexify streamlines customer service with a real-time knowledge base, transforming how agents interact and resolve issues.
+
+3. **Scaling Real-Time Indexing**: Discover the system’s powerful capability to index content as it happens, enabling multiple extractors to run simultaneously. It’s all about the right model and the computing capacity for on-the-fly content generation.
+
+4. **Revamping Developer Experience**: Get a glimpse into the future as Diptanu chats with Demetrios about reimagining Spark to fit today's tech capabilities, vastly different from just two years ago!
+
+5. **AI Agent Workflow Insights**: Understand the crux of AI agent-driven workflows, where models dynamically react to data, making orchestrated decisions in live environments.
+
+
+
+> Fun Fact: The development of Indexify by Diptanu was spurred by the rising use of Large Language Models in applications and the subsequent need for better data infrastructure to support these technologies.
+
+>
+
+
+
+## Show notes:
+
+
+
+00:00 AI's impact on model production and workflows.\
+
+05:15 Building agents need indexes for continuous updates.\
+
+09:27 Early RaG and LLMs adopters neglect data infrastructure.\
+
+12:32 Design partner creating copilot for call centers.\
+
+17:00 Efficient indexing and generation using scalable models.\
+
+20:47 Spark is versatile, used for many cases.\
+
+24:45 Recent survey paper on RAG covers tips.\
+
+26:57 Evaluation of various aspects of data generation.\
+
+28:45 Balancing trust and cost in factual accuracy.
+
+
+
+## More Quotes from Diptanu:
+
+
+
+*""In 2017, when I started doing machine learning, it would take us six months to ship a good model in production. And here we are today, in January 2024, new models are coming out every week, and people are putting them in production.”*\
+
+-- Diptanu Gon Choudhury
+
+
+
+*""Over a period of time, you want to extract new information out of existing data, because models are getting better continuously.”*\
+
+-- Diptanu Gon Choudhury
+
+
+
+*""We are in the golden age of demos. Golden age of demos with LLMs. Almost anyone, I think with some programming knowledge can kind of like write a demo with an OpenAI API or with an embedding model and so on.”*\
+
+-- Diptanu Gon Choudhury
+
+
+
+## Transcript:
+
+Demetrios:
+
+We are live, baby. This is it. Welcome back to another Vector Space Talks. I'm here with my man Diptanu. He is the founder and creator of Tensorlake. They are building Indexify, an open source, scalable, structured extraction engine for unstructured data to build near real time knowledge bases for AI agent driven workflows and query engines. And if it sounds like I just threw every buzzword in the book into that sentence, you can go ahead and say, bingo, we are here, and we're about to dissect what all that means in the next 30 minutes. So, dude, first of all, I got to just let everyone know who is here, that you are a bit of a hard hitter.
+
+
+
+Demetrios:
+
+You've got some track record, some notches on your belt, we could say. Before you created Tensorlake, let's just let people know that you were at HashiCorp, you created the Nomad cluster scheduler, and you were the inventor of the Titus cluster scheduler at Netflix. You led the FBLearner machine learning platform and built the real-time speech inference engine at Facebook. You may be one of the most decorated people we've had on and that I have had the pleasure of talking to, and that's saying a lot. I've talked to a lot of people in my day, so I want to dig in, man. First question I've got for you, it's a big one. What the hell do you mean by AI agent driven workflows? Are you talking about autonomous agents? Are you talking, like, voice agents? What's that?
+
+
+
+Diptanu Gon Choudhury:
+
+Yeah, I was going to say, what a great last couple of years it has been for AI. I mean, in-context learning has kind of, like, changed the way people do models and access models and use models in production. Like at Facebook, in 2017, when I started doing machine learning, it would take us six months to ship a good model in production. And here we are today, in January 2024, new models are coming out every week, and people are putting them in production. It's a little bit of a Yolo where I feel like people have stopped measuring how well models are doing and just ship in production, but here we are. But I think underpinning all of this is kind of like this whole idea that models are capable of reasoning over data and non-parametric knowledge to a certain extent. And what we are seeing now is workflows stop being completely heuristics driven, or as people say, like software 1.0 driven. And people are putting models in the picture where models are reacting to data that a workflow is seeing, and then people are using the model's behavior on the data and kind of like making the model decide what the workflow should do. And I think that's pretty much, to me, what an agent is: an agent responds to information of the world and information which is external, and kind of reacts to the information and orchestrates some kind of business process or some kind of workflow, some kind of decision making in a workflow.
+
+
+
+Diptanu Gon Choudhury:
+
+That's what I mean by agents. And they can be like autonomous. They can be something that writes an email or writes a chat message or something like that. The spectrum is wide here.
+
+
+
+Demetrios:
+
+Excellent. So next question, logical question is, and I will second what you're saying. Like the advances that we've seen in the last year, wow. And the times are a change in, we are trying to evaluate while in production. And I like the term, yeah, we just yoloed it, or as the young kids say now, or so I've heard, because I'm not one of them, but we just do it for the plot. So we are getting those models out there, we're seeing if they work. And I imagine you saw some funny quotes from the Chevrolet chat bot, that it was a chat bot on the Chevrolet support page, and it was asked if Teslas are better than Chevys. And it said, yeah, Teslas are better than Chevys.
+
+
+
+Demetrios:
+
+So yes, that's what we do these days. This is 2024, baby. We just put it out there and test in prod. Anyway, getting back on topic, let's talk about Indexify, because there was a whole lot of jargon in what I said about what you do. Give me the straight-shooting answer. Break it down for me like I was five. Yeah.
+
+
+
+Diptanu Gon Choudhury:
+
+So if you are building an agent today which depends on augmented generation, like retrieval augmented generation, and given that this is Qdrant's show, I'm assuming people are very much familiar with RAG and augmented generation. So if people are building applications where the data is external or non-parametric, and the model needs to see updated information all the time, because, let's say, the documents under the hood that the application is using for its knowledge base are changing, or someone is building a chat application where new chat messages are coming in all the time, and the agent or the model needs to know about what is happening, then you need an index, or a set of indexes, which are continuously updated. And also, over a period of time, you want to extract new information out of existing data, because models are getting better continuously. And the other thing is, AI, until now, or until a couple of years back, used to be very domain oriented or task oriented, where modality was the key behind models. Now we are entering into a world where information encoded in any form, documents, videos or whatever, is important to these workflows or these agents that people are building. And so you need the capability to ingest any kind of data and then build indexes out of it. And indexes, in my opinion, are not just embedding indexes, they could be indexes of semi-structured data. So let's say you have an invoice.
+
+
+
+Diptanu Gon Choudhury:
+
+You want to maybe transform that invoice into semi-structured data of where the invoice is coming from or what the line items are and so on. So in a nutshell, you need good data infrastructure to store these indexes and serve these indexes. And also you need a scalable compute engine so that whenever new data comes in, you're able to index it appropriately and update the indexes and so on. And also you need the capability to experiment, to add new extractors into your platform, add new models into your platform, and so on. Indexify helps you with all that, right? So imagine Indexify to be an online service with an API so that developers can upload any form of unstructured data, and then a bunch of extractors run in parallel on the cluster and extract information out of this unstructured data, and then update indexes on something like Qdrant, or Postgres for semi-structured data, continuously.
+
+
+
+Demetrios:
+
+Okay?
+
+
+
+Diptanu Gon Choudhury:
+
+And you basically get that in a single application, in a single binary, which is distributed on your cluster. You wouldn't have any external dependencies other than storage systems, essentially, to have a very scalable data infrastructure for your RAG applications or for your LLM agents.
+
+
+
+Demetrios:
+
+Excellent. So then talk to me about the inspiration for creating this. What was it that you saw that gave you that spark of, you know what? There needs to be something on the market that can handle this. Yeah.
+
+
+
+Diptanu Gon Choudhury:
+
+Earlier this year I was working with the founder of a generative AI startup here. I was looking at what they were doing, I was helping them out. And then I looked around, I looked at what is happening. Not earlier this year as in 2023, somewhere in early 2023, I was looking at how developers are building applications with LLMs, and we are in the golden age of demos. Golden age of demos with LLMs. Almost anyone, I think, with some programming knowledge can kind of like write a demo with an OpenAI API or with an embedding model and so on. And I mostly saw that the data infrastructure part of those demos or those applications was very basic. People would do like one-shot transformation of data, build indexes and then do stuff, build an application on top.
+
+
+
+Diptanu Gon Choudhury:
+
+And then I started talking to early adopters of RAG and LLMs in enterprises, and I started talking to them about how they're building their data pipelines and their data infrastructure for LLMs. And I feel like people were mostly excited about the application layer, right? Very little thought was being put into the data infrastructure, and it was almost built out of duct tape, right, of pipelines and workflows with RabbitMQ, with X, Y and Z, very bespoke pipelines, which are good at one-shot transformation of data. So you put in some documents on a queue, and then somehow the documents get embedded and put into something like Qdrant. But there was no thought about how do you re-index? How do you add a new capability into your pipeline? Or how do you keep the whole system online, right? Keep the indexes online while reindexing and so on. And so classically, if you talk to a distributed systems engineer, they would be, you know, this is a MapReduce problem, right? So there are tools like Spark, there are tools like Anyscale's Ray, and they would classically solve these problems, right? And if you go to Facebook, we use Spark for something like this, or Presto, or we have a ton of big data infrastructure for handling things like this. And I thought that in 2023 we need a better abstraction for doing something like this. The world is moving towards serverless, right? Developers understand functions. Developers think about compute as functions, functions which are distributed on the cluster and can transform content into something that LLMs can consume.
+
+
+
+Diptanu Gon Choudhury:
+
+And that was the inspiration. I was thinking, what would it look like if we redid Spark or Ray for generative AI in 2023? How can we make it so easy that developers can write functions to extract content out of any form of unstructured data, right? You don't need to think about text, audio, video, or whatever. You write a function which can kind of handle a particular data type and then extract something out of it. And now how can we scale it? How can we give developers, very transparently, all the abilities to manage indexes and serve indexes in production? And so that was the inspiration for it. I wanted to reimagine MapReduce for generative AI.
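+
+To illustrate the function-over-content idea in plain terms, here is a conceptual Python sketch of what such an extractor function might look like. This is not Indexify's actual SDK; the types, names, and stubbed logic are assumptions made purely for illustration.
+
+```python
+from dataclasses import dataclass, field
+
+@dataclass
+class ExtractedContent:
+    # What a hypothetical extractor returns: embeddings plus structured metadata.
+    embeddings: list = field(default_factory=list)
+    metadata: dict = field(default_factory=dict)
+
+def invoice_extractor(raw_bytes: bytes) -> ExtractedContent:
+    # A conceptual extractor for one data type (invoices). A real extractor
+    # would parse the document, embed the text chunks, and pull out the line
+    # items; here the body is stubbed out for illustration.
+    text_chunks = ['total: 120 EUR', 'vendor: ACME']     # stand-in for parsing
+    vectors = [[0.1, 0.2, 0.3] for _ in text_chunks]     # stand-in for an embedding model
+    return ExtractedContent(
+        embeddings=vectors,
+        metadata={'vendor': 'ACME', 'total': 120.0},
+    )
+```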
+
+
+
+Demetrios:
+
+Wow. I like the vision. You sent me over some ideas of different use cases that we can walk through, and I'd love to go through that and put it into actual tangible things that you've been seeing out there, and how you can plug it into these different use cases. I think the first one that I wanted to look at was building a copilot for call center agents and what that actually looks like in practice. Yeah.
+
+
+
+Diptanu Gon Choudhury:
+
+So I took that example because that was super close to my heart, in the sense that we have a design partner who is doing this. And you'll see that in a call center, the information that comes into a call center, or the information that an agent, a human being in a call center, works with, is very rich. In a call center you have phone calls coming in, you have chat messages coming in, you have emails going on, and then there are also documents which are knowledge bases for human beings to answer questions or make decisions on. Right. And so they're working with a lot of data and they're always pulling up a lot of information. And so one of our design partners is building a copilot for call centers, essentially. And what they're doing is they want the humans in a call center to answer questions really easily based on the context of a conversation or a call that is happening with one of their users, or pull up up-to-date information about the policies of the company and so on. And so the way they are using Indexify is that they ingest all the content, like the raw content that is coming in, not video actually, but audio, emails, chat messages, into Indexify.
+
+
+
+Diptanu Gon Choudhury:
+
+And then they have a bunch of extractors which handle different types of modalities, right? Some extractors extract information out of emails. They would do email classification, they would do embedding of emails, they would do entity extraction from emails. And so they are creating many different types of indexes from emails. Same with speech, right? Like data that is coming in through calls. They would transcribe it first using an ASR extractor, and from there the speech would be embedded and the whole pipeline for text would be invoked on it, and then the speech would be searchable. If someone wants to find out what conversation has happened, they would be able to look things up. There is a summarizer extractor, which is looking at a phone call and then summarizing what the customer had called about and so on.
+
+
+
+Diptanu Gon Choudhury:
+
+So they are basically building a near real time knowledge base of what is happening with the customer. And also they are pulling in information from their documents. So that's like one classic use case. The only dependency they have now is essentially a blob storage system and serving infrastructure for indexes, in this case Qdrant and Postgres. And they have a bunch of extractors that they have written in-house and some extractors that we have written that they're using out of the box, and they can scale the system to as much as they need. And it's kind of like giving them a high-level abstraction for building indexes and using them in LLMs.
+
+
+
+Demetrios:
+
+So I really like this idea of how you have the unstructured and you have the semi-structured and how those play together, almost. And I think one thing that is very clear is how you've got the transcripts, you've got the embeddings that you're doing, but then you've also got documents that are very structured, and maybe it's from the last call and it's in some kind of a database. And I imagine we could say whatever, Salesforce, it's in Salesforce and you've got it all there. And so there is some structure to that data. And now you want to be able to plug into all of that, and you want to be able to, especially in this use case, the call center agents, human agents, need to make decisions and they need to make decisions fast. Right. So the real time aspect really plays a part in that.
+
+
+
+Diptanu Gon Choudhury:
+
+Exactly.
+
+
+
+Demetrios:
+
+You can't have it be something that will get back to you in 30 seconds, or maybe 30 seconds is okay, but really the less time the better. And so traditionally, when I think about using LLMs, I kind of take real time off the table. Have you had luck with making it more real time? Yeah.
+
+
+
+Diptanu Gon Choudhury:
+
+So there are two aspects of it. How quickly can your indexes be updated? As of last night, we can index all of Wikipedia in under five minutes on AWS. We can run up to like 5000 extractors with Indexify concurrently and in parallel. I feel like we got the indexing part covered, unless obviously you are using a model behind an API where we don't have any control. But assuming you're using some kind of embedding model or some kind of extractor model, right, like a named entity extractor or a speech-to-text model that you control, and you understand the IOPS, we can scale it out and our system can kind of handle the scale of getting it indexed really quickly. Now, on the generation side, that's where it's a little bit more nuanced, right? Generation depends on how big the generation model is. If you're using GPT-4, then obviously you would be playing with the latency budgets that OpenAI provides.
+
+
+
+Diptanu Gon Choudhury:
+
+If you're using some other form of model, like a mixture-of-experts (MoE) model or something which is very optimized, and you have worked on making the model optimized, then obviously you can cut it down. So it depends on the end-to-end stack. It's not like a single piece of software. It's not like a monolithic piece of software. So it depends on a lot of different factors. But I can confidently claim that we have gotten the indexing side of the real-time aspects covered, as long as the models people are using are reasonable and they have enough compute in their cluster.
+
+
+
+Demetrios:
+
+Yeah. Okay. Now talking again about the idea of rethinking the developer experience with this and almost reimagining what Spark would be if it were created today.
+
+
+
+Diptanu Gon Choudhury:
+
+Exactly.
+
+
+
+Demetrios:
+
+How do you think that there are manifestations in what you've built that play off of things that could only happen because you created it today, as opposed to even two years ago?
+
+
+
+Diptanu Gon Choudhury:
+
+Yeah. So I think, for example, take Spark, right? Spark was born out of big data, like the 2011-2012 era of big data. In fact, I was one of the committers on Apache Mesos, the cluster scheduler that Spark used for a long time. And then when I was at HashiCorp, we tried to contribute support for Nomad in Spark. What I'm trying to say is that Spark is a task scheduler at the end of the day, and it uses an underlying scheduler. So the teams that manage Spark today, or any other similar tools, they have like ten or 15 people, or they're using a hosted solution, which is super complex to manage. Right. A Spark cluster is not easy to manage.
+
+
+
+Diptanu Gon Choudhury:
+
+I'm not saying it's a bad thing or whatever. Software written at any given point in time reflects the world in which it was born. And so obviously it's from that era of systems engineering and so on. And since then, systems engineering has progressed quite a lot. I feel like we have learned how to make software which is scalable, but yet simpler to understand and to operate and so on. And the other big thing that I feel is missing in Spark, or in Anyscale's Ray, is that they are not natively integrated into the data stack. Right. They don't have an opinion on what the data stack is.
+
+
+
+Diptanu Gon Choudhury:
+
+They're like excellent MapReduce systems, and then the data stuff is layered on top. And to a certain extent that has allowed them to generalize to so many different use cases. People use Spark for everything. At Facebook, I was using Spark for batch transcoding of speech to text, for various use cases, with a lot of issues under the hood. Right? So they are tied to the big data storage infrastructure. So when I am reimagining Spark, I can almost take the position that we are going to use blob storage for ingestion and writing raw data, and we will have low-latency serving infrastructure in the form of something like Postgres or something like ClickHouse for serving structured data or semi-structured data. And then we have something like Qdrant, which is very geared towards doing vector search and so on. And so we understand the shape of the storage system now.
+
+
+
+Diptanu Gon Choudhury:
+
+We understand that developers want to integrate with them. So now we can control the compute layer such that the compute layer is optimized for doing the compute and producing data such that it can be written into those data stores, right? So we understand the IOPS, right? The I/O, what is it called, the I/O characteristics of the underlying storage system really well. And we understand that the use case is that people want to consume that data in LLMs, right? So we can make design decisions about how we write into the storage system and how we serve very specifically for LLMs, decisions that I feel a developer would otherwise be making themselves if they were using some other tool.
+
+
+
+Demetrios:
+
+Yeah, it does feel like optimizing for that and recognizing that Spark is almost like a Swiss Army knife. As you mentioned, you can do a million things with it, but sometimes you don't want to do a million things. You just want to do one thing and you want it to be really easy to be able to do that one thing. I had a friend who worked at some enterprise and he was talking about how Spark engineers have all the job security in the world, because a, like you said, you need a lot of them, and b, it's hard stuff being able to work on that and getting really deep and knowing it and the ins and outs of it. So I can feel where you're coming from on that one.
+
+
+
+Diptanu Gon Choudhury:
+
+Yeah, I mean, we basically integrated the compute engine with the storage so developers don't have to think about it. Plug in whatever storage you want. We support, obviously, all the blob stores, and we support Qdrant and Postgres right now. Indexify in the future can even have other storage engines. And now all an application developer needs to do is deploy this on AWS or GCP or whatever, right? Have enough compute, point it to the storage systems, and then build your application. You don't need to make any of the hard decisions or build a distributed system by bringing together like five different tools and spend five months building the data layer. Focus on the application, build your agents.
+
+
+
+Demetrios:
+
+So there is something else. As we are winding down, I want to ask you one last thing, and if anyone has any questions, feel free to throw them in the chat. I am monitoring that also, but I am wondering about advice that you have for people that are building rag based applications, because I feel like you've probably seen quite a few out there in the wild. And so what are some optimizations or some nice hacks that you've seen that have worked really well? Yeah.
+
+
+
+Diptanu Gon Choudhury:
+
+So I think, first of all, there is a recent paper, like a RAG survey paper. I really like it. Maybe you can have the link in the show notes if you have one. There was a recent survey paper, I really liked it, and it covers a lot of tips and tricks that people can use with RAG. But essentially, RAG is, in its essence, like a two-step process. One is the document selection process and the other is the document reading process. Document selection is how you retrieve the most important information out of a million documents that might be there, and then the reading process is how you jam them into the context of a model so that the model can kind of ground its generation based on the context.
+
+
+
+Diptanu Gon Choudhury:
+
+So I think the trickiest part here, and the part which has the most tips and tricks, is the document selection part. And that is like a classic information retrieval problem. So I would suggest people do a lot of experimentation around ranking algorithms, hitting different types of indexes, and refining the results by merging results from different indexes. One thing that always works for me is reducing the search space of the documents that I am selecting in a very systematic manner. So like using some kind of hybrid search where someone does the embedding lookup first, and then does the keyword lookup, or vice versa, or does the lookups in parallel and then merges the results together. Those kinds of things where the search space is narrowed down always work for me.
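+
+One common way to merge results from different indexes is reciprocal rank fusion; here is a small self-contained sketch of that idea, where the ranked lists are invented document IDs standing in for an embedding lookup and a keyword lookup.
+
+```python
+from collections import defaultdict
+
+def reciprocal_rank_fusion(result_lists, k=60):
+    # Merge several ranked lists of document IDs into one fused ranking.
+    # Each list is assumed to be ordered from most to least relevant.
+    scores = defaultdict(float)
+    for results in result_lists:
+        for rank, doc_id in enumerate(results):
+            scores[doc_id] += 1.0 / (k + rank + 1)
+    return sorted(scores, key=scores.get, reverse=True)
+
+# Hypothetical outputs of an embedding lookup and a keyword lookup.
+vector_hits = ['doc3', 'doc1', 'doc7']
+keyword_hits = ['doc1', 'doc9', 'doc3']
+
+print(reciprocal_rank_fusion([vector_hits, keyword_hits]))
+# Documents ranked high in both lists ('doc1', 'doc3') rise to the top.
+```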
+
+
+
+Demetrios:
+
+So I think one of the Qdrant team members would love to know, because I've been talking to them quite frequently about this, the evaluation of retrieval. Have you found any tricks or tips around that and evaluating the quality of what is retrieved?
+
+
+
+Diptanu Gon Choudhury:
+
+So I haven't come across a golden, one-trick-fits-every-use-case type of solution for evaluation. Evaluation is really hard. There are open source projects like Ragas who are trying to solve it, and everyone is trying to solve various aspects of evaluating RAG. Some of them try to evaluate how accurate the results are, some people are trying to evaluate how diverse the answers are, and so on. I think the most important thing that our design partners care about is factual accuracy. And for factual accuracy, one process that has worked really well is having a critique model. So let the generation model generate some data, and then have a critique model go and try to find citations and look up how accurate the data is, how accurate the generation is, and then feed that back into the system. Another thing, going back to the previous point of what tricks someone can use for doing RAG really well: I feel like people don't fine-tune embedding models that much.
+
+
+
+Diptanu Gon Choudhury:
+
+I think if people are using an embedding model, like a sentence transformer or anything off the shelf, they should look into fine-tuning the embedding model on the dataset that they are embedding. And I think a combination of fine-tuning the embedding models and doing some factual accuracy checks goes a long way in getting RAG working really well.
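+
+A minimal sketch of fine-tuning an off-the-shelf sentence-transformers model on in-domain pairs, in the spirit of this suggestion; the model name and the training pairs are placeholders.
+
+```python
+from sentence_transformers import SentenceTransformer, InputExample, losses
+from torch.utils.data import DataLoader
+
+# Start from a pre-trained model (placeholder choice).
+model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
+
+# In-domain (query, relevant passage) pairs from your own data (invented here).
+train_examples = [
+    InputExample(texts=['reset my router', 'Steps to reboot the home gateway']),
+    InputExample(texts=['refund policy', 'Customers can request refunds within 30 days']),
+]
+train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
+
+# Contrastive loss that treats the other in-batch passages as negatives.
+train_loss = losses.MultipleNegativesRankingLoss(model)
+
+model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
+model.save('fine-tuned-embedding-model')
+```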
+
+
+
+Demetrios:
+
+Yeah, it's an interesting one. And I'll probably leave it here on the extra model that is basically checking factual accuracy. You've always got these trade-offs that you're playing with, right? And one of the trade-offs is going to be, maybe you're making another LLM call, which could be more costly, but you're gaining trust, or you're gaining confidence that what it's outputting is actually what it says it is, and it's actually factually correct, as you said. So it's like, what price can you put on trust? And we're going back to that whole thing that I saw on Chevy's website where they were saying that a Tesla is better. It's like that hopefully doesn't happen anymore as people deploy this stuff and they recognize that humans are cunning when it comes to playing around with chatbots. So this has been fascinating, man. I appreciate you coming on here and chatting with me about it.
+
+
+
+Demetrios:
+
+I encourage everyone to go and reach out to you on LinkedIn, I know you are on there, and we'll leave a link to your LinkedIn in the chat too. And if not, check out Tensorlake, check out Indexify, and we will be in touch. Man, this was great.
+
+
+
+Diptanu Gon Choudhury:
+
+Yeah, same. It was really great chatting with you about this, Demetrius, and thanks for having me today.
+
+
+
+Demetrios:
+
+Cheers. I'll talk to you later.
+",blog/indexify-unveiled-diptanu-gon-choudhury-vector-space-talk-009.md
+"---
+
+draft: false
+
+title: ""Qdrant Hybrid Cloud and DigitalOcean for Scalable and Secure AI Solutions""
+
+short_description: ""Enabling developers to deploy a managed vector database in their DigitalOcean Environment.""
+
+description: ""Enabling developers to deploy a managed vector database in their DigitalOcean Environment.""
+
+preview_image: /blog/hybrid-cloud-digitalocean/hybrid-cloud-digitalocean.png
+
+date: 2024-04-11T00:02:00Z
+
+author: Qdrant
+
+featured: false
+
+weight: 1010
+
+tags:
+
+ - Qdrant
+
+ - Vector Database
+
+---
+
+
+
+Developers are constantly seeking new ways to enhance their AI applications with new customer experiences. At the core of this are vector databases, as they enable the efficient handling of complex, unstructured data, making it possible to power applications with semantic search, personalized recommendation systems, and intelligent Q&A platforms. However, when deploying such new AI applications, especially those handling sensitive or personal user data, privacy becomes important.
+
+
+
+[DigitalOcean](https://www.digitalocean.com/) and Qdrant are actively addressing this with an integration that lets developers deploy a managed vector database in their existing DigitalOcean environments. With the recent launch of [Qdrant Hybrid Cloud](/hybrid-cloud/), developers can seamlessly deploy Qdrant on DigitalOcean Kubernetes (DOKS) clusters, making it easier for developers to handle vector databases without getting bogged down in the complexity of managing the underlying infrastructure.
+
+
+
+#### Unlocking the Power of Generative AI with Qdrant and DigitalOcean
+
+
+
+User data is a critical asset for a business, and user privacy should always be a top priority. This is why businesses require tools that enable them to leverage their user data as a valuable asset while respecting privacy. Qdrant Hybrid Cloud on DigitalOcean brings these capabilities directly into developers' hands, enhancing deployment flexibility and ensuring greater control over data.
+
+
+
+> *“Qdrant, with its seamless integration and robust performance, equips businesses to develop cutting-edge applications that truly resonate with their users. Through applications such as semantic search, Q&A systems, recommendation engines, image search, and RAG, DigitalOcean customers can leverage their data to the fullest, ensuring privacy and driving innovation.“* - Bikram Gupta, Lead Product Manager, Kubernetes & App Platform, DigitalOcean.
+
+
+
+#### Get Started with Qdrant on DigitalOcean
+
+
+
+DigitalOcean customers can easily deploy Qdrant on their DigitalOcean Kubernetes (DOKS) clusters through a simple Kubernetes-native “one-line” installation. This simplicity allows businesses to start small and scale efficiently.
+
+
+
+- **Simple Deployment**: Leveraging Kubernetes, deploying Qdrant Hybrid Cloud on DigitalOcean is streamlined, making the management of vector search workloads in your own environment more efficient.
+
+
+
+- **Own Infrastructure**: Hosting the vector database on your DigitalOcean infrastructure offers flexibility and allows you to manage the entire AI stack in one place.
+
+
+
+- **Data Control**: Deploying within your own DigitalOcean environment ensures data control, keeping sensitive information within your security perimeter.
+
+
+
+To get Qdrant Hybrid Cloud set up on DigitalOcean, just follow these steps:
+
+
+
+- **Hybrid Cloud Setup**: Begin by logging into your [Qdrant Cloud account](https://cloud.qdrant.io/login) and activate the **Hybrid Cloud** feature in the sidebar.
+
+
+
+- **Cluster Configuration**: From Hybrid Cloud settings, integrate your DigitalOcean Kubernetes clusters as a Hybrid Cloud Environment.
+
+
+
+- **Simplified Deployment**: Use the Qdrant Management Console to effortlessly establish and oversee your Qdrant clusters on DigitalOcean.
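+
+Once the cluster is running, a quick way to confirm it is reachable is the Qdrant Python client; the URL and API key below are placeholders for the values shown for your own cluster in the Qdrant Cloud console.
+
+```python
+from qdrant_client import QdrantClient
+
+# Placeholder endpoint and key; use the values for your own cluster.
+client = QdrantClient(
+    url='https://qdrant.your-doks-cluster.example.com',
+    api_key='YOUR_API_KEY',
+)
+
+print(client.get_collections())  # an empty list on a fresh cluster
+```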
+
+
+
+#### Chat with PDF Documents with Qdrant Hybrid Cloud on DigitalOcean
+
+
+
+![hybrid-cloud-llamaindex-tutorial](/blog/hybrid-cloud-llamaindex/hybrid-cloud-llamaindex-tutorial.png)
+
+
+
+We created a tutorial that guides you through setting up and leveraging Qdrant Hybrid Cloud on DigitalOcean for a RAG application. It highlights practical steps to integrate vector search with Jina AI's LLMs, optimizing the generation of high-quality, relevant AI content, while ensuring data sovereignty is maintained throughout. This specific system is tied together via the LlamaIndex framework.
+
+
+
+[Try the Tutorial](/documentation/tutorials/hybrid-search-llamaindex-jinaai/)
+
+
+
+For a comprehensive guide, our documentation provides detailed instructions on setting up Qdrant on DigitalOcean.
+
+
+
+[Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/)
+
+
+
+#### Ready to Get Started?
+
+
+
+Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/). ",blog/hybrid-cloud-digitalocean.md
+"---
+
+draft: false
+
+title: Optimizing an Open Source Vector Database with Andrey Vasnetsov
+
+slug: open-source-vector-search-engine-vector-database
+
+short_description: CTO of Qdrant Andrey talks about Vector search engines and
+
+ the technical facets and challenges encountered in developing an open-source
+
+ vector database.
+
+description: Learn key strategies for optimizing vector search from Andrey Vasnetsov, CTO at Qdrant. Dive into techniques like efficient indexing for improved performance.
+
+preview_image: /blog/from_cms/andrey-vasnetsov-cropped.png
+
+date: 2024-01-10T16:04:57.804Z
+
+author: Demetrios Brinkmann
+
+featured: false
+
+tags:
+
+ - Qdrant
+
+ - Vector Search Engine
+
+ - Vector Database
+
+---
+
+
+
+# Optimizing Open Source Vector Search: Strategies from Andrey Vasnetsov at Qdrant
+
+
+
+> *""For systems like Qdrant, scalability and performance in my opinion, is much more important than transactional consistency, so it should be treated as a search engine rather than database.""*\
+
+-- Andrey Vasnetsov
+
+>
+
+
+
+Discussing core differences between search engines and databases, Andrey underlined the importance of application needs and scalability in database selection for vector search tasks.
+
+
+
+Andrey Vasnetsov, CTO at Qdrant, is an enthusiast of [Open Source](https://qdrant.tech/), machine learning, and vector search. He works on Open Source projects related to [Vector Similarity Search](https://qdrant.tech/articles/vector-similarity-beyond-search/) and Similarity Learning. He prefers the practical over the theoretical, and a working demo over an arXiv paper.
+
+
+
+***You can watch this episode on [YouTube](https://www.youtube.com/watch?v=bU38Ovdh3NY).***
+
+
+
+
+
+
+
+***This episode is part of the [ML⇄DB Seminar Series](https://db.cs.cmu.edu/seminar2023/#) (Machine Learning for Databases + Databases for Machine Learning) of the Carnegie Mellon University Database Research Group.***
+
+
+
+## **Top Takeaways:**
+
+
+
+Dive into the intricacies of [vector databases](https://qdrant.tech/articles/what-is-a-vector-database/) with Andrey as he unpacks Qdrant's approach to combining filtering and vector search, revealing how in-place filtering during graph traversal optimizes precision without sacrificing search exactness, even when scaling to billions of vectors.
+
+
+
+5 key insights you’ll learn:
+
+
+
+- 🧠 **The Strategy of Subgraphs:** Dive into how overlapping intervals and geo hash regions can enhance the precision and connectivity within vector search indices.
+
+
+
+- 🛠️ **Engine vs Database:** Discover the differences between search engines and relational databases and why considering your application's needs is crucial for scalability.
+
+
+
+- 🌐 **Combining Searches with Relational Data:** Get insights on integrating relational and vector search for improved efficiency and performance.
+
+
+
+- 🚅 **Speed and Precision Tactics:** Uncover the techniques for controlling search precision and speed by tweaking the beam size in HNSW indices.
+
+
+
+- 🔗 **Connected Graph Challenges:** Learn about navigating the difficulties of maintaining a connected graph while filtering during search operations.
+
+
+
+> Fun Fact: [The Qdrant system](https://qdrant.tech/) is capable of in-place filtering during graph traversal, which is a novel approach compared to traditional post-filtering methods, ensuring the correct quantity of results that meet the filtering conditions.
+
+>
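+
+For readers who want to see what that looks like from the client side, here is a hedged sketch of a filtered search with the Python client; the collection name, payload field, and parameter values are illustrative assumptions, with `hnsw_ef` playing the role of the beam size mentioned above.
+
+```python
+# Hedged sketch: vector search with payload filtering applied during graph traversal.
+# Collection name, filter field, and parameter values are illustrative.
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url='http://localhost:6333')
+
+hits = client.search(
+    collection_name='products',
+    query_vector=[0.05, 0.61, 0.76, 0.74],          # your query embedding
+    query_filter=models.Filter(
+        must=[models.FieldCondition(key='city', match=models.MatchValue(value='Berlin'))]
+    ),
+    search_params=models.SearchParams(hnsw_ef=128),  # larger ef: higher precision, slower search
+    limit=5,
+)
+```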
+
+
+
+## Timestamps:
+
+
+
+00:00 Search professional with expertise in vectors and engines.\
+
+09:59 Elasticsearch: scalable, weak consistency, prefer vector search.\
+
+12:53 Optimize data structures for faster processing efficiency.\
+
+21:41 Vector indexes require special treatment, like HNSW's proximity graph and greedy search.\
+
+23:16 HNSW index: approximate, precision control, CPU intensive.\
+
+30:06 Post-filtering inefficient, prefiltering costly.\
+
+34:01 Metadata-based filters; creating additional connecting links.\
+
+41:41 Vector dimension impacts comparison speed, indexing complexity high.\
+
+46:53 Overlapping intervals and subgraphs for precision.\
+
+53:18 Postgres limits scalability, additional indexing engines provide faster queries.\
+
+59:55 Embedding models for time series data explained.\
+
+01:02:01 Cheaper system for serving billion vectors.
+
+
+
+## More Quotes from Andrey:
+
+
+
+*""It allows us to compress vector to a level where a single dimension is represented by just a single bit, which gives total of 32 times compression for the vector.""*\
+
+-- Andrey Vasnetsov on vector compression in AI
+
+
+
+*""We build overlapping intervals and we build these subgraphs with additional links for those intervals. And also we can do the same with, let's say, location data where we have geocoordinates, so latitude, longitude, we encode it into geo hashes and basically build this additional graph for overlapping geo hash regions.""*\
+
+-- Andrey Vasnetsov
+
+
+
+*""We can further compress data using such techniques as delta encoding, as variable byte encoding, and so on. And this total effect, total combined effect of this optimization can make immutable data structures order of minute more efficient than mutable ones.""*\
+
+-- Andrey Vasnetsov
+",blog/open-source-vector-search-engine-and-vector-database.md
+"---
+
+draft: false
+
+title: ""Integrating Qdrant and LangChain for Advanced Vector Similarity Search""
+
+short_description: Discover how Qdrant and LangChain can be integrated to enhance AI applications.
+
+description: Discover how Qdrant and LangChain can be integrated to enhance AI applications with advanced vector similarity search technology.
+
+preview_image: /blog/using-qdrant-and-langchain/qdrant-langchain.png
+
+date: 2024-03-12T09:00:00Z
+
+author: David Myriel
+
+featured: true
+
+tags:
+
+ - Qdrant
+
+ - LangChain
+
+ - LangChain integration
+
+ - Vector similarity search
+
+ - AI LLM (large language models)
+
+ - LangChain agents
+
+ - Large Language Models
+
+---
+
+
+
+> *""Building AI applications doesn't have to be complicated. You can leverage pre-trained models and support complex pipelines with a few lines of code. LangChain provides a unified interface, so that you can avoid writing boilerplate code and focus on the value you want to bring.""* Kacper Lukawski, Developer Advocate, Qdrant
+
+
+
+## Long-Term Memory for Your GenAI App
+
+
+
+Qdrant's vector database quickly grew due to its ability to make Generative AI more effective. On its own, an LLM can be used to build a process-altering invention. With Qdrant, you can turn this invention into a production-level app that brings real business value.
+
+
+
+The use of vector search in GenAI now has a name: **Retrieval Augmented Generation (RAG)**. [In our previous article](/articles/rag-is-dead/), we argued why RAG is an essential component of AI setups, and why large-scale AI can't operate without it. Numerous case studies explain that AI applications are simply too costly and resource-intensive to run using only LLMs.
+
+
+
+> Going forward, the solution is to leverage composite systems that use models and vector databases.
+
+
+
+**What is RAG?** Essentially, a RAG setup turns Qdrant into long-term memory storage for LLMs. As a vector database, Qdrant manages the efficient storage and retrieval of user data.
+
+
+
+Adding relevant context to LLMs can vastly improve user experience, leading to better retrieval accuracy, faster query speed and lower use of compute. Augmenting your AI application with vector search reduces hallucinations, a situation where AI models produce legitimate-sounding but made-up responses.
+
+
+
+Qdrant streamlines this process of retrieval augmentation, making it faster, more efficient, and easier to scale. When you are accessing vast amounts of data (hundreds or thousands of documents), vector search helps you sort through relevant context. **This makes RAG a primary candidate for enterprise-scale use cases.**
+
+
+
+## Why LangChain?
+
+
+
+Retrieval Augmented Generation is not without its challenges and limitations. One of the main setbacks for app developers is managing the entire setup. The integration of a retriever and a generator into a single model can lead to an increased level of complexity and greater computational resource requirements.
+
+
+
+[LangChain](https://www.langchain.com/) is a framework that makes developing RAG-based applications much easier. It unifies interfaces to different libraries, including major embedding providers like OpenAI or Cohere and vector stores like Qdrant. With LangChain, you can focus on creating tangible GenAI applications instead of writing your logic from the ground up.
+
+
+
+> Qdrant is one of the **top supported vector stores** on LangChain, with [extensive documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant) and [examples](https://python.langchain.com/docs/integrations/retrievers/self_query/qdrant_self_query).
+
+
+
+**How it Works:** LangChain receives a query and retrieves the query vector from an embedding model. Then, it dispatches the vector to a vector database, retrieving relevant documents. Finally, both the query and the retrieved documents are sent to the large language model to generate an answer.
+
+
+
+![qdrant-langchain-rag](/blog/using-qdrant-and-langchain/flow-diagram.png)
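+
+In code, that flow can be wired up in a few lines. The following is a hedged sketch using LangChain's community Qdrant integration; the sample texts, model name, collection name, and local Qdrant URL are placeholder assumptions rather than a canonical example.
+
+```python
+# Hedged sketch of the query flow: embed -> retrieve from Qdrant -> generate with an LLM.
+# Requires langchain, langchain-community, langchain-openai, and a running Qdrant instance.
+from langchain_openai import OpenAIEmbeddings, ChatOpenAI
+from langchain_community.vectorstores import Qdrant
+from langchain.chains import RetrievalQA
+
+embeddings = OpenAIEmbeddings()
+vectorstore = Qdrant.from_texts(
+    ['Qdrant is a vector database.', 'LangChain orchestrates LLM pipelines.'],
+    embeddings,
+    url='http://localhost:6333',
+    collection_name='demo',
+)
+
+qa = RetrievalQA.from_chain_type(
+    llm=ChatOpenAI(model='gpt-3.5-turbo'),
+    retriever=vectorstore.as_retriever(),
+)
+print(qa.invoke('What is Qdrant?'))
+```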
+
+
+
+When supported by LangChain, Qdrant can help you set up effective question-answer systems, detection systems and chatbots that leverage RAG to its full potential. When it comes to long-term memory storage, developers can use LangChain to easily add relevant documents, chat history memory & rich user data to LLM app prompts via Qdrant.
+
+
+
+## Common Use Cases
+
+
+
+Integrating Qdrant and LangChain can revolutionize your AI applications. Let's take a look at what this integration can do for you:
+
+
+
+*Enhance Natural Language Processing (NLP):*
+
+LangChain is great for developing question-answering **chatbots**, where Qdrant is used to contextualize and retrieve results for the LLM. We cover this in [our article](/articles/langchain-integration/), and in OpenAI's [cookbook examples](https://cookbook.openai.com/examples/vector_databases/qdrant/qa_with_langchain_qdrant_and_openai) that use LangChain and GPT to process natural language.
+
+
+
+*Improve Recommendation Systems:*
+
+Food delivery services thrive on indecisive customers. Businesses need to accommodate a multi-aim search process, where customers seek recommendations through semantic search. With LangChain you can build systems for **e-commerce, content sharing, or even dating apps**.
+
+
+
+*Advance Data Analysis and Insights:* Sometimes you just want to browse results that are not necessarily the closest, but still relevant. Semantic search helps users discover products in **online stores**. Customers don't know exactly what they are looking for, but need a constrained space in which the search is performed.
+
+
+
+*Offer Content Similarity Analysis:* Ever been stuck seeing the same recommendations on your **local news portal**? You may be trapped in a similarity bubble! As inputs get more complex, diversity becomes scarce, and it becomes harder to force the system to show something different. LangChain developers can use semantic search to develop further context.
+
+
+
+## Building a Chatbot with LangChain
+
+
+
+_Now that you know how Qdrant and LangChain work together - it's time to build something!_
+
+
+
+Follow Daniel Romero's video and create a RAG Chatbot completely from scratch. You will only use OpenAI, Qdrant and LangChain.
+
+Here is what this basic tutorial will teach you:
+
+
+
+**1. How to set up a chatbot using Qdrant and LangChain:** You will use LangChain to create a RAG pipeline that retrieves information from a dataset and generates output. This will demonstrate the difference between using an LLM by itself and leveraging a vector database like Qdrant for memory retrieval.
+
+
+
+**2. Preprocess and format data for use by the chatbot:** First, you will download a sample dataset based on some academic journals. Then, you will process this data into embeddings and store it as vectors inside of Qdrant.
+
+
+
+**3. Implement vector similarity search algorithms:** Second, you will create and test a chatbot that only uses the LLM. Then, you will enable the memory component offered by Qdrant. This will allow your chatbot to be modified and updated, giving it long-term memory.
+
+
+
+**4. Optimize the chatbot's performance:** In the last step, you will query the chatbot in two ways. The first query will retrieve parametric data from the LLM, while the second one will get contextual data via Qdrant.
+
+
+
+The goal of this exercise is to show that RAG is simple to implement via LangChain and yields much better results than using an LLM by itself.
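+
+As a rough illustration of steps 2 and 3, the ingestion side could look like the sketch below; the loader, chunk sizes, and collection name are assumptions for illustration, not the exact code from the video.
+
+```python
+# Hedged sketch of ingestion: load a dataset, chunk it, embed it, and store it in Qdrant.
+# The file name, splitter settings, and collection name are illustrative placeholders.
+from langchain_community.document_loaders import TextLoader
+from langchain_text_splitters import RecursiveCharacterTextSplitter
+from langchain_openai import OpenAIEmbeddings
+from langchain_community.vectorstores import Qdrant
+
+docs = TextLoader('academic_journals.txt').load()
+chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
+
+Qdrant.from_documents(
+    chunks,
+    OpenAIEmbeddings(),
+    url='http://localhost:6333',
+    collection_name='chatbot_memory',
+)
+```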
+
+
+
+
+
+
+
+## Scaling Qdrant and LangChain
+
+
+
+If you are looking to scale up and keep the same level of performance, Qdrant and LangChain are a rock-solid combination. Getting started with both is a breeze and the [documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant) covers a broad range of cases. However, the main strength of Qdrant is that it can consistently support the user well past the prototyping and launch phases.
+
+
+
+> *""We are all-in on performance and reliability. Every release we make Qdrant faster, more stable and cost-effective for the user. When others focus on prototyping, we are already ready for production. Very soon, our users will build successful products and go to market. At this point, I anticipate a great need for a reliable vector store. Qdrant will be there for LangChain and the entire community.""*
+
+
+
+Whether you are building a bank fraud-detection system, RAG for e-commerce, or services for the federal government - you will need to leverage a scalable architecture for your product. Qdrant offers different features to help you considerably increase your application’s performance and lower your hosting costs.
+
+
+
+> Read more about how we foster [best practices for large-scale deployments](/articles/multitenancy/).
+
+
+
+## Next Steps
+
+
+
+Now that you know how Qdrant and LangChain can elevate your setup - it's time to try us out.
+
+
+
+- Qdrant is open source and you can [quickstart locally](/documentation/quick-start/), [install it via Docker](/documentation/quick-start/), or [deploy it to Kubernetes](https://github.com/qdrant/qdrant-helm/).
+
+
+
+- We also offer [a free-tier of Qdrant Cloud](https://cloud.qdrant.io/) for prototyping and testing.
+
+
+
+- For best integration with LangChain, read the [official LangChain documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant/).
+
+
+
+- For all other cases, [Qdrant documentation](/documentation/integrations/langchain/) is the best place to start.
+
+
+
+> We offer additional support tailored to your business needs. [Contact us](https://qdrant.to/contact-us) to learn more about implementation strategies and integrations that suit your company.
+
+
+
+
+
+
+",blog/using-qdrant-and-langchain.md
+"---
+
+draft: false
+
+title: Qdrant supports ARM architecture!
+
+slug: qdrant-supports-arm-architecture
+
+short_description: Qdrant announces ARM architecture support, expanding
+
+ accessibility and performance for their advanced data indexing technology.
+
+description: Qdrant's support for ARM architecture marks a pivotal step in
+
+ enhancing accessibility and performance. This development optimizes data
+
+ indexing and retrieval.
+
+preview_image: /blog/from_cms/docker-preview.png
+
+date: 2022-09-21T09:49:53.352Z
+
+author: Kacper Łukawski
+
+featured: false
+
+tags:
+
+ - Vector Search
+
+ - Vector Search Engine
+
+ - Embedding
+
+ - Neural Networks
+
+ - Database
+
+---
+
+The processor architecture is something the end user typically does not care much about, as long as all the applications they use run smoothly. If you use a PC, chances are you have an x86-based device, while your smartphone most likely runs on an ARM processor. In 2020 Apple introduced their ARM-based M1 chip, which is used in modern Mac devices, including notebooks. The main differences between those two architectures are the set of supported instructions and energy consumption. ARM processors have far better energy efficiency and are cheaper than their x86 counterparts. That’s why they have become available as an affordable alternative from hosting providers, including the cloud.
+
+
+
+![](/blog/from_cms/1_seaglc6jih2qknoshqbf1q.webp ""An image generated by Stable Diffusion with a query “two computer processors fighting against each other”"")
+
+
+
+In order to make an application available for ARM users, it has to be compiled for that platform. Otherwise, it has to be emulated by the device, which adds overhead and reduces performance. We decided to provide [Docker images](https://hub.docker.com/r/qdrant/qdrant/) targeted especially at ARM users. Of course, using a limited set of processor instructions may impact the performance of your vector search, and that’s why we decided to test both architectures using a similar setup.
+
+
+
+## Test environments
+
+
+
+AWS offers ARM-based EC2 instances that are 20% cheaper than the corresponding x86 alternatives with a similar configuration. That estimate was done for the eu-central-1 region (Frankfurt) and the R6g/R6i instance families. For the purposes of this comparison, we used an r6i.large instance (Intel Xeon) and compared it to an r6g.large one (AWS Graviton2). Both setups have 2 vCPUs and 16 GB of memory, and these were the smallest comparable instances available.
+
+
+
+## The results
+
+
+
+For the purposes of this test, we created some random vectors which were compared with cosine distance.
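+
+Conceptually, the test boils down to something like the sketch below. This is a hedged reconstruction rather than the exact script we used; the vector size, counts, and collection name are illustrative.
+
+```python
+# Hedged sketch of the benchmark: random vectors, cosine distance, repeated searches.
+# Dimensions, counts, and the collection name are illustrative, not the original setup.
+import numpy as np
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url='http://localhost:6333')
+client.recreate_collection(
+    collection_name='bench',
+    vectors_config=models.VectorParams(size=256, distance=models.Distance.COSINE),
+)
+
+client.upload_collection(
+    collection_name='bench',
+    vectors=np.random.rand(10_000, 256).tolist(),
+)
+
+for query in np.random.rand(1_000, 256):
+    client.search(collection_name='bench', query_vector=query.tolist(), limit=10)
+```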
+
+
+
+### Vector search
+
+
+
+During our experiments, we performed 1000 search operations for both ARM64 and x86-based setups. We didn’t measure the network overhead, only the time measurements returned by the engine in the API response. The chart below shows the distribution of that time, separately for each architecture.
+
+
+
+![](/blog/from_cms/1_zvuef4ri6ztqjzbsocqj_w.webp ""The latency distribution of search requests: arm vs x86"")
+
+
+
+It seems that ARM64 might be an interesting alternative if you are on a budget. It is 10% slower on average, and 20% slower on the median, but the performance is more consistent. It seems like it won’t be randomly 2 times slower than the average, unlike x86. That makes ARM64 a cost-effective way of setting up vector search with Qdrant, keeping in mind it’s 20% cheaper on AWS. You do get less for less, but surprisingly more than expected.",blog/qdrant-supports-arm-architecture.md
+"---
+
+draft: false
+
+title: Advancements and Challenges in RAG Systems - Syed Asad | Vector Space Talks
+
+slug: rag-advancements-challenges
+
+short_description: Syed Asad talked about advanced rag systems and multimodal AI
+
+ projects, discussing challenges, technologies, and model evaluations in the
+
+ context of their work at Kiwi Tech.
+
+description: Syed Asad unfolds the challenges of developing multimodal RAG
+
+ systems at Kiwi Tech, detailing the balance between accuracy and
+
+ cost-efficiency, and exploring various tools and approaches like GPT 4 and
+
+ Mixtral to enhance family tree apps and financial chatbots while navigating
+
+ the hurdles of data privacy and infrastructure demands.
+
+preview_image: /blog/from_cms/syed-asad-cropped.png
+
+date: 2024-04-11T22:25:00.000Z
+
+author: Demetrios Brinkmann
+
+featured: false
+
+tags:
+
+ - Vector Search
+
+ - Retrieval Augmented Generation
+
+ - Generative AI
+
+ - KiwiTech
+
+---
+
+> *""The problem with many of the vector databases is that they work fine, they are scalable. This is common. The problem is that they are not easy to use. So that is why I always use Qdrant.”*\
+
+— Syed Asad
+
+>
+
+
+
+Syed Asad is an accomplished AI/ML Professional, specializing in LLM Operations and RAGs. With a focus on Image Processing and Massive Scale Vector Search Operations, he brings a wealth of expertise to the field. His dedication to advancing artificial intelligence and machine learning technologies has been instrumental in driving innovation and solving complex challenges. Syed continues to push the boundaries of AI/ML applications, contributing significantly to the ever-evolving landscape of the industry.
+
+
+
+***Listen to the episode on [Spotify](https://open.spotify.com/episode/4Gm4TQsO2PzOGBp5U6Cj2e?si=JrG0kHDpRTeb2gLi5zdi4Q), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/RVb6_CI7ysM?si=8Hm7XSWYTzK6SRj0).***
+
+
+
+
+
+
+
+
+
+
+
+## **Top takeaways:**
+
+
+
+Prompt engineering is the new frontier in AI. Let’s find out how critical its role is in controlling AI language models. In this episode, Demetrios and Syed get to discuss it.
+
+
+
+Syed also explores retrieval augmented generation systems and machine learning technology at Kiwi Tech. This episode showcases the challenges and advancements in AI applications across various industries.
+
+
+
+Here are the highlights from this episode:
+
+
+
+1. **Digital Family Tree:** Learn about the family tree app project that brings the past to life through video interactions with loved ones long gone.
+
+2. **Multimodal Mayhem:** Discover the complexities of creating AI systems that can understand diverse accents and overcome transcription tribulations – all while being cost-effective!
+
+3. **The Perfect Match:** Find out how semantic chunking is revolutionizing job matching in radiology and why getting the context right is non-negotiable.
+
+4. **Quasar's Quantum Leap:** Syed shares the inside scoop on Quasar, a financial chatbot, and the AI magic that makes it tick.
+
+5. **The Privacy Paradox:** Delve into the ever-present conflict between powerful AI outcomes and the essential quest to preserve data privacy.
+
+
+
+> Fun Fact: Syed Asad and his team at Kiwi Tech use a GPU-based approach with GPT 4 for their AI system named Quasar, addressing challenges like temperature control and mitigating hallucinatory responses.
+
+>
+
+
+
+## Show notes:
+
+
+
+00:00 Clients seek engaging multimedia apps over chatbots.\
+
+06:03 Challenges in multimodal rags: accent, transcription, cost.\
+
+08:18 AWS credits crucial, but costs skyrocket quickly.\
+
+10:59 Accurate procedures crucial, Qdrant excels in search.\
+
+14:46 Embraces AI for monitoring and research.\
+
+19:47 Seeking insights on ineffective marketing models and solutions.\
+
+23:40 GPT 4 useful, prompts need tracking tools\
+
+25:28 Discussing data localization and privacy, favoring Ollama.\
+
+29:21 Hallucination control and pricing are major concerns.\
+
+32:47 DeepEval, AI testing, LLM, potential, open source.\
+
+35:24 Filter for appropriate embedding model based on use case and size.
+
+
+
+## More Quotes from Syed:
+
+
+
+*""Qdrant has the ease of use. I have trained people in my team who specializes with Qdrant, and they were initially using Weaviate and Pinecone.”*\
+
+— Syed Asad
+
+
+
+*""What's happening nowadays is that the clients or the projects in which I am particularly working on are having more of multimedia or multimodal approach. They want their apps or their LLM apps to be more engaging rather than a mere chatbot.”*\
+
+— Syed Asad
+
+
+
+*""That is where the accuracy matters the most. And in this case, Qdrant has proved just commendable in giving excellent search results.”*\
+
+— Syed Asad in Advancements in Medical Imaging Search
+
+
+
+## Transcript:
+
+Demetrios:
+
+What is up, good people? How y'all doing? We are back for yet another vector space talks. I'm super excited to be with you today because we're gonna be talking about rags and rag systems. And from the most basic naive rag all the way to the most advanced rag, we've got it covered with our guest of honor, Asad. Where are you at, my man? There he is. What's going on, dude?
+
+
+
+Syed Asad:
+
+Yeah, everything is fine.
+
+
+
+Demetrios:
+
+Excellent, excellent. Well, I know we were talking before we went live, and you are currently in India. It is very late for you, so I appreciate you coming on here and doing this with us. You are also, for those who do not know, a senior engineer for AI and machine learning at Kiwi Tech. Can you break down what Kiwi tech is for us real fast?
+
+
+
+Syed Asad:
+
+Yeah, sure. Absolutely. So Kiwi tech is actually a software development, was actually a software development company focusing on software development, iOS and mobile apps. And right now we are in all focusing more on generative AI, machine learning and computer vision projects. So I am heading the AI part here. So. And we are having loads of projects here with, from basic to advanced rags, from naive to visual rags. So basically I'm doing rag in and out from morning to evening.
+
+
+
+Demetrios:
+
+Yeah, you can't get away from it, huh? Man, that is great.
+
+
+
+Syed Asad:
+
+Everywhere there is rag. Even, even the machine learning part, which was previously done by me, is all now into rags engineered AI. Yeah. Machine learning is just at the background now.
+
+
+
+Demetrios:
+
+Yeah, yeah, yeah. It's funny, I understand the demand for it because people are trying to see where they can get value in their companies with the new generative AI advancements.
+
+
+
+Syed Asad:
+
+Yeah.
+
+
+
+Demetrios:
+
+So I want to talk a lot about advance rags, considering the audience that we have. I would love to hear about the visual rags also, because that sounds very exciting. Can we start with the visual rags and what exactly you are doing, what you're working on when it comes to that?
+
+
+
+Syed Asad:
+
+Yeah, absolutely. So initially when I started working, so you all might be aware with the concept of frozen rags, the normal and the basic rag, there is a text retrieval system. You just query your data and all those things. So what is happening nowadays is that the clients or the projects in which I am particularly working on are having more of multimedia or multimodal approach. So that is what is happening. So they want their apps or their LLM apps to be more engaging rather than a mere chatbot. Because. Because if we go on to the natural language or the normal english language, I mean, interacting by means of a video or interacting by means of a photo, like avatar, generation, anything like that.
+
+
+
+Syed Asad:
+
+So that has become more popular or, and is gaining more popularity. And if I talk about, specifically about visual rags. So the projects which I am working on is, say, for example, say, for example, there is a family tree type of app in which. In which you have an account right now. So, so you are recording day videos every day, right? Like whatever you are doing, for example, you are singing a song, you're walking in the park, you are eating anything like that, and you're recording those videos and just uploading them on that app. But what do you want? Like, your future generations can do some sort of query, like what, what was my grandfather like? What was my, my uncle like? Anything my friend like. And it was, it is not straight, restricted to a family. It can be friends also.
+
+
+
+Syed Asad:
+
+Anyway, so. And these are all us based projects, not indian based projects. Okay, so, so you, you go in query and it returns a video about your grandfather who has already died. He has not. You can see him speaking about that particular thing. So it becomes really engaging. So this is something which is called visual rag, which I am working right now on this.
+
+
+
+Demetrios:
+
+I love that use case. So basically it's, I get to be closer to my family that may or may not be here with us right now because the rag can pull writing that they had. It can pull video of other family members talking about it. It can pull videos of when my cousin was born, that type of stuff.
+
+
+
+Syed Asad:
+
+Anything, anything from cousin to family. You can add any numbers of members of your family. You can give access to any number of people who can have after you, after you're not there, like a sort of a nomination or a delegation live up thing. So that is, I mean, actually, it is a very big project, involves multiple transcription models, video transcription models. It also involves actually the databases, and I'm using Qdrant, proud of it. So, in that, so. And Qdrant is working seamlessly in that. So, I mean, at the end there is a vector search, but at the background there is more of more of visual rag, and people want to communicate through videos and photos.
+
+
+
+Syed Asad:
+
+So that is coming into picture more.
+
+
+
+Demetrios:
+
+Well, talk to me about multimodal rag. And I know it's a bit of a hairy situation because if you're trying to do vector search with videos, it can be a little bit more complicated than just vector search with text. Right. So what are some of the unique challenges that you've seen when it comes to multimodal rag?
+
+
+
+Syed Asad:
+
+The first challenge dealing with multimodal rags is actually the accent, because it can be varying accent. The problem with the transcription, one of the problems or the challenges which I have faced in this is that lack of proper transcription models, if you are, if you are able to get a proper transcription model, then if that, I want to deploy that model in the cloud, say for example, an AWS cloud. So that AWS cloud is costing heavy on the pockets. So managing infra is one of the part. I mean, I'm talking in a, in a, in a highly scalable production environment. I'm not talking about a research environment in which you can do anything on a collab notebook and just go with that. So whenever it comes to the client part or the delivery part, it becomes more critical. And even there, there were points then that we have to entirely overhaul the entire approach, which was working very fine when we were doing it on the dev environment, like the openais whisper.
+
+
+
+Syed Asad:
+
+We started with that OpenAI's whisper. It worked fine. The transcription was absolutely fantastic. But we couldn't go into the production.
+
+
+
+Demetrios:
+
+Part with that because it was too, the word error rate was too high, or because it was too slow. What made it not allow you to go into production?
+
+
+
+Syed Asad:
+
+It was, the word error rate was also high. It was very slow when it was being deployed on an AWS instance. And the thing is that the costing part, because usually these are startups, or mid startup, if I talk about the business point of view, not the tech point of view. So these companies usually offer these type of services for free, and on the basis of these services they try to raise funding. So they want something which is actually optimized, optimizing their cost as well. So what I personally feel, although AWS is massively scalable, but I don't prefer AWS at all until, unless there are various other options coming out, like salad. I had a call, I had some interactions with Titan machine learning also, but it was also fine. But salad is one of the best as of now.
+
+
+
+Demetrios:
+
+Yeah. Unless you get that free AWS credits from the startup program, it can get very expensive very quickly. And even if you do have the free AWS credits, it still gets very expensive very quickly. So I understand what you're saying is basically it was unusable because of the cost and the inability to figure out, it was more of a product problem if you could figure out how to properly monetize it. But then you had technical problems like word error rate being really high, the speed and latency was just unbearable. I can imagine. So unless somebody makes a query and they're ready to sit around for a few minutes and let that query come back to you, with a video or some documents, whatever it may be. Is that what I'm understanding on this? And again, this is for the family tree use case that you're talking about.
+
+
+
+Syed Asad:
+
+Yes, family tree use case. So what was happening in that, in that case is a video is uploaded, it goes to the admin for an approval actually. So I mean you can, that is where we, they were restricting the costing part as far as the project was concerned. It's because you cannot upload any random videos and they will select that. Just some sort of moderation was also there, as in when the admin approves those videos, that videos goes on to the transcription pipeline. They are transcripted via an, say a video to text model like the open eyes whisper. So what was happening initially, all the, all the research was done with Openais, but at the end when deployment came, we have to go with deep Gram and AssemblyAI. That was the place where these models were excelling far better than OpenAI.
+
+
+
+Syed Asad:
+
+And I'm a big advocate of open source models, so also I try to leverage those, but it was not pretty working in production environment.
+
+
+
+Demetrios:
+
+Fascinating. So you had that, that's one of your use cases, right? And that's very much the multimodal rag use case. Are all of your use cases multimodal or did you have, do you have other ones too?
+
+
+
+Syed Asad:
+
+No, all are not multimodal. There are few multimodal, there are few text based on naive rag also. So what, like for example, there is one use case coming which is sort of a job search which is happening. A job search for a radiology, radiology section. I mean a very specialized type of client it is. And they're doing some sort of job search matching the modalities and procedures. And it is sort of a temporary job. Like, like you have two shifts ready, two shifts begin, just some.
+
+
+
+Syed Asad:
+
+So, so that is, that is very critical when somebody is putting their procedures or what in. Like for example, they, they are specializing in x rays in, in some sort of medical procedures and that is matching with the, with the, with the, with the employers requirement. So that is where the accuracy matters the most. Accurate. And in this case, Qdrant has proved just commendable in giving excellent search results. The other way around is that in this case is there were some challenges related to the quality of results also because. So progressing from frozen rack to advanced rag like adopting methods like re ranking, semantic chunking. I have, I have started using semantic chunking.
+
+
+
+Syed Asad:
+
+So it has proved very beneficial as far as the quality of results is concerned.
+
+
+
+Demetrios:
+
+Well, talk to me more about. I'm trying to understand this use case and why a rag is useful for the job matching. You have doctors who have specialties and they understand, all right, they're, maybe it's an orthopedic surgeon who is very good at a certain type of surgery, and then you have different jobs that come online. They need to be matched with those different jobs. And so where does the rag come into play? Because it seems like it could be solved with machine learning as opposed to AI.
+
+
+
+Syed Asad:
+
+Yeah, it could have been solved through machine learning, but the type of modalities that are, the type of, say, the type of jobs which they were posting are too much specialized. So it needed some sort of contextual matching also. So there comes the use case for the rag. In this place, the contextual matching was required. Initially, an approach for machine learning was on the table, but it was done with, it was not working.
+
+
+
+Demetrios:
+
+I get it, I get it. So now talk to me. This is really important that you said accuracy needs to be very high in this use case. How did you make sure that the accuracy was high? Besides the, I think you said chunking, looking at the chunks, looking at how you were doing that, what were some other methods you took to make sure that the accuracy was high?
+
+
+
+Syed Asad:
+
+I mean, as far as the accuracy is concerned. So what I did was that my focus was on the embedding model, actually when I started with what type of embed, choice of embedding model. So initially my team started with open source model available readily on hugging face, looking at some sort of leaderboard metrics, some sort of model specializing in medical, say, data, all those things. But even I was curious that the large language, the embedding models which were specializing in medical data, they were also not returning good results and they were mismatching. When, when there was a tabular format, I created a visualization in which the cosine similarity of various models were compared. So all were lagging behind until I went ahead with cohere. Cohere re rankers. They were the best in that case, although they are not trained on that.
+
+
+
+Syed Asad:
+
+And just an API call was required rather than loading that whole model onto the local.
+
+
+
+Demetrios:
+
+Interesting. All right. And so then were you doing certain types, so you had the cohere re ranker that gave you a big up. Were you doing any kind of monitoring of the output also, or evaluation of the output and if so, how?
+
+
+
+Syed Asad:
+
+Yes, for evaluation, for monitoring we readily use arrays AI, because I am a, I'm a huge advocate of Llama index also because it has made everything so easier versus lang chain. I mean, if I talk about my personal preference, not regarding any bias, because I'm not linked with anybody, I'm not promoting it here, but they are having the best thing which I write, I like about Llama index and why I use it, is that anything which is coming into play as far as the new research is going on, like for example, a recent research paper was with the raft retrieval augmented fine tuning, which was released by the Microsoft, and it is right now available on archive. So barely few days after they just implemented it in the library, and you can readily start using it rather than creating your own structure. So, yeah, so it was. So one of my part is that I go through the research papers first, then coming on to a result. So a research based approach is required in actually selecting the models, because every day there is new advancement going on in rags and you cannot figure out what is, what would be fine for you, and you cannot do hit and trial the whole day.
+
+
+
+Demetrios:
+
+Yes, that is a great point. So then if we break down your tech stack, what does it look like? You're using Llama index, you're using arise for the monitoring, you're using Qdrant for your vector database. You have the, you have the coherent re ranker, you are using GPT 3.5.
+
+
+
+Syed Asad:
+
+No, it's GPT 4, not 3.5.
+
+
+
+Demetrios:
+
+You needed to go with GPT 4 because everything else wasn't good enough.
+
+
+
+Syed Asad:
+
+Yes, because one of the context length was one of the most things. But regarding our production, we have been readily using since the last one and a half months. I have been readily using Mixtril. I have been. I have been using because there's one more challenge coming onto the rack, because there's one more I'll give, I'll give you an example of one more use case. It is the I'll name the project also because I'm allowed by my company. It is a big project by the name of Quasar markets. It is a us based company and they are actually creating a financial market type of check chatbot.
+
+
+
+Syed Asad:
+
+Q u a s a r, quasar. You can search it also, and they give you access to various public databases also, and some paid databases also. They have a membership plan. So we are entirely handling the front end backend. I'm not handling the front end and the back end, I'm handling the AI part in that. So one of the challenges is the inference, timing, the timing in which the users are getting queries when it is hitting the database. Say for example, there is a database publicly available database called Fred of us government. So when user can select in that app and go and select the Fred database and want to ask some questions regarding that.
+
+
+
+Syed Asad:
+
+So that is in this place there is no vectors, there are no vector databases. It is going without that. So we are following some keyword approach. We are extracting keywords, classifying the queries in simple or complex, then hitting it again to the database, sending it on the live API, getting results. So there are multiple hits going on. So what happened? This all multiple hits which were going on. They reduced the timing and I mean the user experience was being badly affected as the time for the retrieval has gone up and user and if you're going any query and inputting any query it is giving you results in say 1 minute. You wouldn't be waiting for 1 minute for a result.
+
+
+
+Demetrios:
+
+Not at all.
+
+
+
+Syed Asad:
+
+So this is one of the challenge for a GPU based approach. And in, in the background everything was working on GPT 4 even, not 3.5. I mean the costliest.
+
+
+
+Demetrios:
+
+Yeah.
+
+
+
+Syed Asad:
+
+So, so here I started with the LPU approach, the Grok. I mean it's magical.
+
+
+
+Demetrios:
+
+Yeah.
+
+
+
+Syed Asad:
+
+I have been implementing proc since the last many days and it has been magical. The chatbots are running blazingly fast but there are some shortcomings also. You cannot control the temperature if you have lesser control on hallucination. That is one of the challenges which I am facing. So that is why I am not able to deploy Grok into production right now. Because hallucination is one of the concern for the client. Also for anybody who is having, who wants to have a rag on their own data, say, or AI on their own data, they won't, they won't expect you, the LLM, to be creative. So that is one of the challenges.
+
+
+
+Syed Asad:
+
+So what I found that although many of the tools that are available in the market right now day in and day out, there are more researches. But most of the things which are coming up in our feeds or more, I mean they are coming as a sort of a marketing gimmick. They're not working actually on the ground.
+
+
+
+Demetrios:
+
+Tell me, tell me more about that. What other stuff have you tried that's not working? Because I feel that same way. I've seen it and I also have seen what feels like some people, basically they release models for marketing purposes as opposed to actual valuable models going out there. So which ones? I mean Grok, knowing about Grok and where it excels and what some of the downfalls are is really useful. It feels like this idea of temperature being able to control the knob on the temperature and then trying to decrease the hallucinations is something that is fixable in the near future. So maybe it's like months that we'll have to deal with that type of thing for now. But I'd love to hear what other things you've tried that were not like you thought they were going to be when you were scrolling Twitter or LinkedIn.
+
+
+
+Syed Asad:
+
+Should I name them?
+
+
+
+Demetrios:
+
+Please. So we all know we don't have to spend our time on them.
+
+
+
+Syed Asad:
+
+I'll start with OpenAI. The clients don't like GPT 4 to be used in there just because the primary concern is the cost. Secondary concern is the data privacy. And the third is that, I mean, I'm talking from the client's perspective, not the tech stack perspective.
+
+
+
+Demetrios:
+
+Yeah, yeah, yeah.
+
+
+
+Syed Asad:
+
+They consider OpenAI as a more of a marketing gimmick. Although GPT 4 gives good results. I'm, I'm aware of that, but the clients are not in favor. But the thing is that I do agree that GPT 4 is still the king of llms right now. So they have no option, no option to get the better, better results. But Mixtral is performing very good as far as the hallucinations are concerned. Just keeping the parameter temperature is equal to zero in a python code does not makes the hallucination go off. It is one of my key takeaways.
+
+
+
+Syed Asad:
+
+I have been bogging my head. Just. I'll give you an example, a chat bot. There is a, there's one of the use case in which is there's a big publishing company. I cannot name that company right now. And they want the entire system of books since the last 2025 years to be just converted into a rack pipeline. And the people got query. The.
+
+
+
+Syed Asad:
+
+The basic problem which I was having is handling a hello. When a user types hello. So when you type in hello, it.
+
+
+
+Demetrios:
+
+Gives you back a book.
+
+
+
+Syed Asad:
+
+It gives you back a book even. It is giving you back sometimes. Hello, I am this, this, this. And then again, some information. What you have written in the prompt, it is giving you everything there. I will answer according to this. I will answer according to this. So, so even if the temperature is zero inside the code, even so that, that included lots of prompt engineering.
+
+
+
+Syed Asad:
+
+So prompt engineering is what I feel is one of the most important trades which will be popular, which is becoming popular. And somebody is having specialization in prompt engineering. I mean, they can control the way how an LLM behaves because it behaves weirdly. Like in this use case, I was using croc and Mixtral. So to control Mixtral in such a way. It was heck lot of work, although it, we made it at the end, but it was heck lot of work in prompt engineering part.
+
+
+
+Demetrios:
+
+And this was, this was Mixtral large.
+
+
+
+Syed Asad:
+
+Mixtral, seven bits, eight by seven bits.
+
+
+
+Demetrios:
+
+Yeah. I mean, yeah, that's the trade off that you have to deal with. And it wasn't fine tuned at all.
+
+
+
+Syed Asad:
+
+No, it was not fine tuned because we were constructing a rack pipeline, not a fine tuned application, because right now, right now, even the customers are not interested in getting a fine tune model because it cost them and they are more interested in a contextual, like a rag contextual pipeline.
+
+
+
+Demetrios:
+
+Yeah, yeah. Makes sense. So basically, this is very useful to think about. I think we all understand and we've all seen that GPT 4 does best if we can. We want to get off of it as soon as possible and see how we can, how far we can go down the line or how far we can go on the difficulty spectrum. Because as soon as you start getting off GPT 4, then you have to look at those kind of issues with like, okay, now it seems to be hallucinating a lot more. How do I figure this out? How can I prompt it? How can I tune my prompts? How can I have a lot of prompt templates or a prompt suite to make sure that things work? And so are you using any tools for keeping track of prompts? I know there's a ton out there.
+
+
+
+Syed Asad:
+
+We initially started with the parameter efficient fine tuning for prompts, but nothing is working 100% interesting. Nothing works 100% it is as far as the prompting is concerned. It goes on to a hit and trial at the end. Huge wastage of time in doing prompt engineering. Even if you are following the exact prompt template given on the hugging face given on the model card anywhere, it will, it will behave, it will act, but after some time.
+
+
+
+Demetrios:
+
+Yeah, yeah.
+
+
+
+Syed Asad:
+
+But mixed well. Is performing very good. Very, very good. Mixtral eight by seven bits. That's very good.
+
+
+
+Demetrios:
+
+Awesome.
+
+
+
+Syed Asad:
+
+The summarization part is very strong. It gives you responses at par with GPT 4.
+
+
+
+Demetrios:
+
+Nice. Okay. And you don't have to deal with any of those data concerns that your customers have.
+
+
+
+Syed Asad:
+
+Yeah, I'm coming on to that only. So the next part was the data concern. So they, they want either now or in future the localization of llms. I have been doing it with readily, with Llama, CPP and Ollama. Right now. Ollama is very good. I mean, I'm a huge, I'm a huge fan of Ollama right now, and it is performing very good as far as the localization and data privacy is concerned because, because at the end what you are selling, it makes things, I mean, at the end it is sales. So even if the client is having data of the customers, they want to make their customers assure that the data is safe.
+
+
+
+Syed Asad:
+
+So that is with the localization only. So they want to gradually go into that place. So I want to bring here a few things. To summarize what I said, localization of llms is one of the concern right now is a big market. Second is quantization of models.
+
+
+
+Demetrios:
+
+Oh, interesting.
+
+
+
+Syed Asad:
+
+In quantization of models, whatever. So I perform scalar quantization and binary quantization, both using bits and bytes. I various other techniques also, but the bits and bytes was the best. Scalar quantization is performing better. Binary quantization, I mean the maximum compression or maximum lossy function is there, so it is not, it is, it is giving poor results. Scalar quantization is working very fine. It, it runs on CPU also. It gives you good results because whatever projects which we are having right now or even in the markets also, they are not having huge corpus of data right now, but they will eventually scale.
+
+
+
+Syed Asad:
+
+So they want something right now so that quantization works. So quantization is one of the concerns. People want to dodge aws, they don't want to go to AWS, but it is there. They don't have any other way. So that is why they want aws.
+
+
+
+Demetrios:
+
+And is that because of costs lock in?
+
+
+
+Syed Asad:
+
+Yeah, cost is the main part.
+
+
+
+Demetrios:
+
+Yeah. They understand that things can get out of hand real quick if you're using AWS and you start using different services. I think it's also worth noting that when you're using different services on AWS, it may be a very similar service. But if you're using sagemaker endpoints on AWS, it's like a lot more expensive than just an EKS endpoint.
+
+
+
+Syed Asad:
+
+Minimum cost for a startup, for just the GPU, bare minimum is minimum. $450. Minimum. It's $450 even without just on the testing phases or the development phases, even when it has not gone into production. So that gives a dent to the client also.
+
+
+
+Demetrios:
+
+Wow. Yeah. Yeah. So it's also, and this is even including trying to use like tranium or inferencia and all of that stuff. You know those services?
+
+
+
+Syed Asad:
+
+I know those services, but I've not readily tried those services. I'm right now in the process of trying salad also for inference, and they are very, very cheap right now.
+
+
+
+Demetrios:
+
+Nice. Okay. Yeah, cool. So if you could wave your magic wand and have something be different when it comes to your work, your day in, day out, especially because you've been doing a lot of rags, a lot of different kinds of rags, a lot of different use cases with, with rags. Where do you think you would get the biggest uptick in your performance, your ability to just do what you need to do? How could rags be drastically changed? Is it something that you say, oh, the hallucinations. If we didn't have to deal with those, that would make my life so much easier. I didn't have to deal with prompts that would make my life infinitely easier. What are some things like where in five years do you want to see this field be?
+
+
+
+Syed Asad:
+
+Yeah, you figured it right. The hallucination part is one of the concerns, or biggest concerns with the client when it comes to the rag, because what we see on LinkedIn and what we see on places, it gives you a picture that it, it controls hallucination, and it gives you answer that. I don't know anything about this, as mentioned in the context, but it does not really happen when you come to the production. It gives you information like you are developing a rag for a publishing company, and it is giving you. Where is, how is New York like, it gives you information on that also, even if you have control and everything. So that is one of the things which needs to be toned down. As far as the rag is concerned, pricing is the biggest concern right now, because there are very few players in the market as far as the inference is concerned, and they are just dominating the market with their own rates. So this is one of the pain points.
+
+
+
+Syed Asad:
+
+And the. I'll also want to highlight the popular vector databases. There are many Pinecone weaviate, many things. So they are actually, the problem with many of the vector databases is that they work fine. They are scalable. This is common. The problem is that they are not easy to use. So that is why I always use Qdrant.
+
+
+
+Syed Asad:
+
+Not because Qdrant is sponsoring me, not because I am doing a job with Qdrant, but Qdrant is having the ease of use. And it, I have, I have trained people in my team who specialize with Qdrant, and they were initially using Weaviate and Pinecone. I mean, you can do also store vectors in those databases, but it is not especially the, especially the latest development with Pine, sorry, with Qdrant is the fast embed, which they just now released. And it made my work a lot easier by using the ONNX approach rather than a Pytorch based approach, because there was one of the projects in which we were deploying embedding model on an AWS server and it was running continuously. And minimum utilization of ram is 6gb. Even when it is not doing any sort of vector embedding so fast. Embed has so Qdrant is playing a huge role, I should acknowledge them. And one more thing which I would not like to use is LAN chain.
+
+
+
+Syed Asad:
+
+I have been using it. So. So I don't want to use that language because it is not, it did not serve any purpose for me, especially in the production. It serves purpose in the research phase. When you are releasing any notebook, say you have done this and does that. It is not. It does not works well in production, especially for me. Llama index works fine, works well.
+
+
+
+Demetrios:
+
+You haven't played around with anything else, have you? Like Haystack or.
+
+
+
+Syed Asad:
+
+Yeah, haystack. Haystack. I have been playing out around, but haystack is lacking functionalities. It is working well. I would say it is working well, but it lacks some functionalities. They need to add more things as compared to Llama index.
+
+
+
+Demetrios:
+
+And of course, the hottest one on the block right now is DSPY. Right? Have you messed around with that at all?
+
+
+
+Syed Asad:
+
+DSPy, actually DSPY. I have messed with DSPY. But the thing is that DSPY is right now, I have not experimented with that in the production thing, just in the research phase.
+
+
+
+Demetrios:
+
+Yeah.
+
+
+
+Syed Asad:
+
+So, and regarding the evaluation part, DeepEval, I heard you might have a DeepEval. So I've been using that. It is because one of the, one of the challenges is the testing for the AI. Also, what responses are large language model is generating the traditional testers or the manual tester software? They don't know, actually. So there's one more vertical which is waiting to be developed, is the testing for AI. It has a huge potential. And DeepEval, the LLM based approach on testing is very, is working fine and is open source also.
+
+
+
+Demetrios:
+
+And that's the DeepEval I haven't heard.
+
+
+
+Syed Asad:
+
+Let me just tell you the exact spelling. It is. Sorry. It is DeepEval. D E E P. Deep eval. I can.
+
+
+
+Demetrios:
+
+Yeah. Okay. I know DeepEval. All right. Yeah, for sure. Okay. Hi. I for some reason was understanding D Eval.
+
+
+
+Syed Asad:
+
+Yeah, actually I was pronouncing it wrong.
+
+
+
+Demetrios:
+
+Nice. So these are some of your favorite, non favorite, and that's very good to know. It is awesome to hear about all of this. Is there anything else that you want to say before we jump off? Anything that you can, any wisdom you can impart on us for your rag systems and how you have learned the hard way? So tell us so we don't have to learn that way.
+
+
+
+Syed Asad:
+
+Just go. Don't go with the marketing. Don't go with the marketing. Do your own research. Hugging face is a good, I mean, just fantastic. The leaderboard, although everything does not work in the leaderboard, also say, for example, I don't, I don't know about today and tomorrow, today and yesterday, but there was a model from Salesforce, the embedding model from Salesforce. It is still topping charts, I think, in the, on the MTEB. MTEB leaderboard for the embedding models.
+
+
+
+Syed Asad:
+
+But you cannot use it in the production. It is way too huge to implement it. So what's the use? Mixed bread AI. The mixed bread AI, they are very light based, lightweight, and they, they are working fine. They're not even on the leaderboard. They were on the leaderboard, but they're right, they might not. When I saw they were ranking on around seven or eight on the leaderboard, MTEB leaderboard, but they were working fine. So even on the leaderboard thing, it does not works.
+
+
+
+Demetrios:
+
+And right now it feels a little bit like, especially when it comes to embedding models, you just kind of go to the leaderboard and you close your eyes and then you pick one of them. Have you figured out a way to better test these or do you just find one and then try and use it everywhere?
+
+
+
+Syed Asad:
+
+No, no, that is not the case. Actually, what I do is first try to find the embedding model based on my use case, like whether it is an embedding model suited to a medical use case. So I try to find that. But the second factor to filter on is the size of that embedding model. Because it has happened to me that we did entire research with embedding models and large language models, and then we had to remove everything at the production stage, and it all just went up in smoke.
+
+
+
+Syed Asad:
+
+So a lightweight embedding model. Especially, one thing which has started working well recently is the Cohere embedding models, and they have given a facility to call those embedding models in a quantized format, so that is also working. And FastEmbed, which is by Qdrant, these two things are working in production. I'm talking about production; for research you can do anything.
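+
+As a rough illustration of the lightweight, local route he describes, here is a minimal sketch using Qdrant's FastEmbed library; the model name is just an illustrative default, not necessarily the one Syed uses in production.
+
+```python
+from fastembed import TextEmbedding
+
+# A small default model; swap in whatever fits your use case and size budget.
+embedder = TextEmbedding(model_name='BAAI/bge-small-en-v1.5')
+
+docs = ['a lightweight embedding model', 'running locally in production']
+vectors = list(embedder.embed(docs))  # one numpy vector per document
+print(len(vectors), vectors[0].shape)
+```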
+
+
+
+Demetrios:
+
+Brilliant, man. Well, this has been great. I really appreciate it. Asad, thank you for coming on here and for anybody else that would like to come on to the vector space talks, just let us know. In the meantime, don't get lost in vector space. We will see you all later. Have a great afternoon. Morning, evening, wherever you are.
+
+
+
+Demetrios:
+
+Asad, you taught me so much, bro. Thank you.
+",blog/advancements-and-challenges-in-rag-systems-syed-asad-vector-space-talks-021.md
+"---
+
+draft: false
+
+title: Talk with YouTube without paying a cent - Francesco Saverio Zuppichini |
+
+ Vector Space Talks
+
+slug: youtube-without-paying-cent
+
+short_description: A sneak peek into the tech world as Francesco shares his
+
+ ideas and processes on coding innovative solutions.
+
+description: Francesco Zuppichini outlines the process of converting YouTube
+
+ video subtitles into searchable vector databases, leveraging tools like
+
+ YouTube DL and Hugging Face, and addressing the challenges of coding without
+
+ conventional frameworks in machine learning engineering.
+
+preview_image: /blog/from_cms/francesco-saverio-zuppichini-bp-cropped.png
+
+date: 2024-03-27T12:37:55.643Z
+
+author: Demetrios Brinkmann
+
+featured: false
+
+tags:
+
+ - embeddings
+
+ - LLMs
+
+ - Retrieval Augmented Generation
+
+ - Ollama
+
+---
+
+> *""Now I do believe that Qdrant, I'm not sponsored by Qdrant, but I do believe it's the best one for a couple of reasons. And we're going to see them mostly because I can just run it on my computer so it's full private and I'm in charge of my data.”*\
+
+-- Francesco Saverio Zuppichini
+
+>
+
+
+
+Francesco Saverio Zuppichini is a Senior Full Stack Machine Learning Engineer at Zurich Insurance with experience in both large corporations and startups of various sizes. He is passionate about sharing knowledge and building communities, and is known as a skilled practitioner in computer vision. He is proud of the community he built because of all the amazing people he got to know.
+
+
+
+***Listen to the episode on [Spotify](https://open.spotify.com/episode/7kVd5a64sz2ib26IxyUikO?si=mrOoVP3ISQ22kXrSUdOmQA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/56mFleo06LI).***
+
+
+
+
+
+
+
+
+
+
+
+## **Top takeaways:**
+
+
+
+Curious about transforming YouTube content into searchable elements? Francesco Zuppichini unpacks the journey of coding a RAG by using subtitles as input, harnessing technologies like YouTube DL, Hugging Face, and Qdrant, while debating framework reliance and the fine art of selecting the right software tools.
+
+
+
+Here are some insights from this episode:
+
+
+
+1. **Behind the Code**: Francesco unravels how to create a RAG using YouTube videos. Get ready to geek out on the nuts and bolts that make this magic happen.
+
+2. **Vector Voodoo**: Ever wonder how embedding vectors carry out their similarity searches? Francesco's got you covered with his brilliant explanation of vector databases and the mind-bending distance method that seeks out those matches.
+
+3. **Function over Class**: The debate is as old as stardust. Francesco shares why he prefers using functions over classes for better code organization and demonstrates how this approach solidifies when running language models with Ollama.
+
+4. **Metadata Magic**: Find out how metadata isn't just a sidekick but plays a pivotal role in the realm of Qdrant and RAGs. Learn why Francesco values metadata as payload and the challenges it presents in developing domain-specific applications.
+
+5. **Tool Selection Tips**: Deciding on the right software tool can feel like navigating an asteroid belt. Francesco shares his criteria—ease of installation, robust documentation, and a little help from friends—to ensure a safe landing.
+
+
+
+> Fun Fact: Francesco confessed that his code for chunking subtitles was ""a little bit crappy"" because of laziness—proving that even pros take shortcuts to the stars now and then.
+
+>
+
+
+
+## Show notes:
+
+
+
+00:00 Intro to Francesco\
+
+05:36 Create YouTube rack for data retrieval.\
+
+09:10 Local web dev showcase without frameworks effectively.\
+
+11:12 Qdrant: converting video text to vectors.\
+
+13:43 Connect to vectordb, specify config, keep it simple.\
+
+17:59 Recreate, compare vectors, filter for right matches.\
+
+21:36 Use functions and share states for simpler coding.\
+
+29:32 Gemini Pro generates task-based outputs effectively.\
+
+32:36 Good documentation shows pride in the product.\
+
+35:38 Organizing different data types in separate collections.\
+
+38:36 Proactive approach to understanding code and scalability.\
+
+42:22 User feedback and statistics evaluation is crucial.\
+
+44:09 Consider user needs for chatbot accuracy and relevance.
+
+
+
+## More Quotes from Francesco:
+
+
+
+*""So through Docker, using Docker compose, very simple here I just copy and paste the configuration for the Qdrant documentation. I run it and when I run it I also get a very nice looking interface.*”\
+
+-- Francesco Saverio Zuppichini
+
+
+
+*""It's a very easy way to debug stuff because if you see a lot of vectors from the same document in the same place, maybe your chunking is not doing a great job because maybe you have some too much kind of overlapping on the recent bug in your code in which you have duplicate chunks. Okay, so we have our vector DB running. Now we need to do some setup stuff. So very easy to do with Qdrant. You just need to get the Qdrant client.”*\
+
+-- Francesco Saverio Zuppichini
+
+
+
+*""So straightforward, so useful. A lot of people, they don't realize that types are very useful. So kudos to the Qdrant team to actually make all the types very nice.”*\
+
+-- Francesco Saverio Zuppichini
+
+
+
+## Transcript:
+
+Demetrios:
+
+Folks, welcome to another vector space talks. I'm excited to be here and it is a special day because I've got a co host with me today. Sabrina, what's going on? How you doing?
+
+
+
+Sabrina Aquino:
+
+Let's go. Thank you so much, Demetrios, for having me here. I've always wanted to participate in vector space talks. Now it's finally my chance. So thank you so much.
+
+
+
+Demetrios:
+
+Your dream has come true and what a day for it to come true because we've got a special guest today. While we've got you here, Sabrina, I know you've been doing some excellent stuff on the Internet when it comes to other ways to engage with the Qdrant community. Can you break that down real fast before we jump into this?
+
+
+
+Sabrina Aquino:
+
+Absolutely. I think an announcement here is we're hosting our first discord office hours. We're going to be answering all your questions about Qdrant with Qdrant team members, where you can interact with us, with our community as well. And we're also going to be dropping a few insights on the next Qdrant release 1.8. So that's super exciting and also, we are. Sorry, I just have another thing going on here on the live.
+
+
+
+Demetrios:
+
+Music got in your ear.
+
+
+
+Sabrina Aquino:
+
+We're also having the vector voices on Twitter, the X Spaces roundtable, where we bring experts to talk about a topic with our team. And you can also jump in and ask questions on the AMA. So that's super exciting as well. And, yeah, see you guys there. And I'll drop a link of the discord in the comments so you guys can join our community and be a part of it.
+
+
+
+Demetrios:
+
+Exactly what I was about to say. So without further ado, let's bring on our guest of honor, Mr. Where are you at, dude?
+
+
+
+Francesco Zuppichini:
+
+Hi. Hello. How are you?
+
+
+
+Demetrios:
+
+I'm great. How are you doing?
+
+
+
+Francesco Zuppichini:
+
+Great.
+
+
+
+Demetrios:
+
+I've been seeing you all around the Internet and I am very excited to be able to chat with you today. I know you've got a bit of stuff planned for us. You've got a whole presentation, right?
+
+
+
+Francesco Zuppichini:
+
+Correct.
+
+
+
+Demetrios:
+
+But for those that do not know you, you're a full stack machine learning engineer at Zurich Insurance. I think you also are very vocal and you are fun to follow on LinkedIn is what I would say. And we're going to get to that at the end after you give your presentation. But once again, reminder for everybody, if you want to ask questions, hit us up with questions in the chat. As far as going through his presentation today, you're going to be talking to us all about some really cool stuff about rags. I'm going to let you get into it, man. And while you're sharing your screen, I'm going to tell people a little bit of a fun fact about you. That you put ketchup on your pizza, which I think is a little bit sacrilegious.
+
+
+
+Francesco Zuppichini:
+
+Yes. So that's 100% true. And I hope that the italian pizza police is not listening to this call or I can be in real trouble.
+
+
+
+Demetrios:
+
+I think we just lost a few viewers there, but it's all good.
+
+
+
+Sabrina Aquino:
+
+Italy viewers just dropped out.
+
+
+
+Demetrios:
+
+Yeah, the Italians just dropped, but it's all good. We will cut that part out in post production, my man. I'm going to share your screen and I'm going to let you get after it. I'll be hanging around in case any questions pop up with Sabrina in the background. And here you go, bro.
+
+
+
+Francesco Zuppichini:
+
+Wonderful. So you can see my screen, right?
+
+
+
+Demetrios:
+
+Yes, for sure.
+
+
+
+Francesco Zuppichini:
+
+That's perfect. Okay, so today we're going to talk about talking with YouTube without paying a cent, no framework BS. So the goal of today is to showcase how to code a RAG given a YouTube video as input, without using any framework like Langchain, et cetera, et cetera. And I want to show you that it's straightforward, using a bunch of technologies and Qdrant as well. And you can do all of this without actually paying for any service. Right. So we are going to run our vector DB locally, and also the language model we are going to run on our machines.
+
+
+
+Francesco Zuppichini:
+
+And yeah, it's going to be a technical talk, so I will kind of guide you through the code. Feel free to interrupt me at any time if you have questions, if you want to ask why I did that, et cetera, et cetera. So very quickly, before we get started, I just want to introduce myself. So yeah, senior full stack machine learning engineer. That's just a bunch of funny words to basically say that I do a little bit of everything. So when I started working, I started as a computer vision engineer, I worked at PwC, then a bunch of startups, and now I sold my soul to insurance companies, working in insurance. And before, I was doing computer vision; now, due to ChatGPT and the hype around language models, I'm doing more of that.
+
+
+
+Francesco Zuppichini:
+
+But I'm always involved in bringing the full product together. So from zero to something that is deployed and running. So I always be interested in web dev. I can also do website servers, a little bit of infrastructure as well. So now I'm just doing a little bit of everything. So this is why there is full stack there. Yeah. Okay, let's get started to something a little bit more interesting than myself.
+
+
+
+Francesco Zuppichini:
+
+So our goal is to create a fully local YouTube RAG. And if you don't know what a RAG is, it's basically a system in which you take some data, in this case we are going to take subtitles from YouTube videos, and you're able to basically Q&A with your data. So you're able to use a language model, you ask questions, then we retrieve the relevant parts in the data that you provide, and hopefully you're going to get the right answer to your question. So let's talk about the technologies that we're going to use. To get the subtitles from a video, we're going to use YouTube DL, and YouTube DL is a library that is available through pip, so Python. I think at some point it was on GitHub and then I think it was removed because Google, they were a little bit difficult about that.
+
+
+
+Francesco Zuppichini:
+
+So then they realized it on GitHub. And now I think it's on GitHub again, but you can just install it through Pip and it's very cool.
+
+
+
+Demetrios:
+
+One thing, man, are you sharing a slide? Because all I see is your. I think you shared a different screen.
+
+
+
+Francesco Zuppichini:
+
+Oh, boy.
+
+
+
+Demetrios:
+
+I just see the video of you. There we go.
+
+
+
+Francesco Zuppichini:
+
+Entire screen. Yeah. I'm sorry. Thank you so much.
+
+
+
+Demetrios:
+
+There we go.
+
+
+
+Francesco Zuppichini:
+
+Wonderful. Okay, so in order to get the embeddings, so to translate from text to vectors, we're going to use Hugging Face, just an embedding model so we can actually get some vectors. Then as soon as we have our vectors, we need to store and search them, so we're going to use our beloved Qdrant to do so. We also need to keep a little bit of state, because we need to know which videos we have processed so we don't redo all the embedding and storing every time we see the same video. For this part, I'm just going to use SQLite, which is basically an SQL database in just a file. Very easy to use, very lightweight, and it's only on your computer, so it's safe.
+
+
+
+Francesco Zuppichini:
+
+To run the language model, we're going to use Ollama. That is a very simple and very well done way to just get a language model running on your computer. And you can also call it using the OpenAI Python library, because they have implemented the same endpoints as OpenAI, so it's super convenient, super easy to use. If you already have some code that is calling OpenAI, you can just run a different language model using Ollama and you basically just need to change two lines of code. So what we're going to do, basically, is I'm going to take a video. Here it's a video from Fireship IO.
+
+
+
+Francesco Zuppichini:
+
+We're going to run our command line and we're going to ask some questions. Now, in theory, you should be able to see my full screen. Yeah. So very quickly to showcase that to you, I already processed this video from that YouTube channel and I already have my command line here. So I can ask a question like, what is the context size of Gemma? And we're going to get the reply. Yeah. And here we're going to get a reply. And now I want to walk you through how you can do something similar.
+
+
+
+Francesco Zuppichini:
+
+Now, the goal is not to create the best RAG in the world. It's just to showcase, going from zero to something that is actually working, how you can do that in a fully local way without using any framework, so you can really understand what's going on under the hood. Because I think a lot of people just copy and paste stuff on Langchain and then they end up in a situation where they need to change something, but they don't really know where the stuff is. So this is why I just want to show the whole thing from zero to hero. So the first step will be: I get a YouTube video and now I need to get the subtitles. You could actually use a model to take the audio from the video and get the text, like a Whisper model from OpenAI, for example.
+
+
+
+Francesco Zuppichini:
+
+In this case, we are taking advantage of the fact that YouTube allows people to upload subtitles, and YouTube will also automatically generate subtitles. So here, using YouTube DL, I'm just going to take my video URL, I'm going to set up a bunch of options like the format I want, et cetera, et cetera, and then basically I'm going to download and get the subtitles. And they look something like this. Let me show you an example, something similar to this one, right? We have the timestamps and we have all the text inside. Now the next step.
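+
+As a rough sketch of that subtitle-download step (the exact options he uses are not shown in the talk, so these are illustrative):
+
+```python
+import youtube_dl  # yt-dlp exposes the same interface
+
+opts = {
+    'skip_download': True,        # we only want the subtitles, not the video
+    'writesubtitles': True,       # uploaded subtitles, if any
+    'writeautomaticsub': True,    # fall back to YouTube's auto-generated ones
+    'subtitleslangs': ['en'],
+    'subtitlesformat': 'vtt',
+    'outtmpl': 'subtitles/%(id)s.%(ext)s',
+}
+
+with youtube_dl.YoutubeDL(opts) as ydl:
+    ydl.download(['https://www.youtube.com/watch?v=VIDEO_ID'])  # VIDEO_ID is a placeholder
+```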
+
+
+
+Francesco Zuppichini:
+
+So we got our source of data, we have our text. Next step is I need to translate my text to vectors. Now, the easiest way to do so is to just use sentence transformers from Hugging Face. So here I've installed it, I load a model, in this case I'm using this model here. I have no idea what that model is, I just took a default one that I found, and it seems to work fine.
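+
+A minimal sketch of that embedding step; the model name below is just a common default, not necessarily the one he picked:
+
+```python
+from sentence_transformers import SentenceTransformer
+
+embedder = SentenceTransformer('all-MiniLM-L6-v2')  # any small general-purpose model works
+
+vector = embedder.encode('what is the context size of Gemma?')
+print(vector.shape)  # (384,) for this particular model
+```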
+
+
+
+Francesco Zuppichini:
+
+And then in order to use it, I'm just providing a query and I'm getting back a list of vectors. So we have a way to take a video, take the text from the video, and convert that to vectors with a semantically meaningful representation. And now we need to store them. Now I do believe that Qdrant, I'm not sponsored by Qdrant, but I do believe it's the best one for a couple of reasons, and we're going to see them, mostly because I can just run it on my computer, so it's fully private and I'm in charge of my data. The way I'm running it is through Docker, using Docker Compose. Very simple: here I just copy and paste the configuration from the Qdrant documentation. I run it, and when I run it I also get a very nice looking interface.
+
+
+
+Francesco Zuppichini:
+
+I'm going to show that to you because I think it's very cool. So here I already have some vectors inside, so I can just look at my collection, it's called embeddings, a very original name. And we can see all the chunks that were embedded, with the metadata, in this case just the video id. A super cool thing, super useful for debugging, is to go to the visualize part and see the embeddings, the projected embeddings. You can actually do a bunch of stuff. You can also go here and color them by some metadata. Like, I can say I want a different color based on the video id. In this case I just have one video.
+
+
+
+Francesco Zuppichini:
+
+I will show that as soon as we add more videos. This is so cool, so useful. I use this at work as well, where I have a lot of documents. And it's a very easy way to debug stuff, because if you see a lot of vectors from the same document in the same place, maybe your chunking is not doing a great job, because maybe you have too much overlapping, or a recent bug in your code in which you have duplicate chunks. Okay, so we have our vector DB running. Now we need to do some setup stuff. Very easy to do with Qdrant. You just need to get the Qdrant client.
+
+
+
+Francesco Zuppichini:
+
+So you have a connection with the vector DB. You create a collection, you specify a name, you specify some configuration stuff. In this case I just specify the vector size, because Qdrant needs to know how big the vectors are going to be, and the distance I want to use. I'm going to use the cosine distance. In the Qdrant documentation there are a lot of parameters, you can do a lot of crazy stuff here; I just keep it very simple. And yeah, another important thing is that since we are going to embed more videos, when I ask a question about a video, I need to know which embeddings are from that video. So we're going to create an index, so it's very efficient to filter my embeddings based on that index, an index on the metadata video id, because when I store a chunk in Qdrant, I'm also going to include which video it is coming from. Very simple, very simple to set up.
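+
+A minimal sketch of that setup with the Qdrant Python client; the vector size matches the example embedding model above, and the collection and field names simply follow his description:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url='http://localhost:6333')
+
+# Collection with the vector size of the embedding model and cosine distance.
+client.create_collection(
+    collection_name='embeddings',
+    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
+)
+
+# Payload index on the video id, so filtering by video stays efficient.
+client.create_payload_index(
+    collection_name='embeddings',
+    field_name='metadata.video_id',
+    field_schema=models.PayloadSchemaType.KEYWORD,
+)
+```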
+
+
+
+Francesco Zuppichini:
+
+You just need to do this once. I was very lazy, so I just assumed that if this fails, it means it's because I've already created the collection, so I'm just going to pass and call it a day. Okay, so this is basically all the setup you need to do to have your Qdrant ready to store and search vectors. Storing vectors is very straightforward as well. You just need, again, the client, so the connection to the database. Here I'm passing my embedding, so the sentence transformer model, and I'm passing my chunks as a list of documents.
+
+
+
+Francesco Zuppichini:
+
+So documents in my code is just a type that will contain just this metadata here. Very simple. It's similar to Langchain; I just hacked it together because it's lightweight. To store them we call the upload records function. We encode them here. There are a couple of bad variable names from my side, which I'm replacing, so you shouldn't do that.
+
+
+
+Francesco Zuppichini:
+
+Apologies about that, and you just send the records. Another very cool thing about Qdrant, so the second thing that I really like, is that they have types for what you send through the library. So this models.Record is a Qdrant type. You use it and you know immediately what you need to put inside. So let me give you an example, right? Assuming that I'm programming, I'm going to write models.Record and, bang,
+
+
+
+Francesco Zuppichini:
+
+I know immediately what I have to put inside, right? So straightforward, so useful. A lot of people don't realize that types are very useful, so kudos to the Qdrant team for actually making all the types very nice. Another cool thing is that if you're using FastAPI to build a web server and you are going to return a Qdrant models type, it's actually going to be serialized automatically through Pydantic. So you don't need to do weird stuff. It's all handled by the Qdrant SDK. Super cool.
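+
+A sketch of the storage step he walks through, assuming the document shape he describes (a text field plus a metadata payload with the video id):
+
+```python
+import uuid
+
+from qdrant_client import models
+
+
+def store_chunks(client, embedder, documents, collection_name='embeddings'):
+    # documents: list of dicts like {'text': ..., 'metadata': {'video_id': ...}}
+    records = [
+        models.Record(
+            id=str(uuid.uuid4()),
+            vector=embedder.encode(doc['text']).tolist(),
+            payload=doc,
+        )
+        for doc in documents
+    ]
+    client.upload_records(collection_name=collection_name, records=records)
+```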
+
+
+
+Francesco Zuppichini:
+
+Now we have a way to store our chunks to embed them. So this is how they look like in the interface. I can see them, I can go to them, et cetera, et Cetera. Very nice. Now the missing part, right. So video subtitles. I chunked the subtitles. I haven't show you the chunking code.
+
+
+
+Francesco Zuppichini:
+
+It's a little bit crappy because I was very lazy, so I'm just chunking by character count with a little bit of overlap. We have a way to store and embed our chunks, and now we need a way to search. That's basically one of the missing steps. Now, search is straightforward as well. This is also a good example because I can show you how effective it is to create filters using Qdrant. So what do we need to search? Again, the vector client and the embeddings, because we have a query, right? We need to embed the query with the same embedding model.
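+
+A naive version of the character-count chunking with overlap that he mentions might look like this (the numbers are arbitrary placeholders):
+
+```python
+def chunk_text(text, chunk_size=500, overlap=50):
+    # Split the subtitle text into fixed-size character windows with overlap.
+    chunks = []
+    start = 0
+    while start < len(text):
+        chunks.append(text[start:start + chunk_size])
+        start += chunk_size - overlap
+    return chunks
+```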
+
+
+
+Francesco Zuppichini:
+
+We need to embed the query as a vector, and then we need to compare it with the vectors in the vector DB using a distance method, in this case cosine similarity, in order to get the right matches, the closest ones in our vector DB, in our vector search space. So I'm passing a query string, I'm passing a video id, and I pass in a limit, so how many hits I want to get from the vector DB. Now, to create a filter, again you're going to use the models package from the Qdrant library. So here I'm just creating a filter class from the models and I'm saying, okay, this filter must match this key, right? So metadata video id with this video id. So when we search, before we do the similarity search, we are going to filter away all the vectors that are not from that video. Wonderful. Now, the search itself is super easy as well.
+
+
+
+Francesco Zuppichini:
+
+We just call the DB search, passing our collection name. Here it is hardcoded, apologies about that, I think I forgot to put the right global variable. We create a query, we set the limit, we pass the query filter, we get the hits back, the payload field of each hit is a dictionary, and we recreate our documents from those dictionaries. I have types, right? So I know what this function is going to return. Now, if you were to use a framework, this part would be basically the same thing. If I were to use Langchain and I wanted to specify a filter, I would have to write the same amount of code. So most of the time you don't really need to use a framework. One thing that is nice about not using a framework here is that I have control over the indexes.
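+
+Putting his description together, a filtered search with the Qdrant Python client might look roughly like this (names reused from the setup sketch above):
+
+```python
+from qdrant_client import models
+
+
+def search(client, embedder, query, video_id, limit=5, collection_name='embeddings'):
+    query_filter = models.Filter(
+        must=[
+            models.FieldCondition(
+                key='metadata.video_id',
+                match=models.MatchValue(value=video_id),
+            )
+        ]
+    )
+    hits = client.search(
+        collection_name=collection_name,
+        query_vector=embedder.encode(query).tolist(),
+        query_filter=query_filter,
+        limit=limit,
+    )
+    # Each hit carries the stored payload, from which the document is rebuilt.
+    return [hit.payload for hit in hits]
+```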
+
+
+
+Francesco Zuppichini:
+
+Langchain, for instance, will create the indexes only when you call a class method like from_documents, and that is kind of cumbersome, because sometimes I was chasing bugs in which I did not understand why one index was created before or after, et cetera, et cetera. So yes, just try to keep things simple and don't always rely on frameworks. Wonderful. Now I have a way to ask a query and get back the relevant parts from that video. Now we need to translate this list of chunks into something that we can read as humans. Before we do that, I almost forgot, we need to keep state.
+
+
+
+Francesco Zuppichini:
+
+Here I just have a setup function in which I'm going to create an SQLite database and create a table called videos, in which I have an id and a title. So later I can check, hey, is this video already in my database? Yes? Then I don't need to process it, I can immediately start to QA on that video. If not, I'm going to do the chunking and embeddings. I've got a couple of functions here to get a video from the DB and to save a video to the DB. So notice that I only use functions, I'm not using classes here.
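+
+The state-keeping he describes needs nothing more than the standard library; a minimal sketch:
+
+```python
+import sqlite3
+
+
+def get_db(path='videos.db'):
+    conn = sqlite3.connect(path)
+    conn.execute('CREATE TABLE IF NOT EXISTS videos (id TEXT PRIMARY KEY, title TEXT)')
+    return conn
+
+
+def get_video(conn, video_id):
+    return conn.execute('SELECT id, title FROM videos WHERE id = ?', (video_id,)).fetchone()
+
+
+def save_video(conn, video_id, title):
+    conn.execute('INSERT INTO videos (id, title) VALUES (?, ?)', (video_id, title))
+    conn.commit()
+```
+
+Note the style he advocates next: the state (here, the connection) is created once and then passed into plain functions.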
+
+
+
+Francesco Zuppichini:
+
+I'm not a fan of object-oriented programming because it's very easy to end up in inheritance hell, in which we have like ten levels of inheritance. And here, if a function needs to have state, and here we do need state because we need a connection, I will just have a function that initializes that state and returns it, and me as the caller, I'm just going to call it and pass that state around. Very simple. This allows you to really divide your code properly. You don't need to think about whether my class is too coupled with another class, et cetera, et cetera. Very simple, very effective. So what I suggest when you're coding: just start with functions and share state by passing it down.
+
+
+
+Francesco Zuppichini:
+
+And when you realize that you can cluster a lot of function together with a common behavior, you can go ahead and put state in a class and have key function as methods. So try to not start first by trying to understand which class I need to use around how I connect them, because in my opinion it's just a waste of time. So just start with function and then try to cluster them together if you need to. Okay, last part, the juicy part as well. Language models. So we need the language model. Why do we need the language model? Because I'm going to ask a question, right. I'm going to get a bunch of relevant chunks from a video and the language model.
+
+
+
+Francesco Zuppichini:
+
+It needs to answer that to me. So it needs to get information from the chunks and reply that to me using that information as a context. To run language model, the easiest way in my opinion is using Ollama. There are a lot of models that are available. I put a link here and you can also bring your own model. There are a lot of videos and tutorial how to do that. You run this command as soon as you install it on Linux. It's a one line to install Ollama.
+
+
+
+Francesco Zuppichini:
+
+You run this command here, it's going to download Mistral 7B, a very good model, and run it on your GPU if you have one, or your CPU if you don't have a GPU. Here you can see it, it's around 6GB, so even with a low-tier GPU you should be able to run a 7B model on your GPU. Okay, so this is the prompt, just to show you how easy this is. This prompt was just a very lazy copy and paste from the Langchain source code: use the following pieces of context to answer the question at the end, blah blah blah, a context variable to inject the context, a question variable to hold the question, and then we're going to get an answer. How do we call it? I have a function here called get_answer, passing a bunch of stuff: the model client from the OpenAI Python package, a question, my vector DB client, my embeddings. I read my prompt, get my matching documents by calling the search function we have just seen before, and create my context.
+
+
+
+Francesco Zuppichini:
+
+So I'm just joining the text of the chunks on new lines and calling the format function in Python. As simple as that. Just calling the format function in Python, because the format function will look at a string and inject the variables that match the names inside the braces. I pass the context, pass the question, use the OpenAI model client APIs, and get a reply back. Super easy. And here I'm returning the reply from the language model and also the list of documents. So this should say documents, I think I made a mistake.
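+
+A condensed sketch of that get_answer flow. Ollama exposes an OpenAI-compatible endpoint, so the official openai client can be pointed at it; the prompt text and the search helper are reused from the sketches above, and all names here are illustrative:
+
+```python
+from openai import OpenAI
+
+PROMPT = (
+    'Use the following pieces of context to answer the question at the end.\n\n'
+    'Context:\n{context}\n\n'
+    'Question: {question}\n'
+)
+
+
+def get_answer(model_client, question, vector_db, embedder, video_id):
+    documents = search(vector_db, embedder, question, video_id)
+    context = '\n'.join(doc['text'] for doc in documents)
+    reply = model_client.chat.completions.create(
+        model='mistral',
+        messages=[{'role': 'user', 'content': PROMPT.format(context=context, question=question)}],
+    )
+    return reply.choices[0].message.content, documents
+
+
+# Ollama's OpenAI-compatible server runs locally on port 11434 by default.
+model_client = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')
+```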
+
+
+
+Francesco Zuppichini:
+
+When I copied and pasted this to get this image. And we are done, right? We have a way to get some answers from a video by putting everything together. This can seem scary because there are no comments here, but I can show you the source code, I think it's easier so I can highlight stuff. I'm creating my embeddings, I'm getting my database, I'm getting my vector DB, logging some stuff, I'm getting my model client, I'm getting my video. So here I'm defining the state that I need. You don't need comments because it's straightforward. Like here, I'm getting the vector DB: good function name.
+
+
+
+Francesco Zuppichini:
+
+Then if I don't have the vector DB, sorry, if I don't have the video id in the database, I'm going to get some information about the video, I'm going to download the subtitles, split the subtitles, and do the embeddings. In the end I'm going to save it to the DB. Finally, I'm going to get my video back, print something, and start a while loop in which you can get answers. So this is the full pipeline. Very simple, all functions.
+
+
+
+Francesco Zuppichini:
+
+Also, with functions it is very simple to divide things around. Here I have a file called rag, and here I just do all the RAG stuff, right? It's all here. Similarly, I have my file called crud, where I'm doing everything I need to do with my database, et cetera, et cetera. Also a file called youtube. So just try to split things based on what they do instead of what they are.
+
+
+
+Francesco Zuppichini:
+
+I think it's easier than to code. Yeah. So I can actually show you a demo in which we kind of embed a video from scratch. So let me kill this bad boy here. Let's get a juicy YouTube video from Sam. We can go with Gemma. We can go with Gemma. I think I haven't embedded that yet.
+
+
+
+Francesco Zuppichini:
+
+I'm sorry. My ad blocker is doing weird stuff over here. Okay, let me put this here.
+
+
+
+Demetrios:
+
+This is the moment that we need to all pray to the demo gods that this will work.
+
+
+
+Francesco Zuppichini:
+
+Oh yeah. I'm so sorry. I'm so sorry. I think it was already processed. So let me. I don't know this one. Also I noticed I'm seeing this very weird thing which I've just not seen that yesterday. So that's going to be interesting.
+
+
+
+Francesco Zuppichini:
+
+I think my poor Linux computer is giving up to running language models. Okay. Downloading ceramic logs, embeddings and we have it now before I forgot because I think that you guys spent some time doing this. So let's go on the visualize page and let's actually do the color by and let's do metadata, video id. Video id. Let's run it. Metadata, metadata, video meta. Oh my God.
+
+
+
+Francesco Zuppichini:
+
+Data video id. Why don't see the other one? I don't know. This is the beauty of live section.
+
+
+
+Demetrios:
+
+This is how we know it's real.
+
+
+
+Francesco Zuppichini:
+
+Yeah, I mean, this is working, right? This is the Gemini Pro video. Yeah, I don't know about that. I don't know about that. It was working before, I can vouch for sure. So probably I'm doing something wrong, probably later. Let's try that.
+
+
+
+Francesco Zuppichini:
+
+Let's see. I must be doing something wrong, so don't worry about that. But we are ready to ask questions, so maybe I can just say, I don't know, what is Gemini Pro? So let's see. Mistral running on GPU is kind of fast, it doesn't take too much time. And here we can see we are at 6GB; 1GB is for the embedding model, so 4 to 5GB running the language model. Here it says Gemini Pro is a tool that can generate output based on given tasks, blah, blah, blah. Yeah, it seems to work.
+
+
+
+Francesco Zuppichini:
+
+Here you have it. Thanks. Of course. And I don't know if there are any questions about it.
+
+
+
+Demetrios:
+
+So many questions. There's a question that came through the chat that is a simple one that we can answer right away, which is can we access this code anywhere?
+
+
+
+Francesco Zuppichini:
+
+Yeah, so it's on my GitHub. Can I share a link with you in the chat? Maybe? So that should be YouTube. Can I put it here maybe?
+
+
+
+Demetrios:
+
+Yes, most definitely can. And we'll drop that into all of the spots so that we have it. Now. Next question from my side, while people are also asking, and you've got some fans in the chat right now, so.
+
+
+
+Francesco Zuppichini:
+
+Nice to everyone by the way.
+
+
+
+Demetrios:
+
+So from my side, I'm wondering, do you have any specific design decisions criteria that you use when you are building out your stack? Like you chose Mistral, you chose Ollama, you chose Qdrant. It sounds like with Qdrant you did some testing and you appreciated the capabilities. With Qdrant, was it similar with Ollama and Mistral?
+
+
+
+Francesco Zuppichini:
+
+So my test is how long it's going to take to install that tool. If it's taking too much time and it's hard to install because the documentation is bad, that's a red flag, right? Because if it's hard to install and the documentation is bad for the installation, which is the first thing people are going to read, then probably it's not going to be great for something down the road. To use Ollama, it took me two minutes, two minutes, it was incredible. I just installed it, ran it, and it was done. Same thing with Qdrant as well, and same thing with the Hugging Face library. So to me, usually, as soon as I see that something is easy to install, that usually means it is good, and that the documentation for installing it is good.
+
+
+
+Francesco Zuppichini:
+
+It means that people thought about it and they care about writing good documentation because they want people to use their tools. A lot of times for enterprise tools, like cloud enterprise services, documentation is terrible because they know you're going to pay anyway, because you're an enterprise and some manager decided five years ago to use that cloud provider and not the other. So I think, you know, if you see good documentation, that means that the people, the company, startup, or enterprise behind it want you to use their software, because they know it is good and they're proud of it. So usually this is my way of going. And then of course I watch a lot of YouTube videos, so I see people talking about different tech, et cetera. And if some youtuber which I trust says, I tried this, it seems to work well, I will note it down.
+
+
+
+Francesco Zuppichini:
+
+So then in the future I know hey, for these things I think I use ABC and this has already be tested by someone. I don't know I'm going to use it. Another important thing is reach out to your friends networks and say hey guys, I need to do this. Do you know if you have a good stock that you're already trying to experience with that?
+
+
+
+Demetrios:
+
+Yeah. With respect to the enterprise software type of tools, there was something that I saw that was hilarious. It was something along the lines of custom customer and user is not the same thing. Customer is the one who pays, user is the one who suffers.
+
+
+
+Francesco Zuppichini:
+
+That's really true for enterprise software, I need to tell you. So that's true.
+
+
+
+Demetrios:
+
+Yeah, we've all been through it. So there's another question coming through in the chat about would there be a collection for each embedded video based on your unique view video id?
+
+
+
+Francesco Zuppichini:
+
+No. What you want to do, I mean you could do that of course, but a collection should more or less encapsulate the project that you're doing, in my mind. So in this case I just called it embeddings, maybe I should have called it videos. So they are just going to be inside the same collection, they're just going to have different metadata. I think, and you need to correct me if I'm wrong, that from your side, from the Qdrant code, searching things in the same collection is probably more effective to some degree. And imagine that if you have 1000 videos, you would need to create 1000 collections. And then I think, conceptually, collections are meant to hold data coming from the same source, with the same semantic value.
+
+
+
+Francesco Zuppichini:
+
+So in my case I have all videos. If I were to have different data, maybe from PDFs, probably I would just create another collection, right, if I don't want them to be searched in the same place. And one cool thing of having all the videos in the same collection is that I can just ask a question to all the videos at the same time if I want to, or I can change my filter and ask questions to two or three videos specifically. You can't do that if you have one collection per video, right? Like for instance, at work I was embedding PDFs and using Qdrant, and sometimes you need to talk with two or three PDFs at the same time, or just one, or maybe all the PDFs in that folder. So I was just changing the filter, right? And that can only be done if they're all in the same collection.
+
+
+
+Sabrina Aquino:
+
+Yeah, that's a great explanation of collections. And I do love your approach of having everything locally and having everything in a structured way that you can really understand what you're doing. And I know you mentioned sometimes frameworks are not necessary. And I wonder also from your side, when do you think a framework would be necessary and does it have to do with scaling? What do you think?
+
+
+
+Francesco Zuppichini:
+
+So that's a great question. What frameworks in theory should give you is good interfaces, right? A good interface means that if I'm following that interface, I know that I can always call something that implements that interface in the same way. Like for instance in Langchain, if I call a vector DB, I can just swap the vector DB and I can call it in the same way. If the interfaces are good, the framework is useful, if you know that you are going to change stuff. In my case, I know from the beginning that I'm going to use Qdrant, I'm going to use Ollama, and I'm going to use SQLite. So why should I go through the hell of reading framework documentation? I install the libraries, and then you need to install a bunch of packages from the framework that you don't even know why you need, maybe you have package conflicts, et cetera, et cetera.
+
+
+
+Francesco Zuppichini:
+
+If you already know what you want to do, then just code it and call it a day. Like in this case, I know I'm not going to change the vector DB. If you think that you're going to change something, even with a simple approach it's fairly simple to change stuff. I will say that if you know that you want to change your vector DB provider, either you define your own interface or you use a framework with an already defined interface. But be careful, because relying too much on a framework means, first of all, you basically don't know what's going on under the hood. For Langchain, kudos to them, they were the first ones, they are very smart people, et cetera, et cetera.
+
+
+
+Francesco Zuppichini:
+
+But they have inheritance hell in that code. And in order to understand how to do certain stuff, I had to look into the source code, right, and try to figure it out: which class is inherited from that, and going straight up the hierarchy in order to understand what behavior that class was supposed to have if I pass this parameter. And sometimes defining an interface is straightforward, maybe you just want to define a couple of functions in a class. You call it, you just need to define the inputs and the outputs, and if you want to scale, you can just implement a new class that follows that interface. Yeah, that is at least my take. I first try to do stuff, and then if I need to scale, at least I already have something working and I can scale it, instead of kind of trying to do the perfect thing from the beginning.
+
+
+
+Francesco Zuppichini:
+
+Also because I hate reading documentation, so I try to avoid doing that in general.
+
+
+
+Sabrina Aquino:
+
+Yeah, I totally love this. It's about having like what's your end project? Do you actually need what you're going to build and understanding what you're building behind? I think it's super nice. We're also having another question which is I haven't used Qdrant yet. The metadata is also part of the embedding, I. E. Prepended to the chunk or so basically he's asking if the metadata is also embedded in the answer for that. Go ahead.
+
+
+
+Francesco Zuppichini:
+
+I think you have a good article about another search which you also probably embed the title. Yeah, I remember you have a good article in which you showcase having chunks with the title from, I think the section, right. And you first do a search, find the right title and then you do a search inside. So all the chunks from that paragraph, I think from that section, if I'm not mistaken. It really depends on the use case, though. If you have a document full of information, splitting a lot of paragraph, very long one, and you need to very be precise on what you want to fetch, you need to take advantage of the structure of the document, right?
+
+
+
+Sabrina Aquino:
+
+Yeah, absolutely. The metadata goes as payload in Qdrant. So basically it's like a JSON type of information attached to your data that's not embedded. We also have documentation on it. I will answer on the comments as well, I think another question I have for you, Franz, about the sort of evaluation and how would you perform a little evaluation on this rag that you created.
+
+
+
+Francesco Zuppichini:
+
+Okay, so that is an interesting question, because everybody talks about metrics and evaluation. Most of the times you don't really have that, right? So you have benchmarks, right. And everybody can use a benchmark to evaluate their pipeline. But when you have domain specific documents, like at work, for example, I'm doing RAG on insurance documents now. How do I create a data set from that in order to evaluate my RAG? It's going to be very time consuming. So what we are trying to do, so we get a bunch of people who knows these documents, catching some paragraph, try to ask a question, and that has the reply there and having basically a ground truth from their side. A lot of time the reply has to be composed from different part of the document. So, yeah, it's very hard.
+
+
+
+Francesco Zuppichini:
+
+It's very hard. So what I will kind of suggest is, when you have no benchmark, you just empirically try it. If you're building a RAG that users are going to use, always include a way to collect feedback and collect statistics. So collect the conversations, if that is okay with your privacy rules. Because in my opinion, it's always better to put something in production than to wait too much time because you need to run all your metrics, et cetera, et cetera. And as soon as people start using it, you kind of see if it is good enough. Maybe for the language model itself it's a different task, because you need to be sure that it doesn't say wrong stuff to the users. I don't really have a source-of-truth answer here. It's very hard to evaluate them.
+
+
+
+Francesco Zuppichini:
+
+So what I know people also try to do is, they get some paragraphs or some chunks, they ask GPT-4 to generate a question and the answer based on the paragraph, and they use that as an auto-labeling way to create a dataset to evaluate your RAG. That can also be effective, I guess. 100%, yeah.
+
+
+
+Demetrios:
+
+And depending on your use case, you probably need more rigorous evaluation or less, like in this case, what you're doing, it might not need that rigor.
+
+
+
+Francesco Zuppichini:
+
+You can see, actually, I think it was Air Canada, right?
+
+
+
+Demetrios:
+
+Yeah.
+
+
+
+Francesco Zuppichini:
+
+If you have something that is facing paying users, then think more than once before doing that. In my case, I have something that is used by internal users and we communicate with them. So if my chatbot is saying something wrong, they will tell me, and the worst thing that can happen is that they need to manually look for the answer. But as soon as your chatbot needs to do something that involves people that are going to pay, or medical stuff, you need to understand that for some use cases you need to apply certain rules, and for others you can be kind of more relaxed, I would say, based on the harm that your chatbot could generate.
+
+
+
+Demetrios:
+
+Yeah, I think that's all the questions we've got for now. Appreciate you coming on here and chatting with us. And I also appreciate everybody listening in. Anyone who is not following Fran, go give him a follow, at least for the laughs, the chuckles, and huge thanks to you, Sabrina, for joining us, too. It was a pleasure having you here. I look forward to doing many more of these.
+
+
+
+Sabrina Aquino:
+
+The pleasure is all mine, Demetrios, and it was a total pleasure. Fran, I learned a lot from your session today.
+
+
+
+Francesco Zuppichini:
+
+Thank you so much. Thank you so much. And also go ahead and follow the Qdrant on LinkedIn. They post a lot of cool stuff and read the Qdrant blogs. They're very good. They're very good.
+
+
+
+Demetrios:
+
+That's it. The team is going to love to hear that, I'm sure. So if you are doing anything cool with good old Qdrant, give us a ring so we can feature you in the vector space talks. Until next time, don't get lost in vector space. We will see you all later. Have a good one, y'all.
+",blog/talk-with-youtube-without-paying-a-cent-francesco-saverio-zuppichini-vector-space-talks.md
+"---
+
+draft: false
+
+title: The challenges in using LLM-as-a-Judge - Sourabh Agrawal | Vector Space Talks
+
+slug: llm-as-a-judge
+
+short_description: Sourabh Agrawal explores the world of AI chatbots.
+
+description: Everything you need to know about chatbots. Sourabh Agrawal goes
+
+ into detail on evaluating their performance, from real-time to post-feedback
+
+ assessments, and introduces UpTrain AI—an open-source tool for enhancing
+
+ chatbot interactions through customized and logical evaluations.
+
+preview_image: /blog/from_cms/sourabh-agrawal-bp-cropped.png
+
+date: 2024-03-19T15:05:02.986Z
+
+author: Demetrios Brinkmann
+
+featured: false
+
+tags:
+
+ - Vector Space Talks
+
+ - LLM
+
+ - retrieval augmented generation
+
+---
+
+> ""*You don't want to use an expensive model like GPT 4 for evaluation, because then the cost adds up and it does not work out. If you are spending more on evaluating the responses, you might as well just do something else, like have a human to generate the responses.*”\
+
+-- Sourabh Agrawal
+
+>
+
+
+
+Sourabh Agrawal, CEO & Co-Founder at UpTrain AI is a seasoned entrepreneur and AI/ML expert with a diverse background. He began his career at Goldman Sachs, where he developed machine learning models for financial markets. Later, he contributed to the autonomous driving team at Bosch/Mercedes, focusing on computer vision modules for scene understanding. In 2020, Sourabh ventured into entrepreneurship, founding an AI-powered fitness startup that gained over 150,000 users. Throughout his career, he encountered challenges in evaluating AI models, particularly Generative AI models. To address this issue, Sourabh is developing UpTrain, an open-source LLMOps tool designed to evaluate, test, and monitor LLM applications. UpTrain provides scores and offers insights to enhance LLM applications by performing root-cause analysis, identifying common patterns among failures, and providing automated suggestions for resolution.
+
+
+
+***Listen to the episode on [Spotify](https://open.spotify.com/episode/1o7xdbdx32TiKe7OSjpZts?si=yCHU-FxcQCaJLpbotLk7AQ), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/vBJF2sy1Pyw).***
+
+
+
+
+
+
+
+
+
+
+
+## **Top takeaways:**
+
+
+
+Why is real-time evaluation critical in maintaining the integrity of chatbot interactions and preventing issues like promoting competitors or making false promises? What strategies do developers employ to minimize cost while maximizing the effectiveness of model evaluations, specifically when dealing with LLMs? These might be just some of the many questions people in the industry are asking themselves. Fear, not! Sourabh will break it down for you.
+
+
+
+Check out the full conversation as they dive into the intricate world of AI chatbot evaluations. Discover the nuances of ensuring your chatbot's quality and continuous improvement across various metrics.
+
+
+
+Here are the key topics of this episode:
+
+
+
+1. **Evaluating Chatbot Effectiveness**: An exploration of systematic approaches to assess chatbot quality across various stages, encompassing retrieval accuracy, response generation, and user satisfaction.
+
+2. **Importance of Real-Time Assessment**: Insights into why continuous and real-time evaluation of chatbots is essential to maintain integrity and ensure they function as designed without promoting undesirable actions.
+
+3. **Indicators of Compromised Systems**: Understand the significance of identifying behaviors that suggest a system may be prone to 'jailbreaking' and the methods available to counter these through API integration.
+
+4. **Cost-Effective Evaluation Models**: Discussion on employing smaller models for evaluation to reduce costs without compromising the depth of analysis, focusing on failure cases and root-cause assessments.
+
+5. **Tailored Evaluation Metrics**: Emphasis on the necessity of customizing evaluation criteria to suit specific use case requirements, including an exploration of the different metrics applicable to diverse scenarios.
+
+
+
+> Fun Fact: Sourabh discussed the use of UpTrain, an innovative API that provides scores and explanations for various data checks, facilitating logical and informed decision-making when evaluating AI models.
+
+>
+
+
+
+## Show notes:
+
+
+
+00:00 Prototype evaluation subjective; scalability challenges emerge.\
+
+05:52 Use cheaper, smaller models for effective evaluation.\
+
+07:45 Use LLM objectively, avoid subjective biases.\
+
+10:31 Evaluate conversation quality and customization for AI.\
+
+15:43 Context matters for AI model performance.\
+
+19:35 Chat bot creates problems for car company.\
+
+20:45 Real-time user query evaluations, guardrails, and jailbreak.\
+
+27:27 Check relevance, monitor data, filter model failures.\
+
+28:09 Identify common themes, insights, experiment with settings.\
+
+32:27 Customize jailbreak check for specific app purposes.\
+
+37:42 Mitigate hallucination using evaluation data techniques.\
+
+38:59 Discussion on productizing hallucination mitigation techniques.\
+
+42:22 Experimentation is key for system improvement.
+
+
+
+## More Quotes from Sourabh:
+
+
+
+*""There are some cases, let's say related to safety, right? Like you want to check whether the user is trying to jailbreak your LLMs or not. So in that case, what you can do is you can do this evaluation in parallel to the generation because based on just the user query, you can check whether the intent is to jailbreak or it's an intent to actually use your product to kind of utilize it for the particular model purpose.*”\
+
+-- Sourabh Agrawal
+
+
+
+*""You have to break down the response into individual facts and just see whether each fact is relevant for the question or not. And then take some sort of a ratio to get the final score. So that way all the biases which comes up into the picture, like egocentric bias, where LLM prefers its own outputs, those biases can be mitigated to a large extent.”*\
+
+-- Sourabh Agrawal
+
+
+
+*""Generally speaking, what we have been seeing is that the better context you retrieve, the better your model becomes.”*\
+
+-- Sourabh Agrawal
+
+
+
+## Transcript:
+
+Demetrios:
+
+Sourabh, I've got you here from Uptrain. I think you have some notes that you wanted to present, but I also want to ask you a few questions because we are going to be diving into a topic that is near and dear to my heart and I think it's been coming up so much recently that is using LLMs as a judge. It is really hot these days. Some have even gone as far to say that it is the topic of 2024. I would love for you to dive in. Let's just get right to it, man. What are some of the key topics when you're talking about using LLMs to evaluate what key metrics are you using? How does this work? Can you break it down?
+
+
+
+Sourabh Agrawal:
+
+Yeah. First of all, thanks a lot for inviting me and no worries for hiccup. I guess I have never seen a demo or a talk which goes without any technical hiccups. It is bound to happen. Really excited to be here. Really excited to talk about LLM evaluations. And as you rightly pointed right, it's really a hot topic and rightly so. Right.
+
+
+
+Sourabh Agrawal:
+
+The way things have been panning out with LLMs and ChatGPT and GPT-4 and so on is that people started building all these prototypes, right? And the way to evaluate them was just to eyeball them, just trust your gut feeling, go with the vibe. I guess they truly adopted the startup methodology: push things out to production and break things. But what people have been realizing is that it's not scalable, right? I mean, rightly so. It's highly subjective. It's a developer, it's a human who is looking at all the responses; some days he might like this, some days he might like something else. And it's not possible for them to go over, to just read through, more than ten responses. And now, the unique thing about production use cases is that they need continuous refinement. You need to keep on improving them, you need to keep on improving your prompt or your retrieval, your embedding model, your retrieval mechanisms, and so on.
+
+
+
+Sourabh Agrawal:
+
+So that presents a case like you have to use a more scalable technique, you have to use LLMs as a judge because that's scalable. You can have an API call, and if that API call gives good quality results, it's a way you can mimic whatever your human is doing or in a way augment them which can truly act as their copilot.
+
+
+
+Demetrios:
+
+Yeah. So one question that's been coming through my head when I think about using LLMs as a judge and I get more into it, has been around when do we use those API calls. It's not in the moment that we're looking for this output. Is it like just to see if this output is real? And then before we show it to the user, it's kind of in bunches after we've gotten a bit of feedback from the user. So that means that certain use cases are automatically discarded from this, right? Like if we are thinking, all right, we're going to use LLMs as a judge to make sure that we're mitigating hallucinations or that we are evaluating better, it is not necessarily something that we can do in the moment, if I'm understanding it correctly. So can you break that down a little bit more? How does it actually look in practice?
+
+
+
+Sourabh Agrawal:
+
+Yeah, definitely. And that's a great point. The way I see it, there are three cases. Case one is what you mentioned in the moment before showing the response to the user. You want to check whether the response is good or not. In most of the scenarios you can't do that because obviously checking requires extra time and you don't want to add latency. But there are some cases, let's say related to safety, right? Like you want to check whether the user is trying to jailbreak your LLMs or not. So in that case, what you can do is you can do this evaluation in parallel to the generation because based on just the user query, you can check whether the intent is to jailbreak or it's an intent to actually use your product to kind of utilize it for the particular model purpose.
+
+
+
+Sourabh Agrawal:
+
+But most of the other evaluations, like relevance, hallucinations, quality and so on, have to be done post whatever you show to the users, and there you can do it in two ways. You can either use them to experiment with things, or you can run monitoring on your production and find out failure cases. And typically we are seeing developers adopting a combination of these two to find cases, then experiment, and then improve their systems.
+
+
+
+Demetrios:
+
+Okay, so when you're doing it in parallel, that feels like something where you just craft another prompt, so you're basically sending out two prompts. Another piece that I have been thinking about is, doesn't this just add a bunch more cost to your system? Because there you're effectively doubling your cost. But then later on, I can imagine you can craft a few different ways of making the evaluations and sending out the responses to the LLM better, I guess. And you can figure out how to trim some tokens off, or you can try and concatenate some of the responses and do tricks there. I'm sure there's all kinds of tricks that you know about that I don't, and I'd love for you to tell me about them. But definitely, what kind of cost are we looking at? How much of an increase can we expect?
+
+
+
+Sourabh Agrawal:
+
+Yeah, so I think that's like a very valid limitation of evaluation. So that's why, let's say at UpTrain, what we truly believe in is that you don't want to use an expensive model like GPT-4 for evaluation, because then the cost adds up and it does not work out, right? If you are spending more on evaluating the responses, you may as well just do something else, like have a human generate the responses. We rely on smaller models, on cheaper models, for this. And secondly, the methodology which we adopt is that you don't want to evaluate everything on all the data points. Maybe you have a higher-level check, let's say, for jailbreak, or let's say for the final response quality. And when you find cases where the quality is low, you run a battery of checks on these failures to figure out which part of the pipeline is exactly failing.
+
+
+
+Sourabh Agrawal:
+
+This is something we call root cause analysis, where you take all these failure cases, which may be like 10% or 20% of all the cases you are seeing in production. Take these 20% of cases and run a battery of checks on them. They might be exhaustive; you might run like five to ten checks on them. And then, based on those checks, you can figure out what the error mode is. Is it a retrieval problem? Is it a citation problem? Is it a utilization problem? Is it hallucination? Is the query, the question asked by the user, not clear enough? Is your embedding model not appropriate? So that's how you can kind of take the best of the two: you can improve the performance and at the same time make sure that you don't burn a hole in your pocket.
+
+
+
+Demetrios:
+
+I've also heard this before, and it's almost like you're using the LLMs as tests. It's not that they're helping you write tests, it's that they are there and they're part of the tests that you're writing.
+
+
+
+Sourabh Agrawal:
+
+Yeah, I think the key here is that you have to use them objectively. What I have seen is that a lot of people who are trying to do LLM evaluations simply ask the LLM: okay, this is my response, can you tell me if it is relevant or not? Or even, let's say, they go a step beyond and do a grading thing: is it highly relevant, somewhat relevant, or highly irrelevant? But then it becomes very subjective, right? It depends upon the LLM to decide whether it's relevant or not. Rather than that, you have to transform it into an objective setting. You have to break down the response into individual facts and just see whether each fact is relevant for the question or not, and then take some sort of a ratio to get the final score. That way, all the biases which come into the picture, like egocentric bias, where the LLM prefers its own outputs, can be mitigated to a large extent.
+
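+A rough sketch of that fact-level scoring idea might look like the snippet below. It is purely illustrative: `extract_facts` and `judge_yes_no` are hypothetical stand-ins for whatever LLM calls you use to split a response into atomic facts and to get a strict yes/no verdict on each one.
+
+```python
+from typing import Callable, List
+
+def response_relevance(
+    question: str,
+    response: str,
+    extract_facts: Callable[[str], List[str]],  # hypothetical LLM call that splits the response into facts
+    judge_yes_no: Callable[[str], bool],        # hypothetical cheap LLM call that answers strictly yes or no
+) -> float:
+    # Break the response into individual facts instead of grading it as a whole.
+    facts = extract_facts(response)
+    if not facts:
+        return 0.0
+    relevant = sum(
+        judge_yes_no(f""Is the following fact relevant to the question '{question}'? Fact: {fact}"")
+        for fact in facts
+    )
+    # The final score is the ratio of relevant facts, which is more objective
+    # than asking the LLM for a single overall grade.
+    return relevant / len(facts)
+```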
+
+
+Sourabh Agrawal:
+
+And I believe that's the key to making LLM evaluations work, because similar to LLM applications, even with LLM evaluations you have to put in a lot of effort to make them really work and finally get scores which align well with human expectations.
+
+
+
+Demetrios:
+
+It's funny how these LLMs mimic humans so much. They love the sound of their own voice, even. It's hilarious. Yeah, dude. Well, talk to me a bit more about how this looks in practice, because there are a lot of different techniques that you can use. Also, I do realize that when it comes to the use cases, it's very different, right? So if it's a code generation use case and you're evaluating that, it's going to be pretty clear: did the code run or did it not? And then you can go into some details on whether this code is actually more valuable, whether it's a hacky way to do it, et cetera, et cetera. But there are use cases that I would consider more sensitive and less sensitive.
+
+
+
+Demetrios:
+
+And so how do you look at that type of thing?
+
+
+
+Sourabh Agrawal:
+
+Yeah, I think so. The way we think about evaluations is that there's no one-size-fits-all solution; for different use cases you need to look at different things. And even if you are, let's say, looking at hallucinations, different use cases or different businesses would look at evaluations through different lenses, right? Someone might focus a lot on certain aspects of correctness, while someone else would focus less on those aspects and more on other aspects. The way we think about it is, you know, we define different criteria for different use cases. So if you have a Q&A bot, you look at the quality of the response and the quality of the context.
+
+
+
+Sourabh Agrawal:
+
+If you have a conversational agent, then you look at the quality of the conversation as a whole; you look at whether the user is satisfied with that conversation. If you are writing long-form content, you look at coherence across the content, you look at the creativity or the sort of interestingness of the content. If you have an AI agent, you look at how well it is able to plan, how well it is able to execute a particular task, and so on. How many steps does it take to achieve its objective? So there is a variety of these evaluation metrics, each one of which is more suitable for different use cases. And even there, I believe a good tool needs to provide certain customization abilities to its developers so that they can transform it, they can modify it in a way that makes the most sense for their business.
+
+
+
+Demetrios:
+
+Yeah. Are there certain ones that you feel are more prevalent? If I'm just developing on the side and I'm thinking about this right now and I'm like, well, how could I start? What would you recommend?
+
+
+
+Sourabh Agrawal:
+
+Yeah, definitely. One of the biggest use cases for LLMs today is RAG applications. For RAG, I think retrieval is the key. So I think the best starting point in terms of evaluations is to look at the response quality, so look at the relevance of the response, look at the completeness of the response, and look at the context quality. So, context relevance, which judges the retrieval quality; hallucinations, which judge whether the response is grounded in the context or not. If tone matters for your use case, look at the tonality, and finally look at the conversation satisfaction, because in the end, whatever outputs you give, you also need to judge whether the end user is satisfied with those outputs.
+
+
+
+Sourabh Agrawal:
+
+So I would say these four or five metrics are the best way to start for any developer who is building on top of these LLMs. From there you can understand how the behavior is going, and then you can go deeper and look at more nuanced metrics, which can help you understand your systems even better.
+
+
+
+Demetrios:
+
+Yeah, I like that. Now, one thing that has also been coming up in my head a lot is custom metrics and custom evaluation, and also proprietary evaluation data sets, because as we all know, the benchmarks get gamed. You see on Twitter, oh wow, this new model just came out, it's so good. And then you try it and you're like, what are you talking about? This thing was just trained on the benchmarks. And so it seems like it's good, but it's not. Can you talk to us about creating these evaluation data sets? What have you seen as far as the best ways of going about it? What kind of size, like how many do we need to actually make it valuable? Give us a breakdown there.
+
+
+
+Sourabh Agrawal:
+
+Yeah, definitely. So, I mean, surprisingly, the answer is that you don't need that many to get started. We have seen cases where even if someone builds a test data set of like 50 to 100 samples, that's actually a very good starting point compared to where they were with manual annotation. And in terms of creation of this data set, I believe that the best data set is what your users are actually asking. You can look at public benchmarks, you can generate some synthetic data, but none of them matches the quality of what your end users are actually asking, because those are going to give you issues which you can never anticipate, right? Even when you're generating synthetic data, you have to anticipate what issues can come up and generate data for that. And if you're looking at public data sets, they're highly curated; there are always problems of them leaking into the training data and so on.
+
+
+
+Sourabh Agrawal:
+
+So those benchmarks become highly unreliable. So look at your traffic, take 50 samples from it. If you are collecting user feedback, the cases where the user has downvoted or the user has not accepted the response are very good cases to look at. Or, if you're running some evaluations or quality checks, the cases which are failing. I think those are the best starting point for you to build a good-quality test data set and use it as a way to experiment with your prompts, experiment with your systems, experiment with your retrieval, and iteratively improve them.
+
+
+
+Demetrios:
+
+Are you weighing any metrics more than others? Because I've heard stories about how sometimes you'll see that a new model will come out, or you're testing out a new model, and it seems like on certain metrics, it's gone down. But then the golden metric that you have, it actually has gone up. And so have you seen which metrics are better for different use cases?
+
+
+
+Sourabh Agrawal:
+
+I think here there's no single answer; that metric depends upon the business. Generally speaking, what we have been seeing is that the better the context you retrieve, the better your model becomes, especially if you're using any of the bigger models, like any of the GPTs or Claudes, or to some extent even Mistral, which is highly performant. So if you're using any of these highly performant models and you give them the right context, the response more or less comes out to be good. So one thing we are seeing is people focusing a lot on experimenting with different retrieval mechanisms, embedding models, and so on. But then again, for the final golden metric, we have seen many people annotate some data set so they have a ground truth response or a golden response, and they rely completely on how well their answer matches that golden response, which I believe is a very good starting point, because now you know that if this is right and you're matching very highly with it, then obviously your response is also right.
+
+
+
+Demetrios:
+
+And what about those use cases where golden responses are very subjective?
+
+
+
+Sourabh Agrawal:
+
+Yeah, I think that's where the issue lies. In those scenarios, what we have seen is that one thing people have been doing a lot is to check whether all the information in the golden response is contained in the generated response, so you don't miss out on any of the important information in your ground truth response. And on top of that, you want it to be concise, so you don't want it blabbering too much or giving highly verbose responses. That is one way we are seeing people get around this subjectivity issue of the responses: by making sure that the key information is there, and beyond that, that it's highly concise and to the point in terms of the task being asked.
+
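+One way to make that concrete is to score how much of the golden answer's key information is covered and then discount overly verbose answers. The sketch below is illustrative only and assumes a hypothetical `contains_fact` judge (for example, another LLM call or an entailment model) plus an arbitrary verbosity tolerance:
+
+```python
+from typing import Callable, List
+
+def golden_match_score(
+    golden_facts: List[str],                     # key facts extracted from the ground truth answer
+    generated: str,                              # the model's answer
+    contains_fact: Callable[[str, str], bool],   # hypothetical judge: is this fact present in the answer?
+    max_length_ratio: float = 2.0,               # illustrative tolerance for verbosity
+) -> float:
+    if not golden_facts:
+        return 0.0
+    # Coverage: fraction of golden facts present in the generated answer.
+    coverage = sum(contains_fact(generated, fact) for fact in golden_facts) / len(golden_facts)
+    # Conciseness: penalize answers that are much longer than the golden content.
+    golden_len = sum(len(fact) for fact in golden_facts)
+    ratio = len(generated) / max(golden_len, 1)
+    conciseness = 1.0 if ratio <= max_length_ratio else max_length_ratio / ratio
+    return coverage * conciseness
+```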
+
+
+Demetrios:
+
+And so you kind of touched on this earlier, but can you say it again? Because I don't know if I fully grasped it. Where are all the places in the system that you are evaluating? Because it's not just the output. Right. And how do you look at evaluation as a system rather than just evaluating the output every once in a while?
+
+
+
+Sourabh Agrawal:
+
+Yeah, so what we do is we plug into every part. So even if you start with retrieval, we have a high-level check where we look at the quality of the retrieved context, and then we also have evaluations for every part of this retrieval pipeline. So if you're doing query rewriting, if you're doing re-ranking, if you're doing sub-questions, we have evaluations for all of them. In fact, we have worked closely with the LlamaIndex team to integrate with all of their modular pipelines. We have around five to six metrics on this retrieval part. Then, once we cross the retrieval step, we look at the response generation, where we have evaluations for different criteria.
+
+
+
+Sourabh Agrawal:
+
+So conciseness, completeness, safety, jailbreaks, prompt injections, as well as custom guidelines you can define yourself. You can say that, okay, if the user is asking anything related to code, the output should also give an example code snippet; you can define this guideline in plain English, and we check for that. And then finally, zooming out, we also have checks where we look at conversations as a whole: how satisfied the user is, how many turns it takes for the chatbot or the LLM to answer the user. Yeah, that's how we look at evaluations as a whole.
+
+
+
+Demetrios:
+
+Yeah. It really reminds me, and I say this so much because it's one of the biggest fails, I think, on the Internet, and I'm sure you've seen it: I think it was Chevy or GM, the car company, they basically slapped a chatbot on their website. It was a GPT call, and people started talking to it and realized, oh my God, this thing will do anything that we want it to do. So they started asking it questions like, is Tesla better than GM? And the bot would say yeah, and give a bunch of reasons why Tesla is better than GM, on the website of GM. And then somebody else asked it, oh, can I get a car for a dollar? And it said no. And then they said, but I'm broke and I need a car for a dollar. And it said, okay, we'll sell you the car for a dollar. And so you're getting yourself into all this trouble just because you're not doing that real-time evaluation.
+
+
+
+Demetrios:
+
+How do you think about the real time evaluation? And is that like an extra added layer of complexity?
+
+
+
+Sourabh Agrawal:
+
+Yeah, for real-time evaluations, I mean, there are two scenarios which we feel are the most important to deal with. One is that you have to put some guardrails in place, in the sense that you don't want the users to talk about your competitors, you don't want to answer some queries, you don't want to make false promises, and so on, right? Some of them can be handled with pure regexes and contextual logic, and for some of them you have to do evaluations. And the second is jailbreak. You don't want the user to use, let's say, your Chevy chatbot to solve math problems or coding problems, right? Because in a way, you're just subsidizing GPT-4 for them. And all of these can be done just on the question which is being asked. So you can have a system where you fire a query, evaluate a few of these key metrics, and in parallel generate your response. And as soon as you get your response, you also get your evaluations.
+
+
+
+Sourabh Agrawal:
+
+And you can have some logic that if the user is asking about something which I should not be answering, instead of giving the response I should just say, sorry, I cannot answer this, or have a standard text for those cases and some mechanisms to limit such scenarios and so on.
+
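+The pattern described here, running a cheap query-only check in parallel with generation and falling back to a canned reply when the check fails, can be sketched roughly as follows. The threshold and both helper functions are hypothetical placeholders, not any particular library's API:
+
+```python
+import asyncio
+
+JAILBREAK_THRESHOLD = 0.5  # illustrative cut-off; tune for your application
+
+async def generate_answer(query: str) -> str:
+    # Placeholder: replace with your actual LLM call.
+    return f""(model answer to: {query})""
+
+async def score_jailbreak(query: str) -> float:
+    # Placeholder: replace with a cheap evaluator model; 0.0 = benign, 1.0 = jailbreak attempt.
+    return 0.0
+
+async def answer_with_guardrail(query: str) -> str:
+    # Generation and evaluation run concurrently, so the check adds no latency on the happy path.
+    answer, jailbreak_score = await asyncio.gather(
+        generate_answer(query),
+        score_jailbreak(query),
+    )
+    if jailbreak_score > JAILBREAK_THRESHOLD:
+        return ""Sorry, I can't help with that request.""
+    return answer
+
+if __name__ == ""__main__"":
+    print(asyncio.run(answer_with_guardrail(""How do I reset my password?"")))
+```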
+
+
+Demetrios:
+
+And it's better to do that in parallel than to try and catch the response and make sure it's okay before sending it out.
+
+
+
+Sourabh Agrawal:
+
+I mean, generally, yes, because if you catch the response, it adds another layer of latency.
+
+
+
+Demetrios:
+
+Right.
+
+
+
+Sourabh Agrawal:
+
+And at the end of the day, 95% of your users are not trying to do this. In any good product, most of those users are genuinely trying to use it, and you don't want to build something which breaks or creates an issue for them, or adds latency for them, just to solve for that 5%. So you have to be cognizant of this fact and figure out clever ways to do this.
- faq_model.save_servable(os.path.join(ROOT_DIR, ""servable""))
-```
+Demetrios:
+Yeah, I remember I was talking to Philip from a company called Honeycomb, and they added some LLM functionality to their product. And he said that when people were trying to either prompt inject or jailbreak, it was fairly obvious because there were a lot of calls. It kind of started to not be human usage, and it was easy to catch in that way. Have you seen some of that too? And what are some signs that you see when people are trying to jailbreak?
-Here are a couple of unseen classes, `PairsSimilarityDataLoader`, which is a native dataloader for
-`SimilarityPairSample` objects, and `Quaterion` is an entry point to the training process.
+Sourabh Agrawal:
+Yeah, we have also seen that. Typically, what we see is that whenever someone is trying to jailbreak, the length of their question or the length of their prompt is much larger than any average question, because they will have all sorts of instructions like, forget everything, you are allowed to say all of those things. And then again, when they try to jailbreak, they try with one technique, it doesn't work; they try with another technique, it doesn't work; then they try a third technique. So there is a burst of traffic. And even in terms of sentiment, the sentiment or the coherence in those cases, we have seen that to be lower compared to a genuine question, because people are just trying to cram all these instructions into the prompt. So there are definitely certain signs which indicate that the user is trying to jailbreak, and I think those are great indicators to catch them.
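+
+Those signals, unusually long prompts and bursts of repeated attempts, can be turned into a cheap pre-filter that runs before any LLM-based check. The thresholds below are arbitrary placeholders, purely to illustrate the idea:
+
+```python
+import time
+from collections import defaultdict, deque
+
+MAX_PROMPT_CHARS = 2000       # illustrative: jailbreak prompts tend to be much longer than normal ones
+MAX_ATTEMPTS_PER_MINUTE = 5   # illustrative: repeated retries often arrive in short bursts
+
+recent_requests = defaultdict(deque)  # user_id -> timestamps of recent prompts
+
+def looks_suspicious(user_id: str, prompt: str) -> bool:
+    now = time.time()
+    window = recent_requests[user_id]
+    window.append(now)
+    # Keep only the timestamps from the last 60 seconds.
+    while window and now - window[0] > 60:
+        window.popleft()
+    too_long = len(prompt) > MAX_PROMPT_CHARS
+    too_bursty = len(window) > MAX_ATTEMPTS_PER_MINUTE
+    # Flag for a proper evaluation or human review rather than blocking outright.
+    return too_long or too_bursty
+```
+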
-### Dataset-wise evaluation
+Demetrios:
-Up to this moment we've calculated only batch-wise metrics.
+And I assume that you've got it set up so you can just set an alert when those things happen, and then it will at least flag it and have humans look over it, or potentially just ask the person to cool off for the next minute: hey, you've been doing some suspicious activity here, we want to see something different. So I think you were going to show us a little bit about UpTrain, right? I want to see what you got. Can we go for a spin?
-Such metrics can fluctuate a lot depending on a batch size and can be misleading.
-It might be helpful if we can calculate a metric on a whole dataset or some large part of it.
-Raw data may consume a huge amount of memory, and usually we can't fit it into one batch.
+Sourabh Agrawal:
-Embeddings, on the contrary, most probably will consume less.
+Yeah, definitely. Let me share my screen and I can show you what that looks like.
-That's where `Evaluator` enters the scene.
+Demetrios:
-At first, having dataset of `SimilaritySample`, `Evaluator` encodes it via `SimilarityModel` and compute corresponding labels.
+Cool, very cool. Yeah. And just while you're sharing your screen, I want to mention that for this talk, I wore my favorite shirt. I don't know if everyone can see it, but it says, I hallucinate more than ChatGPT.
-After that, it calculates a metric value, which could be more representative than batch-wise ones.
+Sourabh Agrawal:
-However, you still can find yourself in a situation where evaluation becomes too slow, or there is no enough space left in the memory.
+I think that's a cool one.
-A bottleneck might be a squared distance matrix, which one needs to calculate to compute a retrieval metric.
-You can mitigate this bottleneck by calculating a rectangle matrix with reduced size.
-`Evaluator` accepts `sampler` with a sample size to select only specified amount of embeddings.
+Demetrios:
-If sample size is not specified, evaluation is performed on all embeddings.
+What do we got here?
-Fewer words! Let's add evaluator to our code and finish `train.py`.
+Sourabh Agrawal:
-
+Yeah, so let me just get started. So I created an account with UpTrain. What we have is an API way of calculating these evaluations. You get an API key, similar to what you get for ChatGPT or others, and then you can just do UpTrain log and evaluate and give your data. So you can give your question, responses, context, and you can define the checks which you want to evaluate for. So if I create an API key, I can just copy this code, and I already have it here, so I'll just show you. We have two mechanisms.
-```python
-...
-from quaterion.eval.evaluator import Evaluator
+Sourabh Agrawal:
-from quaterion.eval.pair import RetrievalReciprocalRank, RetrievalPrecision
+One is that you can just run evaluations, so you can define, like, okay, I want to run context relevance, I want to run response completeness. Similarly, I want to run jailbreak, I want to run safety, I want to run user satisfaction, and so on. And then when you run it, it gives you back a score and an explanation of why this particular score has been given for this particular question.
-from quaterion.eval.samplers.pair_sampler import PairSampler
-...
+Demetrios:
+Can you make that a little bit bigger? Yeah, just give us some plus. Yeah, there we go.
-def train(model, train_dataset_path, val_dataset_path, params):
- ...
+Sourabh Agrawal:
+It's essentially an API call which takes the data and the list of checks which you want to run, and then it gives back a score and an explanation for that. So based on that score, you can have logic, right? If the jailbreak score is more than 0.5, then you don't want to show it; you want to switch back to a default response and so on. And then you can also configure it so that we log all of these scores, and we have a dashboard where you can access them.
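+
+For readers who want to try this, the open-source flavour of this flow looks roughly like the snippet below, based on UpTrain's documented `EvalLLM` interface around the time of this talk. Treat it as a sketch rather than a verbatim recipe: class names, check names, and options may differ between versions, so check the UpTrain repository for the current API.
+
+```python
+import json
+from uptrain import EvalLLM, Evals
+
+# One row of evaluation data: the user question, the retrieved context, and the model's response.
+data = [{
+    ""question"": ""What is the capital of France?"",
+    ""context"": ""Paris is the capital and largest city of France."",
+    ""response"": ""The capital of France is Paris."",
+}]
+
+eval_llm = EvalLLM(openai_api_key=""sk-..."")  # a cheap judge model keeps evaluation costs manageable
+
+results = eval_llm.evaluate(
+    data=data,
+    checks=[Evals.CONTEXT_RELEVANCE, Evals.RESPONSE_COMPLETENESS],
+)
+
+# Each result carries a score and an explanation; application logic (for example, switching to a
+# default reply when a score crosses a threshold) is left to the caller.
+print(json.dumps(results, indent=2))
+```
+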
- metrics = {
- ""rrk"": RetrievalReciprocalRank(),
- ""rp@1"": RetrievalPrecision(k=1)
+Demetrios:
- }
+I was just going to ask if you have dashboards. Everybody loves a good dashboard. Let's see it. That's awesome.
- sampler = PairSampler()
- evaluator = Evaluator(metrics, sampler)
- results = Quaterion.evaluate(evaluator, val_dataset, model.model)
+Sourabh Agrawal:
- print(f""results: {results}"")
+So let's see. Okay, let's take this one. In this case, I just ran some of these context relevance checks for some of the queries, so you can see how that changes on your data sets if you're running the same. We also run this in a monitoring setting, so you can see how it varies over time. And then finally you have all of the data. We provide all of the data, you can download it, run whatever analysis you want to run. And then one of the features which we have built recently, and which is getting very popular amongst our users, is that you can filter cases where, let's say, the model is failing.
-```
+Sourabh Agrawal:
-### Train Results
+So let's say I take all the cases where the response score is zero, and I can find common topics. I can look at all these cases and find, okay, what's the common theme across them? Maybe, as you can see, they're all talking about France, Romeo and Juliet, and so on. So it can just pull out a common topic among these cases, and this gives you some insights into where things are going wrong and what you need to improve upon. And the second piece of the puzzle is the experiments. So not only can you evaluate things, but you can also use it to experiment with different settings. Let me just pull out an experiment I ran recently.
-At this point we can train our model, I do it via `python3 -m faq.train`.
+Demetrios:
+Yeah.
-
+Sourabh Agrawal:
+So let's say I want to compare two different models, right? So GPT-3.5 and Claude 2. I can now see that, okay, Claude 2 is giving more concise responses, but in terms of factual accuracy, GPT-3.5 is more factually accurate. So now, based on my application, based on what my users want, I can decide which of these criteria is more meaningful for me, for my users, for my data, and decide which prompt or which model I want to go ahead with.
-|epoch|train_precision@1|train_reciprocal_rank|val_precision@1|val_reciprocal_rank|
-|-----|-----------------|---------------------|---------------|-------------------|
-|0 |0.650 |0.732 |0.659 |0.741 |
+Demetrios:
-|100 |0.665 |0.746 |0.673 |0.754 |
+This is totally what I was talking about earlier, where you get a new model and you're seeing on some metrics, it's doing worse. But then on your core metric that you're looking at, it's actually performing better. So you have to kind of explain to yourself, why is it doing better on those other metrics? I don't know if I'm understanding this correctly. We can set the metrics that we're looking at.
-|200 |0.677 |0.757 |0.682 |0.763 |
-|300 |0.686 |0.765 |0.688 |0.768 |
-|400 |0.695 |0.772 |0.694 |0.773 |
+Sourabh Agrawal:
-|500 |0.701 |0.778 |0.700 |0.777 |
+Yeah, actually, I'll show you the kind of metrics. Also, I forgot to mention earlier, UpTrain is open source.
-
+Demetrios:
+Nice.
-Results obtained with `Evaluator`:
+Sourabh Agrawal:
+Yeah. So we have these pre-configured checks, so you don't need to do anything. You can just say UpTrain response completeness or UpTrain prompt injection. These are pre-configured, so we did the hard work of getting all these scores and so on. And on top of that, we also have ways for you to customize these metrics: you can define a custom guideline, you can change the prompt as you want, you can even define a custom Python function which you want to act as an evaluator.
-
+Sourabh Agrawal:
-| precision@1 | reciprocal_rank |
+So we provide all of those functionalities so that developers can take advantage of things which are already there, as well as create custom things which make sense for them, and have a way to truly understand how their systems are doing.
-|-------------|-----------------|
-| 0.577 | 0.675 |
+Demetrios:
+Oh, that's really cool. I really like the idea of being able to set custom ones, but then also having some that just come right out of the box to make life easier on us.
-
+Sourabh Agrawal:
-After training all the metrics have been increased.
+Yeah. And I think both are needed, because you want someplace to start, and as you advance, you can't cover everything with pre-configured checks, so you want to have a way to customize things.
-And this training was done in just 3 minutes on a single gpu!
-There is no overfitting and the results are steadily growing, although I think there is still room for improvement and experimentation.
+Demetrios:
+Yeah. And especially once you have data flowing, you'll start to see what other things you need to be evaluating exactly.
-## Model serving
+Sourabh Agrawal:
-As you could already notice, Quaterion framework is split into two separate libraries: `quaterion`
+Yeah, that's very true.
-and [quaterion-models](https://quaterion-models.qdrant.tech/).
-The former one contains training related stuff like losses, cache, `pytorch-lightning` dependency, etc.
-While the latter one contains only modules necessary for serving: encoders, heads and `SimilarityModel` itself.
+Demetrios:
+Just a random one, and I'm not telling you how to build your product or anything, but have you thought about having community-sourced metrics? So, like, all these custom ones that people are making, maybe there's a hub where we can add our custom ones?
-The reasons for this separation are:
+Sourabh Agrawal:
+Yeah, I think that's really interesting. This is something we have also been thinking about a lot. It's not built out yet, but we plan to go in that direction pretty soon. We want to create a store kind of thing where people can add their custom metrics. So yeah, you're right on. I also believe that's the way to go, and we will be releasing something on that front pretty soon.
-- less amount of entities you need to operate in a production environment
-- reduced memory footprint
+Demetrios:
+Nice. So Drew's asking, how do you handle jailbreak for different types of applications? Jailbreak for a medical app would be different than one for a finance one, right? Yeah.
-It is essential to isolate training dependencies from the serving environment cause the training step is usually more complicated.
-Training dependencies are quickly going out of control, significantly slowing down the deployment and serving timings and increasing unnecessary resource usage.
+Sourabh Agrawal:
+The way our jailbreak check is configured, it takes something we call a model purpose. So you define what the purpose of your model is. For a financial app, you need to say that, okay, this LLM application is designed to answer financial queries, and so on. For medical, you will have a different purpose, so you can configure what the purpose of your app is. And then when we take up a user query, firstly we check for illegal activities and so on, and then we also check whether it's under the purview of this purpose.
-The very last row of `train.py` - `faq_model.save_servable(...)` saves encoders and the model in a fashion that eliminates all Quaterion dependencies and stores only the most necessary data to run a model in production.
+Sourabh Agrawal:
+If not, then we tag that as a jailbreak scenario, because the user is trying to do something other than the purpose. So that's how we tackle it.
-In `serve.py` we load and encode all the answers and then look for the closest vectors to the questions we are interested in:
+Demetrios:
+Nice, dude. Well, this is awesome. Is there anything else you want to say before we jump off?
-```python
-import os
-import json
+Sourabh Agrawal:
+No, I mean, it was like, a great conversation. Really glad to be here and great talking to you.
-import torch
-from quaterion_models.model import SimilarityModel
+Demetrios:
-from quaterion.distances import Distance
+Yeah, I'm very happy that we got this working and you were able to show us a little bit of UpTrain. Super cool that it's open source. So I would recommend everybody go check it out, get your LLMs working with confidence, and make sure that nobody is using your chatbot to be their GPT subsidy, like the GM use case. Yeah, it's great, dude. I appreciate it.
-from faq.config import DATA_DIR, ROOT_DIR
+Sourabh Agrawal:
+Yeah, check us out. We are at github.com/uptrain-ai/uptrain.
+Demetrios:
-if __name__ == ""__main__"":
+There we go. And if anybody else wants to come on to the Vector Space Talks and talk to us about all the cool stuff that you're doing, hit us up, and we'll see you all, astronauts, later. Don't get lost in vector space.
- device = ""cuda:0"" if torch.cuda.is_available() else ""cpu""
- model = SimilarityModel.load(os.path.join(ROOT_DIR, ""servable""))
- model.to(device)
+Sourabh Agrawal:
- dataset_path = os.path.join(DATA_DIR, ""val_cloud_faq_dataset.jsonl"")
+Yeah, thank you. Thanks a lot.
- with open(dataset_path) as fd:
+Demetrios:
- answers = [json.loads(json_line)[""answer""] for json_line in fd]
+All right, dude. There we go. We are good. I don't know how the hell I'm going to stop this one, because I can't go through on my phone and I can't go through on my computer. It's so weird. So, like, technically there's nobody at the wheel right now. So I think if we both get off, it should stop working. Okay.
-
- # everything is ready, let's encode our answers
- answer_embeddings = model.encode(answers, to_numpy=False)
+Demetrios:
-
+Yeah, but that was awesome, man. This is super cool. I really like what you're doing, and it's so funny. I don't know if we're not connected on LinkedIn, are we? I literally just today posted a video of me going through a few different hallucination mitigation techniques. So it's, like, super timely that you talk about this. I think so many people have been thinking about this.
- # Some prepared questions and answers to ensure that our model works as intended
- questions = [
- ""what is the pricing of aws lambda functions powered by aws graviton2 processors?"",
+Sourabh Agrawal:
- ""can i run a cluster or job for a long time?"",
+Definitely with enterprises, it's like a big issue. Right? I mean, how do you make it safe? How do you make it production ready? So I'll definitely check out your video. Also would be super interesting.
- ""what is the dell open manage system administrator suite (omsa)?"",
- ""what are the differences between the event streams standard and event streams enterprise plans?"",
- ]
+Demetrios:
- ground_truth_answers = [
+Just go to my LinkedIn right now. It's just like LinkedIn.com dpbrinkm or just search for me. I think we are connected. We're connected. All right, cool. Yeah, so, yeah, check out the last video I just posted, because it's literally all about this. And there's a really cool paper that came out and you probably saw it. It's all like, mitigating AI hallucinations, and it breaks down all 32 techniques.
- ""aws lambda functions powered by aws graviton2 processors are 20% cheaper compared to x86-based lambda functions"",
- ""yes, you can run a cluster for as long as is required"",
- ""omsa enables you to perform certain hardware configuration tasks and to monitor the hardware directly via the operating system"",
+Demetrios:
- ""to find out more information about the different event streams plans, see choosing your plan"",
+And on another podcast that I do, I was literally talking with the guys from Weights and Biases yesterday, and I was saying, man, evaluation data sets as a service feels like something that nobody's doing. And I guess it's probably because, and you're the expert, so I would love to hear what you have to say about it, but I guess it's because you don't really need it that badly. With a relatively small amount of data, you can start getting some really good evaluation happening. So it's a lot better than paying somebody else.
- ]
-
- # encode our questions and find the closest to them answer embeddings
+Sourabh Agrawal:
- question_embeddings = model.encode(questions, to_numpy=False)
+And also, I think it doesn't make sense for a service, because some external person is not best suited to make a data set for your use case.
- distance = Distance.get_by_name(Distance.COSINE)
- question_answers_distances = distance.distance_matrix(
- question_embeddings, answer_embeddings
+Demetrios:
- )
+Right.
- answers_indices = question_answers_distances.min(dim=1)[1]
- for q_ind, a_ind in enumerate(answers_indices):
- print(""Q:"", questions[q_ind])
+Sourabh Agrawal:
- print(""A:"", answers[a_ind], end=""\n\n"")
+It's you. You have to look at what your users are asking to create a good data set. You can have a method, which is what UpTrain also does: we basically help you sample and pick out the right cases from this data set based on the feedback of your users, based on the scores which are being generated. But it's difficult for someone external to craft really good questions or really good queries or really good cases which make sense for your business.
- assert (
- answers[a_ind] == ground_truth_answers[q_ind]
- ), f""<{answers[a_ind]}> != <{ground_truth_answers[q_ind]}>""
+Demetrios:
-```
+Because the other piece that kind of spitballed off of that was techniques. So let me see if I can place all these words into a coherent sentence for you. It's basically like, okay, evaluation data sets as a service don't really make sense because you're the one who knows the most, and with a relatively small amount of data you're going to be able to get stuff going real quick. What I thought about is, what about these hallucination mitigation techniques, so that you can almost have options? So in this paper, right, there are like 32 different kinds of techniques that they use, and some are very pertinent for RAG. They have like four or five different types of techniques for when you're dealing with RAG to mitigate hallucinations, and then they have some like, okay, if you're distilling a model, here is how you can make sure that the new distilled model doesn't hallucinate as much.
-We stored our collection of answer embeddings in memory and perform search directly in Python.
+Demetrios:
-For production purposes, it's better to use some sort of vector search engine like [Qdrant](https://qdrant.tech/).
+Blah, blah, blah. But what I was thinking is, how can you get a product out of that? Can you productize these kinds of techniques? So, all right, cool, they're in this paper, but in UpTrain, can we just say, oh, you want to try this new mitigation technique? We make that really easy for you. You just have to select it as one of the hallucination mitigation techniques, and then we do the heavy lifting. For example, have you heard of FLEEK? That was one that I was talking about in the video. FLEEK is where there's a knowledge graph LLM that is created, and it is specifically created to try and combat hallucinations. And the way that they do it is they say that the LLM will try and identify anywhere in the prompt or the output.
-It provides durability, speed boost, and a bunch of other features.
+Demetrios:
-So far, we've implemented a whole training process, prepared model for serving and even applied a
+Sorry, the output. It will try and identify if there's anything that can be fact checked. And so if it says that humans landed on the moon in 1969, it will identify that. And then either through its knowledge graph or through just forming a search query that will go out and then search the Internet, it will verify if that fact is true in the output. So that's like one technique, right? And so what I'm thinking about is like, oh, man, wouldn't it be cool if you could have all these different techniques to be able to use really easily as opposed to, great, I read it in a paper. Now, how the fuck am I going to get my hands on one of these LLMs with a knowledge graph if I don't train it myself?
-trained model today with `Quaterion`.
+Sourabh Agrawal:
-Thank you for your time and attention!
+Shit, yeah, I think that's a great suggestion. I'll definitely check it out. One of the things which we also want to do is integrate with all these techniques, because these are really good techniques and they help solve a lot of problems, but using them is not simple. Recently we integrated with SPADE. It's basically a technique where...
-I hope you enjoyed this huge tutorial and will use `Quaterion` for your similarity learning projects.
+Demetrios:
-All ready to use code can be found [here](https://github.com/qdrant/demo-cloud-faq/tree/tutorial).
+I did another video on SPADE, actually.
-Stay tuned!:)",articles/faq-question-answering.md
-"---
+Sourabh Agrawal:
-title: ""Discovery needs context"" #required
+Yeah, basically. I think I'll also check out these hallucination techniques. So right now, what we do is based on this paper called FactScore, which, instead of checking on the Internet, checks only in the context to verify whether a fact can be verified from the context or not. But I think it would be really cool if people could just play around with these techniques and see whether they actually work on their data or not.
-short_description: Discover points by constraining the space.
-description: Qdrant released a new functionality that lets you constrain the space in which a search is performed, relying only on vectors. #required
-social_preview_image: /articles_data/discovery-search/social_preview.jpg # This image will be used in social media previews, should be 1200x630px. Required.
+Demetrios:
-small_preview_image: /articles_data/discovery-search/icon.svg # This image will be used in the list of articles at the footer, should be 40x40px
+That's kind of what I was thinking is like, oh, can you see? Does it give you a better result? And then the other piece is like, oh, wait a minute, does this actually, can I put like two or three of them in my system at the same time? Right. And maybe it's over engineering or maybe it's not. I don't know. So there's a lot of fun stuff that can go down there and it's fascinating to think about.
-preview_dir: /articles_data/discovery-search/preview # This directory contains images that will be used in the article preview. They can be generated from one image. Read more below. Required.
-weight: -110 # This is the order of the article in the list of articles at the footer. The lower the number, the higher the article will be in the list.
-author: Luis Cossío # Author of the article. Required.
+Sourabh Agrawal:
-author_link: https://coszio.github.io # Link to the author's page. Required.
+Yeah, definitely. And I think experimentation is the key here, right? I mean, unless you try them out, you don't know what works. And if something works which improves your system, then definitely it was worth it.
-date: 2024-01-31T08:00:00-03:00 # Date of the article. Required.
-draft: false # If true, the article will not be published
-keywords: # Keywords for SEO
+Demetrios:
- - why use a vector database
+Thanks for that.
- - specialty
- - search
- - discovery
+Sourabh Agrawal:
- - state-of-the-art
+We'll check into it.
- - vector-search
----
+Demetrios:
+Dude, awesome. It's great chatting with you, bro. And I'll talk to you later, bro.
-When Christopher Columbus and his crew sailed to cross the Atlantic Ocean, they were not looking for America. They were looking for a new route to India, and they were convinced that the Earth was round. They didn't know anything about America, but since they were going west, they stumbled upon it.
+Sourabh Agrawal:
-They couldn't reach their _target_, because the geography didn't let them, but once they realized it wasn't India, they claimed it a new ""discovery"" for their crown. If we consider that sailors need water to sail, then we can establish a _context_ which is positive in the water, and negative on land. Once the sailor's search was stopped by the land, they could not go any further, and a new route was found. Let's keep this concepts of _target_ and _context_ in mind as we explore the new functionality of Qdrant: __Discovery search__.
+Yeah, thanks a lot. Great speaking. See you. Bye.
+",blog/vector-search-for-content-based-video-recommendation-gladys-and-sam-vector-space-talk-012.md
+"---
+draft: false
+title: Iveta Lohovska on Gen AI and Vector Search | Qdrant
-In version 1.7, Qdrant [released](/articles/qdrant-1.7.x/) this novel API that lets you constrain the space in which a search is performed, relying only on pure vectors. This is a powerful tool that lets you explore the vector space in a more controlled way. It can be used to find points that are not necessarily closest to the target, but are still relevant to the search.
+slug: gen-ai-and-vector-search
+short_description: Iveta talks about the importance of trustworthy AI,
+ particularly when implementing it within high-stakes enterprises like
-You can already select which points are available to the search by using payload filters. This by itself is very versatile because it allows us to craft complex filters that show only the points that satisfy their criteria deterministically. However, the payload associated with each point is arbitrary and cannot tell us anything about their position in the vector space. In other words, filtering out irrelevant points can be seen as creating a _mask_ rather than a hyperplane –cutting in between the positive and negative vectors– in the space.
+ governments and security agencies
+description: Discover valuable insights on generative AI, vector search, and ethical AI implementation from Iveta Lohovska, Chief Technologist at HPE.
+preview_image: /blog/from_cms/iveta-lohovska-bp-cropped.png
-This is where a __vector _context___ can help. We define _context_ as a list of pairs. Each pair is made up of a positive and a negative vector. With a context, we can define hyperplanes within the vector space, which always prefer the positive over the negative vectors. This effectively partitions the space where the search is performed. After the space is partitioned, we then need a _target_ to return the points that are more similar to it.
+date: 2024-04-11T22:12:00.000Z
+author: Demetrios Brinkmann
+featured: false
-![Discovery search visualization](/articles_data/discovery-search/discovery-search.png)
+tags:
+ - Vector Space Talks
+ - Vector Search
-While positive and negative vectors might suggest the use of the recommendation interface, in the case of _context_ they require to be paired up in a positive-negative fashion. This is inspired from the machine-learning concept of _triplet loss_, where you have three vectors: an anchor, a positive, and a negative. Triplet loss is an evaluation of how much the anchor is closer to the positive than to the negative vector, so that learning happens by ""moving"" the positive and negative points to try to get a better evaluation. However, during discovery, we consider the positive and negative vectors as static points, and we search through the whole dataset for the ""anchors"", or result candidates, which fit this characteristic better.
+ - Retrieval Augmented Generation
+ - GenAI
+---
-![Triplet loss](/articles_data/discovery-search/triplet-loss.png)
+# Exploring Gen AI and Vector Search: Insights from Iveta Lohovska
-[__Discovery search__](#discovery-search), then, is made up of two main inputs:
+> *""In the generative AI context of AI, all foundational models have been trained on some foundational data sets that are distributed in different ways. Some are very conversational, some are very technical, some are on, let's say very strict taxonomy like healthcare or chemical structures. We call them modalities, and they have different representations.”*\
+— Iveta Lohovska
+>
-- __target__: the main point of interest
-- __context__: the pairs of positive and negative points we just defined.
+Iveta Lohovska serves as the Chief Technologist and Principal Data Scientist for AI and Supercomputing at [Hewlett Packard Enterprise (HPE)](https://www.hpe.com/us/en/home.html), where she champions the democratization of decision intelligence and the development of ethical AI solutions. An industry leader, her multifaceted expertise encompasses natural language processing, computer vision, and data mining. Committed to leveraging technology for societal benefit, Iveta is a distinguished technical advisor to the United Nations' AI for Good program and a Data Science lecturer at the Vienna University of Applied Sciences. Her career also includes impactful roles with the World Bank Group, focusing on open data initiatives and Sustainable Development Goals (SDGs), as well as collaborations with USAID and the Gates Foundation.
-However, it is not the only way to use it. Alternatively, you can __only__ provide a context, which invokes a [__Context Search__](#context-search). This is useful when you want to explore the space defined by the context, but don't have a specific target in mind. But hold your horses, we'll get to that [later ↪](#context-search).
+***Listen to the episode on [Spotify](https://open.spotify.com/episode/7f1RDwp5l2Ps9N7gKubl8S?si=kCSX4HGCR12-5emokZbRfw), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/RsRAUO-fNaA).***
-## Discovery search
+
-Let's talk about the first case: context with a target.
+
-To understand why this is useful, let's take a look at a real-world example: using a multimodal encoder like [CLIP](https://openai.com/blog/clip/) to search for images, from text __and__ images.
-CLIP is a neural network that can embed both images and text into the same vector space. This means that you can search for images using either a text query or an image query. For this example, we'll reuse our [food recommendations demo](https://food-discovery.qdrant.tech/) by typing ""burger"" in the text input:
+## **Top takeaways:**
-![Burger text input in food demo](/articles_data/discovery-search/search-for-burger.png)
+In our continuous pursuit of knowledge and understanding, especially in the evolving landscape of AI and the vector space, we brought another great Vector Space Talk episode featuring Iveta Lohovska as she talks about generative AI and [vector search](https://qdrant.tech/).
-This is basically nearest neighbor search, and while technically we have only images of burgers, one of them is a logo representation of a burger. We're looking for actual burgers, though. Let's try to exclude images like that by adding it as a negative example:
+Iveta brings valuable insights from her work with the World Bank and as Chief Technologist at HPE, explaining the ins and outs of ethical AI implementation.
-![Try to exclude burger drawing](/articles_data/discovery-search/try-to-exclude-non-burger.png)
+Here are the episode highlights:
+- Exploring the critical role of trustworthiness and explainability in AI, especially within high confidentiality use cases like government and security agencies.
+- Discussing the importance of transparency in AI models and how it impacts the handling of data and understanding the foundational datasets for vector search.
-Wait a second, what has just happened? These pictures have __nothing__ to do with burgers, and still, they appear on the first results. Is the demo broken?
+- Iveta shares her experiences implementing generative AI in high-stakes environments, including the energy sector and policy-making, emphasizing accuracy and source credibility.
+- Strategies for managing data privacy in high-stakes sectors, the superiority of on-premises solutions for control, and the implications of opting for cloud or hybrid infrastructure.
+- Iveta's take on the maturity levels of generative AI, the ongoing development of smaller, more focused models, and the evolving landscape of AI model licensing and open-source contributions.
-Turns out, multimodal encoders might not work how you expect them to. Images and text are embedded in the same space, but they are not necessarily close to each other. This means that we can create a mental model of the distribution as two separate planes, one for images and one for text.
+> Fun Fact: The climate agent solution showcased by Iveta helps individuals benchmark their carbon footprint and assists policymakers in drafting policy recommendations based on scientifically accurate data.
-![Mental model of CLIP embeddings](/articles_data/discovery-search/clip-mental-model.png)
+>
-This is where discovery excels, because it allows us to constrain the space considering the same mode (images) while using a target from the other mode (text).
+## Show notes:
-![Cross-modal search with discovery](/articles_data/discovery-search/clip-discovery.png)
+00:00 AI's vulnerabilities and ethical implications in practice.\
+06:28 Trust reliable sources for accurate climate data.\
+09:14 Vector database offers control and explainability.\
-Discovery also lets us keep giving feedback to the search engine in the shape of more context pairs, so we can keep refining our search until we find what we are looking for.
+13:21 On-prem vital for security and control.\
+16:47 Gen AI chat models at basic maturity.\
+19:28 Mature technical community, but slow enterprise adoption.\
-Another intuitive example: imagine you're looking for a fish pizza, but pizza names can be confusing, so you can just type ""pizza"", and prefer a fish over meat. Discovery search will let you use these inputs to suggest a fish pizza... even if it's not called fish pizza!
+23:34 Advocates for open source but highlights complexities.\
+25:38 Unreliable information, triangle of necessities, vector space.
-![Simple discovery example](/articles_data/discovery-search/discovery-example-with-images.png)
+## More Quotes from Iveta:
-## Context search
+*""What we have to ensure here is that every citation and every answer and augmentation by the generative AI on top of that is linked to the exact source of paper or publication, where it's coming from, to ensure that we can trace it back to where the climate information is coming from.”*\
+— Iveta Lohovska
-Now, second case: only providing context.
+*""Explainability means if you receive a certain answer based on your prompt, you can trace it back to the exact source where the embedding has been stored or the source of where the information is coming from and things.”*\
-Ever been caught in the same recommendations on your favourite music streaming service? This may be caused by getting stuck in a similarity bubble. As user input gets more complex, diversity becomes scarce, and it becomes harder to force the system to recommend something different.
+— Iveta Lohovska
-![Context vs recommendation search](/articles_data/discovery-search/context-vs-recommendation.png)
+*""Chat GPT for conversational purposes and individual help is something very cool but when this needs to be translated into actual business use cases scenario with all the constraint of the enterprise architecture, with the constraint of the use cases, the reality changes quite dramatically.”*\
+— Iveta Lohovska
-__Context search__ solves this by de-focusing the search around a single point. Instead, it selects points randomly from within a zone in the vector space. This search is the most influenced by _triplet loss_, as the score can be thought of as _""how much a point is closer to a negative than a positive vector?""_. If it is closer to the positive one, then its score will be zero, same as any other point within the same zone. But if it is on the negative side, it will be assigned a more and more negative score the further it gets.
+## Transcript:
+Demetrios:
-![Context search visualization](/articles_data/discovery-search/context-search.png)
+Look at that. We are back for another vector space talks. I'm very excited to be doing this today with you all. I am joined by none other than Sabrina again. Where are you at, Sabrina? How's it going?
-Creating complex tastes in a high-dimensional space becomes easier, since you can just add more context pairs to the search. This way, you should be able to constrain the space enough so you select points from a per-search ""category"" created just from the context in the input.
+Sabrina Aquino:
+Hey there, Demetrios. Amazing. Another episode and I'm super excited for this one. How are you doing?
-![A more complex context search](/articles_data/discovery-search/complex-context-search.png)
+Demetrios:
+I'm great. And we're going to bring out our guest of honor today. We are going to be talking a lot about trustworthy AI because Iveta has a background working with the World bank and focusing on the open data with that. But currently she is chief technologist and principal data scientist at HPE. And we were talking before we hit record before we went live. And we've got some hot takes that are coming up. So I'm going to bring Iveta to the stage. Where are you? There you are, our guest of honor.
-This way you can give refeshing recommendations, while still being in control by providing positive and negative feedback, or even by trying out different permutations of pairs.
+Demetrios:
-## Wrapping up
+How you doing?
-Discovery search is a powerful tool that lets you explore the vector space in a more controlled way. It can be used to find points that are not necessarily close to the target, but are still relevant to the search. It can also be used to represent complex tastes, and break out of the similarity bubble. Check out the [documentation](/documentation/concepts/explore/#discovery-api) to learn more about the math behind it and how to use it.
-",articles/discovery-search.md
-"---
+Iveta Lohovska:
-title: ""FastEmbed: Fast and Lightweight Embedding Generation for Text""
+Good. I hope you can hear me well.
-short_description: ""FastEmbed: Quantized Embedding models for fast CPU Generation""
-description: ""FastEmbed is a Python library engineered for speed, efficiency, and accuracy""
-social_preview_image: /articles_data/fastembed/preview/social_preview.jpg
+Demetrios:
-small_preview_image: /articles_data/fastembed/preview/lightning.svg
+Loud and clear. Yes.
-preview_dir: /articles_data/fastembed/preview
-weight: -60
-author: Nirant Kasliwal
+Iveta Lohovska:
-author_link: https://nirantk.com/about/
+Happy to join here from Vienna and thank you for the invite.
-date: 2023-10-18T10:00:00+03:00
-draft: false
-keywords:
+Demetrios:
- - vector search
+Yes. So I'm very excited to talk with you today. I think it's probably worth getting the TLDR on your story and why you're so passionate about trustworthiness and explainability.
- - embedding models
- - Flag Embedding
- - OpenAI Ada
+Iveta Lohovska:
- - NLP
+Well, I think especially in the gen AI context, if there are any vulnerabilities around the solution or the training data set or any underlying context, either in the enterprise or at a smaller scale, it's just the scale that gen AI can achieve if it has any vulnerabilities or any weaknesses when it comes to explainability or trustworthiness or bias, it just goes explain nature. So it is to be considered and taken with high attention when it comes to those use cases. And most of my work is within an enterprise with high confidentiality use cases. So it plays a big role more than actually people will think it's on a high level. It just sounds like AI ethical principles or high level words that are very difficult to implement in technical terms. But in reality, when you hit the ground, when you hit the projects, when you work in the context of, let's say, governments or organizations that deal with atomic energy, I see it in Vienna, the atomic agency is a neighboring one, or security agencies. Then you see the importance and the impact of those terms and the technical implications behind that.
- - embeddings
- - ONNX Runtime
- - quantized embedding model
+Sabrina Aquino:
----
+That's amazing. And can you talk a little bit more about the importance of the transparency of these models and what can happen if we don't know exactly what kind of data they are being trained on?
-Data Science and Machine Learning practitioners often find themselves navigating through a labyrinth of models, libraries, and frameworks. Which model to choose, what embedding size, how to approach tokenizing, these are just some questions you are faced with when starting your work. We understood how, for many data scientists, they wanted an easier and intuitive means to do their embedding work. This is why we built FastEmbed (docs: https://qdrant.github.io/fastembed/) —a Python library engineered for speed, efficiency, and above all, usability. We have created easy to use default workflows, handling the 80% use cases in NLP embedding.
+Iveta Lohovska:
+I mean, this is especially relevant in our context of [vector databases](https://qdrant.tech/articles/what-is-a-vector-database/) and vector search. Because in the context of generative AI, all foundational models have been trained on some foundational data sets that are distributed in different ways. Some are very conversational, some are very technical, some are on, let's say, a very strict taxonomy like healthcare or chemical structures. We call them modalities, and they have different representations. So when it comes to implementing vector search or a [vector database](https://qdrant.tech/articles/what-is-a-vector-database/) and knowing the distribution of the foundational data sets, you have better control if you introduce additional layers or additional components to have the control in your hands of where the information is coming from, where it's stored, and [what the embeddings are](https://qdrant.tech/articles/what-are-embeddings/). So that helps, but it is actually quite important that you know what the foundational data sets are, so that you can predict any kind of weaknesses or vulnerabilities or penetrations that the solution or the use case of the model will face when it lands at the end user. Because we know that generative AI is unpredictable, we know we can implement guardrails. There are already solutions.
-### Current State of Affairs for Generating Embeddings
+Iveta Lohovska:
+We know they're not 100%, they don't give you 100% certainty, but there are definitely use cases and work where you need to hit the hundred percent certainty, especially in intelligence, cybersecurity and healthcare.
-Usually you make embedding by utilizing PyTorch or TensorFlow models under the hood. But using these libraries comes at a cost in terms of ease of use and computational speed. This is at least in part because these are built for both: model inference and improvement e.g. via fine-tuning.
+Demetrios:
-To tackle these problems we built a small library focused on the task of quickly and efficiently creating text embeddings. We also decided to start with only a small sample of best in class transformer models. By keeping it small and focused on a particular use case, we could make our library focused without all the extraneous dependencies. We ship with limited models, quantize the model weights and seamlessly integrate them with the ONNX Runtime. FastEmbed strikes a balance between inference time, resource utilization and performance (recall/accuracy).
+Yeah, that's something that I wanted to dig into a little bit. More of these high stakes use cases feel like you can't. I don't know. I talk with a lot of people about at this current time, it's very risky to try and use specifically generative AI for those high stakes use cases. Have you seen people that are doing it well, and if so, how?
-### Quick Example
+Iveta Lohovska:
+Yeah, I'm in the business of high stakes use cases and yes, we do those kinds of projects and work, which is very exciting and interesting, and you can see the impact. So I'm in generative AI implementation in the enterprise context. An enterprise context could mean critical infrastructure, could mean telco, could mean a government, could mean intelligence organizations. So those are just a few examples, but I could flip the coin and give you an alternative for a public one where I can share, let's say a good example is climate data. And we recently worked on building a knowledge worker, a climate agent that is trained, of course, on its foundational knowledge, because all foundational models have prior knowledge they can refer to. But the key point here is to be an expert on climate data, emissions gaps, country cards. Every country has a commitment to meet certain emission reduction goals and is then benchmarked and followed through the international supervision of the world, like the United Nations environmental program and similar entities. So when you're training this agent on climate data, there are competing ideas or several sources.
-Here is an example of how simple we have made embedding text documents:
+Iveta Lohovska:
+You can source your information from the local government that is incentivized to show progress to the nation and other stakeholders faster than the actual reality, or the independent entities that provide information around the state of the world when it comes to progress towards certain climate goals. And there are also different parties. So for this kind of solution, we were very lucky to work with kind of the status quo provider, the benchmark around climate data, around climate publications. And what we have to ensure here is that every citation and every answer and augmentation by the generative AI on top of that is linked to the exact source of paper or publication, where it's coming from, to ensure that we can trace it back to where the climate information is coming from. If Germany performs better compared to Austria, and also the partner we work with was the United Nations environmental program. So they want to make sure that they're the citadel scientific arm when it comes to giving information. And there's no compromise, could be a compromise on the structure of the answer, on the breadth and depth of the information, but there should be no compromise on the exact factfulness of the information and where it's coming from. And this is a concrete example because why, you oughta ask, why is this so important? Because it has two interfaces.
-```python
-documents: List[str] = [
- ""Hello, World!"",
+Iveta Lohovska:
- ""fastembed is supported by and maintained by Qdrant.""
+It has the public. You can go and benchmark your carbon footprint as an individual living in one country compared to an individual living in another. But if you are a policymaker, which is the other interface of this application, who will write the policy recommendation of a country in their own country, or a country they're advising on, you might want to make sure that the scientific citations and the policy recommendations that you're making are correct and they are retrieved from the proper data sources. Because there will be a huge implication when you go public with those numbers or when you actually design a law that is enforceable with legal terms and law enforcement.
-]
-embedding_model = DefaultEmbedding()
-embeddings: List[np.ndarray] = list(embedding_model.embed(documents))
+Sabrina Aquino:
-```
+That's very interesting, Iveta, and I think this is one of the great use cases for [RAG](https://qdrant.tech/articles/what-is-rag-in-ai/), for example. And I think if you can talk a little bit more about how vector search is playing into all of this, how it's helping organizations do this, this.
-These 3 lines of code do a lot of heavy lifting for you: They download the quantized model, load it using ONNXRuntime, and then run a batched embedding creation of your documents.
+Iveta Lohovska:
+Would be amazing in such specific use cases. I think the main differentiator is the traceability component, the first that you have full control on which data it will refer to, because if you deal with open source models, most of them are open, but the data it has been trained on has not been opened or given public so with vector database you introduce a step of control and explainability. Explainability means if you receive a certain answer based on your prompt, you can trace it back to the exact source where the embedding has been stored or the source of where the information is coming from and things. So this is a major use case for us for those kind of high stake solution is that you have the explainability and traceability. Explainability. It could be as simple as a semantical similarity to the text, but also the traceability of where it's coming from and the exact link of where it's coming from. So it should be, it shouldn't be referred. You can close and you can cut the line of the model referring to its previous knowledge by introducing a [vector database](https://qdrant.tech/articles/what-is-a-vector-database/), for example.
-### Code Walkthrough
+Iveta Lohovska:
+So there could be many other implications and improvements in terms of speed and just handling huge amounts of data, yet also nice to have that come with this kind of technique, but the prior use case is actually not incentivized around those.
-Let’s delve into a more advanced example code snippet line-by-line:
+Demetrios:
-```python
+So if I'm hearing you correctly, it's like yet another reason why you should be thinking about using vector databases, because you need that ability to cite your work and it's becoming a very strong design pattern. Right. We all understand now, if you can't see where this data has been pulled from or you can't get, you can't trace back to the actual source, it's hard to trust what the output is.
-from fastembed.embedding import DefaultEmbedding
-```
+Iveta Lohovska:
+Yes, and the easiest way is to kind of cluster the two groups. If you think of creative fields and marketing fields and design fields where you could go wild and crazy with the temperature on each model, how creative it could go and how much novelty it could bring to the answer are one family of use cases. But there is exactly the opposite type of use cases where this is a no go and you don't need any creativity, you just focus on the factfulness and explainability. So it's more of the speed and the accuracy of retrieving information with a high level of novelty, but not compromising on any kind of facts within the answer, because there will be legal implications and policy implications and societal implications based on the action taken on this answer, either policy recommendation or legal action. There's a lot to do with the intelligence agencies that retrieve information based on nearest neighbor or kind of a relational analysis that you can also execute with vector databases and generative AI.
-Here, we import the DefaultEmbedding class from FastEmbed. This is the core class responsible for generating embeddings based on your chosen text model. Under the hood it is the FlagEmbedding implementation, and the default model it loads is [BAAI/bge-small-en-v1.5](https://huggingface.co/baai/bge-small-en-v1.5)
+Sabrina Aquino:
-```python
+And we know that for these high stakes sectors that data privacy is a huge concern. And when we're talking about using vector databases and storing that data somewhere, what are some of the principles or techniques that you use in terms of infrastructure, where should you store your vector database and how should you think about that part of your system?
-documents: List[str] = [
- ""passage: Hello, World!"",
- ""query: How is the World?"",
+Iveta Lohovska:
- ""passage: This is an example passage."",
+Yeah, so most of the cases, I would say 99% of the cases, is that if you have such high requirements around security and explainability, security of the data, but also security of the whole use case and environment, and the explainability and trustworthiness of the answer, then it's very natural to have expectations that it will be on prem and not in the cloud, because only on prem you have full control of where your data sits, where your model sits, the full ownership of your IP, and then the full ownership of having less question marks of the implementation and architecture, but mainly the full ownership of the end to end solution. So when it comes to those use cases, RAG on prem, with the whole infrastructure, with the whole software and platform layers, including models on prem, not accessible through an API, through a service somewhere where you don't know where the guardrails are, who designed the guardrails, what are the guardrails? And we see this a lot with, for example, Copilot, a lot of question marks around that. So a huge part of my work is just talking about it, just sorting that out.
- ""fastembed is supported by and maintained by Qdrant.""
-]
-```
+Sabrina Aquino:
+Exactly. You don't want to just give away your data to a cloud provider, because there are many implications that come with that. And I think even your clients, they need certain certifications, then they need to make sure that nobody can access that data, something that you cannot exactly ensure, I think, if you're just using a cloud provider somewhere, which is, I think, something that's very important when you're thinking about these high stakes solutions. But also I think if you're going to maybe outsource some of the infrastructure, you also need to think about something that's similar to a [hybrid cloud solution](https://qdrant.tech/documentation/hybrid-cloud/) where you can keep your data and outsource the kind of management of infrastructure. So that's also a nice use case for that, right?
-In this list called documents, we define four text strings that we want to convert into embeddings.
+Iveta Lohovska:
+I mean, I work for HPE, so hybrid is like one of our biggest sacred words. Yeah, exactly. But actually, like, if you see the trends and if you see how expensive it is to run some of those workloads in the cloud, either for training a foundational model or fine tuning. And no one talks about inference, inference not with ten users, but inference with hundreds of users in big organizations. This itself is not sustainable. Honestly, when you do the simple linear algebra or math of the exponential cost around this. That's why everything is hybrid. And there are use cases that make sense to be fast and speedy and easy to play with, low risk in the cloud to try.
-Note the use of prefixes “passage” and “query” to differentiate the types of embeddings to be generated. This is inherited from the cross-encoder implementation of the BAAI/bge series of models themselves. This is particularly useful for retrieval and we strongly recommend using this as well.
+Iveta Lohovska:
-The use of text prefixes like “query” and “passage” isn’t merely syntactic sugar; it informs the algorithm on how to treat the text for embedding generation. A “query” prefix often triggers the model to generate embeddings that are optimized for similarity comparisons, while “passage” embeddings are fine-tuned for contextual understanding. If you omit the prefix, the default behavior is applied, although specifying it is recommended for more nuanced results.
+But when it comes to actual GenAI work and LLM models, yeah, the answer is never straightforward when it comes to the infrastructure and the environment where you are hosting it, for many reasons, not just cost, but any other.
-Next, we initialize the Embedding model with the default model: [BAAI/bge-small-en-v1.5](https://huggingface.co/baai/bge-small-en-v1.5).
+Demetrios:
+So there's something that I've been thinking about a lot lately that I would love to get your take on, especially because you deal with this day in and day out, and it is the maturity levels of the current state of Gen AI and where we are at for chat GPT or just llms and foundational models feel like they just came out. And so we're almost in the basic, basic, basic maturity levels. And when you work with customers, how do you like kind of signal that, hey, this is where we are right now, but you should be very conscientious that you're going to need to potentially work with a lot of breaking changes or you're going to have to be constantly updating. And this isn't going to be set it and forget it type of thing. This is going to be a lot of work to make sure that you're staying up to date, even just like trying to stay up to date with the news as we were talking about. So I would love to hear your take on on the different maturity levels that you've been seeing and what that looks like.
-```python
-embedding_model = DefaultEmbedding()
+Iveta Lohovska:
-```
+So I have huge exposure to GenAI for the enterprise, and there's a huge component of expectation management. Why? Because chat GPT for conversational purposes and individual help is something very cool. But when this needs to be translated into actual business use cases scenario with all the constraint of the enterprise architecture, with the constraint of the use cases, the reality changes quite dramatically. So end users are used to expecting the level of forgiveness that conversational chatbots have, which is very different from what you will get in an actual, let's say, knowledge worker type of context, or summarization type of context in the enterprise. And it's not so much about the performance of the models, but we have something called modalities of the models. And I don't think there will be ultimately one model with all the capabilities possible, let's say code generation or image generation, voice generation, or just being very chatty and loving and so on. There will be multiple mini models out there for those modalities. In an actual architecture with reasonable cost, they are very difficult to handle.
-The default model and several other models have a context window of a maximum of 512 tokens. This maximum limit comes from the embedding model training and design itself. If you'd like to embed sequences larger than that, we'd recommend using some pooling strategy to get a single vector out of the sequence. For example, you can use the mean of the embeddings of different chunks of a document. This is also what the [SBERT Paper recommends](https://lilianweng.github.io/posts/2021-05-31-contrastive/#sentence-bert)
+Iveta Lohovska:
+So I would say the technical community feels we are very mature and very fast. The enterprise adoption is a totally different topic, and it's a couple of years behind, but also the society type of technologists like me, who try to keep up with the development and we know where we stand at this point, but they're the legal side and the regulations coming in, like the EU act and Biden trying to regulate the compute power, but also how societies react to this and how they adapt. And I think especially on the third one, we are far behind understanding and the implications of this technology, also adopting it at scale and understanding the vulnerabilities. That's why I enjoy so much my enterprise work is because it's a reality check. When you put the price tag attached to actual Gen AI use case in production with the inference cost and the expected performance, it's different situation when you just have an app on the phone and you chat with it and it pulls you interesting links. So yes, I think that there's a bridge to be built between the two worlds.
-This model strikes a balance between speed and accuracy, ideal for real-world applications.
+Demetrios:
+Yeah. And I find it really interesting too, because it feels to me like since it is so new, people are more willing to explore and not necessarily have that instant return of the ROI, but when it comes to more traditional ML or predictive ML, it is a bit more mature and so there's less patience for that type of exploration. Or, hey, is this use case? If you can't by now show the ROI of a predictive ML use case, then that's a little bit more dangerous. But if you can't with a Gen AI use case, it is not that big of a deal.
-```python
-embeddings: List[np.ndarray] = list(embedding_model.embed(documents))
-```
+Iveta Lohovska:
+Yeah, it's basically a technology growing up in front of our eyes. It's kind of a flying a plane while building it type of situation. We are seeing it in real time, and I agree with you. So the maturity around ML is one thing, but around generative AI, there will be a moment of kind of mini disappointment or decline, in my opinion, before actually maturing to productize this kind of powerful technology in a sustainable way. Sustainable ways mean you can afford it, but also it proves your business case and use case. Otherwise it's just doing it for the sake of doing it because everyone else is doing it.
-Finally, we call the `embed()` method on our embedding_model object, passing in the documents list. The method returns a Python generator, so we convert it to a list to get all the embeddings. These embeddings are NumPy arrays, optimized for fast mathematical operations.
+Demetrios:
+Yeah, yeah, 100%. So I know we're bumping up against time here. I do feel like there was a bit of a topic that we wanted to discuss with the licenses and how that plays into basically trustworthiness and explainability. And so we were talking about how, yeah, the best is to run your own model, and it probably isn't going to be this gigantic model that can do everything. It's the, it seems like the trends are going into smaller models. And from your point of view though, we are getting new models like every week. It feels like. Yeah, especially.
-The `embed()` method returns a list of NumPy arrays, each corresponding to the embedding of a document in your original documents list. The dimensions of these arrays are determined by the model you chose e.g. for “BAAI/bge-small-en-v1.5” it’s a 384-dimensional vector.
+Demetrios:
-You can easily parse these NumPy arrays for any downstream application—be it clustering, similarity comparison, or feeding them into a machine learning model for further analysis.
+I mean, we were just talking about this before we went live again, like Databricks just released theirs. What is it? DBRX. Yesterday you had Mistral releasing like a new base model over the weekend, and then Llama 3 is probably going to come out in the flash of an eye. So where do you stand in regards to that? It feels like there's a lot of movement in open source, but it is a little bit of, as you mentioned, like, to be cautious with the open source movement.
-## Key Features
+Iveta Lohovska:
+So I think it feels like there's a lot of open source, but that. So I'm totally for open sourcing and giving the people and the communities the power to be able to innovate, to do R & D in different labs so it's not locked to the few elite big tech companies that can afford this kind of technology. So kudos to Meta for trying compared to the other equal players in the space. But open source comes with a lot of ecosystem in our world, especially for the more powerful models, which is something I don't like because it becomes like just, it immediately translates into legal fees type of conversation. It's like there are too many if else statements in those open source licensing terms where it becomes difficult to navigate, for technologists to understand what exactly this means, and then you have to bring the legal people to articulate it to you or to put additional clauses. So it's becoming a very complex environment to handle and less and less open, because there are not so many open source and small startup players that can afford to train foundational models that are powerful and useful. So it becomes a bit of a game locked to a few, and I think everyone needs to be a bit worried about that.
-FastEmbed is built for inference speed, without sacrificing (too much) performance:
+Iveta Lohovska:
+So we can use the equivalents from the past, but I don't think we are doing well enough in terms of open sourcing the three main core components of an LLM model, which are the model itself, the data it has been trained on, and the data sets, and most of the time at least one of those is restricted or missing. So it's a difficult space to navigate.
-1. 50% faster than PyTorch Transformers
-2. Better performance than Sentence Transformers and OpenAI Ada-002
-3. Cosine similarity of quantized and original model vectors is 0.92
+Demetrios:
+Yeah, yeah. You can't really call it trustworthy, or you can't really get the information that you need and that you would hope for if you're missing one of those three. I do like that little triangle of the necessities. So, Iveta, this has been awesome. I really appreciate you coming on here. Thank you, Sabrina, for joining us. And for everyone else that is watching, remember, don't get lost in vector space. This has been another vector space talk.
-We use `BAAI/bge-small-en-v1.5` as our DefaultEmbedding, hence we've chosen that for comparison:
+Demetrios:
+We are out. Have a great weekend, everyone.
-![](/articles_data/fastembed/throughput.png)
+Iveta Lohovska:
-## Under the Hood
+Thank you. Bye. Thank you. Bye.
+",blog/gen-ai-and-vector-search-iveta-lohovska-vector-space-talks.md
+"---
+draft: false
+title: ""Qdrant and OVHcloud Bring Vector Search to All Enterprises""
-**Quantized Models**: We quantize the models for CPU (and Mac Metal) – giving you the best bang for your buck on compute. Our default model is so small, you can run this in AWS Lambda if you’d like!
+short_description: ""Collaborating to support startups and enterprises in Europe with a strong focus on data control and privacy.""
+description: ""Collaborating to support startups and enterprises in Europe with a strong focus on data control and privacy.""
+preview_image: /blog/hybrid-cloud-ovhcloud/hybrid-cloud-ovhcloud.png
-Shout out to Huggingface's [Optimum](https://github.com/huggingface/optimum) – which made it easier to quantize models.
+date: 2024-04-10T00:05:00Z
+author: Qdrant
+featured: false
-**Reduced Installation Time**:
+weight: 1004
+tags:
+ - Qdrant
-FastEmbed sets itself apart by maintaining a low minimum RAM/Disk usage.
+ - Vector Database
+---
-It’s designed to be agile and fast, useful for businesses looking to integrate text embedding for production usage. For FastEmbed, the list of dependencies is refreshingly brief:
+With the official release of [Qdrant Hybrid Cloud](/hybrid-cloud/), businesses running their data infrastructure on [OVHcloud](https://ovhcloud.com/) are now able to deploy a fully managed vector database in their existing OVHcloud environment. We are excited about this partnership, which has been established through the [OVHcloud Open Trusted Cloud](https://opentrustedcloud.ovhcloud.com/en/) program, as it is based on our shared understanding of the importance of trust, control, and data privacy in the context of the emerging landscape of enterprise-grade AI applications. As part of this collaboration, we are also providing a detailed use case tutorial on building a recommendation system that demonstrates the benefits of running Qdrant Hybrid Cloud on OVHcloud.
-> - onnx: Version ^1.11 – We’ll try to drop this also in the future if we can!
-> - onnxruntime: Version ^1.15
+Deploying Qdrant Hybrid Cloud on OVHcloud's infrastructure represents a significant leap for European businesses invested in AI-driven projects, as this collaboration underscores the commitment to meeting the rigorous requirements for data privacy and control of European startups and enterprises building AI solutions. As businesses progress on their AI journey, they require dedicated solutions that allow them to make their data accessible for machine learning and AI projects, without having it leave the company's security perimeter. Prioritizing data sovereignty, a crucial aspect in today's digital landscape, will help startups and enterprises accelerate their AI agendas and build even more differentiating AI-enabled applications. The ability to run Qdrant Hybrid Cloud on OVHcloud not only underscores the commitment to innovative, secure AI solutions but also ensures that companies can navigate the complexities of AI and machine learning workloads with the flexibility and security required.
-> - tqdm: Version ^4.65 – used only at Download
-> - requests: Version ^2.31 – used only at Download
-> - tokenizers: Version ^0.13
+> *“The partnership between OVHcloud and Qdrant Hybrid Cloud highlights, in the European AI landscape, a strong commitment to innovative and secure AI solutions, empowering startups and organisations to navigate AI complexities confidently. By emphasizing data sovereignty and security, we enable businesses to leverage vector databases securely.“* Yaniv Fdida, Chief Product and Technology Officer, OVHcloud
-This minimized list serves two purposes. First, it significantly reduces the installation time, allowing for quicker deployments. Second, it limits the amount of disk space required, making it a viable option even for environments with storage limitations.
+#### Qdrant & OVHcloud: High Performance Vector Search With Full Data Control
-Notably absent from the dependency list are bulky libraries like PyTorch, and there’s no requirement for CUDA drivers. This is intentional. FastEmbed is engineered to deliver optimal performance right on your CPU, eliminating the need for specialized hardware or complex setups.
+Through the seamless integration between Qdrant Hybrid Cloud and OVHcloud, developers and businesses are able to deploy the fully managed vector database within their existing OVHcloud setups in minutes, enabling faster, more accurate AI-driven insights.
-**ONNXRuntime**: The ONNXRuntime gives us the ability to support multiple providers. The quantization we do is limited for CPU (Intel), but we intend to support GPU versions of the same in future as well. This allows for greater customization and optimization, further aligning with your specific performance and computational requirements.
+- **Simple setup:** With the seamless “one-click” installation, developers are able to deploy Qdrant’s fully managed vector database to their existing OVHcloud environment.
-## Current Models
+- **Trust and data sovereignty**: Deploying Qdrant Hybrid Cloud on OVHcloud provides developers with vector search that prioritizes data sovereignty, a crucial aspect in today's AI landscape where data privacy and control are essential. True to its “Sovereign by design” DNA, OVHcloud guarantees that all the data stored are immune to extraterritorial laws and comply with the highest security standards.
-We’ve started with a small set of supported models:
+- **Open standards and open ecosystem**: OVHcloud’s commitment to open standards and an open ecosystem not only facilitates the easy integration of Qdrant Hybrid Cloud with OVHcloud’s AI services and GPU-powered instances but also ensures compatibility with a wide range of external services and applications, enabling seamless data workflows across the modern AI stack.
-All the models we support are [quantized](https://pytorch.org/docs/stable/quantization.html) to enable even faster computation!
+- **Cost-efficient vector search:** By leveraging Qdrant's quantization for efficient data handling and pairing it with OVHcloud's eco-friendly, water-cooled infrastructure, known for its superior price/performance ratio, this collaboration provides a strong foundation for cost-efficient vector search.
-If you're using FastEmbed and you've got ideas or need certain features, feel free to let us know. Just drop an issue on our GitHub page. That's where we look first when we're deciding what to work on next. Here's where you can do it: [FastEmbed GitHub Issues](https://github.com/qdrant/fastembed/issues).
+#### Build a RAG-Based System with Qdrant Hybrid Cloud and OVHcloud
-When it comes to FastEmbed's DefaultEmbedding model, we're committed to supporting the best Open Source models.
+![hybrid-cloud-ovhcloud-tutorial](/blog/hybrid-cloud-ovhcloud/hybrid-cloud-ovhcloud-tutorial.png)
-If anything changes, you'll see a new version number pop up, like going from 0.0.6 to 0.1. So, it's a good idea to lock in the FastEmbed version you're using to avoid surprises.
+To show how Qdrant Hybrid Cloud deployed on OVHcloud allows developers to leverage the benefits of an AI use case that is completely run within the existing infrastructure, we put together a comprehensive use case tutorial. This tutorial guides you through creating a recommendation system using collaborative filtering and sparse vectors with Qdrant Hybrid Cloud on OVHcloud. It employs the Movielens dataset for practical application, providing insights into building efficient, scalable recommendation engines suitable for developers and data scientists looking to leverage advanced vector search technologies within a secure, GDPR-compliant European cloud infrastructure.
-## Usage with Qdrant
+[Try the Tutorial](/documentation/tutorials/recommendation-system-ovhcloud/)
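+
+As a rough illustration of the sparse-vector approach the tutorial is built on (the collection name, vector name, and ids below are made up for this sketch, and the tutorial itself contains the authoritative code), user-item interactions can be stored and queried as sparse vectors with the Python client:
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# One sparse vector per user: indices are movie ids, values are ratings
+client.create_collection(
+    collection_name=""movielens"",
+    vectors_config={},
+    sparse_vectors_config={""ratings"": models.SparseVectorParams()},
+)
+
+client.upsert(
+    collection_name=""movielens"",
+    points=[
+        models.PointStruct(
+            id=1,
+            vector={""ratings"": models.SparseVector(indices=[1, 50, 296], values=[5.0, 4.0, 3.5])},
+            payload={""user_id"": ""u1""},
+        )
+    ],
+)
+
+# Recommend by searching with the ratings of a new user
+hits = client.search(
+    collection_name=""movielens"",
+    query_vector=models.NamedSparseVector(
+        name=""ratings"",
+        vector=models.SparseVector(indices=[1, 296], values=[5.0, 4.5]),
+    ),
+    limit=5,
+)
+```
+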
-Qdrant is a Vector Store, offering a comprehensive, efficient, and scalable solution for modern machine learning and AI applications. Whether you are dealing with billions of data points, require a low latency performant vector solution, or specialized quantization methods – [Qdrant is engineered](https://qdrant.tech/documentation/overview/) to meet those demands head-on.
+#### Get Started Today and Leverage the Benefits of Qdrant Hybrid Cloud
-The fusion of FastEmbed with Qdrant’s vector store capabilities enables a transparent workflow for seamless embedding generation, storage, and retrieval. This simplifies the API design — while still giving you the flexibility to make significant changes e.g. you can use FastEmbed to make your own embedding other than the DefaultEmbedding and use that with Qdrant.
+Setting up Qdrant Hybrid Cloud on OVHcloud is straightforward and quick, thanks to the intuitive integration with Kubernetes. Here's how:
-Below is a detailed guide on how to get started with FastEmbed in conjunction with Qdrant.
+- **Hybrid Cloud Activation**: Log into your Qdrant account and enable 'Hybrid Cloud'.
-### Installation
+- **Cluster Integration**: Add your OVHcloud Kubernetes clusters as a Hybrid Cloud Environment in the Hybrid Cloud settings.
-Before diving into the code, the initial step involves installing the Qdrant Client along with the FastEmbed library. This can be done using pip:
+- **Effortless Deployment**: Use the Qdrant Management Console for easy deployment and management of Qdrant clusters on OVHcloud.
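+
+Once the cluster is up, applications connect to it like to any other Qdrant deployment. A minimal sketch with the Python client (the endpoint URL and API key below are placeholders; use the values shown for your cluster in the Qdrant Management Console):
+
+```python
+from qdrant_client import QdrantClient
+
+# Placeholder endpoint and key for a Hybrid Cloud cluster running on OVHcloud
+client = QdrantClient(
+    url=""https://qdrant.my-ovh-cluster.example.com:6333"",
+    api_key=""<your-api-key>"",
+)
+
+# Verify connectivity by listing existing collections
+print(client.get_collections())
+```
+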
-```
+[Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/)
-pip install qdrant-client[fastembed]
-```
+#### Ready to Get Started?
-For those using zsh as their shell, you might encounter syntax issues. In such cases, wrap the package name in quotes:
+Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/). ",blog/hybrid-cloud-ovhcloud.md
+"---
+draft: false
-```
+title: Full-text filter and index are already available!
-pip install 'qdrant-client[fastembed]'
+slug: qdrant-introduces-full-text-filters-and-indexes
-```
+short_description: Qdrant v0.10 introduced full-text filters
+description: Qdrant v0.10 introduced full-text filters and indexes to enable
+ more search capabilities for those working with textual data.
-### Initializing the Qdrant Client
+preview_image: /blog/from_cms/andrey.vasnetsov_black_hole_sucking_up_the_word_tag_cloud_f349586d-3e51-43c5-9e5e-92abf9a9e871.png
+date: 2022-11-16T09:53:05.860Z
+author: Kacper Łukawski
-After successful installation, the next step involves initializing the Qdrant Client. This can be done either in-memory or by specifying a database path:
+featured: false
+tags:
+ - Information Retrieval
-```python
+ - Database
-from qdrant_client import QdrantClient
+ - Open Source
-# Initialize the client
+ - Vector Search Database
-client = QdrantClient("":memory:"") # or QdrantClient(path=""path/to/db"")
+---
-```
+Qdrant is designed as an efficient vector database, allowing for a quick search of the nearest neighbours. But you may find yourself in need of applying some extra filtering on top of the semantic search. Up to version 0.10, Qdrant offered support for keyword filters only. Since 0.10, it is possible to apply full-text constraints as well. There is a new type of filter that you can use to do that, and it can be combined with every other filter type.
-### Preparing Documents, Metadata, and IDs
+## Using full-text filters without the payload index
-Once the client is initialized, prepare the text documents you wish to embed, along with any associated metadata and unique IDs:
+Full-text filters applied to a field without an index will return only those entries which contain all the terms included in the query. That is effectively a substring match on each of the individual terms, but **not a substring match on the whole query**.
-```python
+![](/blog/from_cms/1_ek61_uvtyn89duqtmqqztq.webp ""An example of how to search for “long_sleeves” in a “detail_desc” payload field."")
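+
+As a sketch of what such a query can look like with the Python client (the collection and field names below are only an example in the spirit of the screenshot above, and the exact client API may differ slightly between versions):
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Return only points whose ""detail_desc"" payload contains all the query terms
+points, _next_page = client.scroll(
+    collection_name=""products"",
+    scroll_filter=models.Filter(
+        must=[
+            models.FieldCondition(
+                key=""detail_desc"",
+                match=models.MatchText(text=""long sleeves""),
+            )
+        ]
+    ),
+    limit=10,
+)
+```
+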
-docs = [
- ""Qdrant has Langchain integrations"",
- ""Qdrant also has Llama Index integrations""
+## Full-text search behaviour on an indexed payload field
-]
-metadata = [
- {""source"": ""Langchain-docs""},
+There are more options if you create a full-text index on a field you will filter by.
- {""source"": ""LlamaIndex-docs""},
-]
-ids = [42, 2]
+![](/blog/from_cms/1_pohx4eznqpgoxak6ppzypq.webp ""Full-text search behaviour on an indexed payload field"")
-```
+First and foremost, you can choose the tokenizer. It defines how Qdrant should split the text into tokens. There are three options available:
-Note that the add method we’ll use is overloaded: If you skip the ids, we’ll generate those for you. metadata is obviously optional. So, you can simply use this too:
+* **word** — spaces, punctuation marks and special characters define the token boundaries
-```python
+* **whitespace** — token boundaries defined by whitespace characters
-docs = [
+* **prefix** — token boundaries are the same as for the “word” tokenizer, but in addition to that, there are prefixes created for every single token. As a result, “Qdrant” will be indexed as “Q”, “Qd”, “Qdr”, “Qdra”, “Qdran”, and “Qdrant”.
- ""Qdrant has Langchain integrations"",
- ""Qdrant also has Llama Index integrations""
-]
+There are also some additional parameters you can provide, such as
-```
+* **min_token_len** — minimal length of the token
-### Adding Documents to a Collection
+* **max_token_len** — maximal length of the token
+* **lowercase** — if set to *true*, then the index will be case-insensitive, as Qdrant will convert all the texts to lowercase
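+
+For reference, this is roughly how such an index can be created with the Python client (the collection and field names are just examples, and the exact parameter names may differ slightly between client versions):
+
+```python
+from qdrant_client import QdrantClient, models
+
+client = QdrantClient(url=""http://localhost:6333"")
+
+# Build a full-text index on the ""detail_desc"" payload field,
+# using the word tokenizer and a case-insensitive index
+client.create_payload_index(
+    collection_name=""products"",
+    field_name=""detail_desc"",
+    field_schema=models.TextIndexParams(
+        type=models.TextIndexType.TEXT,
+        tokenizer=models.TokenizerType.WORD,
+        min_token_len=2,
+        max_token_len=20,
+        lowercase=True,
+    ),
+)
+```
+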
-With your documents, metadata, and IDs ready, you can proceed to add these to a specified collection within Qdrant using the add method:
+## Using text filters in practice
-```python
-client.add(
+![](/blog/from_cms/1_pbtd2tzqtjqqlbi61r8czg.webp ""Using text filters in practice"")
- collection_name=""demo_collection"",
- documents=docs,
- metadata=metadata,
+The main difference between using full-text filters on an indexed vs a non-indexed field is the performance of such queries. In a simple benchmark, performed on the [H&M dataset](https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations) (with over 105k examples), the average query time looks as follows (n=1000):
- ids=ids
-)
-```
+![](/blog/from_cms/screenshot_31.png)
-Inside this function, Qdrant Client uses FastEmbed to create the text embeddings, generates ids if they’re missing, and then adds them to the index with metadata. This uses the DefaultEmbedding model: [BAAI/bge-small-en-v1.5](https://huggingface.co/baai/bge-small-en-v1.5)
+It is evident that creating a full-text index on a field that we’ll query often may lead to substantial performance gains without much effort.",blog/full-text-filter-and-index-are-already-available.md
+"---
+draft: false
+preview_image: /blog/from_cms/docarray.png
-![INDEX TIME: Sequence Diagram for Qdrant and FastEmbed](/articles_data/fastembed/generate-embeddings-from-docs.png)
+sitemapExclude: true
+title: ""Qdrant and Jina integration: storage backend support for DocArray""
+slug: qdrant-and-jina-integration
-### Performing Queries
+short_description: ""One more way to use Qdrant: Jina's DocArray is now
+ supporting Qdrant as a storage backend.""
+description: We are happy to announce that Jina.AI integrates Qdrant engine as a
-Finally, you can perform queries on your stored documents. Qdrant offers a robust querying capability, and the query results can be easily retrieved as follows:
+ storage backend to their DocArray solution.
+date: 2022-03-15T15:00:00+03:00
+author: Alyona Kavyerina
-```python
+featured: false
-search_result = client.query(
+author_link: https://medium.com/@alyona.kavyerina
- collection_name=""demo_collection"",
+tags:
- query_text=""This is a query document""
+ - jina integration
-)
+ - docarray
-print(search_result)
+categories:
-```
+ - News
+---
+We are happy to announce that [Jina.AI](https://jina.ai/) integrates Qdrant engine as a storage backend to their [DocArray](https://docarray.jina.ai/) solution.
-Behind the scenes, we first convert the query_text to the embedding and use that to query the vector index.
+Now you can experience the convenience of Pythonic API and Rust performance in a single workflow.
-![QUERY TIME: Sequence Diagram for Qdrant and FastEmbed integration](/articles_data/fastembed/generate-embeddings-query.png)
+DocArray library defines a structure for the unstructured data and simplifies processing a collection of documents,
-By following these steps, you effectively utilize the combined capabilities of FastEmbed and Qdrant, thereby streamlining your embedding generation and retrieval tasks.
+including audio, video, text, and other data types. Qdrant engine empowers scaling of its vector search and storage.
-Qdrant is designed to handle large-scale datasets with billions of data points. Its architecture employs techniques like binary and scalar quantization for efficient storage and retrieval. When you inject FastEmbed’s CPU-first design and lightweight nature into this equation, you end up with a system that can scale seamlessly while maintaining low latency.
+Read more about the integration by this [link](/documentation/install/#docarray)
+",blog/qdrant_and_jina_integration.md
+"---
+title: ""Qdrant Attains SOC 2 Type II Audit Report""
+draft: false
-## Summary
+slug: qdrant-soc2-type2-audit # Change this slug to your page slug if needed
+short_description: We're proud to announce achieving SOC 2 Type II compliance for Security, Availability, Processing Integrity, Confidentiality, and Privacy.
+description: We're proud to announce achieving SOC 2 Type II compliance for Security, Availability, and Confidentiality.
-If you're curious about how FastEmbed and Qdrant can make your search tasks a breeze, why not take it for a spin? You get a real feel for what it can do. Here are two easy ways to get started:
+preview_image: /blog/soc2-type2-report/soc2-preview.jpeg #
-1. **Cloud**: Get started with a free plan on the [Qdrant Cloud](https://qdrant.to/cloud?utm_source=qdrant&utm_medium=website&utm_campaign=fastembed&utm_content=article).
+social_preview_image: /blog/soc2-type2-report/soc2-preview.jpeg
-2. **Docker Container**: If you're the DIY type, you can set everything up on your own machine. Here's a quick guide to help you out: [Quick Start with Docker](https://qdrant.tech/documentation/quick-start/?utm_source=qdrant&utm_medium=website&utm_campaign=fastembed&utm_content=article).
+date: 2024-05-23T20:26:20-03:00
+author: Sabrina Aquino # Change this
+featured: false # if true, this post will be featured on the blog page
-So, go ahead, take it for a test drive. We're excited to hear what you think!
+tags: # Change this, related by tags posts will be shown on the blog page
+ - soc2
+ - audit
-Lastly, If you find FastEmbed useful and want to keep up with what we're doing, giving our GitHub repo a star would mean a lot to us. Here's the link to [star the repository](https://github.com/qdrant/fastembed).
+ - security
+ - confidentiality
+ - data privacy
-If you ever have questions about FastEmbed, please ask them on the Qdrant Discord: [https://discord.gg/Qy6HCJK9Dc](https://discord.gg/Qy6HCJK9Dc)
-",articles/fastembed.md
-"---
+ - soc2 type 2
-title: ""Qdrant under the hood: Product Quantization""
-short_description: ""Vector search with low memory? Try out our brand-new Product Quantization!""
-description: ""Vector search with low memory? Try out our brand-new Product Quantization!""
+---
-social_preview_image: /articles_data/product-quantization/social_preview.png
-small_preview_image: /articles_data/product-quantization/product-quantization-icon.svg
-preview_dir: /articles_data/product-quantization/preview
+At Qdrant, we are happy to announce the successful completion of our SOC 2 Type II audit. This achievement underscores our unwavering commitment to upholding the highest standards of security, availability, and confidentiality for our services and our customers’ data.
-weight: 4
-author: Kacper Łukawski
-author_link: https://medium.com/@lukawskikacper
-date: 2023-05-30T09:45:00+02:00
-draft: false
+## SOC 2 Type II: What Is It?
-keywords:
- - vector search
- - product quantization
+SOC 2 Type II certification is an examination of an organization's controls in reference to the American Institute of Certified Public Accountants [(AICPA) Trust Services criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022). It evaluates not only our written policies but also their practical implementation, ensuring alignment between our stated objectives and operational practices. Unlike Type I, which is a snapshot in time, Type II verifies over several months that the company has lived up to those controls. The report represents thorough auditing of our security procedures throughout this examination period: January 1, 2024 to April 7, 2024.
- - memory optimization
-aliases: [ /articles/product_quantization/ ]
----
+## Key Audit Findings
-Qdrant 1.1.0 brought the support of [Scalar Quantization](/articles/scalar-quantization/),
-a technique of reducing the memory footprint by even four times, by using `int8` to represent
-the values that would be normally represented by `float32`.
+The audit confirmed, with no exceptions noted, the effectiveness of our systems and controls for the following Trust Services Criteria:
-The memory usage in vector search might be reduced even further! Please welcome **Product
-Quantization**, a brand-new feature of Qdrant 1.2.0!
-## Product Quantization
+* Security
+* Confidentiality
+* Availability
-Product Quantization converts floating-point numbers into integers like every other quantization
-method. However, the process is slightly more complicated than Scalar Quantization and is more
-customizable, so you can find the sweet spot between memory usage and search precision. This article
+These certifications are available today and automatically apply to your existing workloads. The full SOC 2 Type II report is available to customers and stakeholders upon request through the [Trust Center](https://app.drata.com/trust/9cbbb75b-0c38-11ee-865f-029d78a187d9).
-covers all the steps required to perform Product Quantization and the way it's implemented in Qdrant.
-Let’s assume we have a few vectors being added to the collection and that our optimizer decided
-to start creating a new segment.
+## Future Compliance
-![A list of raw vectors](/articles_data/product-quantization/raw-vectors.png)
+Going forward, Qdrant will maintain SOC 2 Type II compliance by conducting continuous, annual audits to ensure our security practices remain aligned with industry standards and evolving risks.
-### Cutting the vector into pieces
+Recognizing the critical importance of data security and the trust our clients place in us, achieving SOC 2 Type II compliance underscores our ongoing commitment to prioritize data protection with the utmost integrity and reliability.
-First of all, our vectors are going to be divided into **chunks** aka **subvectors**. The number
-of chunks is configurable, but as a rule of thumb - the lower it is, the higher the compression rate.
-That also comes with reduced search precision, but in some cases, you may prefer to keep the memory
+## About Qdrant
-usage as low as possible.
+Qdrant is a vector database designed to handle large-scale, high-dimensional data efficiently. It allows for fast and accurate similarity searches in complex datasets. Qdrant strives to achieve seamless and scalable vector search capabilities for various applications.
-![A list of chunked vectors](/articles_data/product-quantization/chunked-vectors.png)
+For more information about Qdrant and our security practices, please visit our [website](http://qdrant.tech) or [reach out to our team directly](https://qdrant.tech/contact-us/).
+",blog/soc2-type2-report.md
+"---
-Qdrant API allows choosing the compression ratio from 4x up to 64x. In our example, we selected 16x,
+draft: false
-so each subvector will consist of 4 floats (16 bytes), and it will eventually be represented by
+title: Binary Quantization - Andrey Vasnetsov | Vector Space Talks
-a single byte.
+slug: binary-quantization
+short_description: Andrey Vasnetsov, CTO of Qdrant, discusses the concept of
+ binary quantization and its applications in vector indexing.
-### Clustering
+description: Andrey Vasnetsov, CTO of Qdrant, discusses the concept of binary
+ quantization and its benefits in vector indexing, including the challenges and
+ potential future developments of this technique.
-The chunks of our vectors are then used as input for clustering. Qdrant uses the K-means algorithm,
+preview_image: /blog/from_cms/andrey-vasnetsov-cropped.png
-with $ K = 256 $. It was selected a priori, as this is the maximum number of values a single byte
+date: 2024-01-09T10:30:10.952Z
-represents. As a result, we receive a list of 256 centroids for each chunk and assign each of them
+author: Demetrios Brinkmann
-a unique id. **The clustering is done separately for each group of chunks.**
+featured: false
+tags:
+ - Vector Space Talks
-![Clustered chunks of vectors](/articles_data/product-quantization/chunks-clustering.png)
+ - Binary Quantization
+ - Qdrant
+---
-Each chunk of a vector might now be mapped to the closest centroid. That’s where we lose the precision,
-as a single point will only represent a whole subspace. Instead of using a subvector, we can store
-the id of the closest centroid. If we repeat that for each chunk, we can approximate the original
+> *""Everything changed when we actually tried binary quantization with OpenAI model.”*\
-embedding as a vector of subsequent ids of the centroids. The dimensionality of the created vector
+> -- Andrey Vasnetsov
-is equal to the number of chunks, in our case 2.
+Ever wonder why we need quantization for vector indexes? Andrey Vasnetsov explains the complexities and challenges of searching through proximity graphs. Binary quantization reduces storage size and boosts speed by 30x, but not all models are compatible.
-![A new vector built from the ids of the centroids](/articles_data/product-quantization/vector-of-ids.png)
+Andrey has worked as a Machine Learning Engineer for most of his career. He prefers practical over theoretical, a working demo over an arXiv paper. He is currently working as the CTO at Qdrant, a vector similarity search engine, which can be used for semantic search, similarity matching of text, images or even videos, and also recommendations.
-### Full process
+***Listen to the episode on [Spotify](https://open.spotify.com/episode/7dPOm3x4rDBwSFkGZuwaMq?si=Ip77WCa_RCCYebeHX6DTMQ), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/4aUq5VnR_VI).***
-All those steps build the following pipeline of Product Quantization:
+
-![Full process of Product Quantization](/articles_data/product-quantization/full-process.png)
+
-## Measuring the distance
+## Top Takeaways:
-Vector search relies on the distances between the points. Enabling Product Quantization slightly changes
-the way it has to be calculated. The query vector is divided into chunks, and then we figure the overall
-distance as a sum of distances between the subvectors and the centroids assigned to the specific id of
+Discover how oversampling optimizes precision in real-time, enhancing the accuracy without altering stored data structures in our very first episode of the Vector Space Talks by Qdrant, with none other than the CTO of Qdrant, Andrey Vasnetsov.
-the vector we compare to. We know the coordinates of the centroids, so that's easy.
+In this episode, Andrey shares invaluable insights into the world of binary quantization and its profound impact on Vector Space technology.
-![Calculating the distance of between the query and the stored vector](/articles_data/product-quantization/distance-calculation.png)
+5 Keys to Learning from the Episode:
-#### Qdrant implementation
+1. The necessity of quantization and the complex challenges it helps to overcome.
-Search operation requires calculating the distance to multiple points. Since we calculate the
+2. The transformative effects of binary quantization on processing speed and storage size reduction.
-distance to a finite set of centroids, those might be precomputed and reused. Qdrant creates
+3. A detailed exploration of oversampling and its real-time precision control in query search.
-a lookup table for each query, so it can then simply sum up several terms to measure the
+4. Understanding the simplicity and effectiveness of binary quantization, especially when compared to more intricate quantization methods.
-distance between a query and all the centroids.
+5. The ongoing research and potential impact of binary quantization on future models.
-| | Centroid 0 | Centroid 1 | ... |
+> Fun Fact: Binary quantization can deliver processing speeds over 30 times faster than traditional quantization methods, which is a revolutionary advancement in Vector Space technology.
-|-------------|------------|------------|-----|
+>
-| **Chunk 0** | 0.14213 | 0.51242 | |
-| **Chunk 1** | 0.08421 | 0.00142 | |
-| **...** | ... | ... | ... |
+## Show Notes:
-## Benchmarks
+00:00 Overview of HNSW vector index.\
+03:57 Efficient storage needed for large vector sizes.\
+07:49 Oversampling controls precision in real-time search.\
-Product Quantization comes with a cost - there are some additional operations to perform so
+12:21 Comparison of vectors using dot production.\
-that the performance might be reduced. However, memory usage might be reduced drastically as
+15:20 Experimenting with models, OpenAI has compatibility.\
-well. As usual, we did some benchmarks to give you a brief understanding of what you may expect.
+18:29 Qdrant architecture doesn't support removing original vectors.
-Again, we reused the same pipeline as in [the other benchmarks we published](/benchmarks). We
+## More Quotes from Andrey:
-selected [Arxiv-titles-384-angular-no-filters](https://github.com/qdrant/ann-filtering-benchmark-datasets)
-and [Glove-100](https://github.com/erikbern/ann-benchmarks/) datasets to measure the impact
-of Product Quantization on precision and time. Both experiments were launched with $ EF = 128 $.
+*""Inside Qdrant we use HNSW vector Index, which is essentially a proximity graph. You can imagine it as a number of vertices where each vertex is representing one vector and links between those vertices representing nearest neighbors.”*\
-The results are summarized in the tables:
+-- Andrey Vasnetsov
-#### Glove-100
+*""The main idea is that we convert the float point elements of the vector into binary representation. So, it's either zero or one, depending if the original element is positive or negative.”*\
+-- Andrey Vasnetsov
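+
+Purely as an illustration of the idea Andrey describes here (a toy NumPy sketch, not Qdrant's internal implementation), binarization, candidate selection by Hamming distance, and rescoring of an oversampled shortlist can look like this:
+
+```python
+import numpy as np
+
+def binarize(vectors: np.ndarray) -> np.ndarray:
+    # 1 where the original component is positive, 0 otherwise
+    return (vectors > 0).astype(np.uint8)
+
+rng = np.random.default_rng(42)
+stored = rng.standard_normal((1_000, 1536))  # candidate embeddings
+query = rng.standard_normal(1536)
+
+stored_bits = binarize(stored)
+query_bits = binarize(query)
+
+# Cheap candidate selection: Hamming distance on the binary codes
+hamming = (stored_bits != query_bits).sum(axis=1)
+shortlist = np.argsort(hamming)[:40]  # oversample 4x for a top-10 result
+
+# Rescore the oversampled shortlist with the original float vectors
+scores = stored[shortlist] @ query
+top10 = shortlist[np.argsort(-scores)[:10]]
+print(top10)
+```
+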
-
-
+*""We tried most popular open source models, and unfortunately they are not as good compatible with binary quantization as OpenAI.”*\
-