README.md
# Running locally ## Prerequisites ### Required - [Hugo](https://gohugo.io/getting-started/installing/) ### Needed only for development - [Node.js](https://nodejs.org/en/download/) - [npm](https://www.npmjs.com/get-npm) - [sass](https://sass-lang.com/install) ## Run ```bash cd qdrant-landing hugo serve ``` Open http://localhost:1313/ in your browser. ### Run with drafts If your changes are not shown on the site, check if your markdown file has `draft: true` in the header. Drafts are not shown by default. To see drafts, run the following command: ```bash cd qdrant-landing hugo serve -D ``` ## Build css from scss If you are **going to change scss files**, you need to run the following commands in a separate terminal window. Install sass if you don't have it: ```bash npm install -g sass ``` Install dependencies and run sass watcher: ``` bash cd qdrant-landing npm install sass --watch --style=compressed ./themes/qdrant/static/css/main.scss ./themes/qdrant/static/css/main.css ``` # Content Management To add new content to the site, you need to add a markdown file to the corresponding directory. The file should have a header with metadata. See examples below. Do not push changes to the `master` branch directly. Create a new branch and make a pull request. If you want to make your changes live, you need to merge your pull request to the `master` branch. After that, the changes will be automatically deployed to the site. ## Main Page ### Customers/Partners Logos To add a customer logo to the marquee on the main page: 1. Add a logo to `/qdrant-landing/static/content/images/logos` directory. The logo should be in png format and have a transparent background and width 200px. The color of the logo should be `#B6C0E4`. 2. Add a markdown file to `content/stack` directory using next command (replace `customer-name` with the name of the customer): ``` bash cd qdrant-landing hugo new --kind customer-logo stack/customer-name.md ``` Edit the file if needed. 3. If total number of slides changed - update `static/css/main.scss` file. Find line: ```scss @include marquee.base(80px, 200px, 13, 6, 20px, false, 50s); ``` and change 13 to the number of logos. Rebuild css from scss (see instructions [above](#build-css-from-scss)). 4. To change order of the logos - add or change `weight` parameter in the markdown files in `/qdrant-landing/content/stack` directory. ## Articles ### Metadata Articles are written in markdown and stored in `content/articles` directory. Each article has a header with metadata: ```yaml --- title: Here goes the title of the article #required short_description: Short description of the article description: This is a longer description of the article, you can get a little bit more wordly here. Try to keep it under 140 characters. #required social_preview_image: /articles_data/cars-recognition/social_preview.jpg # This image will be used in social media previews, should be 1200x630px. Required. small_preview_image: /articles_data/cars-recognition/icon.svg # This image will be used in the list of articles at the footer, should be 40x40px preview_dir: /articles_data/cars-recognition/preview # This directory contains images that will be used in the article preview. They can be generated from one image. Read more below. Required. weight: 10 # This is the order of the article in the list of articles at the footer. The lower the number, the higher the article will be in the list. author: Yusuf Sarıgöz # Author of the article. Required. 
author_link: https://medium.com/@yusufsarigoz # Link to the author's page. Required. date: 2022-06-28T13:00:00+03:00 # Date of the article. Required. draft: false # If true, the article will not be published keywords: # Keywords for SEO - vector databases comparative benchmark - benchmark - performance - latency --- ``` ### Preview image mechanism The preview image for each page is selected from the following sources, in the following order: - If the document has the `social_preview_image` param - it will be used as the preview image - If there is a file `static/<path-to-section>/<file-name>-social-preview.png` - it will be used as the preview image - Otherwise, the global `preview_image = "/images/social_preview.png"` will be used as the preview image ### Article preview An article preview is a set of images that will be used in the article preview. They can be generated from one image. To generate preview images, you need to have [ImageMagick](https://imagemagick.org/index.php) and [cwebp](https://developers.google.com/speed/webp/download) installed. You can install `cwebp` with the following command: ```bash curl -s https://raw.githubusercontent.com/Intervox/node-webp/latest/bin/install_webp | sudo bash ``` #### Prepare preview image For the preview, use an image with a 3:1 aspect ratio in jpg or png format, with a resolution of at least 1200x630px. The image should illustrate the article's core idea in some way. Feel free to get creative. Make sure the most important part of the image is in the center. #### Generating preview images To generate preview images, run the following command from the root of the project: ```bash bash -x automation/process-article-img.sh <path-to-image> <alias-for-the-article> ``` For example: ```bash bash -x automation/process-article-img.sh ~/Pictures/my_preview.jpg filtrable-hnsw ``` This command will create a directory `preview` in `static/article_data/filtrable-hnsw` and generate preview images in it. If the directory `static/article_data/filtrable-hnsw` doesn't exist, it will be created. If it exists, only files in its child `preview` directory will be affected; in this case, existing preview images will be overwritten. Your original image will not be affected. #### Preview images set The preview images set consists of the following images: `preview.jpg` - 530x145px (used on the article preview card **for browsers not supporting webp**) `preview.webp` - 530x145px (used on the article preview card **for browsers supporting webp**) `title.jpg` - 898x300px (used on the article's page as the main image before the article title **for browsers not supporting webp**) `title.webp` - 898x300px (used on the article's page as the main image before the article title **for browsers supporting webp**) `social_preview.jpg` - 1200x630px (used in social media previews) ## Documentation ### Metadata Documentation pages are written in markdown and stored in the `content/documentation` directory. Each page has a header with metadata: ```yaml --- title: Here goes the title of the page #required weight: 10 # This is the order of the page in the sidebar. The lower the number, the higher the page will be in the sidebar. canonicalUrl: https://qdrant.io/documentation/ # Optional. This is the canonical url of the page. hideInSidebar: true # Optional. If true, the page will not be shown in the sidebar. It can be used in regular documentation pages and in documentation section pages (_index.md). 
--- ``` ### Preview images for documentation pages Branded individual preview images for documentation pages might be auto-generated using the following command: (from the root of the project) ```bash bash -x automation/generate-all-docs-preview.sh ``` It will automatically insert documentation Section name and Title of the page into the preview. If there is a custom background for the image - it should be placed in the `static/documentation/<section-name>/<page>-bg.png`. <!-- (Use midjourney and one of the styles https://www.notion.so/qdrant/Midjourney-styles-a8dbc94761a74bb287a8a8ad05d593d1 to generate the background) --> If there is no custom background - random default background will be used. Generated images will be placed in the `static/documentation/<section-name>/<page>-social-preview.png`. To re-generate preview image, remove the previously generated one and run the command again. ### Documentation sidebar #### Delimiter To create a delimiter in the sidebar, use the following command: ``` bash cd qdrant-landing hugo new --kind delimiter documentation/<delimiter-title>.md ``` It will create a file `content/documentation/<delimiter-title>.md`. To put a delimiter to desired place in the sidebar, set the `weight` parameter to the desired value. The lower the value, the higher the delimiter will be in the sidebar. #### External link To create an external link in the sidebar, use the following command: ``` bash cd qdrant-landing hugo new --kind external-link documentation/<link-title>.md ``` It will create a file `content/documentation/<link-title>.md`. Open it and set the `external_link` parameter to the desired value. #### Params Additionally, to the standard hugo front matter params, we have the following params: ```yaml hideInSidebar: true ``` If `true`, the page will not be shown in the sidebar. It can be used in regular documentation pages and in documentation section pages (_index.md). ## Blog To add a new blog post, run the following commands: ``` bash cd qdrant-landing hugo new --kind blog-post blog/<post-title>.md ``` You'll see a file named `content/blog/<post-title>.md`. Open it and edit the front matter. ### Images Store images for blog posts in the following subdirectory: `static/blog/<post-title>`. You can add nested directories if needed. For social media previews, use images of at least 1200x600px. In the blog post file, you'll see: - `preview_image`: The image that appears with the blog post. If you want different images for social media, the blog post title, or the preview, use the following properties: - `social_preview_image` - `title_preview_image` - `small_preview_image` - ### Important notes - Add tags. While they're not shown on the blog post page, they are used to display related posts. - If post has `featured: true` property in the front matter this post will appear in the "Features and News" blog section. Only the last 4 featured posts will be displayed in this section. Featured posts will not appear in the regular post list. - If there are more than 4 `featured: true` posts (where `draft: false`), the oldest post disappears from /blog. ## Marketing Landing Pages ### Build styles From the root of the project: ```bash sass --watch --style=compressed ./qdrant-landing/themes/qdrant/static/css/pages/marketing-landing.scss ./qdrant-landing/themes/qdrant/static/css/marketing-landing.css ``` ## SEO ### Structured data (Schema.org, JSON-LD) Structured data is a standardized format for providing information about a page and classifying the page content. 
It is used by search engines to understand the content of the page and to display rich snippets in search results. We use the JSON-LD format for structured data. Data is stored in JSON files in the `/assets/schema` directory. If no specific schema is provided for a page, a default schema is used based on the page type, as defined in the `qdrant-landing/themes/qdrant/layouts/partials/seo_schema.html` file. To add a specific schema to a page, use the `seo_schema` or `seo_schema_json` parameter in the front matter of content markdown files (directory `content`). To add JSON directly to the page, use the `seo_schema` parameter. The value should be a JSON object. Example: ```yaml seo_schema: { "@context": "https://schema.org", "@type": "Organization", "name": "Qdrant", "url": "https://qdrant.io", "logo": "https://qdrant.io/images/logo.png", "sameAs": [ "https://www.linkedin.com/company/qdrant", "https://twitter.com/qdrant" ] } ``` To reference JSON files with schema data, use the `seo_schema_json` parameter. This parameter should contain a list of paths to JSON files. Paths should be relative to the `qdrant-landing/assets` directory. Example: ```yaml seo_schema_json: - schema/schema-organization.json - schema/product-schema.json ``` If you want to add a new schema, create a new JSON file in the `qdrant-landing/assets/schema` directory and add its path to the `seo_schema_json` parameter. When `seo_schema` and `seo_schema_json` are used together, `seo_schema` is applied in addition to `seo_schema_json`, adding a second `<script>` tag with the `seo_schema` value. Use `seo_schema_json` if you want to reuse the same schema for multiple pages, to avoid duplication and make it easier to maintain.
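If you add or edit schema files, a quick local sanity check helps catch JSON syntax errors before they reach a pull request. The snippet below is a hypothetical helper (it is not part of the repository's automation) that simply parses every file under `qdrant-landing/assets/schema`:

```python
# check_schema.py - hypothetical helper, not part of the repo's automation.
# Parses every JSON-LD schema file so syntax errors are caught before a PR.
import json
import sys
from pathlib import Path

SCHEMA_DIR = Path("qdrant-landing/assets/schema")

failed = False
for schema_file in sorted(SCHEMA_DIR.glob("*.json")):
    try:
        json.loads(schema_file.read_text(encoding="utf-8"))
        print(f"OK      {schema_file}")
    except json.JSONDecodeError as err:
        print(f"INVALID {schema_file}: {err}")
        failed = True

sys.exit(1 if failed else 0)
```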
qdrant-landing/GrammarLinter.md
# English grammar linter (Vale) This repository includes `beta` rules based on the [Vale grammar linter](https://vale.sh). While the [installation instructions](https://vale.sh/docs/vale-cli/installation/#package-managers) cover Mac and Windows, I've installed Vale on Ubuntu Linux. Vale includes installation binaries in one of their [Git repositories](https://github.com/errata-ai/vale/releases). You can integrate [Vale as a plugin](https://vale.sh/docs/integrations/guide/) with several different IDEs. This README illustrates integration between Vale and VSCode. Vale pulls rules from YAML files in the `styles/` subdirectory. They include grammar rules in the following subdirectories: - Modified rules from GitLab in the `styles/Qdrant/` subdirectory - [Google Developer Style Guide](https://github.com/errata-ai/Google) rules, customized for Vale, in the `styles/Google` subdirectory - Rules associated with the [write-good](https://github.com/btford/write-good) grammar linter These rules are a "Work in Progress"; we may overrule/modify them as we use them to review Qdrant content. For example, if you find a common word / acronym that we use, you're welcome to add it (with a PR) to our `styles/cobalt/spelling-exceptions.txt` file. For more information, see the [Vale documentation](https://vale.sh/). ## Vale configuration The Vale configuration file is .vale.ini. In this file, we see: - The `StylesPath` points to rules in the `styles/` subdirectory. - The `BasedOnStyles` parameter specifies style subdirectories. - The `IgnoredScopes` tells Vale to ignore content such as code samples, as described in [Vale Documentation](https://vale.sh/docs/topics/config/#ignoredscopes). Tip: If you want Vale to ignore code, surround it with code sample marks such as: - `Vale_ignores_this` ``` Vale also ignores this ``` ## Use Vale in your IDE You can set up Vale with several different IDEs. For more information, see the [Integrations](https://vale.sh/docs/integrations/guide/) section of the Vale documentation. For example, you can set up a Vale plugin with the VSCode IDE, per https://github.com/chrischinchilla/vale-vscode. If you have problems with Vale in VSCode, you may need to: - Restart VSCode - Disable / re-enable the Vale plugin - Save changes to the Markdown file that you're analyzing If you're successful, you'll see linting messages similar to what's shown in the following screenshot: <p align="center"> <img src="static/VSCodeDemo.png"> </p> ## Use Vale at the command line To review your content against the given style guide rules, first navigate to the `qdrant-landing/` directory for this repository. Then run the following command: ``` vale /path/to/your/filename.md ``` As long as you're in the `qdrant-landing/` directory, you can use Vale at the command line to lint Markdown files in any local directory. ## Potential future options - Include Vale in CI/CD jobs - Set up a GitHub action - Apply vale to articles and blog posts - Guess: we need different rules. Default rules for documentation suggest: - Use "second person" - Avoid future tense - Don't use exclamation points - Avoid words like "easy" and "simple" These rules generally do not apply to articles or blogs.
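As a starting point for the CI/CD idea listed above, here is a hedged sketch of a script that runs Vale over the documentation Markdown files and fails the job when Vale reports problems. It assumes Vale is installed and on `PATH`, that the script is run from the `qdrant-landing/` directory so `.vale.ini` is picked up, and that Vale returns a non-zero exit code when it finds errors (verify this for the Vale version used in CI):

```python
# lint_docs.py - hypothetical CI helper; assumes `vale` is on PATH and that
# this script runs from qdrant-landing/ so .vale.ini is discovered.
import subprocess
import sys
from pathlib import Path

files = [str(p) for p in Path("content/documentation").rglob("*.md")]
if not files:
    sys.exit("No Markdown files found - check the working directory.")

# Assumption: Vale exits non-zero when it reports errors.
result = subprocess.run(["vale", *files])
sys.exit(result.returncode)
```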
qdrant-landing/archetypes/blog-post.md
--- title: "{{ replace .Name "-" " " | title }}" draft: false slug: {{ .Name }} # Change this slug to your page slug if needed short_description: This is a blog post # Change this description: This is a blog post # Change this preview_image: /blog/Article-Image.png # Change this # social_preview_image: /blog/Article-Image.png # Optional image used for link previews # title_preview_image: /blog/Article-Image.png # Optional image used for blog post title # small_preview_image: /blog/Article-Image.png # Optional image used for small preview in the list of blog posts date: {{ .Date }} author: John Doe # Change this featured: false # if true, this post will be featured on the blog page tags: # Change this, related by tags posts will be shown on the blog page - news - blog weight: 0 # Change this weight to change order of posts # For more guidance, see https://github.com/qdrant/landing_page?tab=readme-ov-file#blog --- Here is your blog post content. You can use markdown syntax here. # Header 1 ## Header 2 ### Header 3 #### Header 4 ##### Header 5 ###### Header 6 <aside role="alert"> You can add a note to your page using this aside block. </aside> <aside role="status"> This is a warning message. </aside> > This is a blockquote following a header. Table: | Header 1 | Header 2 | Header 3 | Header 4 | | -------- | -------- | -------- | -------- | | Cell 1 | Cell 2 | Cell 3 | Cell 4 | | Cell 3 | Cell 4 | Cell 5 | Cell 6 | - List item 1 - Nested list item 1 - Nested list item 2 - List item 2 - List item 3 1. Numbered list item 1 1. Nested numbered list item 1 2. Nested numbered list item 2 2. Numbered list item 2 3. Numbered list item 3
qdrant-landing/archetypes/default.md
--- title: "{{ replace .Name "-" " " | title }}" date: {{ .Date }} draft: true ---
qdrant-landing/archetypes/delimiter.md
--- #Delimiter files are used to separate the list of documentation pages into sections. title: "{{ replace .Name "-" " " | title }}" type: delimiter weight: 0 # Change this weight to change order of sections sitemapExclude: True ---
qdrant-landing/archetypes/external-link.md
--- # External link template title: "{{ replace .Name "-" " " | title }}" type: external-link external_url: https://github.com/qdrant/qdrant # Change this link to your external link sitemapExclude: True ---
qdrant-landing/content/about-us/_index.md
--- title: About Us ---
qdrant-landing/content/advanced-search/_index.md
--- title: advanced-search description: advanced-search build: render: always cascade: - build: list: local publishResources: false render: never ---
qdrant-landing/content/advanced-search/advanced-search-features.md
--- title: Search with Qdrant description: Qdrant enhances search, offering semantic, similarity, multimodal, and hybrid search capabilities for accurate, user-centric results, serving applications in different industries like e-commerce to healthcare. features: - id: 0 icon: src: /icons/outline/similarity-blue.svg alt: Similarity title: Semantic Search description: Qdrant optimizes similarity search, identifying the closest database items to any query vector for applications like recommendation systems, RAG and image retrieval, enhancing accuracy and user experience. link: text: Learn More url: /documentation/concepts/search/ - id: 1 icon: src: /icons/outline/search-text-blue.svg alt: Search text title: Hybrid Search for Text description: By combining dense vector embeddings with sparse vectors e.g. BM25, Qdrant powers semantic search to deliver context-aware results, transcending traditional keyword search by understanding the deeper meaning of data. link: text: Learn More url: /documentation/tutorials/hybrid-search-fastembed/ - id: 2 icon: src: /icons/outline/selection-blue.svg alt: Selection title: Multimodal Search description: Qdrant's capability extends to multi-modal search, indexing and retrieving various data forms (text, images, audio) once vectorized, facilitating a comprehensive search experience. link: text: View Tutorial url: /documentation/tutorials/aleph-alpha-search/ - id: 3 icon: src: /icons/outline/filter-blue.svg alt: Filter title: Single Stage filtering that Works description: Qdrant enhances search speeds and control and context understanding through filtering on any nested entry in our payload. Unique architecture allows Qdrant to avoid expensive pre-filtering and post-filtering stages, making search faster and accurate. link: text: Learn More url: /articles/filtrable-hnsw/ sitemapExclude: true ---
qdrant-landing/content/advanced-search/advanced-search-hero.md
--- title: Advanced Search description: Dive into next-gen search capabilities with Qdrant, offering a smarter way to deliver precise and tailored content to users, enhancing interaction accuracy and depth. startFree: text: Get Started url: https://cloud.qdrant.io/ learnMore: text: Contact Us url: /contact-us/ image: src: /img/vectors/vector-0.svg alt: Advanced search sitemapExclude: true ---
qdrant-landing/content/advanced-search/advanced-search-use-cases.md
--- title: Learn how to get started with Qdrant for your search use case features: - id: 0 image: src: /img/advanced-search-use-cases/startup-semantic-search.svg alt: Startup Semantic Search title: Startup Semantic Search Demo description: The demo showcases semantic search for startup descriptions through SentenceTransformer and Qdrant, comparing neural search's accuracy with traditional searches for better content discovery. link: text: View Demo url: https://demo.qdrant.tech/ - id: 1 image: src: /img/advanced-search-use-cases/multimodal-semantic-search.svg alt: Multimodal Semantic Search title: Multimodal Semantic Search with Aleph Alpha description: This tutorial shows you how to run a proper multimodal semantic search system with a few lines of code, without the need to annotate the data or train your networks. link: text: View Tutorial url: /documentation/examples/aleph-alpha-search/ - id: 2 image: src: /img/advanced-search-use-cases/simple-neural-search.svg alt: Simple Neural Search title: Create a Simple Neural Search Service description: This tutorial shows you how to build and deploy your own neural search service. link: text: View Tutorial url: /documentation/tutorials/neural-search/ - id: 3 image: src: /img/advanced-search-use-cases/image-classification.svg alt: Image Classification title: Image Classification with Qdrant Vector Semantic Search description: In this tutorial, you will learn how a semantic search engine for images can help diagnose different types of skin conditions. link: text: View Tutorial url: https://www.youtube.com/watch?v=sNFmN16AM1o - id: 4 image: src: /img/advanced-search-use-cases/semantic-search-101.svg alt: Semantic Search 101 title: Semantic Search 101 description: Build a semantic search engine for science fiction books in 5 mins. link: text: View Tutorial url: /documentation/tutorials/search-beginners/ - id: 5 image: src: /img/advanced-search-use-cases/hybrid-search-service-fastembed.svg alt: Create a Hybrid Search Service with Fastembed title: Create a Hybrid Search Service with Fastembed description: This tutorial guides you through building and deploying your own hybrid search service using Fastembed. link: text: View Tutorial url: /documentation/tutorials/hybrid-search-fastembed/ sitemapExclude: true ---
qdrant-landing/content/articles/_index.md
--- title: Qdrant Articles page_title: Articles about Vector Search description: Articles about vector search and similarity learning related topics. Latest updates on the Qdrant vector search engine. section_title: Check out our latest publications subtitle: Check out our latest publications img: /articles_data/title-img.png ---
qdrant-landing/content/articles/binary-quantization-openai.md
--- title: "Optimizing OpenAI Embeddings: Enhance Efficiency with Qdrant's Binary Quantization" draft: false slug: binary-quantization-openai short_description: Use Qdrant's Binary Quantization to enhance OpenAI embeddings description: Explore how Qdrant's Binary Quantization can significantly improve the efficiency and performance of OpenAI's Ada-003 embeddings. Learn best practices for real-time search applications. preview_dir: /articles_data/binary-quantization-openai/preview preview_image: /articles-data/binary-quantization-openai/Article-Image.png small_preview_image: /articles_data/binary-quantization-openai/icon.svg social_preview_image: /articles_data/binary-quantization-openai/preview/social-preview.png title_preview_image: /articles_data/binary-quantization-openai/preview/preview.webp date: 2024-02-21T13:12:08-08:00 author: Nirant Kasliwal author_link: https://nirantk.com/about/ featured: false tags: - OpenAI - binary quantization - embeddings weight: -130 aliases: [ /blog/binary-quantization-openai/ ] --- OpenAI Ada-003 embeddings are a powerful tool for natural language processing (NLP). However, the size of the embeddings are a challenge, especially with real-time search and retrieval. In this article, we explore how you can use Qdrant's Binary Quantization to enhance the performance and efficiency of OpenAI embeddings. In this post, we discuss: - The significance of OpenAI embeddings and real-world challenges. - Qdrant's Binary Quantization, and how it can improve the performance of OpenAI embeddings - Results of an experiment that highlights improvements in search efficiency and accuracy - Implications of these findings for real-world applications - Best practices for leveraging Binary Quantization to enhance OpenAI embeddings If you're new to Binary Quantization, consider reading our article which walks you through the concept and [how to use it with Qdrant](/articles/binary-quantization/) You can also try out these techniques as described in [Binary Quantization OpenAI](https://github.com/qdrant/examples/blob/openai-3/binary-quantization-openai/README.md), which includes Jupyter notebooks. ## New OpenAI embeddings: performance and changes As the technology of embedding models has advanced, demand has grown. Users are looking more for powerful and efficient text-embedding models. OpenAI's Ada-003 embeddings offer state-of-the-art performance on a wide range of NLP tasks, including those noted in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) and [MIRACL](https://openai.com/blog/new-embedding-models-and-api-updates). These models include multilingual support in over 100 languages. The transition from text-embedding-ada-002 to text-embedding-3-large has led to a significant jump in performance scores (from 31.4% to 54.9% on MIRACL). #### Matryoshka representation learning The new OpenAI models have been trained with a novel approach called "[Matryoshka Representation Learning](https://aniketrege.github.io/blog/2024/mrl/)". Developers can set up embeddings of different sizes (number of dimensions). In this post, we use small and large variants. Developers can select embeddings which balances accuracy and size. Here, we show how the accuracy of binary quantization is quite good across different dimensions -- for both the models. ## Enhanced performance and efficiency with binary quantization By reducing storage needs, you can scale applications with lower costs. This addresses a critical challenge posed by the original embedding sizes. 
Binary Quantization also speeds the search process. It simplifies the complex distance calculations between vectors into more manageable bitwise operations, which supports potentially real-time searches across vast datasets. The accompanying graph illustrates the promising accuracy levels achievable with binary quantization across different model sizes, showcasing its practicality without severely compromising on performance. This dual advantage of storage reduction and accelerated search capabilities underscores the transformative potential of Binary Quantization in deploying OpenAI embeddings more effectively across various real-world applications. ![](/blog/openai/Accuracy_Models.png) The efficiency gains from Binary Quantization are as follows: - Reduced storage footprint: It helps with large-scale datasets. It also saves on memory, and scales up to 30x at the same cost. - Enhanced speed of data retrieval: Smaller data sizes generally leads to faster searches. - Accelerated search process: It is based on simplified distance calculations between vectors to bitwise operations. This enables real-time querying even in extensive databases. ### Experiment setup: OpenAI embeddings in focus To identify Binary Quantization's impact on search efficiency and accuracy, we designed our experiment on OpenAI text-embedding models. These models, which capture nuanced linguistic features and semantic relationships, are the backbone of our analysis. We then delve deep into the potential enhancements offered by Qdrant's Binary Quantization feature. This approach not only leverages the high-caliber OpenAI embeddings but also provides a broad basis for evaluating the search mechanism under scrutiny. #### Dataset The research employs 100K random samples from the [OpenAI 1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) 1M dataset, focusing on 100 randomly selected records. These records serve as queries in the experiment, aiming to assess how Binary Quantization influences search efficiency and precision within the dataset. We then use the embeddings of the queries to search for the nearest neighbors in the dataset. #### Parameters: oversampling, rescoring, and search limits For each record, we run a parameter sweep over the number of oversampling, rescoring, and search limits. We can then understand the impact of these parameters on search accuracy and efficiency. Our experiment was designed to assess the impact of Binary Quantization under various conditions, based on the following parameters: - **Oversampling**: By oversampling, we can limit the loss of information inherent in quantization. This also helps to preserve the semantic richness of your OpenAI embeddings. We experimented with different oversampling factors, and identified the impact on the accuracy and efficiency of search. Spoiler: higher oversampling factors tend to improve the accuracy of searches. However, they usually require more computational resources. - **Rescoring**: Rescoring refines the first results of an initial binary search. This process leverages the original high-dimensional vectors to refine the search results, **always** improving accuracy. We toggled rescoring on and off to measure effectiveness, when combined with Binary Quantization. We also measured the impact on search performance. - **Search Limits**: We specify the number of results from the search process. We experimented with various search limits to measure their impact the accuracy and efficiency. 
We explored the trade-offs between search depth and performance. The results provide insight for applications with different precision and speed requirements. Through this detailed setup, our experiment sought to shed light on the nuanced interplay between Binary Quantization and the high-quality embeddings produced by OpenAI's models. By meticulously adjusting and observing the outcomes under different conditions, we aimed to uncover actionable insights that could empower users to harness the full potential of Qdrant in combination with OpenAI's embeddings, regardless of their specific application needs. ### Results: binary quantization's impact on OpenAI embeddings To analyze the impact of rescoring (`True` or `False`), we compared results across different model configurations and search limits. Rescoring sets up a more precise search, based on results from an initial query. #### Rescoring ![Graph that measures the impact of rescoring](/blog/openai/Rescoring_Impact.png) Here are some key observations, which analyzes the impact of rescoring (`True` or `False`): 1. **Significantly Improved Accuracy**: - Across all models and dimension configurations, enabling rescoring (`True`) consistently results in higher accuracy scores compared to when rescoring is disabled (`False`). - The improvement in accuracy is true across various search limits (10, 20, 50, 100). 2. **Model and Dimension Specific Observations**: - For the `text-embedding-3-large` model with 3072 dimensions, rescoring boosts the accuracy from an average of about 76-77% without rescoring to 97-99% with rescoring, depending on the search limit and oversampling rate. - The accuracy improvement with increased oversampling is more pronounced when rescoring is enabled, indicating a better utilization of the additional binary codes in refining search results. - With the `text-embedding-3-small` model at 512 dimensions, accuracy increases from around 53-55% without rescoring to 71-91% with rescoring, highlighting the significant impact of rescoring, especially at lower dimensions. In contrast, for lower dimension models (such as text-embedding-3-small with 512 dimensions), the incremental accuracy gains from increased oversampling levels are less significant, even with rescoring enabled. This suggests a diminishing return on accuracy improvement with higher oversampling in lower dimension spaces. 3. **Influence of Search Limit**: - The performance gain from rescoring seems to be relatively stable across different search limits, suggesting that rescoring consistently enhances accuracy regardless of the number of top results considered. In summary, enabling rescoring dramatically improves search accuracy across all tested configurations. It is crucial feature for applications where precision is paramount. The consistent performance boost provided by rescoring underscores its value in refining search results, particularly when working with complex, high-dimensional data like OpenAI embeddings. This enhancement is critical for applications that demand high accuracy, such as semantic search, content discovery, and recommendation systems, where the quality of search results directly impacts user experience and satisfaction. ### Dataset combinations For those exploring the integration of text embedding models with Qdrant, it's crucial to consider various model configurations for optimal performance. The dataset combinations defined above illustrate different configurations to test against Qdrant. 
These combinations vary by two primary attributes: 1. **Model Name**: Signifying the specific text embedding model variant, such as "text-embedding-3-large" or "text-embedding-3-small". This distinction correlates with the model's capacity, with "large" models offering more detailed embeddings at the cost of increased computational resources. 2. **Dimensions**: This refers to the size of the vector embeddings produced by the model. Options range from 512 to 3072 dimensions. Higher dimensions could lead to more precise embeddings but might also increase the search time and memory usage in Qdrant. Optimizing these parameters is a balancing act between search accuracy and resource efficiency. Testing across these combinations allows users to identify the configuration that best meets their specific needs, considering the trade-offs between computational resources and the quality of search results. ```python dataset_combinations = [ { "model_name": "text-embedding-3-large", "dimensions": 3072, }, { "model_name": "text-embedding-3-large", "dimensions": 1024, }, { "model_name": "text-embedding-3-large", "dimensions": 1536, }, { "model_name": "text-embedding-3-small", "dimensions": 512, }, { "model_name": "text-embedding-3-small", "dimensions": 1024, }, { "model_name": "text-embedding-3-small", "dimensions": 1536, }, ] ``` #### Exploring dataset combinations and their impacts on model performance The code snippet iterates through predefined dataset and model combinations. For each combination, characterized by the model name and its dimensions, the corresponding experiment's results are loaded. These results, which are stored in JSON format, include performance metrics like accuracy under different configurations: with and without oversampling, and with and without a rescore step. Following the extraction of these metrics, the code computes the average accuracy across different settings, excluding extreme cases of very low limits (specifically, limits of 1 and 5). This computation groups the results by oversampling, rescore presence, and limit, before calculating the mean accuracy for each subgroup. After gathering and processing this data, the average accuracies are organized into a pivot table. This table is indexed by the limit (the number of top results considered), and columns are formed based on combinations of oversampling and rescoring. 
```python import pandas as pd for combination in dataset_combinations: model_name = combination["model_name"] dimensions = combination["dimensions"] print(f"Model: {model_name}, dimensions: {dimensions}") results = pd.read_json(f"../results/results-{model_name}-{dimensions}.json", lines=True) average_accuracy = results[results["limit"] != 1] average_accuracy = average_accuracy[average_accuracy["limit"] != 5] average_accuracy = average_accuracy.groupby(["oversampling", "rescore", "limit"])[ "accuracy" ].mean() average_accuracy = average_accuracy.reset_index() acc = average_accuracy.pivot( index="limit", columns=["oversampling", "rescore"], values="accuracy" ) print(acc) ``` Here is a selected slice of these results, with `rescore=True`: |Method|Dimensionality|Test Dataset|Recall|Oversampling| |-|-|-|-|-| |OpenAI text-embedding-3-large (highest MTEB score from the table) |3072|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-3072-1M) | 0.9966|3x| |OpenAI text-embedding-3-small|1536|[DBpedia 100K](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-small-1536-100K)| 0.9847|3x| |OpenAI text-embedding-3-large|1536|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M)| 0.9826|3x| #### Impact of oversampling You can use oversampling in machine learning to counteract imbalances in datasets. It works well when one class significantly outnumbers others. This imbalance can skew the performance of models, which favors the majority class at the expense of others. By creating additional samples from the minority classes, oversampling helps equalize the representation of classes in the training dataset, thus enabling more fair and accurate modeling of real-world scenarios. The screenshot showcases the effect of oversampling on model performance metrics. While the actual metrics aren't shown, we expect to see improvements in measures such as precision, recall, or F1-score. These improvements illustrate the effectiveness of oversampling in creating a more balanced dataset. It allows the model to learn a better representation of all classes, not just the dominant one. Without an explicit code snippet or output, we focus on the role of oversampling in model fairness and performance. Through graphical representation, you can set up before-and-after comparisons. These comparisons illustrate the contribution to machine learning projects. ![Measuring the impact of oversampling](/blog/openai/Oversampling_Impact.png) ### Leveraging binary quantization: best practices We recommend the following best practices for leveraging Binary Quantization to enhance OpenAI embeddings: 1. Embedding Model: Use the text-embedding-3-large from MTEB. It is most accurate among those tested. 2. Dimensions: Use the highest dimension available for the model, to maximize accuracy. The results are true for English and other languages. 3. Oversampling: Use an oversampling factor of 3 for the best balance between accuracy and efficiency. This factor is suitable for a wide range of applications. 4. Rescoring: Enable rescoring to improve the accuracy of search results. 5. RAM: Store the full vectors and payload on disk. Limit what you load from memory to the binary quantization index. This helps reduce the memory footprint and improve the overall efficiency of the system. 
The incremental latency from the disk read is negligible compared to the latency savings from the binary scoring in Qdrant, which uses SIMD instructions where possible. A minimal sketch that ties these recommendations together is included at the end of this article. ## What's next? Binary quantization is exceptional if you need to work with large volumes of data under high recall expectations. You can try this feature either by spinning up a [Qdrant container image](https://hub.docker.com/r/qdrant/qdrant) locally or by creating a [free account](https://cloud.qdrant.io/login) in our cloud-hosted service. This article gives examples of datasets and configurations you can use to get going. Our documentation covers [adding large datasets](/documentation/tutorials/bulk-upload/) to your Qdrant instance as well as [more quantization methods](/documentation/guides/quantization/). Want to discuss these findings and learn more about Binary Quantization? [Join our Discord community.](https://discord.gg/qdrant)
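As promised above, here is a minimal sketch that ties the best practices together with the Qdrant Python client: full vectors on disk, the binary quantization index in RAM, and a search that oversamples by 3x with rescoring enabled. The collection name and the 3072-dimensional setup are illustrative assumptions, not fixed requirements.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Illustrative collection sized for text-embedding-3-large (3072 dimensions).
client.recreate_collection(
    collection_name="openai-bq-demo",
    vectors_config=models.VectorParams(
        size=3072,
        distance=models.Distance.DOT,
        on_disk=True,  # keep the full float32 vectors on disk
    ),
    quantization_config=models.BinaryQuantization(
        binary=models.BinaryQuantizationConfig(always_ram=True),  # BQ index in RAM
    ),
)

query_embedding = [0.0] * 3072  # placeholder: replace with a real query embedding

# Search with 3x oversampling and rescoring, per the recommendations above.
hits = client.search(
    collection_name="openai-bq-demo",
    query_vector=query_embedding,
    limit=50,
    search_params=models.SearchParams(
        quantization=models.QuantizationSearchParams(
            ignore=False,
            rescore=True,
            oversampling=3.0,
        )
    ),
)
```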
qdrant-landing/content/articles/binary-quantization.md
--- title: "Binary Quantization - Vector Search, 40x Faster " short_description: "Binary Quantization is a newly introduced mechanism of reducing the memory footprint and increasing performance" description: "Binary Quantization is a newly introduced mechanism of reducing the memory footprint and increasing performance" social_preview_image: /articles_data/binary-quantization/social_preview.png small_preview_image: /articles_data/binary-quantization/binary-quantization-icon.svg preview_dir: /articles_data/binary-quantization/preview weight: -40 author: Nirant Kasliwal author_link: https://nirantk.com/about/ date: 2023-09-18T13:00:00+03:00 draft: false keywords: - vector search - binary quantization - memory optimization --- # Optimizing High-Dimensional Vectors with Binary Quantization Qdrant is built to handle typical scaling challenges: high throughput, low latency and efficient indexing. **Binary quantization (BQ)** is our latest attempt to give our customers the edge they need to scale efficiently. This feature is particularly excellent for collections with large vector lengths and a large number of points. Our results are dramatic: Using BQ will reduce your memory consumption and improve retrieval speeds by up to 40x. As is the case with other quantization methods, these benefits come at the cost of recall degradation. However, our implementation lets you balance the tradeoff between speed and recall accuracy at time of search, rather than time of index creation. The rest of this article will cover: 1. The importance of binary quantization 2. Basic implementation using our Python client 3. Benchmark analysis and usage recommendations ## What is Binary Quantization? Binary quantization (BQ) converts any vector embedding of floating point numbers into a vector of binary or boolean values. This feature is an extension of our past work on [scalar quantization](/articles/scalar-quantization/) where we convert `float32` to `uint8` and then leverage a specific SIMD CPU instruction to perform fast vector comparison. ![What is binary quantization](/articles_data/binary-quantization/bq-2.png) **This binarization function is how we convert a range to binary values. All numbers greater than zero are marked as 1. If it's zero or less, they become 0.** The benefit of reducing the vector embeddings to binary values is that boolean operations are very fast and need significantly less CPU instructions. In exchange for reducing our 32 bit embeddings to 1 bit embeddings we can see up to a 40x retrieval speed up gain! One of the reasons vector search still works with such a high compression rate is that these large vectors are over-parameterized for retrieval. This is because they are designed for ranking, clustering, and similar use cases, which typically need more information encoded in the vector. For example, The 1536 dimension OpenAI embedding is worse than Open Source counterparts of 384 dimension at retrieval and ranking. Specifically, it scores 49.25 on the same [Embedding Retrieval Benchmark](https://huggingface.co/spaces/mteb/leaderboard) where the Open Source `bge-small` scores 51.82. This 2.57 points difference adds up quite soon. Our implementation of quantization achieves a good balance between full, large vectors at ranking time and binary vectors at search and retrieval time. It also has the ability for you to adjust this balance depending on your use case. 
## Faster search and retrieval Unlike product quantization, binary quantization does not rely on reducing the search space for each probe. Instead, we build a binary index that helps us achieve large increases in search speed. ![Speed by quantization method](/articles_data/binary-quantization/bq-3.png) HNSW is the approximate nearest neighbor search. This means our accuracy improves up to a point of diminishing returns, as we check the index for more similar candidates. In the context of binary quantization, this is referred to as the **oversampling rate**. For example, if `oversampling=2.0` and the `limit=100`, then 200 vectors will first be selected using a quantized index. For those 200 vectors, the full 32 bit vector will be used with their HNSW index to a much more accurate 100 item result set. As opposed to doing a full HNSW search, we oversample a preliminary search and then only do the full search on this much smaller set of vectors. ## Improved storage efficiency The following diagram shows the binarization function, whereby we reduce 32 bits storage to 1 bit information. Text embeddings can be over 1024 elements of floating point 32 bit numbers. For example, remember that OpenAI embeddings are 1536 element vectors. This means each vector is 6kB for just storing the vector. ![Improved storage efficiency](/articles_data/binary-quantization/bq-4.png) In addition to storing the vector, we also need to maintain an index for faster search and retrieval. Qdrant’s formula to estimate overall memory consumption is: `memory_size = 1.5 * number_of_vectors * vector_dimension * 4 bytes` For 100K OpenAI Embedding (`ada-002`) vectors we would need 900 Megabytes of RAM and disk space. This consumption can start to add up rapidly as you create multiple collections or add more items to the database. **With binary quantization, those same 100K OpenAI vectors only require 128 MB of RAM.** We benchmarked this result using methods similar to those covered in our [Scalar Quantization memory estimation](/articles/scalar-quantization/#benchmarks). This reduction in RAM needed is achieved through the compression that happens in the binary conversion. Instead of putting the HNSW index for the full vectors into RAM, we just put the binary vectors into RAM, use them for the initial oversampled search, and then use the HNSW full index of the oversampled results for the final precise search. All of this happens under the hoods without any intervention needed on your part. ### When should you not use BQ? Since this method exploits the over-parameterization of embedding, you can expect poorer results for small embeddings i.e. less than 1024 dimensions. With the smaller number of elements, there is not enough information maintained in the binary vector to achieve good results. You will still get faster boolean operations and reduced RAM usage, but the accuracy degradation might be too high. ## Sample implementation Now that we have introduced you to binary quantization, let’s try our a basic implementation. In this example, we will be using OpenAI and Cohere with Qdrant. #### Create a collection with Binary Quantization enabled Here is what you should do at indexing time when you create the collection: 1. We store all the "full" vectors on disk. 2. Then we set the binary embeddings to be in RAM. By default, both the full vectors and BQ get stored in RAM. We move the full vectors to disk because this saves us memory and allows us to store more vectors in RAM. 
By doing this, we explicitly move the binary vectors to memory by setting `always_ram=True`. ```python from qdrant_client import QdrantClient #collect to our Qdrant Server client = QdrantClient( url="http://localhost:6333", prefer_grpc=True, ) #Create the collection to hold our embeddings # on_disk=True and the quantization_config are the areas to focus on collection_name = "binary-quantization" client.recreate_collection( collection_name=f"{collection_name}", vectors_config=models.VectorParams( size=1536, distance=models.Distance.DOT, on_disk=True, ), optimizers_config=models.OptimizersConfigDiff( default_segment_number=5, indexing_threshold=0, ), quantization_config=models.BinaryQuantization( binary=models.BinaryQuantizationConfig(always_ram=True), ), ) ``` #### What is happening in the OptimizerConfig? We're setting `indexing_threshold` to 0 i.e. disabling the indexing to zero. This allows faster uploads of vectors and payloads. We will turn it back on down below, once all the data is loaded #### Next, we upload our vectors to this and then enable indexing: ```python batch_size = 10000 client.upload_collection( collection_name=collection_name, ids=range(len(dataset)), vectors=dataset["openai"], payload=[ {"text": x} for x in dataset["text"] ], parallel=10, # based on the machine ) ``` Enable indexing again: ```python client.update_collection( collection_name=f"{collection_name}", optimizer_config=models.OptimizersConfigDiff( indexing_threshold=20000 ) ) ``` #### Configure the search parameters: When setting search parameters, we specify that we want to use `oversampling` and `rescore`. Here is an example snippet: ```python client.search( collection_name="{collection_name}", query_vector=[0.2, 0.1, 0.9, 0.7, ...], search_params=models.SearchParams( quantization=models.QuantizationSearchParams( ignore=False, rescore=True, oversampling=2.0, ) ) ) ``` After Qdrant pulls the oversampled vectors set, the full vectors which will be, say 1536 dimensions for OpenAI will then be pulled up from disk. Qdrant computes the nearest neighbor with the query vector and returns the accurate, rescored order. This method produces much more accurate results. We enabled this by setting `rescore=True`. These two parameters are how you are going to balance speed versus accuracy. The larger the size of your oversample, the more items you need to read from disk and the more elements you have to search with the relatively slower full vector index. On the other hand, doing this will produce more accurate results. If you have lower accuracy requirements you can even try doing a small oversample without rescoring. Or maybe, for your data set combined with your accuracy versus speed requirements you can just search the binary index and no rescoring, i.e. leaving those two parameters out of the search query. ## Benchmark results We retrieved some early results on the relationship between limit and oversampling using the the DBPedia OpenAI 1M vector dataset. We ran all these experiments on a Qdrant instance where 100K vectors were indexed and used 100 random queries. We varied the 3 parameters that will affect query time and accuracy: limit, rescore and oversampling. We offer these as an initial exploration of this new feature. You are highly encouraged to reproduce these experiments with your data sets. > Aside: Since this is a new innovation in vector databases, we are keen to hear feedback and results. [Join our Discord server](https://discord.gg/Qy6HCJK9Dc) for further discussion! 
**Oversampling:** In the figure below, we illustrate the relationship between recall and number of candidates: ![Correct vs candidates](/articles_data/binary-quantization/bq-5.png) We see that "correct" results i.e. recall increases as the number of potential "candidates" increase (limit x oversampling). To highlight the impact of changing the `limit`, different limit values are broken apart into different curves. For example, we see that the lowest recall for limit 50 is around 94 correct, with 100 candidates. This also implies we used an oversampling of 2.0 As oversampling increases, we see a general improvement in results – but that does not hold in every case. **Rescore:** As expected, rescoring increases the time it takes to return a query. We also repeated the experiment with oversampling except this time we looked at how rescore impacted result accuracy. ![Relationship between limit and rescore on correct](/articles_data/binary-quantization/bq-7.png) **Limit:** We experiment with limits from Top 1 to Top 50 and we are able to get to 100% recall at limit 50, with rescore=True, in an index with 100K vectors. ## Recommendations Quantization gives you the option to make tradeoffs against other parameters: Dimension count/embedding size Throughput and Latency requirements Recall requirements If you're working with OpenAI or Cohere embeddings, we recommend the following oversampling settings: |Method|Dimensionality|Test Dataset|Recall|Oversampling| |-|-|-|-|-| |OpenAI text-embedding-3-large|3072|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-3072-1M) | 0.9966|3x| |OpenAI text-embedding-3-small|1536|[DBpedia 100K](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-small-1536-100K)| 0.9847|3x| |OpenAI text-embedding-3-large|1536|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M)| 0.9826|3x| |Cohere AI embed-english-v2.0|4096|[Wikipedia](https://huggingface.co/datasets/nreimers/wikipedia-22-12-large/tree/main) 1M|0.98|2x| |OpenAI text-embedding-ada-002|1536|[DbPedia 1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) |0.98|4x| |Gemini|768|No Open Data| 0.9563|3x| |Mistral Embed|768|No Open Data| 0.9445 |3x| If you determine that binary quantization is appropriate for your datasets and queries then we suggest the following: - Binary Quantization with always_ram=True - Vectors stored on disk - Oversampling=2.0 (or more) - Rescore=True ## What's next? Binary quantization is exceptional if you need to work with large volumes of data under high recall expectations. You can try this feature either by spinning up a [Qdrant container image](https://hub.docker.com/r/qdrant/qdrant) locally or, having us create one for you through a [free account](https://cloud.qdrant.io/login) in our cloud hosted service. The article gives examples of data sets and configuration you can use to get going. Our documentation covers [adding large datasets to Qdrant](/documentation/tutorials/bulk-upload/) to your Qdrant instance as well as [more quantization methods](/documentation/guides/quantization/). If you have any feedback, drop us a note on Twitter or LinkedIn to tell us about your results. [Join our lively Discord Server](https://discord.gg/Qy6HCJK9Dc) if you want to discuss BQ with like-minded people!
qdrant-landing/content/articles/cars-recognition.md
--- title: Fine Tuning Similar Cars Search short_description: "How to use similarity learning to search for similar cars" description: Learn how to train a similarity model that can retrieve similar car images in novel categories. social_preview_image: /articles_data/cars-recognition/preview/social_preview.jpg small_preview_image: /articles_data/cars-recognition/icon.svg preview_dir: /articles_data/cars-recognition/preview weight: 10 author: Yusuf Sarıgöz author_link: https://medium.com/@yusufsarigoz date: 2022-06-28T13:00:00+03:00 draft: false # aliases: [ /articles/cars-recognition/ ] --- Supervised classification is one of the most widely used training objectives in machine learning, but not every task can be defined as such. For example, 1. Your classes may change quickly —e.g., new classes may be added over time, 2. You may not have samples from every possible category, 3. It may be impossible to enumerate all the possible classes during the training time, 4. You may have an essentially different task, e.g., search or retrieval. All such problems may be efficiently solved with similarity learning. N.B.: If you are new to the similarity learning concept, checkout the [awesome-metric-learning](https://github.com/qdrant/awesome-metric-learning) repo for great resources and use case examples. However, similarity learning comes with its own difficulties such as: 1. Need for larger batch sizes usually, 2. More sophisticated loss functions, 3. Changing architectures between training and inference. Quaterion is a fine tuning framework built to tackle such problems in similarity learning. It uses [PyTorch Lightning](https://www.pytorchlightning.ai/) as a backend, which is advertized with the motto, "spend more time on research, less on engineering." This is also true for Quaterion, and it includes: 1. Trainable and servable model classes, 2. Annotated built-in loss functions, and a wrapper over [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/) when you need even more, 3. Sample, dataset and data loader classes to make it easier to work with similarity learning data, 4. A caching mechanism for faster iterations and less memory footprint. ## A closer look at Quaterion Let's break down some important modules: - `TrainableModel`: A subclass of `pl.LightNingModule` that has additional hook methods such as `configure_encoders`, `configure_head`, `configure_metrics` and others to define objects needed for training and evaluation —see below to learn more on these. - `SimilarityModel`: An inference-only export method to boost code transfer and lower dependencies during the inference time. In fact, Quaterion is composed of two packages: 1. `quaterion_models`: package that you need for inference. 2. `quaterion`: package that defines objects needed for training and also depends on `quaterion_models`. - `Encoder` and `EncoderHead`: Two objects that form a `SimilarityModel`. In most of the cases, you may use a frozen pretrained encoder, e.g., ResNets from `torchvision`, or language modelling models from `transformers`, with a trainable `EncoderHead` stacked on top of it. `quaterion_models` offers several ready-to-use `EncoderHead` implementations, but you may also create your own by subclassing a parent class or easily listing PyTorch modules in a `SequentialHead`. Quaterion has other objects such as distance functions, evaluation metrics, evaluators, convenient dataset and data loader classes, but these are mostly self-explanatory. 
Thus, they will not be explained in detail in this article for brevity. However, you can always go check out the [documentation](https://quaterion.qdrant.tech) to learn more about them. The focus of this tutorial is a step-by-step solution to a similarity learning problem with Quaterion. This will also help us better understand how the abovementioned objects fit together in a real project. Let's start walking through some of the important parts of the code. If you are looking for the complete source code instead, you can find it under the [examples](https://github.com/qdrant/quaterion/tree/master/examples/cars) directory in the Quaterion repo. ## Dataset In this tutorial, we will use the [Stanford Cars](https://pytorch.org/vision/main/generated/torchvision.datasets.StanfordCars.html) dataset. {{< figure src=https://storage.googleapis.com/quaterion/docs/class_montage.jpg caption="Stanford Cars Dataset" >}} It has 16185 images of cars from 196 classes, and it is split into training and testing subsets with almost a 50-50% split. To make things even more interesting, however, we will first merge training and testing subsets, then we will split it into two again in such a way that the half of the 196 classes will be put into the training set and the other half will be in the testing set. This will let us test our model with samples from novel classes that it has never seen in the training phase, which is what supervised classification cannot achieve but similarity learning can. In the following code borrowed from [`data.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/data.py): - `get_datasets()` function performs the splitting task described above. - `get_dataloaders()` function creates `GroupSimilarityDataLoader` instances from training and testing datasets. - Datasets are regular PyTorch datasets that emit `SimilarityGroupSample` instances. N.B.: Currently, Quaterion has two data types to represent samples in a dataset. To learn more about `SimilarityPairSample`, check out the [NLP tutorial](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html) ```python import numpy as np import os import tqdm from torch.utils.data import Dataset, Subset from torchvision import datasets, transforms from typing import Callable from pytorch_lightning import seed_everything from quaterion.dataset import ( GroupSimilarityDataLoader, SimilarityGroupSample, ) # set seed to deterministically sample train and test categories later on seed_everything(seed=42) # dataset will be downloaded to this directory under local directory dataset_path = os.path.join(".", "torchvision", "datasets") def get_datasets(input_size: int): # Use Mean and std values for the ImageNet dataset as the base model was pretrained on it. # taken from https://www.geeksforgeeks.org/how-to-normalize-images-in-pytorch/ mean = [0.485, 0.456, 0.406] std = [0.229, 0.224, 0.225] # create train and test transforms transform = transforms.Compose( [ transforms.Resize((input_size, input_size)), transforms.ToTensor(), transforms.Normalize(mean, std), ] ) # we need to merge train and test splits into a full dataset first, # and then we will split it to two subsets again with each one composed of distinct labels. 
full_dataset = datasets.StanfordCars( root=dataset_path, split="train", download=True ) + datasets.StanfordCars(root=dataset_path, split="test", download=True) # full_dataset contains examples from 196 categories labeled with an integer from 0 to 195 # randomly sample half of it to be used for training train_categories = np.random.choice(a=196, size=196 // 2, replace=False) # get a list of labels for all samples in the dataset labels_list = np.array([label for _, label in tqdm.tqdm(full_dataset)]) # get a mask for indices where label is included in train_categories labels_mask = np.isin(labels_list, train_categories) # get a list of indices to be used as train samples train_indices = np.argwhere(labels_mask).squeeze() # others will be used as test samples test_indices = np.argwhere(np.logical_not(labels_mask)).squeeze() # now that we have distinct indices for train and test sets, we can use `Subset` to create new datasets # from `full_dataset`, which contain only the samples at given indices. # finally, we apply transformations created above. train_dataset = CarsDataset( Subset(full_dataset, train_indices), transform=transform ) test_dataset = CarsDataset( Subset(full_dataset, test_indices), transform=transform ) return train_dataset, test_dataset def get_dataloaders( batch_size: int, input_size: int, shuffle: bool = False, ): train_dataset, test_dataset = get_datasets(input_size) train_dataloader = GroupSimilarityDataLoader( train_dataset, batch_size=batch_size, shuffle=shuffle ) test_dataloader = GroupSimilarityDataLoader( test_dataset, batch_size=batch_size, shuffle=False ) return train_dataloader, test_dataloader class CarsDataset(Dataset): def __init__(self, dataset: Dataset, transform: Callable): self._dataset = dataset self._transform = transform def __len__(self) -> int: return len(self._dataset) def __getitem__(self, index) -> SimilarityGroupSample: image, label = self._dataset[index] image = self._transform(image) return SimilarityGroupSample(obj=image, group=label) ``` ## Trainable Model Now it's time to review one of the most exciting building blocks of Quaterion: [TrainableModel](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#module-quaterion.train.trainable_model). It is the base class for models you would like to configure for training, and it provides several hook methods starting with `configure_` to set up every aspect of the training phase just like [`pl.LightningModule`](https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.core.LightningModule.html), its own base class. It is central to fine tuning with Quaterion, so we will break down this essential code in [`models.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/models.py) and review each method separately. Let's begin with the imports: ```python import torch import torchvision from quaterion_models.encoders import Encoder from quaterion_models.heads import EncoderHead, SkipConnectionHead from torch import nn from typing import Dict, Union, Optional, List from quaterion import TrainableModel from quaterion.eval.attached_metric import AttachedMetric from quaterion.eval.group import RetrievalRPrecision from quaterion.loss import SimilarityLoss, TripletLoss from quaterion.train.cache import CacheConfig, CacheType from .encoders import CarsEncoder ``` In the following code snippet, we subclass `TrainableModel`. You may use `__init__()` to store some attributes to be used in various `configure_*` methods later on. 
The more interesting part is, however, in the [`configure_encoders()`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.configure_encoders) method. We need to return an instance of [`Encoder`](https://quaterion-models.qdrant.tech/quaterion_models.encoders.encoder.html#quaterion_models.encoders.encoder.Encoder) (or a dictionary with `Encoder` instances as values) from this method. In our case, it is an instance of `CarsEncoders`, which we will review soon. Notice now how it is created with a pretrained ResNet152 model whose classification layer is replaced by an identity function. ```python class Model(TrainableModel): def __init__(self, lr: float, mining: str): self._lr = lr self._mining = mining super().__init__() def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]: pre_trained_encoder = torchvision.models.resnet152(pretrained=True) pre_trained_encoder.fc = nn.Identity() return CarsEncoder(pre_trained_encoder) ``` In Quaterion, a [`SimilarityModel`](https://quaterion-models.qdrant.tech/quaterion_models.model.html#quaterion_models.model.SimilarityModel) is composed of one or more `Encoder`s and an [`EncoderHead`](https://quaterion-models.qdrant.tech/quaterion_models.heads.encoder_head.html#quaterion_models.heads.encoder_head.EncoderHead). `quaterion_models` has [several `EncoderHead` implementations](https://quaterion-models.qdrant.tech/quaterion_models.heads.html#module-quaterion_models.heads) with a unified API such as a configurable dropout value. You may use one of them or create your own subclass of `EncoderHead`. In either case, you need to return an instance of it from [`configure_head`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.configure_head) In this example, we will use a `SkipConnectionHead`, which is lightweight and more resistant to overfitting. ```python def configure_head(self, input_embedding_size) -> EncoderHead: return SkipConnectionHead(input_embedding_size, dropout=0.1) ``` Quaterion has implementations of [some popular loss functions](https://quaterion.qdrant.tech/quaterion.loss.html) for similarity learning, all of which subclass either [`GroupLoss`](https://quaterion.qdrant.tech/quaterion.loss.group_loss.html#quaterion.loss.group_loss.GroupLoss) or [`PairwiseLoss`](https://quaterion.qdrant.tech/quaterion.loss.pairwise_loss.html#quaterion.loss.pairwise_loss.PairwiseLoss). In this example, we will use [`TripletLoss`](https://quaterion.qdrant.tech/quaterion.loss.triplet_loss.html#quaterion.loss.triplet_loss.TripletLoss), which is a subclass of `GroupLoss`. In general, subclasses of `GroupLoss` are used with datasets in which samples are assigned with some group (or label). In our example label is a make of the car. Those datasets should emit `SimilarityGroupSample`. Other alternatives are implementations of `PairwiseLoss`, which consume `SimilarityPairSample` - pair of objects for which similarity is specified individually. To see an example of the latter, you may need to check out the [NLP Tutorial](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html) ```python def configure_loss(self) -> SimilarityLoss: return TripletLoss(mining=self._mining, margin=0.5) ``` `configure_optimizers()` may be familiar to PyTorch Lightning users, but there is a novel `self.model` used inside that method. 
It is an instance of `SimilarityModel` and is automatically created by Quaterion from the return values of `configure_encoders()` and `configure_head()`. ```python def configure_optimizers(self): optimizer = torch.optim.Adam(self.model.parameters(), self._lr) return optimizer ``` Caching in Quaterion is used for avoiding calculation of outputs of a frozen pretrained `Encoder` in every epoch. When it is configured, outputs will be computed once and cached in the preferred device for direct usage later on. It provides both a considerable speedup and less memory footprint. However, it is quite a bit versatile and has several knobs to tune. To get the most out of its potential, it's recommended that you check out the [cache tutorial](https://quaterion.qdrant.tech/tutorials/cache_tutorial.html). For the sake of making this article self-contained, you need to return a [`CacheConfig`](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheConfig) instance from [`configure_caches()`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.configure_caches) to specify cache-related preferences such as: - [`CacheType`](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheType), i.e., whether to store caches on CPU or GPU, - `save_dir`, i.e., where to persist caches for subsequent runs, - `batch_size`, i.e., batch size to be used only when creating caches - the batch size to be used during the actual training might be different. ```python def configure_caches(self) -> Optional[CacheConfig]: return CacheConfig( cache_type=CacheType.AUTO, save_dir="./cache_dir", batch_size=32 ) ``` We have just configured the training-related settings of a `TrainableModel`. However, evaluation is an integral part of experimentation in machine learning, and you may configure evaluation metrics by returning one or more [`AttachedMetric`](https://quaterion.qdrant.tech/quaterion.eval.attached_metric.html#quaterion.eval.attached_metric.AttachedMetric) instances from `configure_metrics()`. Quaterion has several built-in [group](https://quaterion.qdrant.tech/quaterion.eval.group.html) and [pairwise](https://quaterion.qdrant.tech/quaterion.eval.pair.html) evaluation metrics. ```python def configure_metrics(self) -> Union[AttachedMetric, List[AttachedMetric]]: return AttachedMetric( "rrp", metric=RetrievalRPrecision(), prog_bar=True, on_epoch=True, on_step=False, ) ``` ## Encoder As previously stated, a `SimilarityModel` is composed of one or more `Encoder`s and an `EncoderHead`. Even if we freeze pretrained `Encoder` instances, `EncoderHead` is still trainable and has enough parameters to adapt to the new task at hand. It is recommended that you set the `trainable` property to `False` whenever possible, as it lets you benefit from the caching mechanism described above. Another important property is `embedding_size`, which will be passed to `TrainableModel.configure_head()` as `input_embedding_size` to let you properly initialize the head layer. 
Let's see how an `Encoder` is implemented in the following code borrowed from [`encoders.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/encoders.py): ```python import os import torch import torch.nn as nn from quaterion_models.encoders import Encoder class CarsEncoder(Encoder): def __init__(self, encoder_model: nn.Module): super().__init__() self._encoder = encoder_model self._embedding_size = 2048 # last dimension from the ResNet model @property def trainable(self) -> bool: return False @property def embedding_size(self) -> int: return self._embedding_size ``` An `Encoder` is a regular `torch.nn.Module` subclass, and we need to implement the forward pass logic in the `forward` method. Depending on how you create your submodules, this method may be more complex; however, we simply pass the input through a pretrained ResNet152 backbone in this example: ```python def forward(self, images): embeddings = self._encoder.forward(images) return embeddings ``` An important step of machine learning development is proper saving and loading of models. Quaterion lets you save your `SimilarityModel` with [`TrainableModel.save_servable()`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.save_servable) and restore it with [`SimilarityModel.load()`](https://quaterion-models.qdrant.tech/quaterion_models.model.html#quaterion_models.model.SimilarityModel.load). To be able to use these two methods, you need to implement `save()` and `load()` methods in your `Encoder`. Additionally, it is also important that you define your subclass of `Encoder` outside the `__main__` namespace, i.e., in a separate file from your main entry point. It may not be restored properly otherwise. ```python def save(self, output_path: str): os.makedirs(output_path, exist_ok=True) torch.save(self._encoder, os.path.join(output_path, "encoder.pth")) @classmethod def load(cls, input_path): encoder_model = torch.load(os.path.join(input_path, "encoder.pth")) return CarsEncoder(encoder_model) ``` ## Training With all essential objects implemented, it is easy to bring them all together and run a training loop with the [`Quaterion.fit()`](https://quaterion.qdrant.tech/quaterion.main.html#quaterion.main.Quaterion.fit) method. It expects: - A `TrainableModel`, - A [`pl.Trainer`](https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html), - A [`SimilarityDataLoader`](https://quaterion.qdrant.tech/quaterion.dataset.similarity_data_loader.html#quaterion.dataset.similarity_data_loader.SimilarityDataLoader) for training data, - And optionally, another `SimilarityDataLoader` for evaluation data. We need to import a few objects to prepare all of these: ```python import os import pytorch_lightning as pl import torch from pytorch_lightning.callbacks import EarlyStopping, ModelSummary from quaterion import Quaterion from .data import get_dataloaders from .models import Model ``` The `train()` function in the following code snippet expects several hyperparameter values as arguments. They can be defined in a `config.py` or passed from the command line. However, that part of the code is omitted for brevity. Instead let's focus on how all the building blocks are initialized and passed to `Quaterion.fit()`, which is responsible for running the whole loop. 
When the training loop is complete, you can simply call `TrainableModel.save_servable()` to save the current state of the `SimilarityModel` instance: ```python def train( lr: float, mining: str, batch_size: int, epochs: int, input_size: int, shuffle: bool, save_dir: str, ): model = Model( lr=lr, mining=mining, ) train_dataloader, val_dataloader = get_dataloaders( batch_size=batch_size, input_size=input_size, shuffle=shuffle ) early_stopping = EarlyStopping( monitor="validation_loss", patience=50, ) trainer = pl.Trainer( gpus=1 if torch.cuda.is_available() else 0, max_epochs=epochs, callbacks=[early_stopping, ModelSummary(max_depth=3)], enable_checkpointing=False, log_every_n_steps=1, ) Quaterion.fit( trainable_model=model, trainer=trainer, train_dataloader=train_dataloader, val_dataloader=val_dataloader, ) model.save_servable(save_dir) ``` ## Evaluation Let's see what we have achieved with these simple steps. [`evaluate.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/evaluate.py) has two functions to evaluate both the baseline model and the tuned similarity model. We will review only the latter for brevity. In addition to the ease of restoring a `SimilarityModel`, this code snippet also shows how to use [`Evaluator`](https://quaterion.qdrant.tech/quaterion.eval.evaluator.html#quaterion.eval.evaluator.Evaluator) to evaluate the performance of a `SimilarityModel` on a given dataset by given evaluation metrics. {{< figure src=https://storage.googleapis.com/quaterion/docs/original_vs_tuned_cars.png caption="Comparison of original and tuned models for retrieval" >}} Full evaluation of a dataset usually grows exponentially, and thus you may want to perform a partial evaluation on a sampled subset. In this case, you may use [samplers](https://quaterion.qdrant.tech/quaterion.eval.samplers.html) to limit the evaluation. Similar to `Quaterion.fit()` used for training, [`Quaterion.evaluate()`](https://quaterion.qdrant.tech/quaterion.main.html#quaterion.main.Quaterion.evaluate) runs a complete evaluation loop. It takes the following as arguments: - An `Evaluator` instance created with given evaluation metrics and a `Sampler`, - The `SimilarityModel` to be evaluated, - And the evaluation dataset. ```python def eval_tuned_encoder(dataset, device): print("Evaluating tuned encoder...") tuned_cars_model = SimilarityModel.load( os.path.join(os.path.dirname(__file__), "cars_encoders") ).to(device) tuned_cars_model.eval() result = Quaterion.evaluate( evaluator=Evaluator( metrics=RetrievalRPrecision(), sampler=GroupSampler(sample_size=1000, device=device, log_progress=True), ), model=tuned_cars_model, dataset=dataset, ) print(result) ``` ## Conclusion In this tutorial, we trained a similarity model to search for similar cars from novel categories unseen in the training phase. Then, we evaluated it on a test dataset by the Retrieval R-Precision metric. The base model scored 0.1207, and our tuned model hit 0.2540, a twice higher score. These scores can be seen in the following figure: {{< figure src=/articles_data/cars-recognition/cars_metrics.png caption="Metrics for the base and tuned models" >}}
qdrant-landing/content/articles/chatgpt-plugin.md
--- title: Extending ChatGPT with a Qdrant-based knowledge base short_description: "ChatGPT factuality might be improved with semantic search. Here is how." description: "ChatGPT factuality might be improved with semantic search. Here is how." social_preview_image: /articles_data/chatgpt-plugin/social_preview.jpg small_preview_image: /articles_data/chatgpt-plugin/chatgpt-plugin-icon.svg preview_dir: /articles_data/chatgpt-plugin/preview weight: 7 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-03-23T18:01:00+01:00 draft: false keywords: - openai - chatgpt - chatgpt plugin - knowledge base - similarity search --- In recent months, ChatGPT has revolutionised the way we communicate, learn, and interact with technology. Our social platforms got flooded with prompts, responses to them, whole articles and countless other examples of using Large Language Models to generate content unrecognisable from the one written by a human. Despite their numerous benefits, these models have flaws, as evidenced by the phenomenon of hallucination - the generation of incorrect or nonsensical information in response to user input. This issue, which can compromise the reliability and credibility of AI-generated content, has become a growing concern among researchers and users alike. Those concerns started another wave of entirely new libraries, such as Langchain, trying to overcome those issues, for example, by combining tools like vector databases to bring the required context into the prompts. And that is, so far, the best way to incorporate new and rapidly changing knowledge into the neural model. So good that OpenAI decided to introduce a way to extend the model capabilities with external plugins at the model level. These plugins, designed to enhance the model's performance, serve as modular extensions that seamlessly interface with the core system. By adding a knowledge base plugin to ChatGPT, we can effectively provide the AI with a curated, trustworthy source of information, ensuring that the generated content is more accurate and relevant. Qdrant may act as a vector database where all the facts will be stored and served to the model upon request. If you’d like to ask ChatGPT questions about your data sources, such as files, notes, or emails, starting with the official [ChatGPT retrieval plugin repository](https://github.com/openai/chatgpt-retrieval-plugin) is the easiest way. Qdrant is already integrated, so that you can use it right away. In the following sections, we will guide you through setting up the knowledge base using Qdrant and demonstrate how this powerful combination can significantly improve ChatGPT's performance and output quality. ## Implementing a knowledge base with Qdrant The official ChatGPT retrieval plugin uses a vector database to build your knowledge base. Your documents are chunked and vectorized with the OpenAI's text-embedding-ada-002 model to be stored in Qdrant. That enables semantic search capabilities. So, whenever ChatGPT thinks it might be relevant to check the knowledge base, it forms a query and sends it to the plugin to incorporate the results into its response. You can now modify the knowledge base, and ChatGPT will always know the most recent facts. No model fine-tuning is required. Let’s implement that for your documents. In our case, this will be Qdrant’s documentation, so you can ask even technical questions about Qdrant directly in ChatGPT. Everything starts with cloning the plugin's repository. 
```bash git clone git@github.com:openai/chatgpt-retrieval-plugin.git ``` Please use your favourite IDE to open the project once cloned. ### Prerequisites You’ll need to ensure three things before we start: 1. Create an OpenAI API key, so you can use their embeddings model programmatically. If you already have an account, you can generate one at https://platform.openai.com/account/api-keys. Otherwise, registering an account might be required. 2. Run a Qdrant instance. The instance has to be reachable from the outside, so you either need to launch it on-premise or use the [Qdrant Cloud](https://cloud.qdrant.io/) offering. A free 1GB cluster is available, which might be enough in many cases. We’ll use the cloud. 3. Since ChatGPT will interact with your service through the network, you must deploy it, making it possible to connect from the Internet. Unfortunately, localhost is not an option, but any provider, such as Heroku or fly.io, will work perfectly. We will use [fly.io](https://fly.io/), so please register an account. You may also need to install the flyctl tool for the deployment. The process is described on the homepage of fly.io. ### Configuration The retrieval plugin is a FastAPI-based application, and its default functionality might be enough in most cases. However, some configuration is required so ChatGPT knows how and when to use it. However, we can start setting up Fly.io, as we need to know the service's hostname to configure it fully. First, let’s login into the Fly CLI: ```bash flyctl auth login ``` That will open the browser, so you can simply provide the credentials, and all the further commands will be executed with your account. If you have never used fly.io, you may need to give the credit card details before running any instance, but there is a Hobby Plan you won’t be charged for. Let’s try to launch the instance already, but do not deploy it. We’ll get the hostname assigned and have all the details to fill in the configuration. The retrieval plugin uses TCP port 8080, so we need to configure fly.io, so it redirects all the traffic to it as well. ```bash flyctl launch --no-deploy --internal-port 8080 ``` We’ll be prompted about the application name and the region it should be deployed to. Please choose whatever works best for you. After that, we should see the hostname of the newly created application: ```text ... Hostname: your-application-name.fly.dev ... ``` Let’s note it down. We’ll need it for the configuration of the service. But we’re going to start with setting all the applications secrets: ```bash flyctl secrets set DATASTORE=qdrant \ OPENAI_API_KEY=<your-openai-api-key> \ QDRANT_URL=https://<your-qdrant-instance>.aws.cloud.qdrant.io \ QDRANT_API_KEY=<your-qdrant-api-key> \ BEARER_TOKEN=eyJhbGciOiJIUzI1NiJ9.e30.ZRrHA1JJJW8opsbCGfG_HACGpVUMN_a9IV7pAx_Zmeo ``` The secrets will be staged for the first deployment. There is an example of a minimal Bearer token generated by https://jwt.io/. **Please adjust the token and do not expose it publicly, but you can keep the same value for the demo.** Right now, let’s dive into the application config files. You can optionally provide your icon and keep it as `.well-known/logo.png` file, but there are two additional files we’re going to modify. The `.well-known/openapi.yaml` file describes the exposed API in the OpenAPI format. Lines 3 to 5 might be filled with the application title and description, but the essential part is setting the server URL the application will run. 
Eventually, the top part of the file should look like the following: ```yaml openapi: 3.0.0 info: title: Qdrant Plugin API version: 1.0.0 description: Plugin for searching through the Qdrant doc… servers: - url: https://your-application-name.fly.dev ... ``` There is another file in the same directory, and that’s the most crucial piece to configure. It contains the description of the plugin we’re implementing, and ChatGPT uses this description to determine if it should communicate with our knowledge base. The file is called `.well-known/ai-plugin.json`, and let’s edit it before we finally deploy the app. There are various properties we need to fill in: | **Property** | **Meaning** | **Example** | |-------------------------|----------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | `name_for_model` | Name of the plugin for the ChatGPT model | *qdrant* | | `name_for_human` | Human-friendly model name, to be displayed in ChatGPT UI | *Qdrant Documentation Plugin* | | `description_for_model` | Description of the purpose of the plugin, so ChatGPT knows in what cases it should be using it to answer a question. | *Plugin for searching through the Qdrant documentation to find answers to questions and retrieve relevant information. Use it whenever a user asks something that might be related to Qdrant vector database or semantic vector search* | | `description_for_human` | Short description of the plugin, also to be displayed in the ChatGPT UI. | *Search through Qdrant docs* | | `auth` | Authorization scheme used by the application. By default, the bearer token has to be configured. | ```{"type": "user_http", "authorization_type": "bearer"}``` | | `api.url` | Link to the OpenAPI schema definition. Please adjust based on your application URL. | *https://your-application-name.fly.dev/.well-known/openapi.yaml* | | `logo_url` | Link to the application logo. Please adjust based on your application URL. | *https://your-application-name.fly.dev/.well-known/logo.png* | A complete file may look as follows: ```json { "schema_version": "v1", "name_for_model": "qdrant", "name_for_human": "Qdrant Documentation Plugin", "description_for_model": "Plugin for searching through the Qdrant documentation to find answers to questions and retrieve relevant information. Use it whenever a user asks something that might be related to Qdrant vector database or semantic vector search", "description_for_human": "Search through Qdrant docs", "auth": { "type": "user_http", "authorization_type": "bearer" }, "api": { "type": "openapi", "url": "https://your-application-name.fly.dev/.well-known/openapi.yaml", "has_user_authentication": false }, "logo_url": "https://your-application-name.fly.dev/.well-known/logo.png", "contact_email": "email@domain.com", "legal_info_url": "email@domain.com" } ``` That was the last step before running the final command. The command that will deploy the application on the server: ```bash flyctl deploy ``` The command will build the image using the Dockerfile and deploy the service at a given URL. 
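While `flyctl deploy` runs (and after it completes), the standard Fly CLI can report the application state and stream its logs — a quick sketch, with the exact output depending on your flyctl version:

```bash
# Show the state of the deployed application
flyctl status

# Stream the application logs to confirm the server is listening on port 8080
flyctl logs
```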
Once the command is finished, the service should be running on the hostname we got previously: ```text https://your-application-name.fly.dev ``` ## Integration with ChatGPT Once we have deployed the service, we can point ChatGPT to it, so the model knows how to connect. When you open the ChatGPT UI, you should see a dropdown with a Plugins tab included: ![](/articles_data/chatgpt-plugin/step-1.png) Once selected, you should be able to choose one of check the plugin store: ![](/articles_data/chatgpt-plugin/step-2.png) There are some premade plugins available, but there’s also a possibility to install your own plugin by clicking on the "*Develop your own plugin*" option in the bottom right corner: ![](/articles_data/chatgpt-plugin/step-3.png) We need to confirm our plugin is ready, but since we relied on the official retrieval plugin from OpenAI, this should be all fine: ![](/articles_data/chatgpt-plugin/step-4.png) After clicking on "*My manifest is ready*", we can already point ChatGPT to our newly created service: ![](/articles_data/chatgpt-plugin/step-5.png) A successful plugin installation should end up with the following information: ![](/articles_data/chatgpt-plugin/step-6.png) There is a name and a description of the plugin we provided. Let’s click on "*Done*" and return to the "*Plugin store*" window again. There is another option we need to choose in the bottom right corner: ![](/articles_data/chatgpt-plugin/step-7.png) Our plugin is not officially verified, but we can, of course, use it freely. The installation requires just the service URL: ![](/articles_data/chatgpt-plugin/step-8.png) OpenAI cannot guarantee the plugin provides factual information, so there is a warning we need to accept: ![](/articles_data/chatgpt-plugin/step-9.png) Finally, we need to provide the Bearer token again: ![](/articles_data/chatgpt-plugin/step-10.png) Our plugin is now ready to be tested. Since there is no data inside the knowledge base, extracting any facts is impossible, but we’re going to put some data using the Swagger UI exposed by our service at https://your-application-name.fly.dev/docs. We need to authorize first, and then call the upsert method with some docs. For the demo purposes, we can just put a single document extracted from the Qdrant documentation to see whether integration works properly: ![](/articles_data/chatgpt-plugin/step-11.png) We can come back to ChatGPT UI, and send a prompt, but we need to make sure the plugin is selected: ![](/articles_data/chatgpt-plugin/step-12.png) Now if our prompt seems somehow related to the plugin description provided, the model will automatically form a query and send it to the HTTP API. The query will get vectorized by our app, and then used to find some relevant documents that will be used as a context to generate the response. ![](/articles_data/chatgpt-plugin/step-13.png) We have a powerful language model, that can interact with our knowledge base, to return not only grammatically correct but also factual information. And this is how your interactions with the model may start to look like: <iframe width="560" height="315" src="https://www.youtube.com/embed/fQUGuHEYeog" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> However, a single document is not enough to enable the full power of the plugin. 
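For reference, the same single-document upsert can also be scripted instead of going through the Swagger UI — below is a sketch of a plain HTTP call to the plugin's `/upsert` endpoint, based on the official retrieval plugin's API (the hostname, Bearer token, and payload values are placeholders to replace with your own):

```bash
curl -X POST "https://your-application-name.fly.dev/upsert" \
  -H "Authorization: Bearer <your-bearer-token>" \
  -H "Content-Type: application/json" \
  -d '{
        "documents": [
          {
            "id": "qdrant-docs-1",
            "text": "Qdrant is a vector similarity search engine that provides a production-ready service with a convenient API.",
            "metadata": {"source": "file", "url": "https://qdrant.tech/documentation/"}
          }
        ]
      }'
```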
If you want to add more documents that you have already collected, there are some scripts available in the `scripts/` directory that allow converting JSON, JSON Lines, or even zip archives.
qdrant-landing/content/articles/data-privacy.md
--- title: " Data Privacy with Qdrant: Implementing Role-Based Access Control (RBAC)" #required short_description: "Secure Your Data with Qdrant: Implementing RBAC" description: Discover how Qdrant's Role-Based Access Control (RBAC) ensures data privacy and compliance for your AI applications. Build secure and scalable systems with ease. Read more now! social_preview_image: /articles_data/data-privacy/preview/social_preview.jpg # This image will be used in social media previews, should be 1200x630px. Required. preview_dir: /articles_data/data-privacy/preview # This directory contains images that will be used in the article preview. They can be generated from one image. Read more below. Required. weight: -110 # This is the order of the article in the list of articles at the footer. The lower the number, the higher the article will be in the list. author: Qdrant Team # Author of the article. Required. author_link: https://qdrant.tech/ # Link to the author's page. Required. date: 2024-06-18T08:00:00-03:00 # Date of the article. Required. draft: false # If true, the article will not be published keywords: # Keywords for SEO - Role-Based Access Control (RBAC) - Data Privacy in Vector Databases - Secure AI Data Management - Qdrant Data Security - Enterprise Data Compliance --- Data stored in vector databases is often proprietary to the enterprise and may include sensitive information like customer records, legal contracts, electronic health records (EHR), financial data, and intellectual property. Moreover, strong security measures become critical to safeguarding this data. If the data stored in a vector database is not secured, it may open a vulnerability known as "[embedding inversion attack](https://arxiv.org/abs/2004.00053)," where malicious actors could potentially [reconstruct the original data from the embeddings](https://arxiv.org/pdf/2305.03010) themselves. Strict compliance regulations govern data stored in vector databases across various industries. For instance, healthcare must comply with HIPAA, which dictates how protected health information (PHI) is stored, transmitted, and secured. Similarly, the financial services industry follows PCI DSS to safeguard sensitive financial data. These regulations require developers to ensure data storage and transmission comply with industry-specific legal frameworks across different regions. **As a result, features that enable data privacy, security and sovereignty are deciding factors when choosing the right vector database.** This article explores various strategies to ensure the security of your critical data while leveraging the benefits of vector search. Implementing some of these security approaches can help you build privacy-enhanced similarity search algorithms and integrate them into your AI applications. Additionally, you will learn how to build a fully data-sovereign architecture, allowing you to retain control over your data and comply with relevant data laws and regulations. > To skip right to the code implementation, [click here](/articles/data-privacy/#jwt-on-qdrant). ## Vector Database Security: An Overview Vector databases are often unsecured by default to facilitate rapid prototyping and experimentation. This approach allows developers to quickly ingest data, build vector representations, and test similarity search algorithms without initial security concerns. However, in production environments, unsecured databases pose significant data breach risks. For production use, robust security systems are essential. 
Authentication, particularly using static API keys, is a common approach to control access and prevent unauthorized modifications. Yet, simple API authentication is insufficient for enterprise data, which requires granular control. The primary challenge with static API keys is their all-or-nothing access, inadequate for role-based data segregation in enterprise applications. Additionally, a compromised key could grant attackers full access to manipulate or steal data. To strengthen the security of the vector database, developers typically need the following: 1. **Encryption**: This ensures that sensitive data is scrambled as it travels between the application and the vector database. This safeguards against Man-in-the-Middle ([MitM](https://en.wikipedia.org/wiki/Man-in-the-middle_attack)) attacks, where malicious actors can attempt to intercept and steal data during transmission. 2. **Role-Based Access Control**: As mentioned before, traditional static API keys grant all-or-nothing access, which is a significant security risk in enterprise environments. RBAC offers a more granular approach by defining user roles and assigning specific data access permissions based on those roles. For example, an analyst might have read-only access to specific datasets, while an administrator might have full CRUD (Create, Read, Update, Delete) permissions across the database. 3. **Deployment Flexibility**: Data residency regulations like GDPR (General Data Protection Regulation) and industry-specific compliance requirements dictate where data can be stored, processed, and accessed. Developers would need to choose a database solution which offers deployment options that comply with these regulations. This might include on-premise deployments within a company's private cloud or geographically distributed cloud deployments that adhere to data residency laws. ## How Qdrant Handles Data Privacy and Security One of the cornerstones of our design choices at Qdrant has been the focus on security features. We have built in a range of features keeping the enterprise user in mind, which allow building of granular access control on a fully data sovereign architecture. A Qdrant instance is unsecured by default. However, when you are ready to deploy in production, Qdrant offers a range of security features that allow you to control access to your data, protect it from breaches, and adhere to regulatory requirements. Using Qdrant, you can build granular access control, segregate roles and privileges, and create a fully data sovereign architecture. ### API Keys and TLS Encryption For simpler use cases, Qdrant offers API key-based authentication. This includes both regular API keys and read-only API keys. Regular API keys grant full access to read, write, and delete operations, while read-only keys restrict access to data retrieval operations only, preventing write actions. On Qdrant Cloud, you can create API keys using the [Cloud Dashboard](https://qdrant.to/cloud). This allows you to generate API keys that give you access to a single node or cluster, or multiple clusters. You can read the steps to do so [here](/documentation/cloud/authentication/). ![web-ui](/articles_data/data-privacy/web-ui.png) For on-premise or local deployments, you'll need to configure API key authentication. This involves specifying a key in either the Qdrant configuration file or as an environment variable. This ensures that all requests to the server must include a valid API key sent in the header. 
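As a minimal sketch of that setup, the key can be set in the Qdrant configuration file (the value below is a placeholder):

```yaml
# config/config.yaml
service:
  # All requests must carry this key in the `api-key` header
  api_key: your_secret_api_key_here
  # Optionally, a separate key that only allows read operations
  read_only_api_key: your_read_only_key_here
```

The same settings can also be provided as environment variables (e.g., `QDRANT__SERVICE__API_KEY`), which is convenient for containerized deployments.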
When using the simple API key-based authentication, you should also turn on TLS encryption. Otherwise, you are exposing the connection to sniffing and MitM attacks. To secure your connection using TLS, you would need to create a certificate and private key, and then [enable TLS](/documentation/guides/security/#tls) in the configuration. API authentication, coupled with TLS encryption, offers a first layer of security for your Qdrant instance. However, to enable more granular access control, the recommended approach is to leverage JSON Web Tokens (JWTs). ### JWT on Qdrant JSON Web Tokens (JWTs) are a compact, URL-safe, and stateless means of representing _claims_ to be transferred between two parties. These claims are encoded as a JSON object and are cryptographically signed. JWT is composed of three parts: a header, a payload, and a signature, which are concatenated with dots (.) to form a single string. The header contains the type of token and algorithm being used. The payload contains the claims (explained in detail later). The signature is a cryptographic hash and ensures the token’s integrity. In Qdrant, JWT forms the foundation through which powerful access controls can be built. Let’s understand how. JWT is enabled on the Qdrant instance by specifying the API key and turning on the **jwt_rbac** feature in the configuration (alternatively, they can be set as environment variables). For any subsequent request, the API key is used to encode or decode the token. The way JWT works is that just the API key is enough to generate the token, and doesn’t require any communication with the Qdrant instance or server. There are several libraries that help generate tokens by encoding a payload, such as [PyJWT](https://pyjwt.readthedocs.io/en/stable/) (for Python), [jsonwebtoken](https://www.npmjs.com/package/jsonwebtoken) (for JavaScript), and [jsonwebtoken](https://crates.io/crates/jsonwebtoken) (for Rust). Qdrant uses the HS256 algorithm to encode or decode the tokens. We will look at the payload structure shortly, but here’s how you can generate a token using PyJWT. ```python import jwt import datetime # Define your API key and other payload data api_key = "your_api_key" payload = { ... } token = jwt.encode(payload, api_key, algorithm="HS256") print(token) ``` Once you have generated the token, you should include it in the subsequent requests. You can do so by providing it as a bearer token in the Authorization header, or in the API Key header of your requests. Below is an example of how to do so using QdrantClient in Python: ```python from qdrant_client import QdrantClient qdrant_client = QdrantClient( "http://localhost:6333", api_key="<JWT>", # the token goes here ) # Example search vector search_vector = [0.1, 0.2, 0.3, 0.4] # Example similarity search request response = qdrant_client.search( collection_name="demo_collection", query_vector=search_vector, limit=5 # Number of results to retrieve ) ``` For convenience, we have added a JWT generation tool in the Qdrant Web UI, which is present under the 🔑 tab. For your local deployments, you will find it at [http://localhost:6333/dashboard#/jwt](http://localhost:6333/dashboard#/jwt). ### Payload Configuration There are several different options (claims) you can use in the JWT payload that help control access and functionality. Let’s look at them one by one. **exp**: This claim is the expiration time of the token, and is a unix timestamp in seconds. After the expiration time, the token will be invalid. 
**value_exists**: This claim validates the token against a specific key-value stored in a collection. By using this claim, you can revoke access by simply changing a value without having to invalidate the API key. **access**: This claim defines the access level of the token. The access level can be global read (r) or manage (m). It can also be specific to a collection, or even a subset of a collection, using read (r) and read-write (rw). Let’s look at a few example JWT payload configurations. **Scenario 1: 1-hour expiry time, and read-only access to a collection** ```json { "exp": 1690995200, // Set to 1 hour from the current time (Unix timestamp) "access": [ { "collection": "demo_collection", "access": "r" // Read-only access } ] } ``` **Scenario 2: 1-hour expiry time, and access to user with a specific role** Suppose you have a ‘users’ collection and have defined specific roles for each user, such as ‘developer’, ‘manager’, ‘admin’, ‘analyst’, and ‘revoked’. In such a scenario, you can use a combination of **exp** and **value_exists**. ```json { "exp": 1690995200, "value_exists": { "collection": "users", "matches": [ { "key": "username", "value": "john" }, { "key": "role", "value": "developer" } ], }, } ``` Now, if you ever want to revoke access for a user, simply change the value of their role. All future requests will be invalid using a token payload of the above type. **Scenario 3: 1-hour expiry time, and read-write access to a subset of a collection** You can even specify access levels specific to subsets of a collection. This can be especially useful when you are leveraging [multitenancy](/documentation/guides/multiple-partitions/), and want to segregate access. ```json { "exp": 1690995200, "access": [ { "collection": "demo_collection", "access": "r", "payload": { "user_id": "user_123456" } } ] } ``` By combining the claims, you can fully customize the access level that a user or a role has within the vector store. ### Creating Role-Based Access Control (RBAC) Using JWT As we saw above, JWT claims create powerful levers through which you can create granular access control on Qdrant. Let’s bring it all together and understand how it helps you create Role-Based Access Control (RBAC). In a typical enterprise application, you will have a segregation of users based on their roles and permissions. These could be: 1. **Admin or Owner:** with full access, and can generate API keys. 2. **Editor:** with read-write access levels to specific collections. 3. **Viewer:** with read-only access to specific collections. 4. **Data Scientist or Analyst:** with read-only access to specific collections. 5. **Developer:** with read-write access to development- or testing-specific collections, but limited access to production data. 6. **Guest:** with limited read-only access to publicly available collections. In addition, you can create access levels within sections of a collection. In a multi-tenant application, where you have used payload-based partitioning, you can create read-only access for specific user roles for a subset of the collection that belongs to that user. Your application requirements will eventually help you decide the roles and access levels you should create. For example, in an application managing customer data, you could create additional roles such as: **Customer Support Representative**: read-write access to customer service-related data but no access to billing information. **Billing Department**: read-only access to billing data and read-write access to payment records. 
**Marketing Analyst**: read-only access to anonymized customer data for analytics. Each role can be assigned a JWT with claims that specify expiration times, read/write permissions for collections, and validating conditions. In such an application, an example JWT payload for a customer support representative role could be: ```json { "exp": 1690995200, "access": [ { "collection": "customer_data", "access": "rw", "payload": { "department": "support" } } ], "value_exists": { "collection": "departments", "matches": [ { "key": "department", "value": "support" } ] } } ``` As you can see, by implementing RBAC, you can ensure proper segregation of roles and their privileges, and avoid privacy loopholes in your application. ## Qdrant Hybrid Cloud and Data Sovereignty Data governance varies by country, especially for global organizations dealing with different regulations on data privacy, security, and access. This often necessitates deploying infrastructure within specific geographical boundaries. To address these needs, the vector database you choose should support deployment and scaling within your controlled infrastructure. [Qdrant Hybrid Cloud](/documentation/hybrid-cloud/) offers this flexibility, along with features like sharding, replicas, JWT authentication, and monitoring. Qdrant Hybrid Cloud integrates Kubernetes clusters from various environments—cloud, on-premises, or edge—into a unified managed service. This allows organizations to manage Qdrant databases through the Qdrant Cloud UI while keeping the databases within their infrastructure. With JWT and RBAC, Qdrant Hybrid Cloud provides a secure, private, and sovereign vector store. Enterprises can scale their AI applications geographically, comply with local laws, and maintain strict data control. ## Conclusion Vector similarity is increasingly becoming the backbone of AI applications that leverage unstructured data. By transforming data into vectors – their numerical representations – organizations can build powerful applications that harness semantic search, ranging from better recommendation systems to algorithms that help with personalization, or powerful customer support chatbots. However, to fully leverage the power of AI in production, organizations need to choose a vector database that offers strong privacy and security features, while also helping them adhere to local laws and regulations. Qdrant provides exceptional efficiency and performance, along with the capability to implement granular access control to data, Role-Based Access Control (RBAC), and the ability to build a fully data-sovereign architecture. Interested in mastering vector search security and deployment strategies? [Join our Discord community](https://discord.gg/qdrant) to explore more advanced search strategies, connect with other developers and researchers in the industry, and stay updated on the latest innovations!
qdrant-landing/content/articles/dataset-quality.md
--- title: Finding errors in datasets with Similarity Search short_description: Finding errors datasets with distance-based methods description: Improving quality of text-and-images datasets on the online furniture marketplace example. preview_dir: /articles_data/dataset-quality/preview social_preview_image: /articles_data/dataset-quality/preview/social_preview.jpg small_preview_image: /articles_data/dataset-quality/icon.svg weight: 8 author: George Panchuk author_link: https://medium.com/@george.panchuk date: 2022-07-18T10:18:00.000Z # aliases: [ /articles/dataset-quality/ ] --- Nowadays, people create a huge number of applications of various types and solve problems in different areas. Despite such diversity, they have something in common - they need to process data. Real-world data is a living structure, it grows day by day, changes a lot and becomes harder to work with. In some cases, you need to categorize or label your data, which can be a tough problem given its scale. The process of splitting or labelling is error-prone and these errors can be very costly. Imagine that you failed to achieve the desired quality of the model due to inaccurate labels. Worse, your users are faced with a lot of irrelevant items, unable to find what they need and getting annoyed by it. Thus, you get poor retention, and it directly impacts company revenue. It is really important to avoid such errors in your data. ## Furniture web-marketplace Let’s say you work on an online furniture marketplace. {{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/furniture_marketplace.png caption="Furniture marketplace" >}} In this case, to ensure a good user experience, you need to split items into different categories: tables, chairs, beds, etc. One can arrange all the items manually and spend a lot of money and time on this. There is also another way: train a classification or similarity model and rely on it. With both approaches it is difficult to avoid mistakes. Manual labelling is a tedious task, but it requires concentration. Once you got distracted or your eyes became blurred mistakes won't keep you waiting. The model also can be wrong. You can analyse the most uncertain predictions and fix them, but the other errors will still leak to the site. There is no silver bullet. You should validate your dataset thoroughly, and you need tools for this. When you are sure that there are not many objects placed in the wrong category, they can be considered outliers or anomalies. Thus, you can train a model or a bunch of models capable of looking for anomalies, e.g. autoencoder and a classifier on it. However, this is again a resource-intensive task, both in terms of time and manual labour, since labels have to be provided for classification. On the contrary, if the proportion of out-of-place elements is high enough, outlier search methods are likely to be useless. ### Similarity search The idea behind similarity search is to measure semantic similarity between related parts of the data. E.g. between category title and item images. The hypothesis is, that unsuitable items will be less similar. We can't directly compare text and image data. For this we need an intermediate representation - embeddings. Embeddings are just numeric vectors containing semantic information. We can apply a pre-trained model to our data to produce these vectors. After embeddings are created, we can measure the distances between them. 
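To make the idea concrete, here is a minimal sketch using a pre-trained CLIP model from `sentence-transformers`, which maps both texts and images into a shared vector space (the category name, file paths, and model choice are illustrative assumptions):

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP encodes texts and images into the same vector space
model = SentenceTransformer("clip-ViT-B-32")

category = "Dining tables"  # anchor text
image_paths = ["items/table_01.jpg", "items/chair_17.jpg", "items/table_09.jpg"]  # placeholder paths

anchor_embedding = model.encode([category])
image_embeddings = model.encode([Image.open(path) for path in image_paths])

# Cosine similarity between the category title and every item image
scores = util.cos_sim(anchor_embedding, image_embeddings)[0]

# The least similar items are the first candidates for a misplaced listing
for path, score in sorted(zip(image_paths, scores.tolist()), key=lambda pair: pair[1]):
    print(f"{score:.3f}  {path}")
```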
Assume we want to search for something other than a single bed in «Single beds» category. {{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/similarity_search.png caption="Similarity search" >}} One of the possible pipelines would look like this: - Take the name of the category as an anchor and calculate the anchor embedding. - Calculate embeddings for images of each object placed into this category. - Compare obtained anchor and object embeddings. - Find the furthest. For instance, we can do it with the [CLIP](https://huggingface.co/sentence-transformers/clip-ViT-B-32-multilingual-v1) model. {{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/category_vs_image_transparent.png caption="Category vs. Image" >}} We can also calculate embeddings for titles instead of images, or even for both of them to find more errors. {{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/category_vs_name_and_image_transparent.png caption="Category vs. Title and Image" >}} As you can see, different approaches can find new errors or the same ones. Stacking several techniques or even the same techniques with different models may provide better coverage. Hint: Caching embeddings for the same models and reusing them among different methods can significantly speed up your lookup. ### Diversity search Since pre-trained models have only general knowledge about the data, they can still leave some misplaced items undetected. You might find yourself in a situation when the model focuses on non-important features, selects a lot of irrelevant elements, and fails to find genuine errors. To mitigate this issue, you can perform a diversity search. Diversity search is a method for finding the most distinctive examples in the data. As similarity search, it also operates on embeddings and measures the distances between them. The difference lies in deciding which point should be extracted next. Let's imagine how to get 3 points with similarity search and then with diversity search. Similarity: 1. Calculate distance matrix 2. Choose your anchor 3. Get a vector corresponding to the distances from the selected anchor from the distance matrix 4. Sort fetched vector 5. Get top-3 embeddings Diversity: 1. Calculate distance matrix 2. Initialize starting point (randomly or according to the certain conditions) 3. Get a distance vector for the selected starting point from the distance matrix 4. Find the furthest point 5. Get a distance vector for the new point 6. Find the furthest point from all of already fetched points {{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/diversity_transparent.png caption="Diversity search" >}} Diversity search utilizes the very same embeddings, and you can reuse them. If your data is huge and does not fit into memory, vector search engines like [Qdrant](https://github.com/qdrant/qdrant) might be helpful. Although the described methods can be used independently. But they are simple to combine and improve detection capabilities. If the quality remains insufficient, you can fine-tune the models using a similarity learning approach (e.g. with [Quaterion](https://quaterion.qdrant.tech) both to provide a better representation of your data and pull apart dissimilar objects in space. ## Conclusion In this article, we enlightened distance-based methods to find errors in categorized datasets. Showed how to find incorrectly placed items in the furniture web store. 
I hope these methods will help you catch sneaky samples that leaked into the wrong categories in your data and make your users' experience more enjoyable.

Poke the [demo](https://dataset-quality.qdrant.tech).

Stay tuned :)
qdrant-landing/content/articles/dedicated-service.md
--- title: "Vector Search as a dedicated service" short_description: "Why vector search requires to be a dedicated service." description: "Why vector search requires a dedicated service." social_preview_image: /articles_data/dedicated-service/social-preview.png small_preview_image: /articles_data/dedicated-service/preview/icon.svg preview_dir: /articles_data/dedicated-service/preview weight: -70 author: Andrey Vasnetsov author_link: https://vasnetsov.com/ date: 2023-11-30T10:00:00+03:00 draft: false keywords: - system architecture - vector search - best practices - anti-patterns --- Ever since the data science community discovered that vector search significantly improves LLM answers, various vendors and enthusiasts have been arguing over the proper solutions to store embeddings. Some say storing them in a specialized engine (aka vector database) is better. Others say that it's enough to use plugins for existing databases. Here are [just](https://nextword.substack.com/p/vector-database-is-not-a-separate) a [few](https://stackoverflow.blog/2023/09/20/do-you-need-a-specialized-vector-database-to-implement-vector-search-well/) of [them](https://www.singlestore.com/blog/why-your-vector-database-should-not-be-a-vector-database/). This article presents our vision and arguments on the topic . We will: 1. Explain why and when you actually need a dedicated vector solution 2. Debunk some ungrounded claims and anti-patterns to be avoided when building a vector search system. A table of contents: * *Each database vendor will sooner or later introduce vector capabilities...* [[click](#each-database-vendor-will-sooner-or-later-introduce-vector-capabilities-that-will-make-every-database-a-vector-database)] * *Having a dedicated vector database requires duplication of data.* [[click](#having-a-dedicated-vector-database-requires-duplication-of-data)] * *Having a dedicated vector database requires complex data synchronization.* [[click](#having-a-dedicated-vector-database-requires-complex-data-synchronization)] * *You have to pay for a vector service uptime and data transfer.* [[click](#you-have-to-pay-for-a-vector-service-uptime-and-data-transfer-of-both-solutions)] * *What is more seamless than your current database adding vector search capability?* [[click](#what-is-more-seamless-than-your-current-database-adding-vector-search-capability)] * *Databases can support RAG use-case end-to-end.* [[click](#databases-can-support-rag-use-case-end-to-end)] ## Responding to claims ###### Each database vendor will sooner or later introduce vector capabilities. That will make every database a Vector Database. The origins of this misconception lie in the careless use of the term Vector *Database*. When we think of a *database*, we subconsciously envision a relational database like Postgres or MySQL. Or, more scientifically, a service built on ACID principles that provides transactions, strong consistency guarantees, and atomicity. The majority of Vector Database are not *databases* in this sense. It is more accurate to call them *search engines*, but unfortunately, the marketing term *vector database* has already stuck, and it is unlikely to change. *What makes search engines different, and why vector DBs are built as search engines?* First of all, search engines assume different patterns of workloads and prioritize different properties of the system. The core architecture of such solutions is built around those priorities. What types of properties do search engines prioritize? * **Scalability**. 
Search engines are built to handle large amounts of data and queries. They are designed to be horizontally scalable and operate with more data than can fit into a single machine. * **Search speed**. Search engines should guarantee low latency for queries, while the atomicity of updates is less important. * **Availability**. Search engines must stay available if the majority of the nodes in a cluster are down. At the same time, they can tolerate the eventual consistency of updates. {{< figure src=/articles_data/dedicated-service/compass.png caption="Database guarantees compass" width=80% >}} Those priorities lead to different architectural decisions that are not reproducible in a general-purpose database, even if it has vector index support. ###### Having a dedicated vector database requires duplication of data. By their very nature, vector embeddings are derivatives of the primary source data. In the vast majority of cases, embeddings are derived from some other data, such as text, images, or additional information stored in your system. So, in fact, all embeddings you have in your system can be considered transformations of some original source. And the distinguishing feature of derivative data is that it will change when the transformation pipeline changes. In the case of vector embeddings, the scenario of those changes is quite simple: every time you update the encoder model, all the embeddings will change. In systems where vector embeddings are fused with the primary data source, it is impossible to perform such migrations without significantly affecting the production system. As a result, even if you want to use a single database for storing all kinds of data, you would still need to duplicate data internally. ###### Having a dedicated vector database requires complex data synchronization. Most production systems prefer to isolate different types of workloads into separate services. In many cases, those isolated services are not even related to search use cases. For example, databases for analytics and one for serving can be updated from the same source. Yet they can store and organize the data in a way that is optimal for their typical workloads. Search engines are usually isolated for the same reason: you want to avoid creating a noisy neighbor problem and compromise the performance of your main database. *To give you some intuition, let's consider a practical example:* Assume we have a database with 1 million records. This is a small database by modern standards of any relational database. You can probably use the smallest free tier of any cloud provider to host it. But if we want to use this database for vector search, 1 million OpenAI `text-embedding-ada-002` embeddings will take **~6Gb of RAM** (sic!). As you can see, the vector search use case completely overwhelmed the main database resource requirements. In practice, this means that your main database becomes burdened with high memory requirements and can not scale efficiently, limited by the size of a single machine. Fortunately, the data synchronization problem is not new and definitely not unique to vector search. There are many well-known solutions, starting with message queues and ending with specialized ETL tools. For example, we recently released our [integration with Airbyte](/documentation/integrations/airbyte/), allowing you to synchronize data from various sources into Qdrant incrementally. ###### You have to pay for a vector service uptime and data transfer of both solutions. 
In the open-source world, you pay for the resources you use, not the number of different databases you run. Resources depend more on the optimal solution for each use case. As a result, running a dedicated vector search engine can be even cheaper, as it allows optimization specifically for vector search use cases. For instance, Qdrant implements a number of [quantization techniques](/documentation/guides/quantization/) that can significantly reduce the memory footprint of embeddings. In terms of data transfer costs, on most cloud providers, network use within a region is usually free. As long as you put the original source data and the vector store in the same region, there are no added data transfer costs. ###### What is more seamless than your current database adding vector search capability? In contrast to the short-term attractiveness of integrated solutions, dedicated search engines propose flexibility and a modular approach. You don't need to update the whole production database each time some of the vector plugins are updated. Maintenance of a dedicated search engine is as isolated from the main database as the data itself. In fact, integration of more complex scenarios, such as read/write segregation, is much easier with a dedicated vector solution. You can easily build cross-region replication to ensure low latency for your users. {{< figure src=/articles_data/dedicated-service/region-based-deploy.png caption="Read/Write segregation + cross-regional deployment" width=80% >}} It is especially important in large enterprise organizations, where the responsibility for different parts of the system is distributed among different teams. In those situations, it is much easier to maintain a dedicated search engine for the AI team than to convince the core team to update the whole primary database. Finally, the vector capabilities of the all-in-one database are tied to the development and release cycle of the entire stack. Their long history of use also means that they need to pay a high price for backward compatibility. ###### Databases can support RAG use-case end-to-end. Putting aside performance and scalability questions, the whole discussion about implementing RAG in the DBs assumes that the only detail missing in traditional databases is the vector index and the ability to make fast ANN queries. In fact, the current capabilities of vector search have only scratched the surface of what is possible. For example, in our recent article, we discuss the possibility of building an [exploration API](/articles/vector-similarity-beyond-search/) to fuel the discovery process - an alternative to kNN search, where you don’t even know what exactly you are looking for. ## Summary Ultimately, you do not need a vector database if you are looking for a simple vector search functionality with a small amount of data. We genuinely recommend starting with whatever you already have in your stack to prototype. But you need one if you are looking to do more out of it, and it is the central functionality of your application. It is just like using a multi-tool to make something quick or using a dedicated instrument highly optimized for the use case. Large-scale production systems usually consist of different specialized services and storage types for good reasons since it is one of the best practices of modern software architecture. Comparable to the orchestration of independent building blocks in a microservice architecture. 
When you stuff the database with a vector index, you compromise both the performance and scalability of the main database and the vector search capabilities. There is no one-size-fits-all approach that would not compromise on performance or flexibility. So if your use case utilizes vector search in any significant way, it is worth investing in a dedicated vector search engine, aka vector database.
qdrant-landing/content/articles/detecting-coffee-anomalies.md
--- title: Metric Learning for Anomaly Detection short_description: "How to use metric learning to detect anomalies: quality assessment of coffee beans with just 200 labelled samples" description: Practical use of metric learning for anomaly detection. A way to match the results of a classification-based approach with only ~0.6% of the labeled data. social_preview_image: /articles_data/detecting-coffee-anomalies/preview/social_preview.jpg preview_dir: /articles_data/detecting-coffee-anomalies/preview small_preview_image: /articles_data/detecting-coffee-anomalies/anomalies_icon.svg weight: 30 author: Yusuf Sarıgöz author_link: https://medium.com/@yusufsarigoz date: 2022-05-04T13:00:00+03:00 draft: false # aliases: [ /articles/detecting-coffee-anomalies/ ] --- Anomaly detection is a thirsting yet challenging task that has numerous use cases across various industries. The complexity results mainly from the fact that the task is data-scarce by definition. Similarly, anomalies are, again by definition, subject to frequent change, and they may take unexpected forms. For that reason, supervised classification-based approaches are: * Data-hungry - requiring quite a number of labeled data; * Expensive - data labeling is an expensive task itself; * Time-consuming - you would try to obtain what is necessarily scarce; * Hard to maintain - you would need to re-train the model repeatedly in response to changes in the data distribution. These are not desirable features if you want to put your model into production in a rapidly-changing environment. And, despite all the mentioned difficulties, they do not necessarily offer superior performance compared to the alternatives. In this post, we will detail the lessons learned from such a use case. ## Coffee Beans [Agrivero.ai](https://agrivero.ai/) - is a company making AI-enabled solution for quality control & traceability of green coffee for producers, traders, and roasters. They have collected and labeled more than **30 thousand** images of coffee beans with various defects - wet, broken, chipped, or bug-infested samples. This data is used to train a classifier that evaluates crop quality and highlights possible problems. {{< figure src=/articles_data/detecting-coffee-anomalies/detection.gif caption="Anomalies in coffee" width="400px" >}} We should note that anomalies are very diverse, so the enumeration of all possible anomalies is a challenging task on it's own. In the course of work, new types of defects appear, and shooting conditions change. Thus, a one-time labeled dataset becomes insufficient. Let's find out how metric learning might help to address this challenge. ## Metric Learning Approach In this approach, we aimed to encode images in an n-dimensional vector space and then use learned similarities to label images during the inference. The simplest way to do this is KNN classification. The algorithm retrieves K-nearest neighbors to a given query vector and assigns a label based on the majority vote. In production environment kNN classifier could be easily replaced with [Qdrant](https://github.com/qdrant/qdrant) vector search engine. {{< figure src=/articles_data/detecting-coffee-anomalies/anomalies_detection.png caption="Production deployment" >}} This approach has the following advantages: * We can benefit from unlabeled data, considering labeling is time-consuming and expensive. * The relevant metric, e.g., precision or recall, can be tuned according to changing requirements during the inference without re-training. 
* Queries labeled with a high score can be added to the KNN classifier on the fly as new data points. To apply metric learning, we need to have a neural encoder, a model capable of transforming an image into a vector. Training such an encoder from scratch may require a significant amount of data we might not have. Therefore, we will divide the training into two steps: * The first step is to train the autoencoder, with which we will prepare a model capable of representing the target domain. * The second step is finetuning. Its purpose is to train the model to distinguish the required types of anomalies. {{< figure src=/articles_data/detecting-coffee-anomalies/anomaly_detection_training.png caption="Model training architecture" >}} ### Step 1 - Autoencoder for Unlabeled Data First, we pretrained a Resnet18-like model in a vanilla autoencoder architecture by leaving the labels aside. Autoencoder is a model architecture composed of an encoder and a decoder, with the latter trying to recreate the original input from the low-dimensional bottleneck output of the former. There is no intuitive evaluation metric to indicate the performance in this setup, but we can evaluate the success by examining the recreated samples visually. {{< figure src=/articles_data/detecting-coffee-anomalies/image_reconstruction.png caption="Example of image reconstruction with Autoencoder" >}} Then we encoded a subset of the data into 128-dimensional vectors by using the encoder, and created a KNN classifier on top of these embeddings and associated labels. Although the results are promising, we can do even better by finetuning with metric learning. ### Step 2 - Finetuning with Metric Learning We started by selecting 200 labeled samples randomly without replacement. In this step, The model was composed of the encoder part of the autoencoder with a randomly initialized projection layer stacked on top of it. We applied transfer learning from the frozen encoder and trained only the projection layer with Triplet Loss and an online batch-all triplet mining strategy. Unfortunately, the model overfitted quickly in this attempt. In the next experiment, we used an online batch-hard strategy with a trick to prevent vector space from collapsing. We will describe our approach in the further articles. This time it converged smoothly, and our evaluation metrics also improved considerably to match the supervised classification approach. {{< figure src=/articles_data/detecting-coffee-anomalies/ae_report_knn.png caption="Metrics for the autoencoder model with KNN classifier" >}} {{< figure src=/articles_data/detecting-coffee-anomalies/ft_report_knn.png caption="Metrics for the finetuned model with KNN classifier" >}} We repeated this experiment with 500 and 2000 samples, but it showed only a slight improvement. Thus we decided to stick to 200 samples - see below for why. ## Supervised Classification Approach We also wanted to compare our results with the metrics of a traditional supervised classification model. For this purpose, a Resnet50 model was finetuned with ~30k labeled images, made available for training. Surprisingly, the F1 score was around ~0.86. Please note that we used only 200 labeled samples in the metric learning approach instead of ~30k in the supervised classification approach. These numbers indicate a huge saving with no considerable compromise in the performance. 
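The KNN-based labeling at the heart of the metric learning approach above is easy to prototype. Here is a minimal sketch over pre-computed embeddings; it uses scikit-learn and randomly generated arrays as stand-ins for the real encoder outputs and labels, so it only illustrates the mechanics, not Agrivero's actual pipeline.

```python
# A minimal sketch of KNN labeling over pre-computed embeddings, assuming
# scikit-learn; `train_embeddings`, `train_labels` and `query_embeddings`
# are hypothetical arrays standing in for the (frozen or finetuned) encoder output.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
train_embeddings = rng.normal(size=(200, 128))   # 200 labeled samples, 128-d vectors
train_labels = rng.integers(0, 2, size=200)      # 0 = good bean, 1 = defect (toy labels)
query_embeddings = rng.normal(size=(5, 128))     # new, unlabeled samples

# Majority vote over the K nearest neighbors in the embedding space
knn = KNeighborsClassifier(n_neighbors=5, metric="cosine")
knn.fit(train_embeddings, train_labels)

predicted = knn.predict(query_embeddings)
confidence = knn.predict_proba(query_embeddings).max(axis=1)
print(predicted, confidence)
```

In production, the same nearest-neighbor lookup can be delegated to a vector search engine such as Qdrant, as shown in the deployment diagram above.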
## Conclusion

We obtained results comparable to those of the supervised classification method by using **only 0.66%** of the labeled data with metric learning. This approach is time-saving and resource-efficient, and it may be improved even further. Possible next steps might be:

- Collect more unlabeled data and pretrain a larger autoencoder.
- Obtain high-quality labels for a small number of images instead of tens of thousands for finetuning.
- Use hyperparameter optimization and possibly gradual unfreezing in the finetuning step.
- Use a [vector search engine](https://github.com/qdrant/qdrant) to serve metric learning in production.

We are actively looking into these, and we will continue to publish our findings in this challenge and other use cases of metric learning.
qdrant-landing/content/articles/discovery-search.md
--- title: "Discovery Search: A New Approach to Vector Space" short_description: Discovery Search, an innovative API for precise, tailored search results. description: Explore the next frontier in search technology with Discovery Search. Learn how this innovative API provides precise and tailored results. social_preview_image: /articles_data/discovery-search/social_preview.jpg small_preview_image: /articles_data/discovery-search/icon.svg preview_dir: /articles_data/discovery-search/preview weight: -110 author: Luis Cossío author_link: https://coszio.github.io date: 2024-01-31T08:00:00-03:00 draft: false keywords: - why use a vector database - specialty - search - discovery - state-of-the-art - vector-search --- # How to Master Vector Space Exploration with Discovery Search When Christopher Columbus and his crew sailed to cross the Atlantic Ocean, they were not looking for America. They were looking for a new route to India, and they were convinced that the Earth was round. They didn't know anything about America, but since they were going west, they stumbled upon it. They couldn't reach their _target_, because the geography didn't let them, but once they realized it wasn't India, they claimed it a new "discovery" for their crown. If we consider that sailors need water to sail, then we can establish a _context_ which is positive in the water, and negative on land. Once the sailor's search was stopped by the land, they could not go any further, and a new route was found. Let's keep these concepts of _target_ and _context_ in mind as we explore the new functionality of Qdrant: __Discovery search__. ## What is discovery search? Discovery search is a powerful tool that lets you explore the vector space in a more controlled way. It can be used to find points that are not necessarily close to the target but are still relevant to the search. It can also be used to represent complex tastes and break out of the similarity bubble. Check out the documentation to learn more about the math behind it and how to use it. ## Qdrant's discovery search: version 1.7 release In version 1.7, Qdrant [released](/articles/qdrant-1.7.x/) this novel API that lets you constrain the space in which a search is performed, relying only on pure vectors. This is a powerful tool that lets you explore the vector space in a more controlled way. It can be used to find points that are not necessarily closest to the target, but are still relevant to the search. You can already select which points are available to the search by using payload filters. This by itself is very versatile because it allows us to craft complex filters that show only the points that satisfy their criteria deterministically. However, the payload associated with each point is arbitrary and cannot tell us anything about their position in the vector space. In other words, filtering out irrelevant points can be seen as creating a _mask_ rather than a hyperplane –cutting in between the positive and negative vectors– in the space. ## Understanding context in discovery search This is where a __vector _context___ can help. We define _context_ as a list of pairs. Each pair is made up of a positive and a negative vector. With a context, we can define hyperplanes within the vector space, which always prefer the positive over the negative vectors. This effectively partitions the space where the search is performed. After the space is partitioned, we then need a _target_ to return the points that are more similar to it. 
![Discovery search visualization](/articles_data/discovery-search/discovery-search.png) While positive and negative vectors might suggest the use of the <a href="/documentation/concepts/explore/#recommendation-api" target="_blank">recommendation interface</a>, in the case of _context_ they require to be paired up in a positive-negative fashion. This is inspired from the machine-learning concept of <a href="https://en.wikipedia.org/wiki/Triplet_loss" target="_blank">_triplet loss_</a>, where you have three vectors: an anchor, a positive, and a negative. Triplet loss is an evaluation of how much the anchor is closer to the positive than to the negative vector, so that learning happens by "moving" the positive and negative points to try to get a better evaluation. However, during discovery, we consider the positive and negative vectors as static points, and we search through the whole dataset for the "anchors", or result candidates, which fit this characteristic better. ![Triplet loss](/articles_data/discovery-search/triplet-loss.png) [__Discovery search__](#discovery-search), then, is made up of two main inputs: - __target__: the main point of interest - __context__: the pairs of positive and negative points we just defined. However, it is not the only way to use it. Alternatively, you can __only__ provide a context, which invokes a [__Context Search__](#context-search). This is useful when you want to explore the space defined by the context, but don't have a specific target in mind. But hold your horses, we'll get to that [later ↪](#context-search). ## Real-world discovery search applications Let's talk about the first case: context with a target. To understand why this is useful, let's take a look at a real-world example: using a multimodal encoder like [CLIP](https://openai.com/blog/clip/) to search for images, from text __and__ images. CLIP is a neural network that can embed both images and text into the same vector space. This means that you can search for images using either a text query or an image query. For this example, we'll reuse our [food recommendations demo](https://food-discovery.qdrant.tech/) by typing "burger" in the text input: ![Burger text input in food demo](/articles_data/discovery-search/search-for-burger.png) This is basically nearest neighbor search, and while technically we have only images of burgers, one of them is a logo representation of a burger. We're looking for actual burgers, though. Let's try to exclude images like that by adding it as a negative example: ![Try to exclude burger drawing](/articles_data/discovery-search/try-to-exclude-non-burger.png) Wait a second, what has just happened? These pictures have __nothing__ to do with burgers, and still, they appear on the first results. Is the demo broken? Turns out, multimodal encoders <a href="https://modalitygap.readthedocs.io/en/latest/" target="_blank">might not work how you expect them to</a>. Images and text are embedded in the same space, but they are not necessarily close to each other. This means that we can create a mental model of the distribution as two separate planes, one for images and one for text. ![Mental model of CLIP embeddings](/articles_data/discovery-search/clip-mental-model.png) This is where discovery excels because it allows us to constrain the space considering the same mode (images) while using a target from the other mode (text). 
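For completeness, here is a hypothetical sketch of how the burger example could be expressed as a discovery query against Qdrant. It assumes the `discover` method and `ContextExamplePair` model introduced with the Python client around Qdrant 1.7; names and signatures may differ in your client version, so treat this as an assumption and check the client documentation.

```python
# Hypothetical sketch: the burger example as a discovery query.
# Assumes qdrant-client >= 1.7 (`discover`, `models.ContextExamplePair`) and a
# local Qdrant with a "food" collection of CLIP image vectors - all assumptions.
from PIL import Image
from sentence_transformers import SentenceTransformer
from qdrant_client import QdrantClient, models

clip = SentenceTransformer("clip-ViT-B-32")
client = QdrantClient(url="http://localhost:6333")   # assumed local instance

text_vector = clip.encode("burger").tolist()                        # target from the text modality
logo_vector = clip.encode(Image.open("burger_logo.png")).tolist()   # hypothetical unwanted drawing
photo_vector = clip.encode(Image.open("real_burger.jpg")).tolist()  # hypothetical wanted photo

hits = client.discover(
    collection_name="food",   # hypothetical collection
    target=text_vector,
    context=[
        # positive/negative pair constraining results to photo-like images
        models.ContextExamplePair(positive=photo_vector, negative=logo_vector),
    ],
    limit=10,
)
print([hit.id for hit in hits])
```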
![Cross-modal search with discovery](/articles_data/discovery-search/clip-discovery.png)

Discovery search also lets us keep giving feedback to the search engine in the form of more context pairs, so we can keep refining our search until we find what we are looking for.

Another intuitive example: imagine you're looking for a fish pizza, but pizza names can be confusing, so you can just type "pizza" and prefer fish over meat. Discovery search will let you use these inputs to suggest a fish pizza... even if it's not called fish pizza!

![Simple discovery example](/articles_data/discovery-search/discovery-example-with-images.png)

## Context search

Now, the second case: only providing context.

Ever been caught in the same recommendations on your favorite music streaming service? This may be caused by getting stuck in a similarity bubble. As user input gets more complex, diversity becomes scarce, and it becomes harder to force the system to recommend something different.

![Context vs recommendation search](/articles_data/discovery-search/context-vs-recommendation.png)

__Context search__ solves this by de-focusing the search around a single point. Instead, it selects points randomly from within a zone in the vector space. This search is the most influenced by _triplet loss_, as the score can be thought of as _"how much closer is a point to a negative than to a positive vector?"_. If it is closer to the positive one, then its score will be zero, the same as for any other point within the same zone. But if it is on the negative side, it will be assigned a more and more negative score the further it gets.

![Context search visualization](/articles_data/discovery-search/context-search.png)

Creating complex tastes in a high-dimensional space becomes easier since you can just add more context pairs to the search. This way, you should be able to constrain the space enough to select points from a per-search "category" created just from the context in the input.

![A more complex context search](/articles_data/discovery-search/complex-context-search.png)

This way you can give refreshing recommendations, while still being in control by providing positive and negative feedback, or even by trying out different permutations of pairs.

## Key takeaways:

- Discovery search is a powerful tool for controlled exploration in vector spaces. Context, positive, and negative vectors guide search parameters and refine results.
- Real-world applications include multimodal search, diverse recommendations, and context-driven exploration.
- Ready to experience the power of Qdrant's Discovery search for yourself? [Try a free demo](https://qdrant.tech/contact-us/) now and unlock the full potential of controlled exploration in vector spaces!
qdrant-landing/content/articles/embedding-recycler.md
--- title: Layer Recycling and Fine-tuning Efficiency short_description: Tradeoff between speed and performance in layer recycling description: Learn when and how to use layer recycling to achieve different performance targets. preview_dir: /articles_data/embedding-recycling/preview small_preview_image: /articles_data/embedding-recycling/icon.svg social_preview_image: /articles_data/embedding-recycling/preview/social_preview.jpg weight: 10 author: Yusuf Sarıgöz author_link: https://medium.com/@yusufsarigoz date: 2022-08-23T13:00:00+03:00 draft: false aliases: [ /articles/embedding-recycler/ ] --- A recent [paper](https://arxiv.org/abs/2207.04993) by Allen AI has attracted attention in the NLP community as they cache the output of a certain intermediate layer in the training and inference phases to achieve a speedup of ~83% with a negligible loss in model performance. This technique is quite similar to [the caching mechanism in Quaterion](https://quaterion.qdrant.tech/tutorials/cache_tutorial.html), but the latter is intended for any data modalities while the former focuses only on language models despite presenting important insights from their experiments. In this post, I will share our findings combined with those, hoping to provide the community with a wider perspective on layer recycling. ## How layer recycling works The main idea of layer recycling is to accelerate the training (and inference) by avoiding repeated passes of the same data object through the frozen layers. Instead, it is possible to pass objects through those layers only once, cache the output and use them as inputs to the unfrozen layers in future epochs. In the paper, they usually cache 50% of the layers, e.g., the output of the 6th multi-head self-attention block in a 12-block encoder. However, they find out that it does not work equally for all the tasks. For example, the question answering task suffers from a more significant degradation in performance with 50% of the layers recycled, and they choose to lower it down to 25% for this task, so they suggest determining the level of caching based on the task at hand. they also note that caching provides a more considerable speedup for larger models and on lower-end machines. In layer recycling, the cache is hit for exactly the same object. It is easy to achieve this in textual data as it is easily hashable, but you may need more advanced tricks to generate keys for the cache when you want to generalize this technique to diverse data types. For instance, hashing PyTorch tensors [does not work as you may expect](https://github.com/joblib/joblib/issues/1282). Quaterion comes with an intelligent key extractor that may be applied to any data type, but it is also allowed to customize it with a callable passed as an argument. Thanks to this flexibility, we were able to run a variety of experiments in different setups, and I believe that these findings will be helpful for your future projects. ## Experiments We conducted different experiments to test the performance with: 1. Different numbers of layers recycled in [the similar cars search example](https://quaterion.qdrant.tech/tutorials/cars-tutorial.html). 2. Different numbers of samples in the dataset for training and fine-tuning for similar cars search. 3. Different numbers of layers recycled in [the question answerring example](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html). 
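Before looking at how Quaterion wires this up in the next section, here is a minimal, framework-free sketch of the caching idea itself: run the frozen layers once per sample, key the output by a hash, and reuse it on every later epoch. All names here are illustrative - this is not Quaterion's internal implementation.

```python
# Illustrative sketch of layer recycling in plain PyTorch: run the frozen part once
# per sample, cache its output by key, and feed only the cache into the trainable part.
import hashlib
import torch
import torch.nn as nn

frozen = nn.Sequential(nn.Linear(512, 256), nn.ReLU()).eval()  # "recycled" layers
for param in frozen.parameters():
    param.requires_grad = False

head = nn.Linear(256, 64)                                      # layers we keep training
cache: dict[str, torch.Tensor] = {}

def key_for(sample: torch.Tensor) -> str:
    # Hashing raw tensors is tricky (see the joblib issue linked above), so we hash
    # the underlying bytes here; Quaterion's key extractor is more general.
    return hashlib.sha1(sample.numpy().tobytes()).hexdigest()

def forward(sample: torch.Tensor) -> torch.Tensor:
    key = key_for(sample)
    if key not in cache:                  # cache miss: only happens on the first epoch
        with torch.no_grad():
            cache[key] = frozen(sample)
    return head(cache[key])               # frozen output is reused on every later epoch

x = torch.randn(512)
print(forward(x).shape)  # the first call fills the cache; later calls skip `frozen`
```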
## Easy layer recycling with Quaterion The easiest way of caching layers in Quaterion is to compose a [TrainableModel](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel) with a frozen [Encoder](https://quaterion-models.qdrant.tech/quaterion_models.encoders.encoder.html#quaterion_models.encoders.encoder.Encoder) and an unfrozen [EncoderHead](https://quaterion-models.qdrant.tech/quaterion_models.heads.encoder_head.html#quaterion_models.heads.encoder_head.EncoderHead). Therefore, we modified the `TrainableModel` in the [example](https://github.com/qdrant/quaterion/blob/master/examples/cars/models.py) as in the following: ```python class Model(TrainableModel): # ... def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]: pre_trained_encoder = torchvision.models.resnet34(pretrained=True) self.avgpool = copy.deepcopy(pre_trained_encoder.avgpool) self.finetuned_block = copy.deepcopy(pre_trained_encoder.layer4) modules = [] for name, child in pre_trained_encoder.named_children(): modules.append(child) if name == "layer3": break pre_trained_encoder = nn.Sequential(*modules) return CarsEncoder(pre_trained_encoder) def configure_head(self, input_embedding_size) -> EncoderHead: return SequentialHead(self.finetuned_block, self.avgpool, nn.Flatten(), SkipConnectionHead(512, dropout=0.3, skip_dropout=0.2), output_size=512) # ... ``` This trick lets us finetune one more layer from the base model as a part of the `EncoderHead` while still benefiting from the speedup in the frozen `Encoder` provided by the cache. ## Experiment 1: Percentage of layers recycled The paper states that recycling 50% of the layers yields little to no loss in performance when compared to full fine-tuning. In this setup, we compared performances of four methods: 1. Freeze the whole base model and train only `EncoderHead`. 2. Move one of the four residual blocks `EncoderHead` and train it together with the head layer while freezing the rest (75% layer recycling). 3. Move two of the four residual blocks to `EncoderHead` while freezing the rest (50% layer recycling). 4. Train the whole base model together with `EncoderHead`. **Note**: During these experiments, we used ResNet34 instead of ResNet152 as the pretrained model in order to be able to use a reasonable batch size in full training. The baseline score with ResNet34 is 0.106. | Model | RRP | | ------------- | ---- | | Full training | 0.32 | | 50% recycling | 0.31 | | 75% recycling | 0.28 | | Head only | 0.22 | | Baseline | 0.11 | As is seen in the table, the performance in 50% layer recycling is very close to that in full training. Additionally, we can still have a considerable speedup in 50% layer recycling with only a small drop in performance. Although 75% layer recycling is better than training only `EncoderHead`, its performance drops quickly when compared to 50% layer recycling and full training. ## Experiment 2: Amount of available data In the second experiment setup, we compared performances of fine-tuning strategies with different dataset sizes. We sampled 50% of the training set randomly while still evaluating models on the whole validation set. | Model | RRP | | ------------- | ---- | | Full training | 0.27 | | 50% recycling | 0.26 | | 75% recycling | 0.25 | | Head only | 0.21 | | Baseline | 0.11 | This experiment shows that, the smaller the available dataset is, the bigger drop in performance we observe in full training, 50% and 75% layer recycling. 
On the other hand, the level of degradation in training only `EncoderHead` is really small when compared to others. When we further reduce the dataset size, full training becomes untrainable at some point, while we can still improve over the baseline by training only `EncoderHead`. ## Experiment 3: Layer recycling in question answering We also wanted to test layer recycling in a different domain as one of the most important takeaways of the paper is that the performance of layer recycling is task-dependent. To this end, we set up an experiment with the code from the [Question Answering with Similarity Learning tutorial](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html). | Model | RP@1 | RRK | | ------------- | ---- | ---- | | Full training | 0.76 | 0.65 | | 50% recycling | 0.75 | 0.63 | | 75% recycling | 0.69 | 0.59 | | Head only | 0.67 | 0.58 | | Baseline | 0.64 | 0.55 | In this task, 50% layer recycling can still do a good job with only a small drop in performance when compared to full training. However, the level of degradation is smaller than that in the similar cars search example. This can be attributed to several factors such as the pretrained model quality, dataset size and task definition, and it can be the subject of a more elaborate and comprehensive research project. Another observation is that the performance of 75% layer recycling is closer to that of training only `EncoderHead` than 50% layer recycling. ## Conclusion We set up several experiments to test layer recycling under different constraints and confirmed that layer recycling yields varying performances with different tasks and domains. One of the most important observations is the fact that the level of degradation in layer recycling is sublinear with a comparison to full training, i.e., we lose a smaller percentage of performance than the percentage we recycle. Additionally, training only `EncoderHead` is more resistant to small dataset sizes. There is even a critical size under which full training does not work at all. The issue of performance differences shows that there is still room for further research on layer recycling, and luckily Quaterion is flexible enough to run such experiments quickly. We will continue to report our findings on fine-tuning efficiency. **Fun fact**: The preview image for this article was created with Dall.e with the following prompt: "Photo-realistic robot using a tuning fork to adjust a piano." [Click here](/articles_data/embedding-recycling/full.png) to see it in full size!
qdrant-landing/content/articles/faq-question-answering.md
--- title: Q&A with Similarity Learning short_description: A complete guide to building a Q&A system with similarity learning. description: A complete guide to building a Q&A system using Quaterion and SentenceTransformers. social_preview_image: /articles_data/faq-question-answering/preview/social_preview.jpg preview_dir: /articles_data/faq-question-answering/preview small_preview_image: /articles_data/faq-question-answering/icon.svg weight: 9 author: George Panchuk author_link: https://medium.com/@george.panchuk date: 2022-06-28T08:57:07.604Z # aliases: [ /articles/faq-question-answering/ ] --- # Question-answering system with Similarity Learning and Quaterion Many problems in modern machine learning are approached as classification tasks. Some are the classification tasks by design, but others are artificially transformed into such. And when you try to apply an approach, which does not naturally fit your problem, you risk coming up with over-complicated or bulky solutions. In some cases, you would even get worse performance. Imagine that you got a new task and decided to solve it with a good old classification approach. Firstly, you will need labeled data. If it came on a plate with the task, you're lucky, but if it didn't, you might need to label it manually. And I guess you are already familiar with how painful it might be. Assuming you somehow labeled all required data and trained a model. It shows good performance - well done! But a day later, your manager told you about a bunch of new data with new classes, which your model has to handle. You repeat your pipeline. Then, two days later, you've been reached out one more time. You need to update the model again, and again, and again. Sounds tedious and expensive for me, does not it for you? ## Automating customer support Let's now take a look at the concrete example. There is a pressing problem with automating customer support. The service should be capable of answering user questions and retrieving relevant articles from the documentation without any human involvement. With the classification approach, you need to build a hierarchy of classification models to determine the question's topic. You have to collect and label a whole custom dataset of your private documentation topics to train that. And then, each time you have a new topic in your documentation, you have to re-train the whole pile of classifiers with additionally labeled data. Can we make it easier? ## Similarity option One of the possible alternatives is Similarity Learning, which we are going to discuss in this article. It suggests getting rid of the classes and making decisions based on the similarity between objects instead. To do it quickly, we would need some intermediate representation - embeddings. Embeddings are high-dimensional vectors with semantic information accumulated in them. As embeddings are vectors, one can apply a simple function to calculate the similarity score between them, for example, cosine or euclidean distance. So with similarity learning, all we need to do is provide pairs of correct questions and answers. And then, the model will learn to distinguish proper answers by the similarity of embeddings. >If you want to learn more about similarity learning and applications, check out this [article](/documentation/tutorials/neural-search/) which might be an asset. ## Let's build Similarity learning approach seems a lot simpler than classification in this case, and if you have some doubts on your mind, let me dispel them. 
As I had no resource with an exhaustive F.A.Q. that might serve as a dataset, I scraped one from the sites of popular cloud providers. The dataset consists of just 8.5k question-answer pairs; you can take a closer look at it [here](https://github.com/qdrant/demo-cloud-faq).

Once we have data, we need to obtain embeddings for it. Representing texts as embeddings is not a novel technique in NLP, and there are plenty of algorithms and models to calculate them. You may have heard of Word2Vec, GloVe, ELMo, BERT - all these models can provide text embeddings.

However, it is better to produce embeddings with a model trained for semantic similarity tasks. For instance, we can find such models at [sentence-transformers](https://www.sbert.net/docs/pretrained_models.html). The authors claim that `all-mpnet-base-v2` provides the best quality, but let's pick `all-MiniLM-L6-v2` for our tutorial, as it is 5x faster and still offers good results.

Having all this, we can test our approach. We won't take the whole dataset at the moment, only a part of it. To measure the model's performance we will use two metrics - [mean reciprocal rank](https://en.wikipedia.org/wiki/Mean_reciprocal_rank) and [precision@1](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Precision_at_k). We have a [ready script](https://github.com/qdrant/demo-cloud-faq/blob/experiments/faq/baseline.py) for this experiment; let's launch it now.

<div class="table-responsive">

| precision@1 | reciprocal_rank |
|-------------|-----------------|
| 0.564       | 0.663           |

</div>

That's already quite decent quality, but maybe we can do better?

## Improving results with fine-tuning

Actually, we can! The model we used has good natural language understanding, but it has never seen our data. An approach called `fine-tuning` might be helpful to overcome this issue. With fine-tuning you don't need to design a task-specific architecture: you take a model pre-trained on another task, apply a couple of layers on top, and train its parameters.

Sounds good, but as similarity learning is not as common as classification, it might be a bit inconvenient to fine-tune a model with traditional tools. For this reason we will use [Quaterion](https://github.com/qdrant/quaterion) - a framework for fine-tuning similarity learning models. Let's see how we can train models with it. First, create our project and call it `faq`.

> All project dependencies and utility scripts not covered in the tutorial can be found in the
> [repository](https://github.com/qdrant/demo-cloud-faq/tree/tutorial).

### Configure training

The main entity in Quaterion is [TrainableModel](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html). This class makes the model-building process fast and convenient.

`TrainableModel` is a wrapper around [pytorch_lightning.LightningModule](https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html).

[Lightning](https://www.pytorchlightning.ai/) handles all the training process complexities, like the training loop, device management, etc., and saves the user from the necessity to implement all this routine manually. Lightning's modularity is also worth mentioning: it improves separation of responsibilities and makes code more readable, robust, and easy to write. All these features make PyTorch Lightning a perfect training backend for Quaterion.

To use `TrainableModel`, you need to inherit your model class from it, the same way you would use `LightningModule` in pure `pytorch_lightning`.
Mandatory methods are `configure_loss`, `configure_encoders`, `configure_head`, `configure_optimizers`. The majority of mentioned methods are quite easy to implement, you'll probably just need a couple of imports to do that. But `configure_encoders` requires some code:) Let's create a `model.py` with model's template and a placeholder for `configure_encoders` for the moment. ```python from typing import Union, Dict, Optional from torch.optim import Adam from quaterion import TrainableModel from quaterion.loss import MultipleNegativesRankingLoss, SimilarityLoss from quaterion_models.encoders import Encoder from quaterion_models.heads import EncoderHead from quaterion_models.heads.skip_connection_head import SkipConnectionHead class FAQModel(TrainableModel): def __init__(self, lr=10e-5, *args, **kwargs): self.lr = lr super().__init__(*args, **kwargs) def configure_optimizers(self): return Adam(self.model.parameters(), lr=self.lr) def configure_loss(self) -> SimilarityLoss: return MultipleNegativesRankingLoss(symmetric=True) def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]: ... # ToDo def configure_head(self, input_embedding_size: int) -> EncoderHead: return SkipConnectionHead(input_embedding_size) ``` - `configure_optimizers` is a method provided by Lightning. An eagle-eye of you could notice mysterious `self.model`, it is actually a [SimilarityModel](https://quaterion-models.qdrant.tech/quaterion_models.model.html) instance. We will cover it later. - `configure_loss` is a loss function to be used during training. You can choose a ready-made implementation from Quaterion. However, since Quaterion's purpose is not to cover all possible losses, or other entities and features of similarity learning, but to provide a convenient framework to build and use such models, there might not be a desired loss. In this case it is possible to use [PytorchMetricLearningWrapper](https://quaterion.qdrant.tech/quaterion.loss.extras.pytorch_metric_learning_wrapper.html) to bring required loss from [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/) library, which has a rich collection of losses. You can also implement a custom loss yourself. - `configure_head` - model built via Quaterion is a combination of encoders and a top layer - head. As with losses, some head implementations are provided. They can be found at [quaterion_models.heads](https://quaterion-models.qdrant.tech/quaterion_models.heads.html). At our example we use [MultipleNegativesRankingLoss](https://quaterion.qdrant.tech/quaterion.loss.multiple_negatives_ranking_loss.html). This loss is especially good for training retrieval tasks. It assumes that we pass only positive pairs (similar objects) and considers all other objects as negative examples. `MultipleNegativesRankingLoss` use cosine to measure distance under the hood, but it is a configurable parameter. Quaterion provides implementation for other distances as well. You can find available ones at [quaterion.distances](https://quaterion.qdrant.tech/quaterion.distances.html). Now we can come back to `configure_encoders`:) ### Configure Encoder The encoder task is to convert objects into embeddings. They usually take advantage of some pre-trained models, in our case `all-MiniLM-L6-v2` from `sentence-transformers`. In order to use it in Quaterion, we need to create a wrapper inherited from the [Encoder](https://quaterion-models.qdrant.tech/quaterion_models.encoders.encoder.html) class. 
Let's create our encoder in `encoder.py` ```python import os from torch import Tensor, nn from sentence_transformers.models import Transformer, Pooling from quaterion_models.encoders import Encoder from quaterion_models.types import TensorInterchange, CollateFnType class FAQEncoder(Encoder): def __init__(self, transformer, pooling): super().__init__() self.transformer = transformer self.pooling = pooling self.encoder = nn.Sequential(self.transformer, self.pooling) @property def trainable(self) -> bool: # Defines if we want to train encoder itself, or head layer only return False @property def embedding_size(self) -> int: return self.transformer.get_word_embedding_dimension() def forward(self, batch: TensorInterchange) -> Tensor: return self.encoder(batch)["sentence_embedding"] def get_collate_fn(self) -> CollateFnType: return self.transformer.tokenize @staticmethod def _transformer_path(path: str): return os.path.join(path, "transformer") @staticmethod def _pooling_path(path: str): return os.path.join(path, "pooling") def save(self, output_path: str): transformer_path = self._transformer_path(output_path) os.makedirs(transformer_path, exist_ok=True) pooling_path = self._pooling_path(output_path) os.makedirs(pooling_path, exist_ok=True) self.transformer.save(transformer_path) self.pooling.save(pooling_path) @classmethod def load(cls, input_path: str) -> Encoder: transformer = Transformer.load(cls._transformer_path(input_path)) pooling = Pooling.load(cls._pooling_path(input_path)) return cls(transformer=transformer, pooling=pooling) ``` As you can notice, there are more methods implemented, then we've already discussed. Let's go through them now! - In `__init__` we register our pre-trained layers, similar as you do in [torch.nn.Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html) descendant. - `trainable` defines whether current `Encoder` layers should be updated during training or not. If `trainable=False`, then all layers will be frozen. - `embedding_size` is a size of encoder's output, it is required for proper `head` configuration. - `get_collate_fn` is a tricky one. Here you should return a method which prepares a batch of raw data into the input, suitable for the encoder. If `get_collate_fn` is not overridden, then the [default_collate](https://pytorch.org/docs/stable/data.html#torch.utils.data.default_collate) will be used. The remaining methods are considered self-describing. As our encoder is ready, we now are able to fill `configure_encoders`. Just insert the following code into `model.py`: ```python ... from sentence_transformers import SentenceTransformer from sentence_transformers.models import Transformer, Pooling from faq.encoder import FAQEncoder class FAQModel(TrainableModel): ... def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]: pre_trained_model = SentenceTransformer("all-MiniLM-L6-v2") transformer: Transformer = pre_trained_model[0] pooling: Pooling = pre_trained_model[1] encoder = FAQEncoder(transformer, pooling) return encoder ``` ### Data preparation Okay, we have raw data and a trainable model. But we don't know yet how to feed this data to our model. Currently, Quaterion takes two types of similarity representation - pairs and groups. The groups format assumes that all objects split into groups of similar objects. All objects inside one group are similar, and all other objects outside this group considered dissimilar to them. But in the case of pairs, we can only assume similarity between explicitly specified pairs of objects. 
We can apply any of the approaches with our data, but pairs one seems more intuitive. The format in which Similarity is represented determines which loss can be used. For example, _ContrastiveLoss_ and _MultipleNegativesRankingLoss_ works with pairs format. [SimilarityPairSample](https://quaterion.qdrant.tech/quaterion.dataset.similarity_samples.html#quaterion.dataset.similarity_samples.SimilarityPairSample) could be used to represent pairs. Let's take a look at it: ```python @dataclass class SimilarityPairSample: obj_a: Any obj_b: Any score: float = 1.0 subgroup: int = 0 ``` Here might be some questions: what `score` and `subgroup` are? Well, `score` is a measure of expected samples similarity. If you only need to specify if two samples are similar or not, you can use `1.0` and `0.0` respectively. `subgroups` parameter is required for more granular description of what negative examples could be. By default, all pairs belong the subgroup zero. That means that we would need to specify all negative examples manually. But in most cases, we can avoid this by enabling different subgroups. All objects from different subgroups will be considered as negative examples in loss, and thus it provides a way to set negative examples implicitly. With this knowledge, we now can create our `Dataset` class in `dataset.py` to feed our model: ```python import json from typing import List, Dict from torch.utils.data import Dataset from quaterion.dataset.similarity_samples import SimilarityPairSample class FAQDataset(Dataset): """Dataset class to process .jsonl files with FAQ from popular cloud providers.""" def __init__(self, dataset_path): self.dataset: List[Dict[str, str]] = self.read_dataset(dataset_path) def __getitem__(self, index) -> SimilarityPairSample: line = self.dataset[index] question = line["question"] # All questions have a unique subgroup # Meaning that all other answers are considered negative pairs subgroup = hash(question) return SimilarityPairSample( obj_a=question, obj_b=line["answer"], score=1, subgroup=subgroup ) def __len__(self): return len(self.dataset) @staticmethod def read_dataset(dataset_path) -> List[Dict[str, str]]: """Read jsonl-file into a memory.""" with open(dataset_path, "r") as fd: return [json.loads(json_line) for json_line in fd] ``` We assigned a unique subgroup for each question, so all other objects which have different question will be considered as negative examples. ### Evaluation Metric We still haven't added any metrics to the model. For this purpose Quaterion provides `configure_metrics`. We just need to override it and attach interested metrics. Quaterion has some popular retrieval metrics implemented - such as _precision @ k_ or _mean reciprocal rank_. They can be found in [quaterion.eval](https://quaterion.qdrant.tech/quaterion.eval.html) package. But there are just a few metrics, it is assumed that desirable ones will be made by user or taken from another libraries. You will probably need to inherit from `PairMetric` or `GroupMetric` to implement a new one. In `configure_metrics` we need to return a list of `AttachedMetric`. They are just wrappers around metric instances and helps to log metrics more easily. Under the hood `logging` is handled by `pytorch-lightning`. You can configure it as you want - pass required parameters as keyword arguments to `AttachedMetric`. For additional info visit [logging documentation page](https://pytorch-lightning.readthedocs.io/en/stable/extensions/logging.html) Let's add mentioned metrics for our `FAQModel`. 
Add this code to `model.py`: ```python ... from quaterion.eval.pair import RetrievalPrecision, RetrievalReciprocalRank from quaterion.eval.attached_metric import AttachedMetric class FAQModel(TrainableModel): def __init__(self, lr=10e-5, *args, **kwargs): self.lr = lr super().__init__(*args, **kwargs) ... def configure_metrics(self): return [ AttachedMetric( "RetrievalPrecision", RetrievalPrecision(k=1), prog_bar=True, on_epoch=True, ), AttachedMetric( "RetrievalReciprocalRank", RetrievalReciprocalRank(), prog_bar=True, on_epoch=True ), ] ``` ### Fast training with Cache Quaterion has one more cherry on top of the cake when it comes to non-trainable encoders. If encoders are frozen, they are deterministic and emit the exact embeddings for the same input data on each epoch. It provides a way to avoid repeated calculations and reduce training time. For this purpose Quaterion has a cache functionality. Before training starts, the cache runs one epoch to pre-calculate all embeddings with frozen encoders and then store them on a device you chose (currently CPU or GPU). Everything you need is to define which encoders are trainable or not and set cache settings. And that's it: everything else Quaterion will handle for you. To configure cache you need to override `configure_cache` method in `TrainableModel`. This method should return an instance of [CacheConfig](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheConfig). Let's add cache to our model: ```python ... from quaterion.train.cache import CacheConfig, CacheType ... class FAQModel(TrainableModel): ... def configure_caches(self) -> Optional[CacheConfig]: return CacheConfig(CacheType.AUTO) ... ``` [CacheType](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheType) determines how the cache will be stored in memory. ### Training Now we need to combine all our code together in `train.py` and launch a training process. ```python import torch import pytorch_lightning as pl from quaterion import Quaterion from quaterion.dataset import PairsSimilarityDataLoader from faq.dataset import FAQDataset def train(model, train_dataset_path, val_dataset_path, params): use_gpu = params.get("cuda", torch.cuda.is_available()) trainer = pl.Trainer( min_epochs=params.get("min_epochs", 1), max_epochs=params.get("max_epochs", 500), auto_select_gpus=use_gpu, log_every_n_steps=params.get("log_every_n_steps", 1), gpus=int(use_gpu), ) train_dataset = FAQDataset(train_dataset_path) val_dataset = FAQDataset(val_dataset_path) train_dataloader = PairsSimilarityDataLoader( train_dataset, batch_size=1024 ) val_dataloader = PairsSimilarityDataLoader( val_dataset, batch_size=1024 ) Quaterion.fit(model, trainer, train_dataloader, val_dataloader) if __name__ == "__main__": import os from pytorch_lightning import seed_everything from faq.model import FAQModel from faq.config import DATA_DIR, ROOT_DIR seed_everything(42, workers=True) faq_model = FAQModel() train_path = os.path.join( DATA_DIR, "train_cloud_faq_dataset.jsonl" ) val_path = os.path.join( DATA_DIR, "val_cloud_faq_dataset.jsonl" ) train(faq_model, train_path, val_path, {}) faq_model.save_servable(os.path.join(ROOT_DIR, "servable")) ``` Here are a couple of unseen classes, `PairsSimilarityDataLoader`, which is a native dataloader for `SimilarityPairSample` objects, and `Quaterion` is an entry point to the training process. 
### Dataset-wise evaluation Up to this moment we've calculated only batch-wise metrics. Such metrics can fluctuate a lot depending on a batch size and can be misleading. It might be helpful if we can calculate a metric on a whole dataset or some large part of it. Raw data may consume a huge amount of memory, and usually we can't fit it into one batch. Embeddings, on the contrary, most probably will consume less. That's where `Evaluator` enters the scene. At first, having dataset of `SimilaritySample`, `Evaluator` encodes it via `SimilarityModel` and compute corresponding labels. After that, it calculates a metric value, which could be more representative than batch-wise ones. However, you still can find yourself in a situation where evaluation becomes too slow, or there is no enough space left in the memory. A bottleneck might be a squared distance matrix, which one needs to calculate to compute a retrieval metric. You can mitigate this bottleneck by calculating a rectangle matrix with reduced size. `Evaluator` accepts `sampler` with a sample size to select only specified amount of embeddings. If sample size is not specified, evaluation is performed on all embeddings. Fewer words! Let's add evaluator to our code and finish `train.py`. ```python ... from quaterion.eval.evaluator import Evaluator from quaterion.eval.pair import RetrievalReciprocalRank, RetrievalPrecision from quaterion.eval.samplers.pair_sampler import PairSampler ... def train(model, train_dataset_path, val_dataset_path, params): ... metrics = { "rrk": RetrievalReciprocalRank(), "rp@1": RetrievalPrecision(k=1) } sampler = PairSampler() evaluator = Evaluator(metrics, sampler) results = Quaterion.evaluate(evaluator, val_dataset, model.model) print(f"results: {results}") ``` ### Train Results At this point we can train our model, I do it via `python3 -m faq.train`. <div class="table-responsive"> |epoch|train_precision@1|train_reciprocal_rank|val_precision@1|val_reciprocal_rank| |-----|-----------------|---------------------|---------------|-------------------| |0 |0.650 |0.732 |0.659 |0.741 | |100 |0.665 |0.746 |0.673 |0.754 | |200 |0.677 |0.757 |0.682 |0.763 | |300 |0.686 |0.765 |0.688 |0.768 | |400 |0.695 |0.772 |0.694 |0.773 | |500 |0.701 |0.778 |0.700 |0.777 | </div> Results obtained with `Evaluator`: <div class="table-responsive"> | precision@1 | reciprocal_rank | |-------------|-----------------| | 0.577 | 0.675 | </div> After training all the metrics have been increased. And this training was done in just 3 minutes on a single gpu! There is no overfitting and the results are steadily growing, although I think there is still room for improvement and experimentation. ## Model serving As you could already notice, Quaterion framework is split into two separate libraries: `quaterion` and [quaterion-models](https://quaterion-models.qdrant.tech/). The former one contains training related stuff like losses, cache, `pytorch-lightning` dependency, etc. While the latter one contains only modules necessary for serving: encoders, heads and `SimilarityModel` itself. The reasons for this separation are: - less amount of entities you need to operate in a production environment - reduced memory footprint It is essential to isolate training dependencies from the serving environment cause the training step is usually more complicated. Training dependencies are quickly going out of control, significantly slowing down the deployment and serving timings and increasing unnecessary resource usage. 
The very last row of `train.py` - `faq_model.save_servable(...)` saves encoders and the model in a fashion that eliminates all Quaterion dependencies and stores only the most necessary data to run a model in production. In `serve.py` we load and encode all the answers and then look for the closest vectors to the questions we are interested in: ```python import os import json import torch from quaterion_models.model import SimilarityModel from quaterion.distances import Distance from faq.config import DATA_DIR, ROOT_DIR if __name__ == "__main__": device = "cuda:0" if torch.cuda.is_available() else "cpu" model = SimilarityModel.load(os.path.join(ROOT_DIR, "servable")) model.to(device) dataset_path = os.path.join(DATA_DIR, "val_cloud_faq_dataset.jsonl") with open(dataset_path) as fd: answers = [json.loads(json_line)["answer"] for json_line in fd] # everything is ready, let's encode our answers answer_embeddings = model.encode(answers, to_numpy=False) # Some prepared questions and answers to ensure that our model works as intended questions = [ "what is the pricing of aws lambda functions powered by aws graviton2 processors?", "can i run a cluster or job for a long time?", "what is the dell open manage system administrator suite (omsa)?", "what are the differences between the event streams standard and event streams enterprise plans?", ] ground_truth_answers = [ "aws lambda functions powered by aws graviton2 processors are 20% cheaper compared to x86-based lambda functions", "yes, you can run a cluster for as long as is required", "omsa enables you to perform certain hardware configuration tasks and to monitor the hardware directly via the operating system", "to find out more information about the different event streams plans, see choosing your plan", ] # encode our questions and find the closest to them answer embeddings question_embeddings = model.encode(questions, to_numpy=False) distance = Distance.get_by_name(Distance.COSINE) question_answers_distances = distance.distance_matrix( question_embeddings, answer_embeddings ) answers_indices = question_answers_distances.min(dim=1)[1] for q_ind, a_ind in enumerate(answers_indices): print("Q:", questions[q_ind]) print("A:", answers[a_ind], end="\n\n") assert ( answers[a_ind] == ground_truth_answers[q_ind] ), f"<{answers[a_ind]}> != <{ground_truth_answers[q_ind]}>" ``` We stored our collection of answer embeddings in memory and perform search directly in Python. For production purposes, it's better to use some sort of vector search engine like [Qdrant](https://github.com/qdrant/qdrant). It provides durability, speed boost, and a bunch of other features. So far, we've implemented a whole training process, prepared model for serving and even applied a trained model today with `Quaterion`. Thank you for your time and attention! I hope you enjoyed this huge tutorial and will use `Quaterion` for your similarity learning projects. All ready to use code can be found [here](https://github.com/qdrant/demo-cloud-faq/tree/tutorial). Stay tuned!:)
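As a small postscript to the serving section above: if you prefer to keep the answer embeddings in a vector search engine rather than in memory, a minimal Qdrant sketch could look like the following. This is illustrative code, not part of the demo repository: the collection name and URL are placeholders, a local Qdrant instance is assumed to be running, and the `answers`, `answer_embeddings` and `question_embeddings` variables come from `serve.py` above.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="http://localhost:6333")

# Create a collection sized to our embeddings and upload the answers
client.create_collection(
    collection_name="faq_answers",  # illustrative name
    vectors_config=VectorParams(
        size=answer_embeddings.shape[1], distance=Distance.COSINE
    ),
)
client.upsert(
    collection_name="faq_answers",
    points=[
        PointStruct(id=idx, vector=embedding.tolist(), payload={"answer": answer})
        for idx, (embedding, answer) in enumerate(zip(answer_embeddings, answers))
    ],
)

# Query with the first question embedding and print the closest answer
hits = client.search(
    collection_name="faq_answers",
    query_vector=question_embeddings[0].tolist(),
    limit=1,
)
print(hits[0].payload["answer"])
```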
qdrant-landing/content/articles/fastembed.md
--- title: "FastEmbed: Fast and Lightweight Embedding Generation for Text" short_description: "FastEmbed: Quantized Embedding models for fast CPU Generation" description: "FastEmbed is a Python library engineered for speed, efficiency, and accuracy" social_preview_image: /articles_data/fastembed/preview/social_preview.jpg small_preview_image: /articles_data/fastembed/preview/lightning.svg preview_dir: /articles_data/fastembed/preview weight: -60 author: Nirant Kasliwal author_link: https://nirantk.com/about/ date: 2023-10-18T10:00:00+03:00 draft: false keywords: - vector search - embedding models - Flag Embedding - OpenAI Ada - NLP - embeddings - ONNX Runtime - quantized embedding model --- Data Science and Machine Learning practitioners often find themselves navigating through a labyrinth of models, libraries, and frameworks. Which model to choose, what embedding size, how to approach tokenizing, these are just some questions you are faced with when starting your work. We understood how, for many data scientists, they wanted an easier and intuitive means to do their embedding work. This is why we built FastEmbed (docs: https://qdrant.github.io/fastembed/) —a Python library engineered for speed, efficiency, and above all, usability. We have created easy to use default workflows, handling the 80% use cases in NLP embedding. ### Current State of Affairs for Generating Embeddings Usually you make embedding by utilizing PyTorch or TensorFlow models under the hood. But using these libraries comes at a cost in terms of ease of use and computational speed. This is at least in part because these are built for both: model inference and improvement e.g. via fine-tuning. To tackle these problems we built a small library focused on the task of quickly and efficiently creating text embeddings. We also decided to start with only a small sample of best in class transformer models. By keeping it small and focused on a particular use case, we could make our library focused without all the extraneous dependencies. We ship with limited models, quantize the model weights and seamlessly integrate them with the ONNX Runtime. FastEmbed strikes a balance between inference time, resource utilization and performance (recall/accuracy). ### Quick Example Here is an example of how simple we have made embedding text documents: ```python documents: List[str] = [ "Hello, World!", "fastembed is supported by and maintained by Qdrant." ]  embedding_model = DefaultEmbedding()  embeddings: List[np.ndarray] = list(embedding_model.embed(documents)) ``` These 3 lines of code do a lot of heavy lifting for you: They download the quantized model, load it using ONNXRuntime, and then run a batched embedding creation of your documents. ### Code Walkthrough Let’s delve into a more advanced example code snippet line-by-line: ```python from fastembed.embedding import DefaultEmbedding ``` Here, we import the FlagEmbedding class from FastEmbed and alias it as Embedding. This is the core class responsible for generating embeddings based on your chosen text model. This is also the class which you can import directly as DefaultEmbedding which is [BAAI/bge-small-en-v1.5](https://huggingface.co/baai/bge-small-en-v1.5) ```python documents: List[str] = [ "passage: Hello, World!", "query: How is the World?", "passage: This is an example passage.", "fastembed is supported by and maintained by Qdrant." ] ``` In this list called documents, we define four text strings that we want to convert into embeddings. 
Note the use of prefixes “passage” and “query” to differentiate the types of embeddings to be generated. This is inherited from the cross-encoder implementation of the BAAI/bge series of models themselves. This is particularly useful for retrieval and we strongly recommend using this as well. The use of text prefixes like “query” and “passage” isn’t merely syntactic sugar; it informs the algorithm on how to treat the text for embedding generation. A “query” prefix often triggers the model to generate embeddings that are optimized for similarity comparisons, while “passage” embeddings are fine-tuned for contextual understanding. If you omit the prefix, the default behavior is applied, although specifying it is recommended for more nuanced results. Next, we initialize the Embedding model with the default model: [BAAI/bge-small-en-v1.5](https://huggingface.co/baai/bge-small-en-v1.5). ```python embedding_model = DefaultEmbedding() ``` The default model and several other models have a context window of maximum 512 tokens. This maximum limit comes from the embedding model training and design itself.If you'd like to embed sequences larger than that, we'd recommend using some pooling strategy to get a single vector out of the sequence. For example, you can use the mean of the embeddings of different chunks of a document. This is also what the [SBERT Paper recommends](https://lilianweng.github.io/posts/2021-05-31-contrastive/#sentence-bert) This model strikes a balance between speed and accuracy, ideal for real-world applications. ```python embeddings: List[np.ndarray] = list(embedding_model.embed(documents)) ``` Finally, we call the `embed()` method on our embedding_model object, passing in the documents list. The method returns a Python generator, so we convert it to a list to get all the embeddings. These embeddings are NumPy arrays, optimized for fast mathematical operations. The `embed()` method returns a list of NumPy arrays, each corresponding to the embedding of a document in your original documents list. The dimensions of these arrays are determined by the model you chose e.g. for “BAAI/bge-small-en-v1.5” it’s a 384-dimensional vector. You can easily parse these NumPy arrays for any downstream application—be it clustering, similarity comparison, or feeding them into a machine learning model for further analysis. ## Key Features FastEmbed is built for inference speed, without sacrificing (too much) performance: 1. 50% faster than PyTorch Transformers 2. Better performance than Sentence Transformers and OpenAI Ada-002 3. Cosine similarity of quantized and original model vectors is 0.92 We use `BAAI/bge-small-en-v1.5` as our DefaultEmbedding, hence we've chosen that for comparison: ![](/articles_data/fastembed/throughput.png) ## Under the Hood **Quantized Models**: We quantize the models for CPU (and Mac Metal) – giving you the best buck for your compute model. Our default model is so small, you can run this in AWS Lambda if you’d like! Shout out to Huggingface's [Optimum](https://github.com/huggingface/optimum) – which made it easier to quantize models. **Reduced Installation Time**: FastEmbed sets itself apart by maintaining a low minimum RAM/Disk usage. It’s designed to be agile and fast, useful for businesses looking to integrate text embedding for production usage. For FastEmbed, the list of dependencies is refreshingly brief: > - onnx: Version ^1.11 – We’ll try to drop this also in the future if we can! 
> - onnxruntime: Version ^1.15 > - tqdm: Version ^4.65 – used only at Download > - requests: Version ^2.31 – used only at Download > - tokenizers: Version ^0.13 This minimized list serves two purposes. First, it significantly reduces the installation time, allowing for quicker deployments. Second, it limits the amount of disk space required, making it a viable option even for environments with storage limitations. Notably absent from the dependency list are bulky libraries like PyTorch, and there’s no requirement for CUDA drivers. This is intentional. FastEmbed is engineered to deliver optimal performance right on your CPU, eliminating the need for specialized hardware or complex setups. **ONNXRuntime**: The ONNXRuntime gives us the ability to support multiple providers. The quantization we do is limited for CPU (Intel), but we intend to support GPU versions of the same in future as well.  This allows for greater customization and optimization, further aligning with your specific performance and computational requirements. ## Current Models We’ve started with a small set of supported models: All the models we support are [quantized](https://pytorch.org/docs/stable/quantization.html) to enable even faster computation! If you're using FastEmbed and you've got ideas or need certain features, feel free to let us know. Just drop an issue on our GitHub page. That's where we look first when we're deciding what to work on next. Here's where you can do it: [FastEmbed GitHub Issues](https://github.com/qdrant/fastembed/issues). When it comes to FastEmbed's DefaultEmbedding model, we're committed to supporting the best Open Source models. If anything changes, you'll see a new version number pop up, like going from 0.0.6 to 0.1. So, it's a good idea to lock in the FastEmbed version you're using to avoid surprises. ## Usage with Qdrant Qdrant is a Vector Store, offering a comprehensive, efficient, and scalable solution for modern machine learning and AI applications. Whether you are dealing with billions of data points, require a low latency performant vector solution, or specialized quantization methods – [Qdrant is engineered](/documentation/overview/) to meet those demands head-on. The fusion of FastEmbed with Qdrant’s vector store capabilities enables a transparent workflow for seamless embedding generation, storage, and retrieval. This simplifies the API design — while still giving you the flexibility to make significant changes e.g. you can use FastEmbed to make your own embedding other than the DefaultEmbedding and use that with Qdrant. Below is a detailed guide on how to get started with FastEmbed in conjunction with Qdrant. ### Installation Before diving into the code, the initial step involves installing the Qdrant Client along with the FastEmbed library. This can be done using pip: ``` pip install qdrant-client[fastembed] ``` For those using zsh as their shell, you might encounter syntax issues. In such cases, wrap the package name in quotes: ``` pip install 'qdrant-client[fastembed]' ``` ### Initializing the Qdrant Client After successful installation, the next step involves initializing the Qdrant Client. 
This can be done either in-memory or by specifying a database path: ```python from qdrant_client import QdrantClient # Initialize the client client = QdrantClient(":memory:")  # or QdrantClient(path="path/to/db") ``` ### Preparing Documents, Metadata, and IDs Once the client is initialized, prepare the text documents you wish to embed, along with any associated metadata and unique IDs: ```python docs = [ "Qdrant has Langchain integrations", "Qdrant also has Llama Index integrations" ] metadata = [ {"source": "Langchain-docs"}, {"source": "LlamaIndex-docs"}, ] ids = [42, 2] ``` Note that the add method we’ll use is overloaded: If you skip the ids, we’ll generate those for you. metadata is obviously optional. So, you can simply use this too: ```python docs = [ "Qdrant has Langchain integrations", "Qdrant also has Llama Index integrations" ] ``` ### Adding Documents to a Collection With your documents, metadata, and IDs ready, you can proceed to add these to a specified collection within Qdrant using the add method: ```python client.add( collection_name="demo_collection", documents=docs, metadata=metadata, ids=ids ) ``` Inside this function, Qdrant Client uses FastEmbed to make the text embedding, generate ids if they’re missing and then adding them to the index with metadata. This uses the DefaultEmbedding model: [BAAI/bge-small-en-v1.5](https://huggingface.co/baai/bge-small-en-v1.5) ![INDEX TIME: Sequence Diagram for Qdrant and FastEmbed](/articles_data/fastembed/generate-embeddings-from-docs.png) ### Performing Queries Finally, you can perform queries on your stored documents. Qdrant offers a robust querying capability, and the query results can be easily retrieved as follows: ```python search_result = client.query( collection_name="demo_collection", query_text="This is a query document" ) print(search_result) ``` Behind the scenes, we first convert the query_text to the embedding and use that to query the vector index. ![QUERY TIME: Sequence Diagram for Qdrant and FastEmbed integration](/articles_data/fastembed/generate-embeddings-query.png) By following these steps, you effectively utilize the combined capabilities of FastEmbed and Qdrant, thereby streamlining your embedding generation and retrieval tasks. Qdrant is designed to handle large-scale datasets with billions of data points. Its architecture employs techniques like binary and scalar quantization for efficient storage and retrieval. When you inject FastEmbed’s CPU-first design and lightweight nature into this equation, you end up with a system that can scale seamlessly while maintaining low latency. ## Summary If you're curious about how FastEmbed and Qdrant can make your search tasks a breeze, why not take it for a spin? You get a real feel for what it can do. Here are two easy ways to get started: 1. **Cloud**: Get started with a free plan on the [Qdrant Cloud](https://qdrant.to/cloud?utm_source=qdrant&utm_medium=website&utm_campaign=fastembed&utm_content=article). 2. **Docker Container**: If you're the DIY type, you can set everything up on your own machine. Here's a quick guide to help you out: [Quick Start with Docker](/documentation/quick-start/?utm_source=qdrant&utm_medium=website&utm_campaign=fastembed&utm_content=article). So, go ahead, take it for a test drive. We're excited to hear what you think! Lastly, If you find FastEmbed useful and want to keep up with what we're doing, giving our GitHub repo a star would mean a lot to us. Here's the link to [star the repository](https://github.com/qdrant/fastembed). 
If you ever have questions about FastEmbed, please ask them on the Qdrant Discord: [https://discord.gg/Qy6HCJK9Dc](https://discord.gg/Qy6HCJK9Dc)
qdrant-landing/content/articles/filtrable-hnsw.md
---
title: Filtrable HNSW
short_description: How to make ANN search with custom filtering?
description: How to make ANN search with custom filtering? Search in selected subsets without losing the results.
# external_link: https://blog.vasnetsov.com/posts/categorical-hnsw/
social_preview_image: /articles_data/filtrable-hnsw/social_preview.jpg
preview_dir: /articles_data/filtrable-hnsw/preview
small_preview_image: /articles_data/filtrable-hnsw/global-network.svg
weight: 60
date: 2019-11-24T22:44:08+03:00
author: Andrei Vasnetsov
author_link: https://blog.vasnetsov.com/
# aliases: [ /articles/filtrable-hnsw/ ]
---

If you need to find some similar objects in vector space, provided e.g. by embeddings or a matching NN, you can choose among a variety of libraries: Annoy, FAISS or NMSLib. All of them will give you a fast approximate neighbors search within almost any space.

But what if you need to introduce some constraints in your search? For example, you want to search only for products in some category or select the most similar customer of a particular brand. I did not find any simple solutions for this. There are several discussions like [this](https://github.com/spotify/annoy/issues/263), but they only suggest iterating over the top search results and applying the conditions afterwards.

Let's see if we could somehow modify any of the ANN algorithms to be able to apply constraints during the search itself.

Annoy builds a tree index over random projections. A tree index implies that we will meet the same problem that appears in relational databases: if field indexes were built independently, then it is possible to use only one of them at a time. Since nobody has solved this problem before, it seems that there is no easy approach.

There is another algorithm which shows top results on the [benchmark](https://github.com/erikbern/ann-benchmarks). It is called HNSW, which stands for Hierarchical Navigable Small World. The [original paper](https://arxiv.org/abs/1603.09320) is well written and very easy to read, so I will only give the main idea here. We need to build a navigation graph among all indexed points so that a greedy search on this graph will lead us to the nearest point. This graph is constructed by sequentially adding points that are connected by a fixed number of edges to previously added points. In the resulting graph, the number of edges at each point does not exceed a given threshold $m$, and the edges always connect to the nearest considered points.

![NSW](/articles_data/filtrable-hnsw/NSW.png)

### How can we modify it?

What if we simply apply the filter criteria to the nodes of this graph and use in the greedy search only those that meet these criteria? It turns out that even with this naive modification the algorithm can cover some use cases.

One such case is when your criteria do not correlate with vector semantics. For example, you use a vector search for clothing names and want to filter out some sizes. In this case, the nodes will be uniformly filtered out from the entire cluster structure. Therefore, the theoretical conclusions obtained in [percolation theory](https://en.wikipedia.org/wiki/Percolation_theory) become applicable:

> Percolation is related to the robustness of the graph (called also network). Given a random graph of $n$ nodes and an average degree $\langle k\rangle$. Next we remove randomly a fraction $1-p$ of nodes and leave only a fraction $p$.
> There exists a critical percolation threshold $p_c = \frac{1}{\langle k\rangle}$ below which the network becomes fragmented while above $p_c$ a giant connected component exists.

This statement is also confirmed by experiments:

{{< figure src=/articles_data/filtrable-hnsw/exp_connectivity_glove_m0.png caption="Dependency of connectivity on the number of edges" >}}

{{< figure src=/articles_data/filtrable-hnsw/exp_connectivity_glove_num_elements.png caption="Dependency of connectivity on the number of points (no dependency)." >}}

There is a clear threshold at which the search begins to fail. This threshold is due to the decomposition of the graph into small connected components. The graphs also show that this threshold can be shifted by increasing the $m$ parameter of the algorithm, which is responsible for the degree of the nodes.

Let's consider some other filtering conditions we might want to apply in the search:

* Categorical filtering
  * Select only points in a specific category
  * Select points which belong to a specific subset of categories
  * Select points with a specific set of labels
* Numerical range
* Selection within some geographical region

In the first case, we can guarantee that the HNSW graph will be connected simply by creating additional edges inside each category separately, using the same graph construction algorithm, and then combining them into the original graph. In this case, the total number of edges will increase by no more than 2 times, regardless of the number of categories.

The second case is a little harder. A connection may be lost between two categories if they lie in different clusters.

![category clusters](/articles_data/filtrable-hnsw/hnsw_graph_category.png)

The idea here is to build the same navigation graph, but between categories rather than individual nodes. The distance between two categories might be defined as the distance between the category entry points (or, for more precision, as the average distance over a random sample). Now we can estimate the expected graph connectivity by the number of excluded categories, not nodes. It still does not guarantee that two random categories will be connected, but it allows us to switch to multiple searches, one per category, if the connectivity threshold is passed. In some cases, multiple searches can even be faster if you take advantage of parallel processing.

{{< figure src=/articles_data/filtrable-hnsw/exp_random_groups.png caption="Dependency of connectivity on the random categories included in search" >}}

The third case might be resolved the same way it is resolved in classical databases. Depending on the ratio of the labeled subsets' sizes, we can go for one of the following scenarios:

* if at least one subset is small: perform the search over the label containing the smallest subset and then filter points afterwards.
* if large subsets give a large intersection: perform a regular search with constraints, expecting that the intersection size fits the connectivity threshold.
* if large subsets give a small intersection: perform a linear search over the intersection, expecting that it is small enough to fit into the time frame.

The numerical range case can be reduced to the previous one if we split the numerical range into buckets containing an equal amount of points. Next, we also connect neighboring buckets to achieve graph connectivity. We still need to filter out some results which are present in border buckets but do not fulfill the actual constraints, but their amount can be regulated by the size of the buckets.

The geographical case is a lot like the numerical one.
A usual geographical search involves a [geohash](https://en.wikipedia.org/wiki/Geohash), which maps any geo-point to a fixed-length identifier.

![Geohash example](/articles_data/filtrable-hnsw/geohash.png)

We can use these identifiers as categories and additionally make connections between neighboring geohashes. This will ensure that any selected geographical region also contains a connected HNSW graph.

## Conclusion

It is possible to enhance the HNSW algorithm so that it supports filtering points during the first search phase. Filtering can be carried out on the basis of belonging to categories, which in turn generalizes to such popular cases as numerical ranges and geo. The experiments were carried out with a modified [python implementation](https://github.com/generall/hnsw-python) of the algorithm, but real production systems require a much faster version, like [NMSLib](https://github.com/nmslib/nmslib).
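To make the naive modification discussed in this article more tangible, here is a toy, self-contained sketch of a greedy graph search that simply refuses to traverse nodes failing a filter predicate. It is an illustration of the idea only, not Qdrant's or hnsw-python's actual implementation; the graph representation, parameter names and the `ef` bound are assumptions made for the example.

```python
import heapq

import numpy as np


def filtered_search(graph, vectors, query, entry_point, condition, ef=32):
    """Greedy best-first search over a proximity graph that skips nodes
    failing `condition`. `graph` maps a node id to a list of neighbor ids,
    `vectors` is a (num_points, dim) array. The entry point is assumed
    to satisfy the condition."""

    def dist(node):
        return float(np.linalg.norm(vectors[node] - query))

    visited = {entry_point}
    candidates = [(dist(entry_point), entry_point)]  # min-heap: closest first
    results = [(-dist(entry_point), entry_point)]    # max-heap of the best `ef` so far

    while candidates:
        d, node = heapq.heappop(candidates)
        # Stop once the closest remaining candidate cannot improve the result set
        if len(results) >= ef and d > -results[0][0]:
            break
        for neighbor in graph[node]:
            if neighbor in visited or not condition(neighbor):
                continue  # filtered-out nodes are never traversed
            visited.add(neighbor)
            nd = dist(neighbor)
            if len(results) < ef or nd < -results[0][0]:
                heapq.heappush(candidates, (nd, neighbor))
                heapq.heappush(results, (-nd, neighbor))
                if len(results) > ef:
                    heapq.heappop(results)

    return [node for _, node in sorted((-d, n) for d, n in results)]
```

Removing nodes from traversal in this way is exactly what percolation theory describes: filter out too many of them and the remaining graph falls apart into disconnected components, which is where the search starts to fail.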
qdrant-landing/content/articles/food-discovery-demo.md
--- title: Food Discovery Demo short_description: Feeling hungry? Find the perfect meal with Qdrant's multimodal semantic search. description: Feeling hungry? Find the perfect meal with Qdrant's multimodal semantic search. preview_dir: /articles_data/food-discovery-demo/preview social_preview_image: /articles_data/food-discovery-demo/preview/social_preview.png small_preview_image: /articles_data/food-discovery-demo/icon.svg weight: -30 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-09-05T11:32:00.000Z --- Not every search journey begins with a specific destination in mind. Sometimes, you just want to explore and see what’s out there and what you might like. This is especially true when it comes to food. You might be craving something sweet, but you don’t know what. You might be also looking for a new dish to try, and you just want to see the options available. In these cases, it's impossible to express your needs in a textual query, as the thing you are looking for is not yet defined. Qdrant's semantic search for images is useful when you have a hard time expressing your tastes in words. ## General architecture We are happy to announce a refreshed version of our [Food Discovery Demo](https://food-discovery.qdrant.tech/). This time available as an open source project, so you can easily deploy it on your own and play with it. If you prefer to dive into the source code directly, then feel free to check out the [GitHub repository ](https://github.com/qdrant/demo-food-discovery/). Otherwise, read on to learn more about the demo and how it works! In general, our application consists of three parts: a [FastAPI](https://fastapi.tiangolo.com/) backend, a [React](https://react.dev/) frontend, and a [Qdrant](/) instance. The architecture diagram below shows how these components interact with each other: ![Archtecture diagram](/articles_data/food-discovery-demo/architecture-diagram.png) ## Why did we use a CLIP model? CLIP is a neural network that can be used to encode both images and texts into vectors. And more importantly, both images and texts are vectorized into the same latent space, so we can compare them directly. This lets you perform semantic search on images using text queries and the other way around. For example, if you search for “flat bread with toppings”, you will get images of pizza. Or if you search for “pizza”, you will get images of some flat bread with toppings, even if they were not labeled as “pizza”. This is because CLIP embeddings capture the semantics of the images and texts and can find the similarities between them no matter the wording. ![CLIP model](/articles_data/food-discovery-demo/clip-model.png) CLIP is available in many different ways. We used the pretrained `clip-ViT-B-32` model available in the [Sentence-Transformers](https://www.sbert.net/examples/applications/image-search/README.html) library, as this is the easiest way to get started. ## The dataset The demo is based on the [Wolt](https://wolt.com/) dataset. It contains over 2M images of dishes from different restaurants along with some additional metadata. 
This is how a payload for a single dish looks like: ```json { "cafe": { "address": "VGX7+6R2 Vecchia Napoli, Valletta", "categories": ["italian", "pasta", "pizza", "burgers", "mediterranean"], "location": {"lat": 35.8980154, "lon": 14.5145106}, "menu_id": "610936a4ee8ea7a56f4a372a", "name": "Vecchia Napoli Is-Suq Tal-Belt", "rating": 9, "slug": "vecchia-napoli-skyparks-suq-tal-belt" }, "description": "Tomato sauce, mozzarella fior di latte, crispy guanciale, Pecorino Romano cheese and a hint of chilli", "image": "https://wolt-menu-images-cdn.wolt.com/menu-images/610936a4ee8ea7a56f4a372a/005dfeb2-e734-11ec-b667-ced7a78a5abd_l_amatriciana_pizza_joel_gueller1.jpeg", "name": "L'Amatriciana" } ``` Processing this amount of records takes some time, so we precomputed the CLIP embeddings, stored them in a Qdrant collection and exported the collection as a snapshot. You may [download it here](https://storage.googleapis.com/common-datasets-snapshots/wolt-clip-ViT-B-32.snapshot). ## Different search modes The FastAPI backend [exposes just a single endpoint](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/main.py#L37), however it handles multiple scenarios. Let's dive into them one by one and understand why they are needed. ### Cold start Recommendation systems struggle with a cold start problem. When a new user joins the system, there is no data about their preferences, so it’s hard to recommend anything. The same applies to our demo. When you open it, you will see a random selection of dishes, and it changes every time you refresh the page. Internally, the demo [chooses some random points](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/discovery.py#L70) in the vector space. ![Random points selection](/articles_data/food-discovery-demo/random-results.png) That procedure should result in returning diverse results, so we have a higher chance of showing something interesting to the user. ### Textual search Since the demo suffers from the cold start problem, we implemented a textual search mode that is useful to start exploring the data. You can type in any text query by clicking a search icon in the top right corner. The demo will use the CLIP model to encode the query into a vector and then search for the nearest neighbors in the vector space. ![Random points selection](/articles_data/food-discovery-demo/textual-search.png) This is implemented as [a group search query to Qdrant](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/discovery.py#L44). We didn't use a simple search, but performed grouping by the restaurant to get more diverse results. [Search groups](/documentation/concepts/search/#search-groups) is a mechanism similar to `GROUP BY` clause in SQL, and it's useful when you want to get a specific number of result per group (in our case just one). ```python import settings # Encode query into a vector, model is an instance of # sentence_transformers.SentenceTransformer that loaded CLIP model query_vector = model.encode(query).tolist() # Search for nearest neighbors, client is an instance of # qdrant_client.QdrantClient that has to be initialized before response = client.search_groups( settings.QDRANT_COLLECTION, query_vector=query_vector, group_by=settings.GROUP_BY_FIELD, limit=search_query.limit, ) ``` ### Exploring the results The main feature of the demo is the ability to explore the space of the dishes. 
You can click on any of them to see more details, but first of all you can like or dislike it, and the demo will update the search results accordingly. ![Recommendation results](/articles_data/food-discovery-demo/recommendation-results.png) #### Negative feedback only Qdrant [Recommendation API](/documentation/concepts/search/#recommendation-api) needs at least one positive example to work. However, in our demo we want to be able to provide only negative examples. This is because we want to be able to say “I don’t like this dish” without having to like anything first. To achieve this, we use a trick. We negate the vectors of the disliked dishes and use their mean as a query. This way, the disliked dishes will be pushed away from the search results. **This works because the cosine distance is based on the angle between two vectors, and the angle between a vector and its negation is 180 degrees.** ![CLIP model](/articles_data/food-discovery-demo/negated-vector.png) Food Discovery Demo [implements that trick](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/discovery.py#L122) by calling Qdrant twice. Initially, we use the [Scroll API](/documentation/concepts/points/#scroll-points) to find disliked items, and then calculate a negated mean of all their vectors. That allows using the [Search Groups API](/documentation/concepts/search/#search-groups) to find the nearest neighbors of the negated mean vector. ```python import numpy as np # Retrieve the disliked points based on their ids disliked_points, _ = client.scroll( settings.QDRANT_COLLECTION, scroll_filter=models.Filter( must=[ models.HasIdCondition(has_id=search_query.negative), ] ), with_vectors=True, ) # Calculate a mean vector of disliked points disliked_vectors = np.array([point.vector for point in disliked_points]) mean_vector = np.mean(disliked_vectors, axis=0) negated_vector = -mean_vector # Search for nearest neighbors of the negated mean vector response = client.search_groups( settings.QDRANT_COLLECTION, query_vector=negated_vector.tolist(), group_by=settings.GROUP_BY_FIELD, limit=search_query.limit, ) ``` #### Positive and negative feedback Since the [Recommendation API](/documentation/concepts/search/#recommendation-api) requires at least one positive example, we can use it only when the user has liked at least one dish. We could theoretically use the same trick as above and negate the disliked dishes, but it would be a bit weird, as Qdrant has that feature already built-in, and we can call it just once to do the job. It's always better to perform the search server-side. Thus, in this case [we just call the Qdrant server with a list of positive and negative examples](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/discovery.py#L166), so it can find some points which are close to the positive examples and far from the negative ones. ```python response = client.recommend_groups( settings.QDRANT_COLLECTION, positive=search_query.positive, negative=search_query.negative, group_by=settings.GROUP_BY_FIELD, limit=search_query.limit, ) ``` From the user perspective nothing changes comparing to the previous case. ### Location-based search Last but not least, location plays an important role in the food discovery process. You are definitely looking for something you can find nearby, not on the other side of the globe. Therefore, your current location can be toggled as a filtering condition. 
You can enable it by clicking on “Find near me” icon in the top right. This way you can find the best pizza in your neighborhood, not in the whole world. Qdrant [geo radius filter](/documentation/concepts/filtering/#geo-radius) is a perfect choice for this. It lets you filter the results by distance from a given point. ```python from qdrant_client import models # Create a geo radius filter query_filter = models.Filter( must=[ models.FieldCondition( key="cafe.location", geo_radius=models.GeoRadius( center=models.GeoPoint( lon=location.longitude, lat=location.latitude, ), radius=location.radius_km * 1000, ), ) ] ) ``` Such a filter needs [a payload index](/documentation/concepts/indexing/#payload-index) to work efficiently, and it was created on a collection we used to create the snapshot. When you import it into your instance, the index will be already there. ## Using the demo The Food Discovery Demo [is available online](https://food-discovery.qdrant.tech/), but if you prefer to run it locally, you can do it with Docker. The [README](https://github.com/qdrant/demo-food-discovery/blob/main/README.md) describes all the steps more in detail, but here is a quick start: ```bash git clone git@github.com:qdrant/demo-food-discovery.git cd demo-food-discovery # Create .env file based on .env.example docker-compose up -d ``` The demo will be available at `http://localhost:8001`, but you won't be able to search anything until you [import the snapshot into your Qdrant instance](/documentation/concepts/snapshots/#recover-via-api). If you don't want to bother with hosting a local one, you can use the [Qdrant Cloud](https://cloud.qdrant.io/) cluster. 4 GB RAM is enough to load all the 2 million entries. ## Fork and reuse Our demo is completely open-source. Feel free to fork it, update with your own dataset or adapt the application to your use case. Whether you’re looking to understand the mechanics of semantic search or to have a foundation to build a larger project, this demo can serve as a starting point. Check out the [Food Discovery Demo repository ](https://github.com/qdrant/demo-food-discovery/) to get started. If you have any questions, feel free to reach out [through Discord](https://qdrant.to/discord).
qdrant-landing/content/articles/geo-polygon-filter-gsoc.md
--- title: Google Summer of Code 2023 - Polygon Geo Filter for Qdrant Vector Database short_description: Gsoc'23 Polygon Geo Filter for Qdrant Vector Database description: A Summary of my work and experience at Qdrant's Gsoc '23. preview_dir: /articles_data/geo-polygon-filter-gsoc/preview small_preview_image: /articles_data/geo-polygon-filter-gsoc/icon.svg social_preview_image: /articles_data/geo-polygon-filter-gsoc/preview/social_preview.jpg weight: -50 author: Zein Wen author_link: https://www.linkedin.com/in/zishenwen/ date: 2023-10-12T08:00:00+03:00 draft: false keywords: - payload filtering - geo polygon - search condition - gsoc'23 --- ## Introduction Greetings, I'm Zein Wen, and I was a Google Summer of Code 2023 participant at Qdrant. I got to work with an amazing mentor, Arnaud Gourlay, on enhancing the Qdrant Geo Polygon Filter. This new feature allows users to refine their query results using polygons. As the latest addition to the Geo Filter family of radius and rectangle filters, this enhancement promises greater flexibility in querying geo data, unlocking interesting new use cases. ## Project Overview {{< figure src="/articles_data/geo-polygon-filter-gsoc/geo-filter-example.png" caption="A Use Case of Geo Filter (https://traveltime.com/blog/map-postcode-data-catchment-area)" alt="A Use Case of Geo Filter" >}} Because Qdrant is a powerful query vector database it presents immense potential for machine learning-driven applications, such as recommendation. However, the scope of vector queries alone may not always meet user requirements. Consider a scenario where you're seeking restaurant recommendations; it's not just about a list of restaurants, but those within your neighborhood. This is where the Geo Filter comes into play, enhancing query by incorporating additional filtering criteria. Up until now, Qdrant's geographic filter options were confined to circular and rectangular shapes, which may not align with the diverse boundaries found in the real world. This scenario was exactly what led to a user feature request and we decided it would be a good feature to tackle since it introduces greater capability for geo-related queries. ## Technical Challenges **1. Geo Geometry Computation** {{< figure src="/articles_data/geo-polygon-filter-gsoc/basic-concept.png" caption="Geo Space Basic Concept" alt="Geo Space Basic Concept" >}} Internally, the Geo Filter doesn't start by testing each individual geo location as this would be computationally expensive. Instead, we create a geo hash layer that [divides the world](https://en.wikipedia.org/wiki/Grid_(spatial_index)#Grid-based_spatial_indexing) into rectangles. When a spatial index is created for Qdrant entries it assigns the entry to the geohash for its location. During a query we first identify all potential geo hashes that satisfy the filters and subsequently check for location candidates within those hashes. Accomplishing this search involves two critical geometry computations: 1. determining if a polygon intersects with a rectangle 2. ascertaining if a point lies within a polygon. {{< figure src=/articles_data/geo-polygon-filter-gsoc/geo-computation-testing.png caption="Geometry Computation Testing" alt="Geometry Computation Testing" >}} While we have a geo crate (a Rust library) that provides APIs for these computations, we dug in deeper to understand the underlying algorithms and verify their accuracy. This lead us to conduct extensive testing and visualization to determine correctness. 
In addition to assessing the current crate, we also discovered that there are multiple algorithms available for these computations. We invested time in exploring different approaches, such as [winding windows](https://en.wikipedia.org/wiki/Point_in_polygon#Winding%20number%20algorithm:~:text=of%20the%20algorithm.-,Winding%20number%20algorithm,-%5Bedit%5D) and [ray casting](https://en.wikipedia.org/wiki/Point_in_polygon#Winding%20number%20algorithm:~:text=.%5B2%5D-,Ray%20casting%20algorithm,-%5Bedit%5D), to grasp their distinctions, and pave the way for future improvements. Through this process, I enjoyed honing my ability to swiftly grasp unfamiliar concepts. In addition, I needed to develop analytical strategies to dissect and draw meaningful conclusions from them. This experience has been invaluable in expanding my problem-solving toolkit. **2. Proto and JSON format design** Considerable effort was devoted to designing the ProtoBuf and JSON interfaces for this new feature. This component is directly exposed to users, requiring a consistent and user-friendly interface, which in turns help drive a a positive user experience and less code modifications in the future. Initially, we contemplated aligning our interface with the [GeoJSON](https://geojson.org/) specification, given its prominence as a standard for many geo-related APIs. However, we soon realized that the way GeoJSON defines geometries significantly differs from our current JSON and ProtoBuf coordinate definitions for our point radius and rectangular filter. As a result, we prioritized API-level consistency and user experience, opting to align the new polygon definition with all our existing definitions. In addition, we planned to develop a separate multi-polygon filter in addition to the polygon. However, after careful consideration, we recognize that, for our use case, polygon filters can achieve the same result as a multi-polygon filter. This relationship mirrors how we currently handle multiple circles or rectangles. Consequently, we deemed the multi-polygon filter redundant and would introduce unnecessary complexity to the API. Doing this work illustrated to me the challenge of navigating real-world solutions that require striking a balance between adhering to established standards and prioritizing user experience. It also was key to understanding the wisdom of focusing on developing what's truly necessary for users, without overextending our efforts. ## Outcomes **1. Capability of Deep Dive** Navigating unfamiliar code bases, concepts, APIs, and techniques is a common challenge for developers. Participating in GSoC was akin to me going from the safety of a swimming pool and right into the expanse of the ocean. Having my mentor’s support during this transition was invaluable. He provided me with numerous opportunities to independently delve into areas I had never explored before. I have grown into no longer fearing unknown technical areas, whether it's unfamiliar code, techniques, or concepts in specific domains. I've gained confidence in my ability to learn them step by step and use them to create the things I envision. **2. Always Put User in Minds** Another crucial lesson I learned is the importance of considering the user's experience and their specific use cases. While development may sometimes entail iterative processes, every aspect that directly impacts the user must be approached and executed with empathy. 
Neglecting this consideration can not only lead to functional errors but also erode users' trust due to inconsistency and confusion, which in turn leads to them no longer using my work.

**3. Speak Up and Effectively Communicate**

Finally, in the course of development, encountering differing opinions is commonplace. It's essential to remain open to others' ideas, while also possessing the resolve to communicate one's own perspective clearly. This fosters productive discussions and ultimately elevates the quality of the development process.

### Wrap up

Being selected for Google Summer of Code 2023 and collaborating with Arnaud and the other Qdrant engineers, along with all the other community members, has been a true privilege. I'm deeply grateful to those who invested their time and effort in reviewing my code, engaging in discussions about alternatives and design choices, and offering assistance when needed. Through these interactions, I've experienced firsthand the essence of open source and the culture that encourages collaboration.

This experience not only allowed me to write Rust code for a real-world product for the first time, but it also opened the door to the amazing world of open source. Without a doubt, I'm eager to continue growing alongside this community and contribute to new features and enhancements that elevate the product. I've also become an advocate for Qdrant, introducing this project to numerous coworkers and friends in the tech industry. I'm excited to witness new users and contributors emerge from within my own network!

If you want to try out my work, read the [documentation](/documentation/concepts/filtering/#geo-polygon) and then either sign up for a free [cloud account](https://cloud.qdrant.io) or download the [Docker image](https://hub.docker.com/r/qdrant/qdrant). I look forward to seeing how people are using my work in their own applications!
qdrant-landing/content/articles/hybrid-search.md
--- title: On Hybrid Search short_description: What Hybrid Search is and how to get the best of both worlds. description: What Hybrid Search is and how to get the best of both worlds. preview_dir: /articles_data/hybrid-search/preview social_preview_image: /articles_data/hybrid-search/social_preview.png small_preview_image: /articles_data/hybrid-search/icon.svg weight: 8 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-02-15T10:48:00.000Z --- There is not a single definition of hybrid search. Actually, if we use more than one search algorithm, it might be described as some sort of hybrid. Some of the most popular definitions are: 1. A combination of vector search with [attribute filtering](/documentation/filtering/). We won't dive much into details, as we like to call it just filtered vector search. 2. Vector search with keyword-based search. This one is covered in this article. 3. A mix of dense and sparse vectors. That strategy will be covered in the upcoming article. ## Why do we still need keyword search? A keyword-based search was the obvious choice for search engines in the past. It struggled with some common issues, but since we didn't have any alternatives, we had to overcome them with additional preprocessing of the documents and queries. Vector search turned out to be a breakthrough, as it has some clear advantages in the following scenarios: - 🌍 Multi-lingual & multi-modal search - 🤔 For short texts with typos and ambiguous content-dependent meanings - 👨‍🔬 Specialized domains with tuned encoder models - 📄 Document-as-a-Query similarity search It doesn't mean we do not keyword search anymore. There are also some cases in which this kind of method might be useful: - 🌐💭 Out-of-domain search. Words are just words, no matter what they mean. BM25 ranking represents the universal property of the natural language - less frequent words are more important, as they carry most of the meaning. - ⌨️💨 Search-as-you-type, when there are only a few characters types in, and we cannot use vector search yet. - 🎯🔍 Exact phrase matching when we want to find the occurrences of a specific term in the documents. That's especially useful for names of the products, people, part numbers, etc. ## Matching the tool to the task There are various cases in which we need search capabilities and each of those cases will have some different requirements. Therefore, there is not just one strategy to rule them all, and some different tools may fit us better. Text search itself might be roughly divided into multiple specializations like: - Web-scale search - documents retrieval - Fast search-as-you-type - Search over less-than-natural texts (logs, transactions, code, etc.) Each of those scenarios has a specific tool, which performs better for that specific use case. If you already expose search capabilities, then you probably have one of them in your tech stack. And we can easily combine those tools with vector search to get the best of both worlds. # The fast search: A Fallback strategy The easiest way to incorporate vector search into the existing stack is to treat it as some sort of fallback strategy. So whenever your keyword search struggle with finding proper results, you can run a semantic search to extend the results. That is especially important in cases like search-as-you-type in which a new query is fired every single time your user types the next character in. For such cases the speed of the search is crucial. Therefore, we can't use vector search on every query. 
At the same time, a simple prefix search might have bad recall. In this case, a good strategy is to use vector search only when the keyword/prefix search returns none or just a small number of results. A good candidate for this is [MeiliSearch](https://www.meilisearch.com/). It uses custom ranking rules to provide results as fast as the user can type. The pseudocode of such a strategy may look as follows:

```python
async def search(query: str):
    # Get fast results from MeiliSearch
    keyword_search_result = search_meili(query)
    # Check if there are enough results
    # or if the results are good enough for given query
    if are_results_enough(keyword_search_result, query):
        return keyword_search_result
    # Encoding takes time, but we get more results
    vector_query = encode(query)
    vector_result = search_qdrant(vector_query)
    return vector_result
```

# The precise search: The re-ranking strategy

In the case of document retrieval, we care more about the search result quality, and time is not a huge constraint. There is a bunch of search engines that specialize in full-text search which we found interesting:

- [Tantivy](https://github.com/quickwit-oss/tantivy) - a full-text indexing library written in Rust. It has great performance and a rich feature set.
- [lnx](https://github.com/lnx-search/lnx) - a young but promising project that utilizes Tantivy as a backend.
- [ZincSearch](https://github.com/zinclabs/zinc) - a project written in Go, focused on minimal resource usage and high performance.
- [Sonic](https://github.com/valeriansaliou/sonic) - a project written in Rust that uses a custom network communication protocol for fast communication between the client and the server.

All of those engines can be easily used in combination with the vector search offered by Qdrant. But exactly how to combine the results of both algorithms to achieve the best search precision might still be unclear, so we need to understand how to do it effectively. We will be using reference datasets to benchmark the search quality.

## Why not linear combination?

It's often proposed to use full-text and vector search scores to form a linear combination formula to rerank the results. It goes like this:

```final_score = 0.7 * vector_score + 0.3 * full_text_score```

However, we didn't even consider such a setup. Why? Those scores don't make the problem linearly separable. We used the BM25 score along with the cosine vector similarity as point coordinates in a 2-dimensional space. The chart shows how those points are distributed:

![A distribution of both Qdrant and BM25 scores mapped into 2D space.](/articles_data/hybrid-search/linear-combination.png)

*A distribution of both Qdrant and BM25 scores mapped into 2D space. It clearly shows relevant and non-relevant objects are not linearly separable in that space, so using a linear combination of both scores won't give us a proper hybrid search.*

Both relevant and non-relevant items are mixed. **None of the linear formulas would be able to distinguish between them.** Thus, that's not the way to solve it.

## How to approach re-ranking?

There is a common approach to re-rank the search results with a model that takes some additional factors into account. Those models are usually trained on clickstream data of a real application and tend to be very business-specific. Thus, we'll not cover them right now, as there is a more general approach. We will use so-called **cross-encoder models**.

A cross-encoder takes a pair of texts and predicts their similarity.
Unlike embedding models, cross-encoders do not compress text into vector, but uses interactions between individual tokens of both texts. In general, they are more powerful than both BM25 and vector search, but they are also way slower. That makes it feasible to use cross-encoders only for re-ranking of some preselected candidates. This is how a pseudocode for that strategy look like: ```python async def search(query: str): keyword_search = search_keyword(query) vector_search = search_qdrant(query) all_results = await asyncio.gather(keyword_search, vector_search) # parallel calls rescored = cross_encoder_rescore(query, all_results) return rescored ``` It is worth mentioning that queries to keyword search and vector search and re-scoring can be done in parallel. Cross-encoder can start scoring results as soon as the fastest search engine returns the results. ## Experiments For that benchmark, there have been 3 experiments conducted: 1. **Vector search with Qdrant** All the documents and queries are vectorized with [all-MiniLM-L6-v2](https://www.sbert.net/docs/pretrained_models.html) model, and compared with cosine similarity. 2. **Keyword-based search with BM25** All the documents are indexed by BM25 and queried with its default configuration. 3. **Vector and keyword-based candidates generation and cross-encoder reranking** Both Qdrant and BM25 provides N candidates each and [ms-marco-MiniLM-L-6-v2](https://www.sbert.net/docs/pretrained-models/ce-msmarco.html) cross encoder performs reranking on those candidates only. This is an approach that makes it possible to use the power of semantic and keyword based search together. ![The design of all the three experiments](/articles_data/hybrid-search/experiments-design.png) ### Quality metrics There are various ways of how to measure the performance of search engines, and *[Recommender Systems: Machine Learning Metrics and Business Metrics](https://neptune.ai/blog/recommender-systems-metrics)* is a great introduction to that topic. I selected the following ones: - NDCG@5, NDCG@10 - DCG@5, DCG@10 - MRR@5, MRR@10 - Precision@5, Precision@10 - Recall@5, Recall@10 Since both systems return a score for each result, we could use DCG and NDCG metrics. However, BM25 scores are not normalized be default. We performed the normalization to a range `[0, 1]` by dividing each score by the maximum score returned for that query. ### Datasets There are various benchmarks for search relevance available. Full-text search has been a strong baseline for most of them. However, there are also cases in which semantic search works better by default. For that article, I'm performing **zero shot search**, meaning our models didn't have any prior exposure to the benchmark datasets, so this is effectively an out-of-domain search. #### Home Depot [Home Depot dataset](https://www.kaggle.com/competitions/home-depot-product-search-relevance/) consists of real inventory and search queries from Home Depot's website with a relevancy score from 1 (not relevant) to 3 (highly relevant). Anna Montoya, RG, Will Cukierski. (2016). Home Depot Product Search Relevance. Kaggle. https://kaggle.com/competitions/home-depot-product-search-relevance There are over 124k products with textual descriptions in the dataset and around 74k search queries with the relevancy score assigned. For the purposes of our benchmark, relevancy scores were also normalized. #### WANDS I also selected a relatively new search relevance dataset. 
[WANDS](https://github.com/wayfair/WANDS), which stands for Wayfair ANnotation Dataset, is designed to evaluate search engines for e-commerce. WANDS: Dataset for Product Search Relevance Assessment Yan Chen, Shujian Liu, Zheng Liu, Weiyi Sun, Linas Baltrunas and Benjamin Schroeder In a nutshell, the dataset consists of products, queries and human annotated relevancy labels. Each product has various textual attributes, as well as facets. The relevancy is provided as textual labels: “Exact”, “Partial” and “Irrelevant” and authors suggest to convert those to 1, 0.5 and 0.0 respectively. There are 488 queries with a varying number of relevant items each. ## The results Both datasets have been evaluated with the same experiments. The achieved performance is shown in the tables. ### Home Depot ![The results of all the experiments conducted on Home Depot dataset](/articles_data/hybrid-search/experiment-results-home-depot.png) The results achieved with BM25 alone are better than with Qdrant only. However, if we combine both methods into hybrid search with an additional cross encoder as a last step, then that gives great improvement over any baseline method. With the cross-encoder approach, Qdrant retrieved about 56.05% of the relevant items on average, while BM25 fetched 59.16%. Those numbers don't sum up to 100%, because some items were returned by both systems. ### WANDS ![The results of all the experiments conducted on WANDS dataset](/articles_data/hybrid-search/experiment-results-wands.png) The dataset seems to be more suited for semantic search, but the results might be also improved if we decide to use a hybrid search approach with cross encoder model as a final step. Overall, combining both full-text and semantic search with an additional reranking step seems to be a good idea, as we are able to benefit the advantages of both methods. Again, it's worth mentioning that with the 3rd experiment, with cross-encoder reranking, Qdrant returned more than 48.12% of the relevant items and BM25 around 66.66%. ## Some anecdotal observations None of the algorithms works better in all the cases. There might be some specific queries in which keyword-based search will be a winner and the other way around. The table shows some interesting examples we could find in WANDS dataset during the experiments: <table> <thead> <th>Query</th> <th>BM25 Search</th> <th>Vector Search</th> </thead> <tbody> <tr> <th>cybersport desk</th> <td>desk ❌</td> <td>gaming desk ✅</td> </tr> <tr> <th>plates for icecream</th> <td>"eat" plates on wood wall décor ❌</td> <td>alicyn 8.5 '' melamine dessert plate ✅</td> </tr> <tr> <th>kitchen table with a thick board</th> <td>craft kitchen acacia wood cutting board ❌</td> <td>industrial solid wood dining table ✅</td> </tr> <tr> <th>wooden bedside table</th> <td>30 '' bedside table lamp ❌</td> <td>portable bedside end table ✅</td> </tr> </tbody> </table> Also examples where keyword-based search did better: <table> <thead> <th>Query</th> <th>BM25 Search</th> <th>Vector Search</th> </thead> <tbody> <tr> <th>computer chair</th> <td>vibrant computer task chair ✅</td> <td>office chair ❌</td> </tr> <tr> <th>64.2 inch console table</th> <td>cervantez 64.2 '' console table ✅</td> <td>69.5 '' console table ❌</td> </tr> </tbody> </table> # A wrap up Each search scenario requires a specialized tool to achieve the best results possible. Still, combining multiple tools with minimal overhead is possible to improve the search precision even further. 
Introducing vector search into an existing search stack doesn't need to be a revolution; it can happen one small step at a time. You'll never cover all the possible queries with a list of synonyms, so a full-text search may not find all the relevant documents. There are also cases in which your users use different terminology than the one in your database. Those problems are easily solvable with neural vector embeddings, and combining both approaches with an additional reranking step is straightforward. So you don't need to abandon your well-known full-text search mechanism, but you can extend it with vector search to support the queries you haven't foreseen.
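For reference, the cross-encoder re-scoring step used in the experiments above might be sketched as follows. This is a minimal illustration, assuming the `sentence-transformers` package; the `cross_encoder_rescore` helper and the plain-string candidate lists are made up for the example and are not part of any existing API.

```python
from sentence_transformers import CrossEncoder

# The same cross-encoder model as in the third experiment
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")


def cross_encoder_rescore(query: str, candidates: list[str], limit: int = 10) -> list[str]:
    # Score every (query, candidate) pair with the cross-encoder
    scores = reranker.predict([(query, text) for text in candidates])
    # Sort the candidates by predicted relevance, highest score first
    ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    return [text for text, _ in ranked[:limit]]
```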
qdrant-landing/content/articles/io_uring.md
--- title: "Qdrant under the hood: io_uring" short_description: "The Linux io_uring API offers great performance in certain cases. Here's how Qdrant uses it!" description: "Slow disk decelerating your Qdrant deployment? Get on top of IO overhead with this one trick!" social_preview_image: /articles_data/io_uring/social_preview.png small_preview_image: /articles_data/io_uring/io_uring-icon.svg preview_dir: /articles_data/io_uring/preview weight: 3 author: Andre Bogus author_link: https://llogiq.github.io date: 2023-06-21T09:45:00+02:00 draft: false keywords: - vector search - linux - optimization aliases: [ /articles/io-uring/ ] --- With Qdrant [version 1.3.0](https://github.com/qdrant/qdrant/releases/tag/v1.3.0) we introduce the alternative io\_uring based *async uring* storage backend on Linux-based systems. Since its introduction, io\_uring has been known to improve async throughput wherever the OS syscall overhead gets too high, which tends to occur in situations where software becomes *IO bound* (that is, mostly waiting on disk). ## Input+Output Around the mid-90s, the internet took off. The first servers used a process- per-request setup, which was good for serving hundreds if not thousands of concurrent request. The POSIX Input + Output (IO) was modeled in a strictly synchronous way. The overhead of starting a new process for each request made this model unsustainable. So servers started forgoing process separation, opting for the thread-per-request model. But even that ran into limitations. I distinctly remember when someone asked the question whether a server could serve 10k concurrent connections, which at the time exhausted the memory of most systems (because every thread had to have its own stack and some other metadata, which quickly filled up available memory). As a result, the synchronous IO was replaced by asynchronous IO during the 2.5 kernel update, either via `select` or `epoll` (the latter being Linux-only, but a small bit more efficient, so most servers of the time used it). However, even this crude form of asynchronous IO carries the overhead of at least one system call per operation. Each system call incurs a context switch, and while this operation is itself not that slow, the switch disturbs the caches. Today's CPUs are much faster than memory, but if their caches start to miss data, the memory accesses required led to longer and longer wait times for the CPU. ### Memory-mapped IO Another way of dealing with file IO (which unlike network IO doesn't have a hard time requirement) is to map parts of files into memory - the system fakes having that chunk of the file in memory, so when you read from a location there, the kernel interrupts your process to load the needed data from disk, and resumes your process once done, whereas writing to the memory will also notify the kernel. Also the kernel can prefetch data while the program is running, thus reducing the likelyhood of interrupts. Thus there is still some overhead, but (especially in asynchronous applications) it's far less than with `epoll`. The reason this API is rarely used in web servers is that these usually have a large variety of files to access, unlike a database, which can map its own backing store into memory once. ### Combating the Poll-ution There were multiple experiments to improve matters, some even going so far as moving a HTTP server into the kernel, which of course brought its own share of problems. Others like Intel added their own APIs that ignored the kernel and worked directly on the hardware. 
Finally, Jens Axboe took matters into his own hands and proposed a ring buffer based interface called *io\_uring*. The buffers are not directly for data, but for operations. User processes can setup a Submission Queue (SQ) and a Completion Queue (CQ), both of which are shared between the process and the kernel, so there's no copying overhead. ![io_uring diagram](/articles_data/io_uring/io-uring.png) Apart from avoiding copying overhead, the queue-based architecture lends itself to multithreading as item insertion/extraction can be made lockless, and once the queues are set up, there is no further syscall that would stop any user thread. Servers that use this can easily get to over 100k concurrent requests. Today Linux allows asynchronous IO via io\_uring for network, disk and accessing other ports, e.g. for printing or recording video. ## And what about Qdrant? Qdrant can store everything in memory, but not all data sets may fit, which can require storing on disk. Before io\_uring, Qdrant used mmap to do its IO. This led to some modest overhead in case of disk latency. The kernel may stop a user thread trying to access a mapped region, which incurs some context switching overhead plus the wait time until the disk IO is finished. Ultimately, this works very well with the asynchronous nature of Qdrant's core. One of the great optimizations Qdrant offers is quantization (either [scalar](/articles/scalar-quantization/) or [product](/articles/product-quantization/)-based). However unless the collection resides fully in memory, this optimization method generates significant disk IO, so it is a prime candidate for possible improvements. If you run Qdrant on Linux, you can enable io\_uring with the following in your configuration: ```yaml # within the storage config storage: # enable the async scorer which uses io_uring async_scorer: true ``` You can return to the mmap based backend by either deleting the `async_scorer` entry or setting the value to `false`. ## Benchmarks To run the benchmark, use a test instance of Qdrant. If necessary spin up a docker container and load a snapshot of the collection you want to benchmark with. You can copy and edit our [benchmark script](/articles_data/io_uring/rescore-benchmark.sh) to run the benchmark. Run the script with and without enabling `storage.async_scorer` and once. You can measure IO usage with `iostat` from another console. For our benchmark, we chose the laion dataset picking 5 million 768d entries. We enabled scalar quantization + HNSW with m=16 and ef_construct=512. We do the quantization in RAM, HNSW in RAM but keep the original vectors on disk (which was a network drive rented from Hetzner for the benchmark). 
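For illustration, a collection with a comparable setup (original vectors on disk, scalar quantization kept in RAM, HNSW built with `m=16` and `ef_construct=512`) could be created with the Python client roughly as shown below. This is a sketch under assumptions: the collection name, vector size and distance metric are placeholders rather than the exact benchmark configuration.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

client.create_collection(
    collection_name="laion-768",  # placeholder name
    vectors_config=models.VectorParams(
        size=768,
        distance=models.Distance.COSINE,  # assumed metric, adjust to your dataset
        on_disk=True,  # keep the original vectors on disk
    ),
    hnsw_config=models.HnswConfigDiff(m=16, ef_construct=512),
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(
            type=models.ScalarType.INT8,
            always_ram=True,  # keep the quantized vectors in RAM
        )
    ),
)
```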
If you want to reproduce the benchmarks, you can get snapshots containing the datasets: * [mmap only](https://storage.googleapis.com/common-datasets-snapshots/laion-768-6m-mmap.snapshot) * [with scalar quantization](https://storage.googleapis.com/common-datasets-snapshots/laion-768-6m-sq-m16-mmap.shapshot) Running the benchmark, we get the following IOPS, CPU loads and wall clock times: | | oversampling | parallel | ~max IOPS | CPU% (of 4 cores) | time (s) (avg of 3) | |----------|--------------|----------|-----------|-------------------|---------------------| | io_uring | 1 | 4 | 4000 | 200 | 12 | | mmap | 1 | 4 | 2000 | 93 | 43 | | io_uring | 1 | 8 | 4000 | 200 | 12 | | mmap | 1 | 8 | 2000 | 90 | 43 | | io_uring | 4 | 8 | 7000 | 100 | 30 | | mmap | 4 | 8 | 2300 | 50 | 145 | Note that in this case, the IO operations have relatively high latency due to using a network disk. Thus, the kernel takes more time to fulfil the mmap requests, and application threads need to wait, which is reflected in the CPU percentage. On the other hand, with the io\_uring backend, the application threads can better use available cores for the rescore operation without any IO-induced delays. Oversampling is a new feature to improve accuracy at the cost of some performance. It allows setting a factor, which is multiplied with the `limit` while doing the search. The results are then re-scored using the original vector and only then the top results up to the limit are selected. ## Discussion Looking back, disk IO used to be very serialized; re-positioning read-write heads on moving platter was a slow and messy business. So the system overhead didn't matter as much, but nowadays with SSDs that can often even parallelize operations while offering near-perfect random access, the overhead starts to become quite visible. While memory-mapped IO gives us a fair deal in terms of ease of use and performance, we can improve on the latter in exchange for some modest complexity increase. io\_uring is still quite young, having only been introduced in 2019 with kernel 5.1, so some administrators will be wary of introducing it. Of course, as with performance, the right answer is usually "it depends", so please review your personal risk profile and act accordingly. ## Best Practices If your on-disk collection's query performance is of sufficiently high priority to you, enable the io\_uring-based async\_scorer to greatly reduce operating system overhead from disk IO. On the other hand, if your collections are in memory only, activating it will be ineffective. Also note that many queries are not IO bound, so the overhead may or may not become measurable in your workload. Finally, on-device disks typically carry lower latency than network drives, which may also affect mmap overhead. Therefore before you roll out io\_uring, perform the above or a similar benchmark with both mmap and io\_uring and measure both wall time and IOps). Benchmarks are always highly use-case dependent, so your mileage may vary. Still, doing that benchmark once is a small price for the possible performance wins. Also please [tell us](https://discord.com/channels/907569970500743200/907569971079569410) about your benchmark results!
qdrant-landing/content/articles/langchain-integration.md
--- title: "Question Answering with LangChain and Qdrant without boilerplate" short_description: "Large Language Models might be developed fast with modern tool. Here is how!" description: "We combined LangChain, pretrained LLM from OpenAI, SentenceTransformers and Qdrant to create a Q&A system with just a few lines of code." social_preview_image: /articles_data/langchain-integration/social_preview.png small_preview_image: /articles_data/langchain-integration/chain.svg preview_dir: /articles_data/langchain-integration/preview weight: 6 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-01-31T10:53:20+01:00 draft: false keywords: - vector search - langchain - llm - large language models - question answering - openai - embeddings --- Building applications with Large Language Models don't have to be complicated. A lot has been going on recently to simplify the development, so you can utilize already pre-trained models and support even complex pipelines with a few lines of code. [LangChain](https://langchain.readthedocs.io) provides unified interfaces to different libraries, so you can avoid writing boilerplate code and focus on the value you want to bring. ## Question Answering with Qdrant in the loop It has been reported millions of times recently, but let's say that again. ChatGPT-like models struggle with generating factual statements if no context is provided. They have some general knowledge but cannot guarantee to produce a valid answer consistently. Thus, it is better to provide some facts we know are actual, so it can just choose the valid parts and extract them from all the provided contextual data to give a comprehensive answer. Vector database, such as Qdrant, is of great help here, as their ability to perform a semantic search over a huge knowledge base is crucial to preselect some possibly valid documents, so they can be provided into the LLM. That's also one of the **chains** implemented in LangChain, which is called `VectorDBQA`. And Qdrant got integrated with the library, so it might be used to build it effortlessly. ### What do we need? Surprisingly enough, there will be two models required to set things up. First of all, we need an embedding model that will convert the set of facts into vectors, and store those into Qdrant. That's an identical process to any other semantic search application. We're going to use one of the `SentenceTransformers` models, so it can be hosted locally. The embeddings created by that model will be put into Qdrant and used to retrieve the most similar documents, given the query. However, when we receive a query, there are two steps involved. First of all, we ask Qdrant to provide the most relevant documents and simply combine all of them into a single text. Then, we build a prompt to the LLM (in our case OpenAI), including those documents as a context, of course together with the question asked. So the input to the LLM looks like the following: ```text Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. It's as certain as 2 + 2 = 4 ... Question: How much is 2 + 2? Helpful Answer: ``` There might be several context documents combined, and it is solely up to LLM to choose the right piece of content. But our expectation is, the model should respond with just `4`. Why do we need two different models? Both solve some different tasks. 
The first model performs feature extraction, by converting the text into vectors, while the second one helps in text generation or summarization. Disclaimer: This is not the only way to solve that task with LangChain. Such a chain is called `stuff` in the library nomenclature. ![](/articles_data/langchain-integration/flow-diagram.png) Enough theory! This sounds like a pretty complex application, as it involves several systems. But with LangChain, it might be implemented in just a few lines of code, thanks to the recent integration with Qdrant. We're not even going to work directly with `QdrantClient`, as everything is already done in the background by LangChain. If you want to get into the source code right away, all the processing is available as a [Google Colab notebook](https://colab.research.google.com/drive/19RxxkZdnq_YqBH5kBV10Rt0Rax-kminD?usp=sharing). ## Implementing Question Answering with LangChain and Qdrant ### Configuration A journey of a thousand miles begins with a single step, in our case with the configuration of all the services. We'll be using [Qdrant Cloud](https://cloud.qdrant.io), so we need an API key. The same is for OpenAI - the API key has to be obtained from their website. ![](/articles_data/langchain-integration/code-configuration.png) ### Building the knowledge base We also need some facts from which the answers will be generated. There is plenty of public datasets available, and [Natural Questions](https://ai.google.com/research/NaturalQuestions/visualization) is one of them. It consists of the whole HTML content of the websites they were scraped from. That means we need some preprocessing to extract plain text content. As a result, we’re going to have two lists of strings - one for questions and the other one for the answers. The answers have to be vectorized with the first of our models. The `sentence-transformers/all-mpnet-base-v2` is one of the possibilities, but there are some other options available. LangChain will handle that part of the process in a single function call. ![](/articles_data/langchain-integration/code-qdrant.png) ### Setting up QA with Qdrant in a loop `VectorDBQA` is a chain that performs the process described above. So it, first of all, loads some facts from Qdrant and then feeds them into OpenAI LLM which should analyze them to find the answer to a given question. The only last thing to do before using it is to put things together, also with a single function call. ![](/articles_data/langchain-integration/code-vectordbqa.png) ## Testing out the chain And that's it! We can put some queries, and LangChain will perform all the required processing to find the answer in the provided context. ![](/articles_data/langchain-integration/code-answering.png) ```text > what kind of music is scott joplin most famous for Scott Joplin is most famous for composing ragtime music. > who died from the band faith no more Chuck Mosley > when does maggie come on grey's anatomy Maggie first appears in season 10, episode 1, which aired on September 26, 2013. > can't take my eyes off you lyrics meaning I don't know. > who lasted the longest on alone season 2 David McIntyre lasted the longest on Alone season 2, with a total of 66 days. ``` The great thing about such a setup is that the knowledge base might be easily extended with some new facts and those will be included in the prompts sent to LLM later on. Of course, assuming their similarity to the given question will be in the top results returned by Qdrant. 
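Since the code in this post is shown as screenshots, here is a rough, self-contained sketch of the whole flow in one place. LangChain interfaces change quickly, so treat it as an approximation: the `answers` list, the URLs and the API keys are placeholders, and newer LangChain releases replace `VectorDBQA` with `RetrievalQA` and may use slightly different parameter names.

```python
from langchain.chains import VectorDBQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import Qdrant

# Plain-text facts extracted from the Natural Questions dataset (placeholder)
answers = ["Scott Joplin was an American composer known as the King of Ragtime."]

# Encoder model used to build the knowledge base in Qdrant
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
doc_store = Qdrant.from_texts(
    answers,
    embeddings,
    url="https://your-cluster.cloud.qdrant.io",  # placeholder Qdrant Cloud URL
    api_key="<qdrant-api-key>",
)

# The chain: retrieve relevant facts from Qdrant, then ask the OpenAI LLM
qa = VectorDBQA.from_chain_type(
    llm=OpenAI(openai_api_key="<openai-api-key>"),
    chain_type="stuff",
    vectorstore=doc_store,
)

print(qa.run("what kind of music is scott joplin most famous for"))
```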
If you want to run the chain on your own, the simplest way to reproduce it is to open the [Google Colab notebook](https://colab.research.google.com/drive/19RxxkZdnq_YqBH5kBV10Rt0Rax-kminD?usp=sharing).
qdrant-landing/content/articles/memory-consumption.md
--- title: "How to Optimize RAM Requirements for 1 Million Vectors: A Case Study" short_description: Master RAM measurement and memory optimization for optimal performance and resource use. description: Unlock the secrets of efficient RAM measurement and memory optimization with this comprehensive guide, ensuring peak performance and resource utilization. social_preview_image: /articles_data/memory-consumption/preview/social_preview.jpg preview_dir: /articles_data/memory-consumption/preview small_preview_image: /articles_data/memory-consumption/icon.svg weight: 7 author: Andrei Vasnetsov author_link: https://blog.vasnetsov.com/ date: 2022-12-07T10:18:00.000Z # aliases: [ /articles/memory-consumption/ ] --- <!-- 1. How people usually measure memory and why it might be misleading 2. How to properly measure memory 3. Try different configurations of Qdrant and see how they affect the memory consumption and search speed 4. Conclusion --> <!-- Introduction: 1. We are used to measure memory consumption by looking into `htop`. But it could be misleading. 2. There are multiple reasons why it is wrong: 1. Process may allocate memory, but not use it. 2. Process may not free deallocated memory. 3. Process might be forked and memory is shared between processes. 3. Process may use disk cache. 3. As a result, if you see `10Gb` memory consumption in `htop`, it doesn't mean that your process actually needs `10Gb` of RAM to work. --> # Mastering RAM Measurement and Memory Optimization in Qdrant: A Comprehensive Guide When it comes to measuring the memory consumption of our processes, we often rely on tools such as `htop` to give us an indication of how much RAM is being used. However, this method can be misleading and doesn't always accurately reflect the true memory usage of a process. There are many different ways in which `htop` may not be a reliable indicator of memory usage. For instance, a process may allocate memory in advance but not use it, or it may not free deallocated memory, leading to overstated memory consumption. A process may be forked, which means that it will have a separate memory space, but it will share the same code and data with the parent process. This means that the memory consumption of the child process will be counted twice. Additionally, a process may utilize disk cache, which is also accounted as resident memory in the `htop` measurements. As a result, even if `htop` shows that a process is using 10GB of memory, it doesn't necessarily mean that the process actually requires 10GB of RAM to operate efficiently. In this article, we will explore how to properly measure RAM usage and optimize [Qdrant](https://qdrant.tech/) for optimal memory consumption. ## How to measure actual RAM requirements <!-- 1. We need to know how much RAM we need to have for the program to work, so why not just do a straightforward experiment. 2. Let's limit the allowed memory of the process and see at which point the process will working. 3. We can do a grid search, but it is better to apply binary search to find the minimum amount of RAM more quickly. 4. We will use docker to limit the memory usage of the process. 5. Before running docker we will use ``` # Ensure that there is no data in page cache before each benchmark run sudo bash -c 'sync; echo 1 > /proc/sys/vm/drop_caches' ``` to clear the page between runs and make sure that the process doesn't use of the previous runs. --> We need to know memory consumption in order to estimate how much RAM is required to run the program. 
So in order to determine that, we can conduct a simple experiment. Let's limit the allowed memory of the process and observe at which point it stops functioning. In this way we can determine the minimum amount of RAM the program needs to operate. One way to do this is by conducting a grid search, but a more efficient method is to use binary search to quickly find the minimum required amount of RAM. We can use docker to limit the memory usage of the process. Before running each benchmark, it is important to clear the page cache with the following command: ```bash sudo bash -c 'sync; echo 1 > /proc/sys/vm/drop_caches' ``` This ensures that the process doesn't utilize any data from previous runs, providing more accurate and consistent results. We can use the following command to run Qdrant with a memory limit of 1GB: ```bash docker run -it --rm \ --memory 1024mb \ --network=host \ -v "$(pwd)/data/storage:/qdrant/storage" \ qdrant/qdrant:latest ``` ## Let's run some benchmarks Let's run some benchmarks to see how much RAM Qdrant needs to serve 1 million vectors. We can use the `glove-100-angular` and scripts from the [vector-db-benchmark](https://github.com/qdrant/vector-db-benchmark) project to upload and query the vectors. With the first run we will use the default configuration of Qdrant with all data stored in RAM. ```bash # Upload vectors python run.py --engines qdrant-all-in-ram --datasets glove-100-angular ``` After uploading vectors, we will repeat the same experiment with different RAM limits to see how they affect the memory consumption and search speed. ```bash # Search vectors python run.py --engines qdrant-all-in-ram --datasets glove-100-angular --skip-upload ``` <!-- Experiment results: All in memory: 1024mb - out of memory 1512mb - 774.38 rps 1256mb - 760.63 rps 1152mb - out of memory 1200mb - 794.72it/s Conclusion: about 1.2Gb is needed to serve ~1 million vectors, no speed degradation with limiting memory above 1.2Gb MMAP for vectors: 1200mb - 759.94 rps 1100mb - 687.00 rps 1000mb - 10 rps --- use a bit faster disk --- 1000mb - 25 rps 500mb - out of memory 750mb - 5 rps 625mb - 2.5 rps 575mb - out of memory 600mb - out of memory We can go even lower by using mmap not only for vectors, but also for the index. MMAP for vectors and HNSW graph: 600mb - 5 rps 300mb - 0.9 rps / 1.1 sec per query 150mb - 0.4 rps / 2.5 sec per query 75mb - out of memory 110mb - out of memory 125mb - out of memory 135mb - 0.33 rps / 3 sec per query --> ### All in Memory In the first experiment, we tested how well our system performs when all vectors are stored in memory. We tried using different amounts of memory, ranging from 1512mb to 1024mb, and measured the number of requests per second (rps) that our system was able to handle. | Memory | Requests/s | |--------|---------------| | 1512mb | 774.38 | | 1256mb | 760.63 | | 1200mb | 794.72 | | 1152mb | out of memory | | 1024mb | out of memory | We found that 1152Mb memory limit resulted in our system running out of memory, but using 1512mb, 1256mb, and 1200mb of memory resulted in our system being able to handle around 780 RPS. This suggests that about 1.2Gb of memory is needed to serve around 1 million vectors, and there is no speed degradation when limiting memory usage above 1.2Gb. ### Vectors stored using MMAP Let's go a bit further! In the second experiment, we tested how well our system performs when **vectors are stored using the memory-mapped file** (mmap). Create collection with: ```http PUT /collections/benchmark { "vectors": { ... 
"on_disk": true } } ``` This configuration tells Qdrant to use mmap for vectors if the segment size is greater than 20000Kb (which is approximately 40K 128d-vectors). Now the out-of-memory happens when we allow using **600mb** RAM only <details> <summary>Experiments details</summary> | Memory | Requests/s | |--------|---------------| | 1200mb | 759.94 | | 1100mb | 687.00 | | 1000mb | 10 | --- use a bit faster disk --- | Memory | Requests/s | |--------|---------------| | 1000mb | 25 rps | | 750mb | 5 rps | | 625mb | 2.5 rps | | 600mb | out of memory | </details> <br/> At this point we have to switch from network-mounted storage to a faster disk, as the network-based storage is too slow to handle the amount of sequential reads that our system needs to serve the queries. But let's first see how much RAM we need to serve 1 million vectors and then we will discuss the speed optimization as well. ### Vectors and HNSW graph stored using MMAP In the third experiment, we tested how well our system performs when vectors and [HNSW](https://qdrant.tech/articles/filtrable-hnsw/) graph are stored using the memory-mapped files. Create collection with: ```http PUT /collections/benchmark { "vectors": { ... "on_disk": true }, "hnsw_config": { "on_disk": true }, ... } ``` With this configuration we are able to serve 1 million vectors with **only 135mb of RAM**! <details> <summary>Experiments details</summary> | Memory | Requests/s | |--------|---------------| | 600mb | 5 rps | | 300mb | 0.9 rps / 1.1 sec per query | | 150mb | 0.4 rps / 2.5 sec per query | | 135mb | 0.33 rps / 3 sec per query | | 125mb | out of memory | </details> <br/> At this point the importance of the disk speed becomes critical. We can serve the search requests with 135mb of RAM, but the speed of the requests makes it impossible to use the system in production. Let's see how we can improve the speed. ## How to speed up the search <!-- We need to look into disk parameters and see how they affect the search speed. Let's measure the disk speed with `fio`: ``` fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=testfio --bs=4k --iodepth=64 --size=8G --readwrite=randread ``` Initially we tested on network-mounted disk, but it was too slow: ``` read: IOPS=6366, BW=24.9MiB/s (26.1MB/s)(8192MiB/329424msec) ``` So we switched to default local disk: ``` read: IOPS=63.2k, BW=247MiB/s (259MB/s)(8192MiB/33207msec) ``` Let's now try it on a machine with local SSD and see if it affects the search speed: ``` read: IOPS=183k, BW=716MiB/s (751MB/s)(8192MiB/11438msec) ``` We can use faster disk to speed up the search. Here are the results: 600mb - 50 rps 300mb - 13 rps 200md - 8 rps 150mb - 7 rps --> To measure the impact of disk parameters on search speed, we used the `fio` tool to test the speed of different types of disks. 
```bash # Install fio sudo apt-get install fio # Run fio to check the random reads speed fio --randrepeat=1 \ --ioengine=libaio \ --direct=1 \ --gtod_reduce=1 \ --name=fiotest \ --filename=testfio \ --bs=4k \ --iodepth=64 \ --size=8G \ --readwrite=randread ``` Initially, we tested on a network-mounted disk, but its performance was too slow, with a read IOPS of 6366 and a bandwidth of 24.9 MiB/s: ```text read: IOPS=6366, BW=24.9MiB/s (26.1MB/s)(8192MiB/329424msec) ``` To improve performance, we switched to a local disk, which showed much faster results, with a read IOPS of 63.2k and a bandwidth of 247 MiB/s: ```text read: IOPS=63.2k, BW=247MiB/s (259MB/s)(8192MiB/33207msec) ``` That gave us a significant speed boost, but we wanted to see if we could improve performance even further. To do that, we switched to a machine with a local SSD, which showed even better results, with a read IOPS of 183k and a bandwidth of 716 MiB/s: ```text read: IOPS=183k, BW=716MiB/s (751MB/s)(8192MiB/11438msec) ``` Let's see how these results translate into search speed: | Memory | RPS with IOPS=63.2k | RPS with IOPS=183k | |--------|---------------------|--------------------| | 600mb | 5 | 50 | | 300mb | 0.9 | 13 | | 200mb | 0.5 | 8 | | 150mb | 0.4 | 7 | As you can see, the speed of the disk has a significant impact on the search speed. With a local SSD, we were able to increase the search speed by 10x! With the production-grade disk, the search speed could be even higher. Some configurations of the SSDs can reach 1M IOPS and more. Which might be an interesting option to serve large datasets with low search latency in Qdrant. ## Conclusion In this article, we showed that Qdrant has flexibility in terms of RAM usage and can be used to serve large datasets. It provides configurable trade-offs between RAM usage and search speed. If you’re interested to learn more about Qdrant, [book a demo today](https://qdrant.tech/contact-us/)! We are eager to learn more about how you use Qdrant in your projects, what challenges you face, and how we can help you solve them. Please feel free to join our [Discord](https://qdrant.to/discord) and share your experience with us!
qdrant-landing/content/articles/metric-learning-tips.md
--- title: Metric Learning Tips & Tricks short_description: How to train an object matching model and serve it in production. description: Practical recommendations on how to train a matching model and serve it in production. Even with no labeled data. # external_link: https://vasnetsov93.medium.com/metric-learning-tips-n-tricks-2e4cfee6b75b social_preview_image: /articles_data/metric-learning-tips/preview/social_preview.jpg preview_dir: /articles_data/metric-learning-tips/preview small_preview_image: /articles_data/metric-learning-tips/scatter-graph.svg weight: 20 author: Andrei Vasnetsov author_link: https://blog.vasnetsov.com/ date: 2021-05-15T10:18:00.000Z # aliases: [ /articles/metric-learning-tips/ ] --- ## How to train object matching model with no labeled data and use it in production Currently, most machine-learning-related business cases are solved as a classification problems. Classification algorithms are so well studied in practice that even if the original problem is not directly a classification task, it is usually decomposed or approximately converted into one. However, despite its simplicity, the classification task has requirements that could complicate its production integration and scaling. E.g. it requires a fixed number of classes, where each class should have a sufficient number of training samples. In this article, I will describe how we overcome these limitations by switching to metric learning. By the example of matching job positions and candidates, I will show how to train metric learning model with no manually labeled data, how to estimate prediction confidence, and how to serve metric learning in production. ## What is metric learning and why using it? According to Wikipedia, metric learning is the task of learning a distance function over objects. In practice, it means that we can train a model that tells a number for any pair of given objects. And this number should represent a degree or score of similarity between those given objects. For example, objects with a score of 0.9 could be more similar than objects with a score of 0.5 Actual scores and their direction could vary among different implementations. In practice, there are two main approaches to metric learning and two corresponding types of NN architectures. The first is the interaction-based approach, which first builds local interactions (i.e., local matching signals) between two objects. Deep neural networks learn hierarchical interaction patterns for matching. Examples of neural network architectures include MV-LSTM, ARC-II, and MatchPyramid. ![MV-LSTM, example of interaction-based model](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/mv_lstm.png) > MV-LSTM, example of interaction-based model, [Shengxian Wan et al. ](https://www.researchgate.net/figure/Illustration-of-MV-LSTM-S-X-and-S-Y-are-the-in_fig1_285271115) via Researchgate The second is the representation-based approach. In this case distance function is composed of 2 components: the Encoder transforms an object into embedded representation - usually a large float point vector, and the Comparator takes embeddings of a pair of objects from the Encoder and calculates their similarity. The most well-known example of this embedding representation is Word2Vec. Examples of neural network architectures also include DSSM, C-DSSM, and ARC-I. The Comparator is usually a very simple function that could be calculated very quickly. 
It might be cosine similarity or even a dot production. Two-stage schema allows performing complex calculations only once per object. Once transformed, the Comparator can calculate object similarity independent of the Encoder much more quickly. For more convenience, embeddings can be placed into specialized storages or vector search engines. These search engines allow to manage embeddings using API, perform searches and other operations with vectors. ![C-DSSM, example of representation-based model](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/cdssm.png) > C-DSSM, example of representation-based model, [Xue Li et al.](https://arxiv.org/abs/1901.10710v2) via arXiv Pre-trained NNs can also be used. The output of the second-to-last layer could work as an embedded representation. Further in this article, I would focus on the representation-based approach, as it proved to be more flexible and fast. So what are the advantages of using metric learning comparing to classification? Object Encoder does not assume the number of classes. So if you can't split your object into classes, if the number of classes is too high, or you suspect that it could grow in the future - consider using metric learning. In our case, business goal was to find suitable vacancies for candidates who specify the title of the desired position. To solve this, we used to apply a classifier to determine the job category of the vacancy and the candidate. But this solution was limited to only a few hundred categories. Candidates were complaining that they couldn't find the right category for them. Training the classifier for new categories would be too long and require new training data for each new category. Switching to metric learning allowed us to overcome these limitations, the resulting solution could compare any pair position descriptions, even if we don't have this category reference yet. ![T-SNE with job samples](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/embeddings.png) > T-SNE with job samples, Image by Author. Play with [Embedding Projector](https://projector.tensorflow.org/?config=https://gist.githubusercontent.com/generall/7e712425e3b340c2c4dbc1a29f515d91/raw/b45b2b6f6c1d5ab3d3363c50805f3834a85c8879/config.json) yourself. With metric learning, we learn not a concrete job type but how to match job descriptions from a candidate's CV and a vacancy. Secondly, with metric learning, it is easy to add more reference occupations without model retraining. We can then add the reference to a vector search engine. Next time we will match occupations - this new reference vector will be searchable. ## Data for metric learning Unlike classifiers, a metric learning training does not require specific class labels. All that is required are examples of similar and dissimilar objects. We would call them positive and negative samples. At the same time, it could be a relative similarity between a pair of objects. For example, twins look more alike to each other than a pair of random people. And random people are more similar to each other than a man and a cat. A model can use such relative examples for learning. The good news is that the division into classes is only a special case of determining similarity. To use such datasets, it is enough to declare samples from one class as positive and samples from another class as negative. 
In this way, it is possible to combine several datasets with mismatched classes into one generalized dataset for metric learning. But not only datasets with division into classes are suitable for extracting positive and negative examples. If, for example, there are additional features in the description of the object, the value of these features can also be used as a similarity factor. It may not be as explicit as class membership, but the relative similarity is also suitable for learning. In the case of job descriptions, there are many ontologies of occupations, which were able to be combined into a single dataset thanks to this approach. We even went a step further and used identical job titles to find similar descriptions. As a result, we got a self-supervised universal dataset that did not require any manual labeling. Unfortunately, universality does not allow some techniques to be applied in training. Next, I will describe how to overcome this disadvantage. ## Training the model There are several ways to train a metric learning model. Among the most popular is the use of Triplet or Contrastive loss functions, but I will not go deep into them in this article. However, I will tell you about one interesting trick that helped us work with unified training examples. One of the most important practices to efficiently train the metric learning model is hard negative mining. This technique aims to include negative samples on which model gave worse predictions during the last training epoch. Most articles that describe this technique assume that training data consists of many small classes (in most cases it is people's faces). With data like this, it is easy to find bad samples - if two samples from different classes have a high similarity score, we can use it as a negative sample. But we had no such classes in our data, the only thing we have is occupation pairs assumed to be similar in some way. We cannot guarantee that there is no better match for each job occupation among this pair. That is why we can't use hard negative mining for our model. ![Loss variations](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/losses.png) > [Alfonso Medela et al.](https://arxiv.org/abs/1905.10675) via arXiv To compensate for this limitation we can try to increase the number of random (weak) negative samples. One way to achieve this is to train the model longer, so it will see more samples by the end of the training. But we found a better solution in adjusting our loss function. In a regular implementation of Triplet or Contractive loss, each positive pair is compared with some or a few negative samples. What we did is we allow pair comparison amongst the whole batch. That means that loss-function penalizes all pairs of random objects if its score exceeds any of the positive scores in a batch. This extension gives `~ N * B^2` comparisons where `B` is a size of batch and `N` is a number of batches. Much bigger than `~ N * B` in regular triplet loss. This means that increasing the size of the batch significantly increases the number of negative comparisons, and therefore should improve the model performance. We were able to observe this dependence in our experiments. Similar idea we also found in the article [Supervised Contrastive Learning](https://arxiv.org/abs/2004.11362). ## Model confidence In real life it is often needed to know how confident the model was in the prediction. 
Whether manual adjustment or validation of the result is required. With conventional classification, it is easy to understand by scores how confident the model is in the result. If the probability values of different classes are close to each other, the model is not confident. If, on the contrary, the most probable class differs greatly, then the model is confident. At first glance, this cannot be applied to metric learning. Even if the predicted object similarity score is small it might only mean that the reference set has no proper objects to compare with. Conversely, the model can group garbage objects with a large score. Fortunately, we found a small modification to the embedding generator, which allows us to define confidence in the same way as it is done in conventional classifiers with a Softmax activation function. The modification consists in building an embedding as a combination of feature groups. Each feature group is presented as a one-hot encoded sub-vector in the embedding. If the model can confidently predict the feature value - the corresponding sub-vector will have a high absolute value in some of its elements. For a more intuitive understanding, I recommend thinking about embeddings not as points in space, but as a set of binary features. To implement this modification and form proper feature groups we would need to change a regular linear output layer to a concatenation of several Softmax layers. Each softmax component would represent an independent feature and force the neural network to learn them. Let's take for example that we have 4 softmax components with 128 elements each. Every such component could be roughly imagined as a one-hot-encoded number in the range of 0 to 127. Thus, the resulting vector will represent one of `128^4` possible combinations. If the trained model is good enough, you can even try to interpret the values of singular features individually. ![Softmax feature embeddings](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/feature_embedding.png) > Softmax feature embeddings, Image by Author. ## Neural rules Machine learning models rarely train to 100% accuracy. In a conventional classifier, errors can only be eliminated by modifying and repeating the training process. Metric training, however, is more flexible in this matter and allows you to introduce additional steps that allow you to correct the errors of an already trained model. A common error of the metric learning model is erroneously declaring objects close although in reality they are not. To correct this kind of error, we introduce exclusion rules. Rules consist of 2 object anchors encoded into vector space. If the target object falls into one of the anchors' effects area - it triggers the rule. It will exclude all objects in the second anchor area from the prediction result. ![Exclusion rules](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/exclusion_rule.png) > Neural exclusion rules, Image by Author. The convenience of working with embeddings is that regardless of the number of rules, you only need to perform the encoding once per object. Then to find a suitable rule, it is enough to compare the target object's embedding and the pre-calculated embeddings of the rule's anchors. Which, when implemented, translates into just one additional query to the vector search engine. 
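To make the confidence trick from the previous section more concrete, here is a minimal PyTorch sketch of an embedding head built from several independent softmax groups. It mirrors the example of 4 components with 128 elements each; the class and variable names are invented for illustration and are not part of any library.

```python
import torch
import torch.nn as nn


class SoftmaxFeatureHead(nn.Module):
    """Embedding head that concatenates several independent softmax groups."""

    def __init__(self, hidden_dim: int, num_groups: int = 4, group_size: int = 128):
        super().__init__()
        self.projections = nn.ModuleList(
            [nn.Linear(hidden_dim, group_size) for _ in range(num_groups)]
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Each group is normalized independently, so a confident model
        # produces a near one-hot sub-vector for every feature group.
        groups = [proj(hidden).softmax(dim=-1) for proj in self.projections]
        return torch.cat(groups, dim=-1)


head = SoftmaxFeatureHead(hidden_dim=768)
embedding = head(torch.randn(1, 768))  # shape: (1, 4 * 128)

# Confidence estimate: how close each group is to a one-hot vector
confidence = embedding.view(1, 4, 128).max(dim=-1).values.mean().item()
```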
## Vector search in production When implementing a metric learning model in production, the question arises about the storage and management of vectors. It should be easy to add new vectors if new job descriptions appear in the service. In our case, we also needed to apply additional conditions to the search. We needed to filter, for example, the location of candidates and the level of language proficiency. We did not find a ready-made tool for such vector management, so we created [Qdrant](https://github.com/qdrant/qdrant) - open-source vector search engine. It allows you to add and delete vectors with a simple API, independent of a programming language you are using. You can also assign the payload to vectors. This payload allows additional filtering during the search request. Qdrant has a pre-built docker image and start working with it is just as simple as running ```bash docker run -p 6333:6333 qdrant/qdrant ``` Documentation with examples could be found [here](https://api.qdrant.tech/api-reference). ## Conclusion In this article, I have shown how metric learning can be more scalable and flexible than the classification models. I suggest trying similar approaches in your tasks - it might be matching similar texts, images, or audio data. With the existing variety of pre-trained neural networks and a vector search engine, it is easy to build your metric learning-based application.
qdrant-landing/content/articles/multitenancy.md
--- title: "How to Implement Multitenancy and Custom Sharding in Qdrant" short_description: "Explore how Qdrant's multitenancy and custom sharding streamline machine-learning operations, enhancing scalability and data security." description: "Discover how multitenancy and custom sharding in Qdrant can streamline your machine-learning operations. Learn how to scale efficiently and manage data securely." social_preview_image: /articles_data/multitenancy/social_preview.png preview_dir: /articles_data/multitenancy/preview small_preview_image: /articles_data/multitenancy/icon.svg weight: -120 author: David Myriel date: 2024-02-06T13:21:00.000Z draft: false keywords: - multitenancy - custom sharding - multiple partitions - vector database --- # Scaling Your Machine Learning Setup: The Power of Multitenancy and Custom Sharding in Qdrant We are seeing the topics of [multitenancy](/documentation/guides/multiple-partitions/) and [distributed deployment](/documentation/guides/distributed_deployment/#sharding) pop-up daily on our [Discord support channel](https://qdrant.to/discord). This tells us that many of you are looking to scale Qdrant along with the rest of your machine learning setup. Whether you are building a bank fraud-detection system, [RAG](https://qdrant.tech/articles/what-is-rag-in-ai/) for e-commerce, or services for the federal government - you will need to leverage a multitenant architecture to scale your product. In the world of SaaS and enterprise apps, this setup is the norm. It will considerably increase your application's performance and lower your hosting costs. ## Multitenancy & custom sharding with Qdrant We have developed two major features just for this. __You can now scale a single Qdrant cluster and support all of your customers worldwide.__ Under [multitenancy](/documentation/guides/multiple-partitions/), each customer's data is completely isolated and only accessible by them. At times, if this data is location-sensitive, Qdrant also gives you the option to divide your cluster by region or other criteria that further secure your customer's access. This is called [custom sharding](/documentation/guides/distributed_deployment/#user-defined-sharding). Combining these two will result in an efficiently-partitioned architecture that further leverages the convenience of a single Qdrant cluster. This article will briefly explain the benefits and show how you can get started using both features. ## One collection, many tenants When working with Qdrant, you can upsert all your data to a single collection, and then partition each vector via its payload. This means that all your users are leveraging the power of a single Qdrant cluster, but their data is still isolated within the collection. Let's take a look at a two-tenant collection: **Figure 1:** Each individual vector is assigned a specific payload that denotes which tenant it belongs to. This is how a large number of different tenants can share a single Qdrant collection. ![Qdrant Multitenancy](/articles_data/multitenancy/multitenancy-single.png) Qdrant is built to excel in a single collection with a vast number of tenants. You should only create multiple collections when your data is not homogenous or if users' vectors are created by different embedding models. Creating too many collections may result in resource overhead and cause dependencies. This can increase costs and affect overall performance. ## Sharding your database With Qdrant, you can also specify a shard for each vector individually. 
This feature is useful if you want to [control where your data is kept in the cluster](/documentation/guides/distributed_deployment/#sharding). For example, one set of vectors can be assigned to one shard on its own node, while another set can be on a completely different node. During vector search, your operations will be able to hit only the subset of shards they actually need. In massive-scale deployments, __this can significantly improve the performance of operations that do not require the whole collection to be scanned__. This works in the other direction as well. Whenever you search for something, you can specify a shard or several shards and Qdrant will know where to find them. It will avoid asking all machines in your cluster for results. This will minimize overhead and maximize performance. ### Common use cases A clear use-case for this feature is managing a multitenant collection, where each tenant (let it be a user or organization) is assumed to be segregated, so they can have their data stored in separate shards. Sharding solves the problem of region-based data placement, whereby certain data needs to be kept within specific locations. To do this, however, you will need to [move your shards between nodes](/documentation/guides/distributed_deployment/#moving-shards). **Figure 2:** Users can both upsert and query shards that are relevant to them, all within the same collection. Regional sharding can help avoid cross-continental traffic. ![Qdrant Multitenancy](/articles_data/multitenancy/shards.png) Custom sharding also gives you precise control over other use cases. A time-based data placement means that data streams can index shards that represent latest updates. If you organize your shards by date, you can have great control over the recency of retrieved data. This is relevant for social media platforms, which greatly rely on time-sensitive data. ## Before I go any further.....how secure is my user data? By design, Qdrant offers three levels of isolation. We initially introduced collection-based isolation, but your scaled setup has to move beyond this level. In this scenario, you will leverage payload-based isolation (from multitenancy) and resource-based isolation (from sharding). The ultimate goal is to have a single collection, where you can manipulate and customize placement of shards inside your cluster more precisely and avoid any kind of overhead. The diagram below shows the arrangement of your data within a two-tier isolation arrangement. **Figure 3:** Users can query the collection based on two filters: the `group_id` and the individual `shard_key_selector`. This gives your data two additional levels of isolation. ![Qdrant Multitenancy](/articles_data/multitenancy/multitenancy.png) ## Create custom shards for a single collection When creating a collection, you will need to configure user-defined sharding. This lets you control the shard placement of your data, so that operations can hit only the subset of shards they actually need. In big clusters, this can significantly improve the performance of operations, since you won't need to go through the entire collection to retrieve data. ```python client.create_collection( collection_name="{tenant_data}", shard_number=2, sharding_method=models.ShardingMethod.CUSTOM, # ... other collection parameters ) client.create_shard_key("{tenant_data}", "canada") client.create_shard_key("{tenant_data}", "germany") ``` In this example, your cluster is divided between Germany and Canada. 
Canadian and German law differ when it comes to international data transfer. Let's say you are creating a RAG application that supports the healthcare industry. Your Canadian customer data will have to be clearly separated for compliance purposes from your German customer. Even though it is part of the same collection, data from each shard is isolated from other shards and can be retrieved as such. For additional examples on shards and retrieval, consult [Distributed Deployments](/documentation/guides/distributed_deployment/) documentation and [Qdrant Client specification](https://python-client.qdrant.tech). ## Configure a multitenant setup for users Let's continue and start adding data. As you upsert your vectors to your new collection, you can add a `group_id` field to each vector. If you do this, Qdrant will assign each vector to its respective group. Additionally, each vector can now be allocated to a shard. You can specify the `shard_key_selector` for each individual vector. In this example, you are upserting data belonging to `tenant_1` to the Canadian region. ```python client.upsert( collection_name="{tenant_data}", points=[ models.PointStruct( id=1, payload={"group_id": "tenant_1"}, vector=[0.9, 0.1, 0.1], ), models.PointStruct( id=2, payload={"group_id": "tenant_1"}, vector=[0.1, 0.9, 0.1], ), ], shard_key_selector="canada", ) ``` Keep in mind that the data for each `group_id` is isolated. In the example below, `tenant_1` vectors are kept separate from `tenant_2`. The first tenant will be able to access their data in the Canadian portion of the cluster. However, as shown below `tenant_2 `might only be able to retrieve information hosted in Germany. ```python client.upsert( collection_name="{tenant_data}", points=[ models.PointStruct( id=3, payload={"group_id": "tenant_2"}, vector=[0.1, 0.1, 0.9], ), ], shard_key_selector="germany", ) ``` ## Retrieve data via filters The access control setup is completed as you specify the criteria for data retrieval. When searching for vectors, you need to use a `query_filter` along with `group_id` to filter vectors for each user. ```python client.search( collection_name="{tenant_data}", query_filter=models.Filter( must=[ models.FieldCondition( key="group_id", match=models.MatchValue( value="tenant_1", ), ), ] ), query_vector=[0.1, 0.1, 0.9], limit=10, ) ``` ## Performance considerations The speed of indexation may become a bottleneck if you are adding large amounts of data in this way, as each user's vector will be indexed into the same collection. To avoid this bottleneck, consider _bypassing the construction of a global vector index_ for the entire collection and building it only for individual groups instead. By adopting this strategy, Qdrant will index vectors for each user independently, significantly accelerating the process. To implement this approach, you should: 1. Set `payload_m` in the HNSW configuration to a non-zero value, such as 16. 2. Set `m` in hnsw config to 0. This will disable building global index for the whole collection. ```python from qdrant_client import QdrantClient, models client = QdrantClient("localhost", port=6333) client.create_collection( collection_name="{tenant_data}", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), hnsw_config=models.HnswConfigDiff( payload_m=16, m=0, ), ) ``` 3. Create keyword payload index for `group_id` field. 
```python client.create_payload_index( collection_name="{tenant_data}", field_name="group_id", field_schema=models.PayloadSchemaType.KEYWORD, ) ``` > Note: Keep in mind that global requests (without the `group_id` filter) will be slower since they will necessitate scanning all groups to identify the nearest neighbors. ## Explore multitenancy and custom sharding in Qdrant for scalable solutions Qdrant is ready to support a massive-scale architecture for your machine learning project. If you want to see whether our [vector database](https://qdrant.tech/) is right for you, try the [quickstart tutorial](/documentation/quick-start/) or read our [docs and tutorials](/documentation/). To spin up a free instance of Qdrant, sign up for [Qdrant Cloud](https://qdrant.to/cloud) - no strings attached. Get support or share ideas in our [Discord](https://qdrant.to/discord) community. This is where we talk about vector search theory, publish examples and demos and discuss vector database setups.
qdrant-landing/content/articles/neural-search-tutorial.md
---
title: Neural Search Tutorial
short_description: Step-by-step guide on how to build a neural search service.
description: Our step-by-step guide on how to build a neural search service with BERT + Qdrant + FastAPI.
# external_link: https://blog.qdrant.tech/neural-search-tutorial-3f034ab13adc
social_preview_image: /articles_data/neural-search-tutorial/social_preview.jpg
preview_dir: /articles_data/neural-search-tutorial/preview
small_preview_image: /articles_data/neural-search-tutorial/tutorial.svg
weight: 50
author: Andrey Vasnetsov
author_link: https://blog.vasnetsov.com/
date: 2021-06-10T10:18:00.000Z
# aliases: [ /articles/neural-search-tutorial/ ]
---

## How to build a neural search service with BERT + Qdrant + FastAPI

Information retrieval is one of the core technologies that enabled the modern Internet to exist. These days, search technology is at the heart of a variety of applications, from web page search to product recommendations. For many years, this technology didn't change much until neural networks came into play.

In this tutorial we are going to find answers to these questions:

* What is the difference between regular and neural search?
* What neural networks could be used for search?
* In what tasks is neural network search useful?
* How to build and deploy your own neural search service step-by-step?

## What is neural search?

A regular full-text search, such as Google's, consists of searching for keywords inside a document. For this reason, the algorithm cannot take into account the real meaning of the query and documents. Many documents that might be of interest to the user are not found because they use different wording.

Neural search tries to solve exactly this problem - it attempts to enable searches not by keywords but by meaning. To achieve this, the search works in two steps. In the first step, a specially trained neural network encoder converts the query and the searched objects into a vector representation called embeddings. The encoder must be trained so that similar objects, such as texts with the same meaning or similar pictures, get a close vector representation.

![Encoders and embedding space](https://gist.githubusercontent.com/generall/c229cc94be8c15095286b0c55a3f19d7/raw/e52e3f1a320cd985ebc96f48955d7f355de8876c/encoders.png)

Having this vector representation, it is easy to understand what the second step should be. To find documents similar to the query, you now just need to find the nearest vectors. The most convenient way to determine the distance between two vectors is to calculate the cosine distance. The usual Euclidean distance can also be used, but it is not as efficient due to [the curse of dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality).

## Which model could be used?

It is ideal to use a model specially trained to determine the closeness of meanings, for example, models trained on Semantic Textual Similarity (STS) datasets. Current state-of-the-art models can be found on this [leaderboard](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts-benchmark?p=roberta-a-robustly-optimized-bert-pretraining).

However, not only specially trained models can be used. If a model is trained on a large enough dataset, its internal features can work as embeddings too. So, for instance, you can take any model pre-trained on ImageNet and cut off the last layer from it.
In the penultimate layer of the neural network, as a rule, the highest-level features are formed, which, however, do not correspond to specific classes. The output of this layer can be used as an embedding. ## What tasks is neural search good for? Neural search has the greatest advantage in areas where the query cannot be formulated precisely. Querying a table in a SQL database is not the best place for neural search. On the contrary, if the query itself is fuzzy, or it cannot be formulated as a set of conditions - neural search can help you. If the search query is a picture, sound file or long text, neural network search is almost the only option. If you want to build a recommendation system, the neural approach can also be useful. The user's actions can be encoded in vector space in the same way as a picture or text. And having those vectors, it is possible to find semantically similar users and determine the next probable user actions. ## Let's build our own With all that said, let's make our neural network search. As an example, I decided to make a search for startups by their description. In this demo, we will see the cases when text search works better and the cases when neural network search works better. I will use data from [startups-list.com](https://www.startups-list.com/). Each record contains the name, a paragraph describing the company, the location and a picture. Raw parsed data can be found at [this link](https://storage.googleapis.com/generall-shared-data/startups_demo.json). ### Prepare data for neural search To be able to search for our descriptions in vector space, we must get vectors first. We need to encode the descriptions into a vector representation. As the descriptions are textual data, we can use a pre-trained language model. As mentioned above, for the task of text search there is a whole set of pre-trained models specifically tuned for semantic similarity. One of the easiest libraries to work with pre-trained language models, in my opinion, is the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) by UKPLab. It provides a way to conveniently download and use many pre-trained models, mostly based on transformer architecture. Transformers is not the only architecture suitable for neural search, but for our task, it is quite enough. We will use a model called `all-MiniLM-L6-v2`. This model is an all-round model tuned for many use-cases. Trained on a large and diverse dataset of over 1 billion training pairs. It is optimized for low memory consumption and fast inference. The complete code for data preparation with detailed comments can be found and run in [Colab Notebook](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) ### Vector search engine Now as we have a vector representation for all our records, we need to store them somewhere. In addition to storing, we may also need to add or delete a vector, save additional information with the vector. And most importantly, we need a way to search for the nearest vectors. The vector search engine can take care of all these tasks. It provides a convenient API for searching and managing vectors. In our tutorial we will use [Qdrant](https://github.com/qdrant/qdrant) vector search engine. 
It not only supports all the necessary operations with vectors but also allows you to store additional payload along with vectors and use it to filter the search results. Qdrant has a client for Python and also defines the API schema if you need to use it from other languages.

The easiest way to use Qdrant is to run a pre-built image, so make sure you have Docker installed on your system. To start Qdrant, use the instructions on its [homepage](https://github.com/qdrant/qdrant).

Download the image from [DockerHub](https://hub.docker.com/r/qdrant/qdrant):

```bash
docker pull qdrant/qdrant
```

And run the service inside Docker:

```bash
docker run -p 6333:6333 \
    -v $(pwd)/qdrant_storage:/qdrant/storage \
    qdrant/qdrant
```

You should see output like this:

```text
...
[2021-02-05T00:08:51Z INFO actix_server::builder] Starting 12 workers
[2021-02-05T00:08:51Z INFO actix_server::builder] Starting "actix-web-service-0.0.0.0:6333" service on 0.0.0.0:6333
```

This means that the service is successfully launched and listening on port 6333. To make sure, open [http://localhost:6333/](http://localhost:6333/) in your browser and you should get the Qdrant version info.

All data uploaded to Qdrant is saved into the `./qdrant_storage` directory and will be persisted even if you recreate the container.

### Upload data to Qdrant

Now that we have the vectors prepared and the search engine running, we can start uploading the data. To interact with Qdrant from Python, I recommend using the out-of-the-box client library.

To install it, use the following command:

```bash
pip install qdrant-client
```

At this point, we should have the startup records in the file `startups.json`, the encoded vectors in the file `startup_vectors.npy`, and Qdrant running on a local machine.

Let's write a script to upload all startup data and vectors into the search engine.

First, let's create a client object for Qdrant.

```python
# Import client library
from qdrant_client import QdrantClient
from qdrant_client.models import VectorParams, Distance

qdrant_client = QdrantClient(host='localhost', port=6333)
```

Qdrant allows you to combine vectors of the same purpose into collections. Many independent vector collections can exist on one service at the same time.

Let's create a new collection for our startup vectors.

```python
qdrant_client.recreate_collection(
    collection_name='startups',
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)
```

The `recreate_collection` function first tries to remove an existing collection with the same name. This is useful if you are experimenting and running the script several times.

The vector `size` parameter is very important. It tells the service the size of the vectors in that collection. All vectors in a collection must have the same size, otherwise, it is impossible to calculate the distance between them. `384` is the output dimensionality of the encoder we are using. The `distance` parameter allows specifying the function used to measure the distance between two points.

The Qdrant client library defines a special function that allows you to load datasets into the service. However, since there may be too much data to fit into a single machine's memory, the function takes an iterator over the data as input.

Let's create an iterator over the startup data and vectors.

```python
import numpy as np
import json

fd = open('./startups.json')

# payload is now an iterator over startup data
payload = map(json.loads, fd)

# Here we load all vectors into memory, a numpy array works as an iterable by itself.
# Another option would be to use mmap, if we don't want to load all data into RAM
vectors = np.load('./startup_vectors.npy')
```

And the final step - data uploading:

```python
qdrant_client.upload_collection(
    collection_name='startups',
    vectors=vectors,
    payload=payload,
    ids=None,  # Vector ids will be assigned automatically
    batch_size=256  # How many vectors will be uploaded in a single request?
)
```

Now we have the vectors uploaded to the vector search engine. In the next step, we will learn how to actually search for the closest vectors.

The full code for this step can be found [here](https://github.com/qdrant/qdrant_demo/blob/master/qdrant_demo/init_collection_startups.py).

### Make a search API

Now that all the preparations are complete, let's start building a neural search class.

First, install all the requirements:

```bash
pip install sentence-transformers numpy
```

In order to process incoming requests, the neural search class will need two things: a model to convert the query into a vector, and a Qdrant client to perform the search queries.

```python
# File: neural_searcher.py

from qdrant_client import QdrantClient
from sentence_transformers import SentenceTransformer


class NeuralSearcher:

    def __init__(self, collection_name):
        self.collection_name = collection_name
        # Initialize encoder model
        self.model = SentenceTransformer('all-MiniLM-L6-v2', device='cpu')
        # Initialize Qdrant client
        self.qdrant_client = QdrantClient(host='localhost', port=6333)
```

The search function looks as simple as possible:

```python
    def search(self, text: str):
        # Convert text query into vector
        vector = self.model.encode(text).tolist()

        # Use `vector` to search for the closest vectors in the collection
        search_result = self.qdrant_client.search(
            collection_name=self.collection_name,
            query_vector=vector,
            query_filter=None,  # We don't want any filters for now
            top=5  # 5 closest results are enough
        )
        # `search_result` contains found vector ids with similarity scores along with the stored payload
        # In this function we are interested in payload only
        payloads = [hit.payload for hit in search_result]
        return payloads
```

With Qdrant it is also feasible to add some conditions to the search. For example, if we wanted to search for startups in a certain city, the search query could look like this:

```python
from qdrant_client.models import Filter

...

city_of_interest = "Berlin"

# Define a filter for cities
city_filter = Filter(**{
    "must": [{
        "key": "city",  # We store city information in a field of the same name
        "match": {  # This condition checks if the payload field has the requested value
            "keyword": city_of_interest
        }
    }]
})

search_result = self.qdrant_client.search(
    collection_name=self.collection_name,
    query_vector=vector,
    query_filter=city_filter,
    top=5
)
...
```

We now have a class for making neural search queries. Let's wrap it up into a service.

### Deploy as a service

To build the service, we will use the FastAPI framework. It is super easy to use and requires minimal code writing.
To install it, use the command ```bash pip install fastapi uvicorn ``` Our service will have only one API endpoint and will look like this: ```python # File: service.py from fastapi import FastAPI # That is the file where NeuralSearcher is stored from neural_searcher import NeuralSearcher app = FastAPI() # Create an instance of the neural searcher neural_searcher = NeuralSearcher(collection_name='startups') @app.get("/api/search") def search_startup(q: str): return { "result": neural_searcher.search(text=q) } if __name__ == "__main__": import uvicorn uvicorn.run(app, host="0.0.0.0", port=8000) ``` Now, if you run the service with ```bash python service.py ``` and open your browser at [http://localhost:8000/docs](http://localhost:8000/docs) , you should be able to see a debug interface for your service. ![FastAPI Swagger interface](https://gist.githubusercontent.com/generall/c229cc94be8c15095286b0c55a3f19d7/raw/d866e37a60036ebe65508bd736faff817a5d27e9/fastapi_neural_search.png) Feel free to play around with it, make queries and check out the results. This concludes the tutorial. ### Online Demo The described code is the core of this [online demo](https://qdrant.to/semantic-search-demo). You can try it to get an intuition for cases when the neural search is useful. The demo contains a switch that selects between neural and full-text searches. You can turn neural search on and off to compare the result with regular full-text search. Try to use startup description to find similar ones. ## Conclusion In this tutorial, I have tried to give minimal information about neural search, but enough to start using it. Many potential applications are not mentioned here, this is a space to go further into the subject. Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, publish other examples of neural networks and neural search applications.
qdrant-landing/content/articles/new-recommendation-api.md
--- title: Deliver Better Recommendations with Qdrant’s new API short_description: Qdrant 1.6 brings recommendations strategies and more flexibility to the Recommendation API. description: Qdrant 1.6 brings recommendations strategies and more flexibility to the Recommendation API. preview_dir: /articles_data/new-recommendation-api/preview social_preview_image: /articles_data/new-recommendation-api/preview/social_preview.png small_preview_image: /articles_data/new-recommendation-api/icon.svg weight: -80 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-10-25T09:46:00.000Z --- The most popular use case for vector search engines, such as Qdrant, is Semantic search with a single query vector. Given the query, we can vectorize (embed) it and find the closest points in the index. But [Vector Similarity beyond Search](/articles/vector-similarity-beyond-search/) does exist, and recommendation systems are a great example. Recommendations might be seen as a multi-aim search, where we want to find items close to positive and far from negative examples. This use of vector databases has many applications, including recommendation systems for e-commerce, content, or even dating apps. Qdrant has provided the [Recommendation API](/documentation/concepts/search/#recommendation-api) for a while, and with the latest release, [Qdrant 1.6](https://github.com/qdrant/qdrant/releases/tag/v1.6.0), we're glad to give you more flexibility and control over the Recommendation API. Here, we'll discuss some internals and show how they may be used in practice. ### Recap of the old recommendations API The previous [Recommendation API](/documentation/concepts/search/#recommendation-api) in Qdrant came with some limitations. First of all, it was required to pass vector IDs for both positive and negative example points. If you wanted to use vector embeddings directly, you had to either create a new point in a collection or mimic the behaviour of the Recommendation API by using the [Search API](/documentation/concepts/search/#search-api). Moreover, in the previous releases of Qdrant, you were always asked to provide at least one positive example. This requirement was based on the algorithm used to combine multiple samples into a single query vector. It was a simple, yet effective approach. However, if the only information you had was that your user dislikes some items, you couldn't use it directly. Qdrant 1.6 brings a more flexible API. You can now provide both IDs and vectors of positive and negative examples. You can even combine them within a single request. That makes the new implementation backward compatible, so you can easily upgrade an existing Qdrant instance without any changes in your code. And the default behaviour of the API is still the same as before. However, we extended the API, so **you can now choose the strategy of how to find the recommended points**. ```http POST /collections/{collection_name}/points/recommend { "positive": [100, 231], "negative": [718, [0.2, 0.3, 0.4, 0.5]], "filter": { "must": [ { "key": "city", "match": { "value": "London" } } ] }, "strategy": "average_vector", "limit": 3 } ``` There are two key changes in the request. First of all, we can adjust the strategy of search and set it to `average_vector` (the default) or `best_score`. Moreover, we can pass both IDs (`718`) and embeddings (`[0.2, 0.3, 0.4, 0.5]`) as both positive and negative examples. 
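For Python users, roughly the same request can be expressed with the `qdrant-client` package. The snippet below is a sketch rather than part of the original examples - the collection name is a placeholder and the client is assumed to be reachable on localhost:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

recommended = client.recommend(
    collection_name="my_collection",        # placeholder collection name
    positive=[100, 231],                    # existing point IDs
    negative=[718, [0.2, 0.3, 0.4, 0.5]],   # an ID mixed with a raw embedding
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="city",
                match=models.MatchValue(value="London"),
            )
        ]
    ),
    strategy=models.RecommendStrategy.AVERAGE_VECTOR,  # or BEST_SCORE
    limit=3,
)
```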
## HNSW ANN example and strategy Let’s start with an example to help you understand the [HNSW graph](/articles/filtrable-hnsw/). Assume you want to travel to a small city on another continent: 1. You start from your hometown and take a bus to the local airport. 2. Then, take a flight to one of the closest hubs. 3. From there, you have to take another flight to a hub on your destination continent. 4. Hopefully, one last flight to your destination city. 5. You still have one more leg on local transport to get to your final address. This journey is similar to the HNSW graph’s use in Qdrant's approximate nearest neighbours search. ![Transport network](/articles_data/new-recommendation-api/example-transport-network.png) HNSW is a multilayer graph of vectors (embeddings), with connections based on vector proximity. The top layer has the least points, and the distances between those points are the biggest. The deeper we go, the more points we have, and the distances get closer. The graph is built in a way that the points are connected to their closest neighbours at every layer. All the points from a particular layer are also in the layer below, so switching the search layer while staying in the same location is possible. In the case of transport networks, the top layer would be the airline hubs, well-connected but with big distances between the airports. Local airports, along with railways and buses, with higher density and smaller distances, make up the middle layers. Lastly, our bottom layer consists of local means of transport, which is the densest and has the smallest distances between the points. You don’t have to check all the possible connections when you travel. You select an intercontinental flight, then a local one, and finally a bus or a taxi. All the decisions are made based on the distance between the points. The search process in HNSW is also based on similarly traversing the graph. Start from the entry point in the top layer, find its closest point and then use that point as the entry point into the next densest layer. This process repeats until we reach the bottom layer. Visited points and distances to the original query vector are kept in memory. If none of the neighbours of the current point is better than the best match, we can stop the traversal, as this is a local minimum. We start at the biggest scale, and then gradually zoom in. In this oversimplified example, we assumed that the distance between the points is the only factor that matters. In reality, we might want to consider other criteria, such as the ticket price, or avoid some specific locations due to certain restrictions. That means, there are various strategies for choosing the best match, which is also true in the case of vector recommendations. We can use different approaches to determine the path of traversing the HNSW graph by changing how we calculate the score of a candidate point during traversal. The default behaviour is based on pure distance, but Qdrant 1.6 exposes two strategies for the recommendation API. ### Average vector The default strategy, called `average_vector` is the previous one, based on the average of positive and negative examples. It simplifies the recommendations process and converts it into a single vector search. It supports both point IDs and vectors as parameters. For example, you can get recommendations based on past interactions with existing points combined with query vector embedding. 
Internally, that mechanism is based on the averages of positive and negative examples and is calculated with the following formula:

$$ \text{average vector} = \text{avg}(\text{positive vectors}) + \left( \text{avg}(\text{positive vectors}) - \text{avg}(\text{negative vectors}) \right) $$

The `average_vector` strategy thus reduces the problem of recommendations to a single vector search.

### The new hotness - Best score

The new strategy is called `best_score`. It does not rely on averages and is more flexible. It allows you to pass just negative samples and uses a slightly more sophisticated algorithm under the hood.

The best score is chosen at every step of HNSW graph traversal. We separately calculate the distance between a traversed point and every positive and negative example. In the case of the best score strategy, **there is no single query vector anymore, but a bunch of positive and negative queries**. As a result, for each traversed point we get a set of distances, one for each example in the query. In the next step, we simply take the best scores for positives and negatives, creating two separate values. The best scores are just the closest distances of the traversed point to the positives and to the negatives.

The idea is: **if a point is closer to any negative than to any positive example, we do not want it**. We penalize being close to the negatives, so instead of using the similarity value directly, we check if it’s closer to positives or negatives. The following formula is used to calculate the score of a traversed potential point:

```rust
if best_positive_score > best_negative_score {
    score = best_positive_score
} else {
    score = -(best_negative_score * best_negative_score)
}
```

If the point is closer to the negatives, we penalize it by taking the negative squared value of the best negative score. For a closer negative, the score of the candidate point will always be lower than or equal to zero, making the chances of choosing that point significantly lower. However, if the best negative score is higher than the best positive score, we still prefer the points that are further away from the negatives. That procedure effectively **pulls the traversal procedure away from the negative examples**.

If you want to know more about the internals of HNSW, you can check out the article about the [Filtrable HNSW](/articles/filtrable-hnsw/) that covers the topic thoroughly.

## Food Discovery demo

Our [Food Discovery demo](/articles/food-discovery-demo/) is an application built on top of the new [Recommendation API](/documentation/concepts/search/#recommendation-api). It allows you to find a meal based on liked and disliked photos.

There are some updates, enabled by the new Qdrant release:

* **Ability to include multiple textual queries in the recommendation request.** Previously, we only allowed passing a single query to solve the cold start problem. Right now, you can pass multiple queries and mix them with the liked/disliked photos. This became possible because of the new flexibility in parameters. We can pass both point IDs and embedding vectors in the same request, and user queries are obviously not a part of the collection.
* **Switch between the recommendation strategies.** You can now choose between the `average_vector` and the `best_score` scoring algorithm.

### Differences between the strategies

The UI of the Food Discovery demo allows you to switch between the strategies. The `best_score` is the default one, but with just a single switch, you can see how the results differ when using the previous `average_vector` strategy.
If you select just a single positive example, both algorithms work identically. ##### One positive example <video autoplay="true" loop="true" width="100%" controls><source src="/articles_data/new-recommendation-api/one-positive.mp4" type="video/mp4"></video> The difference only becomes apparent when you start adding more examples, especially if you choose some negatives. ##### One positive and one negative example <video autoplay="true" loop="true" width="100%" controls><source src="/articles_data/new-recommendation-api/one-positive-one-negative.mp4" type="video/mp4"></video> The more likes and dislikes we add, the more diverse the results of the `best_score` strategy will be. In the old strategy, there is just a single vector, so all the examples are similar to it. The new one takes into account all the examples separately, making the variety richer. ##### Multiple positive and negative examples <video autoplay="true" loop="true" width="100%" controls><source src="/articles_data/new-recommendation-api/multiple.mp4" type="video/mp4"></video> Choosing the right strategy is dataset-dependent, and the embeddings play a significant role here. Thus, it’s always worth trying both of them and comparing the results in a particular case. #### Handling the negatives only In the case of our Food Discovery demo, passing just the negative images can work as an outlier detection mechanism. While the dataset was supposed to contain only food photos, this is not actually true. A simple way to find these outliers is to pass in food item photos as negatives, leading to the results being the most "unlike" food images. In our case you will see pill bottles and books. **The `average_vector` strategy still requires providing at least one positive example!** However, since cosine distance is set up for the collection used in the demo, we faked it using [a trick described in the previous article](/articles/food-discovery-demo/#negative-feedback-only). In a nutshell, if you only pass negative examples, their vectors will be averaged, and the negated resulting vector will be used as a query to the search endpoint. ##### Negatives only <video autoplay="true" loop="true" width="100%" controls><source src="/articles_data/new-recommendation-api/negatives-only.mp4" type="video/mp4"></video> Still, both methods return different results, so they each have their place depending on the questions being asked and the datasets being used. #### Challenges with multimodality Food Discovery uses the [CLIP embeddings model](https://huggingface.co/sentence-transformers/clip-ViT-B-32), which is multimodal, allowing both images and texts encoded into the same vector space. Using this model allows for image queries, text queries, or both of them combined. We utilized that mechanism in the updated demo, allowing you to pass the textual queries to filter the results further. ##### A single text query <video autoplay="true" loop="true" width="100%" controls><source src="/articles_data/new-recommendation-api/text-query.mp4" type="video/mp4"></video> Text queries might be mixed with the liked and disliked photos, so you can combine them in a single request. However, you might be surprised by the results achieved with the new strategy, if you start adding the negative examples. ##### A single text query with negative example <video autoplay="true" loop="true" width="100%" controls><source src="/articles_data/new-recommendation-api/text-query-with-negative.mp4" type="video/mp4"></video> This is an issue related to the embeddings themselves. 
Our dataset contains a bunch of image embeddings that are pretty close to each other. On the other hand, our text queries are quite far from most of the image embeddings, but relatively close to some of them, so the text-to-image search seems to work well. When all query items come from the same domain, such as only text, everything works fine. However, if we mix positive text and negative image embeddings, the results of the `best_score` are overwhelmed by the negative samples, which are simply closer to the dataset embeddings. If you experience such a problem, the `average_vector` strategy might be a better choice. ### Check out the demo The [Food Discovery Demo](https://food-discovery.qdrant.tech/) is available online, so you can test and see the difference. This is an open source project, so you can easily deploy it on your own. The source code is available in the [GitHub repository ](https://github.com/qdrant/demo-food-discovery/) and the [README](https://github.com/qdrant/demo-food-discovery/blob/main/README.md) describes the process of setting it up. Since calculating the embeddings takes a while, we precomputed them and exported them as a [snapshot](https://storage.googleapis.com/common-datasets-snapshots/wolt-clip-ViT-B-32.snapshot), which might be easily imported into any Qdrant instance. [Qdrant Cloud is the easiest way to start](https://cloud.qdrant.io/), though!
qdrant-landing/content/articles/product-quantization.md
--- title: "Qdrant under the hood: Product Quantization" short_description: "Vector search with low memory? Try out our brand-new Product Quantization!" description: "Vector search with low memory? Try out our brand-new Product Quantization!" social_preview_image: /articles_data/product-quantization/social_preview.png small_preview_image: /articles_data/product-quantization/product-quantization-icon.svg preview_dir: /articles_data/product-quantization/preview weight: 4 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-05-30T09:45:00+02:00 draft: false keywords: - vector search - product quantization - memory optimization aliases: [ /articles/product_quantization/ ] --- Qdrant 1.1.0 brought the support of [Scalar Quantization](/articles/scalar-quantization/), a technique of reducing the memory footprint by even four times, by using `int8` to represent the values that would be normally represented by `float32`. The memory usage in vector search might be reduced even further! Please welcome **Product Quantization**, a brand-new feature of Qdrant 1.2.0! ## Product Quantization Product Quantization converts floating-point numbers into integers like every other quantization method. However, the process is slightly more complicated than Scalar Quantization and is more customizable, so you can find the sweet spot between memory usage and search precision. This article covers all the steps required to perform Product Quantization and the way it's implemented in Qdrant. Let’s assume we have a few vectors being added to the collection and that our optimizer decided to start creating a new segment. ![A list of raw vectors](/articles_data/product-quantization/raw-vectors.png) ### Cutting the vector into pieces First of all, our vectors are going to be divided into **chunks** aka **subvectors**. The number of chunks is configurable, but as a rule of thumb - the lower it is, the higher the compression rate. That also comes with reduced search precision, but in some cases, you may prefer to keep the memory usage as low as possible. ![A list of chunked vectors](/articles_data/product-quantization/chunked-vectors.png) Qdrant API allows choosing the compression ratio from 4x up to 64x. In our example, we selected 16x, so each subvector will consist of 4 floats (16 bytes), and it will eventually be represented by a single byte. ### Clustering The chunks of our vectors are then used as input for clustering. Qdrant uses the K-means algorithm, with $ K = 256 $. It was selected a priori, as this is the maximum number of values a single byte represents. As a result, we receive a list of 256 centroids for each chunk and assign each of them a unique id. **The clustering is done separately for each group of chunks.** ![Clustered chunks of vectors](/articles_data/product-quantization/chunks-clustering.png) Each chunk of a vector might now be mapped to the closest centroid. That’s where we lose the precision, as a single point will only represent a whole subspace. Instead of using a subvector, we can store the id of the closest centroid. If we repeat that for each chunk, we can approximate the original embedding as a vector of subsequent ids of the centroids. The dimensionality of the created vector is equal to the number of chunks, in our case 2. 
![A new vector built from the ids of the centroids](/articles_data/product-quantization/vector-of-ids.png) ### Full process All those steps build the following pipeline of Product Quantization: ![Full process of Product Quantization](/articles_data/product-quantization/full-process.png) ## Measuring the distance Vector search relies on the distances between the points. Enabling Product Quantization slightly changes the way it has to be calculated. The query vector is divided into chunks, and then we figure the overall distance as a sum of distances between the subvectors and the centroids assigned to the specific id of the vector we compare to. We know the coordinates of the centroids, so that's easy. ![Calculating the distance of between the query and the stored vector](/articles_data/product-quantization/distance-calculation.png) #### Qdrant implementation Search operation requires calculating the distance to multiple points. Since we calculate the distance to a finite set of centroids, those might be precomputed and reused. Qdrant creates a lookup table for each query, so it can then simply sum up several terms to measure the distance between a query and all the centroids. | | Centroid 0 | Centroid 1 | ... | |-------------|------------|------------|-----| | **Chunk 0** | 0.14213 | 0.51242 | | | **Chunk 1** | 0.08421 | 0.00142 | | | **...** | ... | ... | ... | ## Benchmarks Product Quantization comes with a cost - there are some additional operations to perform so that the performance might be reduced. However, memory usage might be reduced drastically as well. As usual, we did some benchmarks to give you a brief understanding of what you may expect. Again, we reused the same pipeline as in [the other benchmarks we published](/benchmarks/). We selected [Arxiv-titles-384-angular-no-filters](https://github.com/qdrant/ann-filtering-benchmark-datasets) and [Glove-100](https://github.com/erikbern/ann-benchmarks/) datasets to measure the impact of Product Quantization on precision and time. Both experiments were launched with $ EF = 128 $. The results are summarized in the tables: #### Glove-100 <table> <thead> <tr> <th></th> <th>Original</th> <th>1D clusters</th> <th>2D clusters</th> <th>3D clusters</th> </tr> </thead> <tbody> <tr> <th>Mean precision</th> <td>0.7158</td> <td>0.7143</td> <td>0.6731</td> <td>0.5854</td> </tr> <tr> <th>Mean search time</th> <td>2336 µs</td> <td>2750 µs</td> <td>2597 µs</td> <td>2534 µs</td> </tr> <tr> <th>Compression</th> <td>x1</td> <td>x4</td> <td>x8</td> <td>x12</td> </tr> <tr> <th>Upload & indexing time</th> <td>147 s</td> <td>339 s</td> <td>217 s</td> <td>178 s</td> </tr> </tbody> </table> Product Quantization increases both indexing and searching time. The higher the compression ratio, the lower the search precision. The main benefit is undoubtedly the reduced usage of memory. 
#### Arxiv-titles-384-angular-no-filters <table> <thead> <tr> <th></th> <th>Original</th> <th>1D clusters</th> <th>2D clusters</th> <th>4D clusters</th> <th>8D clusters</th> </tr> </thead> <tbody> <tr> <th>Mean precision</th> <td>0.9837</td> <td>0.9677</td> <td>0.9143</td> <td>0.8068</td> <td>0.6618</td> </tr> <tr> <th>Mean search time</th> <td>2719 µs</td> <td>4134 µs</td> <td>2947 µs</td> <td>2175 µs</td> <td>2053 µs</td> </tr> <tr> <th>Compression</th> <td>x1</td> <td>x4</td> <td>x8</td> <td>x16</td> <td>x32</td> </tr> <tr> <th>Upload & indexing time</th> <td>332 s</td> <td>921 s</td> <td>597 s</td> <td>481 s</td> <td>474 s</td> </tr> </tbody> </table> It turns out that in some cases, Product Quantization may not only reduce the memory usage, but also the search time. ## Good practices Compared to Scalar Quantization, Product Quantization offers a higher compression rate. However, this comes with considerable trade-offs in accuracy, and at times, in-RAM search speed. Product Quantization tends to be favored in certain specific scenarios: - Deployment in a low-RAM environment where the limiting factor is the number of disk reads rather than the vector comparison itself - Situations where the dimensionality of the original vectors is sufficiently high - Cases where indexing speed is not a critical factor In circumstances that do not align with the above, Scalar Quantization should be the preferred choice. Qdrant documentation on [Product Quantization](/documentation/guides/quantization/#setting-up-product-quantization) will help you to set and configure the new quantization for your data and achieve even up to 64x memory reduction.
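As a rough sketch of how this might look with the Python client (the collection name and vector parameters below are placeholders, not part of the benchmarks above), Product Quantization is enabled at collection creation time, with `compression` controlling the ratio discussed earlier:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

client.create_collection(
    collection_name="pq_collection",  # placeholder name
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
    quantization_config=models.ProductQuantization(
        product=models.ProductQuantizationConfig(
            compression=models.CompressionRatio.X16,  # 16x compression, as in the example above
            always_ram=True,  # keep the quantized vectors in RAM, originals may stay on disk
        )
    ),
)
```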
qdrant-landing/content/articles/qa-with-cohere-and-qdrant.md
--- title: Question Answering as a Service with Cohere and Qdrant short_description: "End-to-end Question Answering system for the biomedical data with SaaS tools: Cohere co.embed API and Qdrant" description: "End-to-end Question Answering system for the biomedical data with SaaS tools: Cohere co.embed API and Qdrant" social_preview_image: /articles_data/qa-with-cohere-and-qdrant/social_preview.png small_preview_image: /articles_data/qa-with-cohere-and-qdrant/q-and-a-article-icon.svg preview_dir: /articles_data/qa-with-cohere-and-qdrant/preview weight: 7 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2022-11-29T15:45:00+01:00 draft: false keywords: - vector search - question answering - cohere - co.embed - embeddings --- Bi-encoders are probably the most efficient way of setting up a semantic Question Answering system. This architecture relies on the same neural model that creates vector embeddings for both questions and answers. The assumption is, both question and answer should have representations close to each other in the latent space. It should be like that because they should both describe the same semantic concept. That doesn't apply to answers like "Yes" or "No" though, but standard FAQ-like problems are a bit easier as there is typically an overlap between both texts. Not necessarily in terms of wording, but in their semantics. ![Bi-encoder structure. Both queries (questions) and documents (answers) are vectorized by the same neural encoder. Output embeddings are then compared by a chosen distance function, typically cosine similarity.](/articles_data/qa-with-cohere-and-qdrant/biencoder-diagram.png) And yeah, you need to **bring your own embeddings**, in order to even start. There are various ways how to obtain them, but using Cohere [co.embed API](https://docs.cohere.ai/reference/embed) is probably the easiest and most convenient method. ## Why co.embed API and Qdrant go well together? Maintaining a **Large Language Model** might be hard and expensive. Scaling it up and down, when the traffic changes, require even more effort and becomes unpredictable. That might be definitely a blocker for any semantic search system. But if you want to start right away, you may consider using a SaaS model, Cohere’s [co.embed API](https://docs.cohere.ai/reference/embed) in particular. It gives you state-of-the-art language models available as a Highly Available HTTP service with no need to train or maintain your own service. As all the communication is done with JSONs, you can simply provide the co.embed output as Qdrant input. ```python # Putting the co.embed API response directly as Qdrant method input qdrant_client.upsert( collection_name="collection", points=rest.Batch( ids=[...], vectors=cohere_client.embed(...).embeddings, payloads=[...], ), ) ``` Both tools are easy to combine, so you can start working with semantic search in a few minutes, not days. And what if your needs are so specific that you need to fine-tune a general usage model? Co.embed API goes beyond pre-trained encoders and allows providing some custom datasets to [customize the embedding model with your own data](https://docs.cohere.com/docs/finetuning). As a result, you get the quality of domain-specific models, but without worrying about infrastructure. ## System architecture overview In real systems, answers get vectorized and stored in an efficient vector search database. 
We typically don’t even need to provide specific answers, but can just use sentences or paragraphs of text and vectorize them instead. Still, if a slightly longer piece of text contains the answer to a particular question, its distance to the question embedding should not be that far away. And it should certainly be closer than all the other, non-matching answers. Storing the answer embeddings in a vector database makes the search process way easier.

![Building the database of possible answers. All the texts are converted into their vector embeddings and those embeddings are stored in a vector database, i.e. Qdrant.](/articles_data/qa-with-cohere-and-qdrant/vector-database.png)

## Looking for the correct answer

Once our database is working and all the answer embeddings are already in place, we can start querying it. We basically perform the same vectorization on a given question and ask the database to provide some near neighbours. We rely on the embeddings being close to each other, so we expect the points with the smallest distance in the latent space to contain the proper answer.

![While searching, a question gets vectorized by the same neural encoder. Vector database is a component that looks for the closest answer vectors using e.g. cosine similarity. A proper system, like Qdrant, will make the lookup process more efficient, as it won’t calculate the distance to all the answer embeddings. Thanks to HNSW, it will be able to find the nearest neighbours with sublinear complexity.](/articles_data/qa-with-cohere-and-qdrant/search-with-vector-database.png)

## Implementing the QA search system with SaaS tools

We don’t want to maintain our own service for the neural encoder, nor even set up a Qdrant instance. There are SaaS solutions for both — Cohere’s [co.embed API](https://docs.cohere.ai/reference/embed) and [Qdrant Cloud](https://qdrant.to/cloud), so we’ll use them instead of on-premise tools.

### Question Answering on biomedical data

We’re going to implement a Question Answering system for biomedical data. There is a *[pubmed_qa](https://huggingface.co/datasets/pubmed_qa)* dataset, with its *pqa_labeled* subset containing 1,000 examples of questions and answers labelled by domain experts. Our system is going to be fed with the embeddings generated by the co.embed API, and we’ll load them into Qdrant. Using Qdrant Cloud vs your own instance does not matter much here. There is a subtle difference in how to connect to the cloud instance, but all the other operations are executed in the same way.

```python
from datasets import load_dataset

# Loading the dataset from the HuggingFace hub. It consists of several columns: pubid,
# question, context, long_answer and final_decision. For the purposes of our system,
# we’ll use question and long_answer.
dataset = load_dataset("pubmed_qa", "pqa_labeled")
```

| **pubid** | **question** | **context** | **long_answer** | **final_decision** |
|-----------|---------------------------------------------------|-------------|---------------------------------------------------|--------------------|
| 18802997 | Can calprotectin predict relapse risk in infla... | ... | Measuring calprotectin may help to identify UC... | maybe |
| 20538207 | Should temperature be monitorized during kidne... | ... | The new storage can affords more stable temper... | no |
| 25521278 | Is plate clearing a risk factor for obesity? | ... | The tendency to clear one's plate when eating ... | yes |
| 17595200 | Is there an intrauterine influence on obesity? | ... | Comparison of mother-offspring and father-offs.. | no |
| 15280782 | Is unsafe sexual behaviour increasing among HI... | ... | There was no evidence of a trend in unsafe sex... | no |

### Using Cohere and Qdrant to build the answers database

In order to start generating the embeddings, you need to [create a Cohere account](https://dashboard.cohere.ai/welcome/register). That will start your trial period, so you’ll be able to vectorize the texts for free. Once logged in, your default API key will be available in [Settings](https://dashboard.cohere.ai/api-keys). We’ll need it to call the co.embed API with the official Python package.

```python
import cohere

cohere_client = cohere.Client(COHERE_API_KEY)

# Generating the embeddings with the Cohere client library
embeddings = cohere_client.embed(
    texts=["A test sentence"],
    model="large",
)
vector_size = len(embeddings.embeddings[0])
print(vector_size)  # output: 4096
```

Let’s connect to the Qdrant instance first and create a collection with the proper configuration, so we can put some embeddings into it later on.

```python
# Connecting to Qdrant Cloud with qdrant-client requires providing the api_key.
# If you use an on-premise instance, it has to be skipped.
qdrant_client = QdrantClient(
    host="xyz-example.eu-central.aws.cloud.qdrant.io",
    prefer_grpc=True,
    api_key=QDRANT_API_KEY,
)
```

Now we’re able to vectorize all the answers. They are going to form our collection, so we can also put them into Qdrant right away, along with the payloads and identifiers. That will make our dataset easily searchable.

```python
answer_response = cohere_client.embed(
    texts=dataset["train"]["long_answer"],
    model="large",
)
vectors = [
    # Conversion to float is required for Qdrant
    list(map(float, vector))
    for vector in answer_response.embeddings
]
ids = [entry["pubid"] for entry in dataset["train"]]

# Filling up the Qdrant collection with the embeddings generated by the Cohere co.embed API
qdrant_client.upsert(
    collection_name="pubmed_qa",
    points=rest.Batch(
        ids=ids,
        vectors=vectors,
        payloads=list(dataset["train"]),
    )
)
```

And that’s it. Without even setting up a single server on our own, we created a system that can easily be asked a question. I don’t want to call it serverless, as this term is already taken, but co.embed API with Qdrant Cloud makes everything way easier to maintain.

### Answering the questions with semantic search — the quality

It’s high time to query our database with some questions. It might be interesting to somehow measure the quality of the system in general. In those kinds of problems, we typically use *top-k accuracy*. We assume the prediction of the system was correct if the correct answer was present in the first *k* results.

```python
# Finding the position at which Qdrant provided the expected answer for each question.
# That allows us to calculate accuracy@k for different values of k.
k_max = 10
answer_positions = []
for embedding, pubid in tqdm(zip(question_response.embeddings, ids)):
    response = qdrant_client.search(
        collection_name="pubmed_qa",
        query_vector=embedding,
        limit=k_max,
    )
    answer_ids = [record.id for record in response]
    if pubid in answer_ids:
        answer_positions.append(answer_ids.index(pubid))
    else:
        answer_positions.append(-1)
```

Saved answer positions allow us to calculate the metric for different *k* values.
```python # Prepared answer positions are being used to calculate different values of accuracy@k for k in range(1, k_max + 1): correct_answers = len( list( filter(lambda x: 0 <= x < k, answer_positions) ) ) print(f"accuracy@{k} =", correct_answers / len(dataset["train"])) ``` Here are the values of the top-k accuracy for different values of k: | **metric** | **value** | |-------------|-----------| | accuracy@1 | 0.877 | | accuracy@2 | 0.921 | | accuracy@3 | 0.942 | | accuracy@4 | 0.950 | | accuracy@5 | 0.956 | | accuracy@6 | 0.960 | | accuracy@7 | 0.964 | | accuracy@8 | 0.971 | | accuracy@9 | 0.976 | | accuracy@10 | 0.977 | It seems like our system worked pretty well even if we consider just the first result, with the lowest distance. We failed with around 12% of questions. But numbers become better with the higher values of k. It might be also valuable to check out what questions our system failed to answer, their perfect match and our guesses. We managed to implement a working Question Answering system within just a few lines of code. If you are fine with the results achieved, then you can start using it right away. Still, if you feel you need a slight improvement, then fine-tuning the model is a way to go. If you want to check out the full source code, it is available on [Google Colab](https://colab.research.google.com/drive/1YOYq5PbRhQ_cjhi6k4t1FnWgQm8jZ6hm?usp=sharing).
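As a closing sketch (reusing the same collection, model and client objects as above, so nothing here goes beyond what the article already set up), asking a single ad-hoc question boils down to one embedding call and one search call:

```python
# Embed an ad-hoc question with the same Cohere model used for the answers
question_response = cohere_client.embed(
    texts=["Is plate clearing a risk factor for obesity?"],
    model="large",
)

# Search the collection and print the best-matching answers
hits = qdrant_client.search(
    collection_name="pubmed_qa",
    query_vector=list(map(float, question_response.embeddings[0])),
    limit=3,
)
for hit in hits:
    print(hit.score, hit.payload["long_answer"])
```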
qdrant-landing/content/articles/qdrant-0-10-release.md
--- title: Qdrant 0.10 released short_description: A short review of all the features introduced in Qdrant 0.10 description: Qdrant 0.10 brings a lot of changes. Check out what's new! preview_dir: /articles_data/qdrant-0-10-release/preview small_preview_image: /articles_data/qdrant-0-10-release/new-svgrepo-com.svg social_preview_image: /articles_data/qdrant-0-10-release/preview/social_preview.jpg weight: 70 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2022-09-19T13:30:00+02:00 draft: false --- [Qdrant 0.10 is a new version](https://github.com/qdrant/qdrant/releases/tag/v0.10.0) that brings a lot of performance improvements, but also some new features which were heavily requested by our users. Here is an overview of what has changed. ## Storing multiple vectors per object Previously, if you wanted to use semantic search with multiple vectors per object, you had to create separate collections for each vector type. This was even if the vectors shared some other attributes in the payload. With Qdrant 0.10, you can now store all of these vectors together in the same collection, which allows you to share a single copy of the payload. This makes it easier to use semantic search with multiple vector types, and reduces the amount of work you need to do to set up your collections. ## Batch vector search Previously, you had to send multiple requests to the Qdrant API to perform multiple non-related tasks. However, this can cause significant network overhead and slow down the process, especially if you have a poor connection speed. Fortunately, the [new batch search feature](/documentation/concepts/search/#batch-search-api) allows you to avoid this issue. With just one API call, Qdrant will handle multiple search requests in the most efficient way possible. This means that you can perform multiple tasks simultaneously without having to worry about network overhead or slow performance. ## Built-in ARM support To make our application accessible to ARM users, we have compiled it specifically for that platform. If it is not compiled for ARM, the device will have to emulate it, which can slow down performance. To ensure the best possible experience for ARM users, we have created Docker images specifically for that platform. Keep in mind that using a limited set of processor instructions may affect the performance of your vector search. Therefore, we have tested both ARM and non-ARM architectures using similar setups to understand the potential impact on performance. ## Full-text filtering Qdrant is a vector database that allows you to quickly search for the nearest neighbors. However, you may need to apply additional filters on top of the semantic search. Up until version 0.10, Qdrant only supported keyword filters. With the release of Qdrant 0.10, [you can now use full-text filters](/documentation/concepts/filtering/#full-text-match) as well. This new filter type can be used on its own or in combination with other filter types to provide even more flexibility in your searches.
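For illustration only - not part of the original release notes - a full-text condition combined with a vector query might look roughly like this with the Python client (the collection name, field name, and query vector are placeholders, and a full-text payload index on the field is assumed):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

hits = client.search(
    collection_name="my_collection",        # placeholder collection
    query_vector=[0.2, 0.1, 0.9, 0.7],      # placeholder query embedding
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="description",                             # placeholder text field
                match=models.MatchText(text="vector search"),  # full-text match condition
            )
        ]
    ),
    limit=5,
)
```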
qdrant-landing/content/articles/qdrant-0-11-release.md
--- title: Introducing Qdrant 0.11 short_description: Check out what's new in Qdrant 0.11 description: Replication support is the most important change introduced by Qdrant 0.11. Check out what else has been added! preview_dir: /articles_data/qdrant-0-11-release/preview small_preview_image: /articles_data/qdrant-0-11-release/announcement-svgrepo-com.svg social_preview_image: /articles_data/qdrant-0-11-release/preview/social_preview.jpg weight: 65 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2022-10-26T13:55:00+02:00 draft: false --- We are excited to [announce the release of Qdrant v0.11](https://github.com/qdrant/qdrant/releases/tag/v0.11.0), which introduces a number of new features and improvements. ## Replication One of the key features in this release is replication support, which allows Qdrant to provide a high availability setup with distributed deployment out of the box. This, combined with sharding, enables you to horizontally scale both the size of your collections and the throughput of your cluster. This means that you can use Qdrant to handle large amounts of data without sacrificing performance or reliability. ## Administration API Another new feature is the administration API, which allows you to disable write operations to the service. This is useful in situations where search availability is more critical than updates, and can help prevent issues like memory usage watermarks from affecting your searches. ## Exact search We have also added the ability to report indexed payload points in the info API, which allows you to verify that payload values were properly formatted for indexing. In addition, we have introduced a new `exact` search parameter that allows you to force exact searches of vectors, even if an ANN index is built. This can be useful for validating the accuracy of your HNSW configuration. ## Backward compatibility This release is backward compatible with v0.10.5 storage in single node deployment, but unfortunately, distributed deployment is not compatible with previous versions due to the large number of changes required for the replica set implementation. However, clients are tested for backward compatibility with the v0.10.x service.
qdrant-landing/content/articles/qdrant-1.2.x.md
--- title: "Introducing Qdrant 1.2.x" short_description: "Check out what Qdrant 1.2 brings to vector search" description: "Check out what Qdrant 1.2 brings to vector search" social_preview_image: /articles_data/qdrant-1.2.x/social_preview.png small_preview_image: /articles_data/qdrant-1.2.x/icon.svg preview_dir: /articles_data/qdrant-1.2.x/preview weight: 8 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-05-24T10:45:00+02:00 draft: false keywords: - vector search - new features - product quantization - optional vectors - nested filters - appendable mmap - group requests --- A brand-new Qdrant 1.2 release comes packed with a plethora of new features, some of which were highly requested by our users. If you want to shape the development of the Qdrant vector database, please [join our Discord community](https://qdrant.to/discord) and let us know how you use it! ## New features As usual, a minor version update of Qdrant brings some interesting new features. We love to see your feedback, and we tried to include the features most requested by our community. ### Product Quantization The primary focus of Qdrant was always performance. That's why we built it in Rust, but we were always concerned about making vector search affordable. From the very beginning, Qdrant offered support for disk-stored collections, as storage space is way cheaper than memory. That's also why we have introduced the [Scalar Quantization](/articles/scalar-quantization/) mechanism recently, which makes it possible to reduce the memory requirements by up to four times. Today, we are bringing a new quantization mechanism to life. A separate article on [Product Quantization](/documentation/quantization/#product-quantization) will describe that feature in more detail. In a nutshell, you can **reduce the memory requirements by up to 64 times**! ### Optional named vectors Qdrant has been supporting multiple named vectors per point for quite a long time. Those may have utterly different dimensionality and distance functions used to calculate similarity. Having multiple embeddings per item is an essential real-world scenario. For example, you might be encoding textual and visual data using different models. Or you might be experimenting with different models but don't want to make your payloads redundant by keeping them in separate collections. ![Optional vectors](/articles_data/qdrant-1.2.x/optional-vectors.png) However, up to the previous version, we requested that you provide all the vectors for each point. There have been many requests to allow nullable vectors, as sometimes you cannot generate an embedding or simply don't want to for reasons we don't need to know. ### Grouping requests Embeddings are great for capturing the semantics of the documents, but we rarely encode larger pieces of data into a single vector. Having a summary of a book may sound attractive, but in reality, we divide it into paragraphs or some different parts to have higher granularity. That pays off when we perform the semantic search, as we can return the relevant pieces only. That's also how modern tools like Langchain process the data. The typical way is to encode some smaller parts of the document and keep the document id as a payload attribute. ![Query without grouping request](/articles_data/qdrant-1.2.x/without-grouping-request.png) There are cases where we want to find relevant parts, but only up to a specific number of results per document (for example, only a single one). 
Up till now, we had to implement such a mechanism on the client side and send several calls to the Qdrant engine. But that's no longer the case. Qdrant 1.2 provides a mechanism for [grouping requests](/documentation/search/#grouping-api), which can handle that server-side, within a single call to the database. This mechanism is similar to the SQL `GROUP BY` clause. ![Query with grouping request](/articles_data/qdrant-1.2.x/with-grouping-request.png) You are not limited to a single result per document, and you can select how many entries will be returned. ### Nested filters Unlike some other vector databases, Qdrant accepts any arbitrary JSON payload, including arrays, objects, and arrays of objects. You can also [filter the search results using nested keys](/documentation/filtering/#nested-key), even though arrays (using the `[]` syntax). Before Qdrant 1.2 it was impossible to express some more complex conditions for the nested structures. For example, let's assume we have the following payload: ```json { "country": "Japan", "cities": [ { "name": "Tokyo", "population": 9.3, "area": 2194 }, { "name": "Osaka", "population": 2.7, "area": 223 }, { "name": "Kyoto", "population": 1.5, "area": 827.8 } ] } ``` We want to filter out the results to include the countries with a city with over 2 million citizens and an area bigger than 500 square kilometers but no more than 1000. There is no such a city in Japan, looking at our data, but if we wrote the following filter, it would be returned: ```json { "filter": { "must": [ { "key": "country.cities[].population", "range": { "gte": 2 } }, { "key": "country.cities[].area", "range": { "gt": 500, "lte": 1000 } } ] }, "limit": 3 } ``` Japan would be returned because Tokyo and Osaka match the first criteria, while Kyoto fulfills the second. But that's not what we wanted to achieve. That's the motivation behind introducing a new type of nested filter. ```json { "filter": { "must": [ { "nested": { "key": "country.cities", "filter": { "must": [ { "key": "population", "range": { "gte": 2 } }, { "key": "area", "range": { "gt": 500, "lte": 1000 } } ] } } } ] }, "limit": 3 } ``` The syntax is consistent with all the other supported filters and enables new possibilities. In our case, it allows us to express the joined condition on a nested structure and make the results list empty but correct. ## Important changes The latest release focuses not only on the new features but also introduces some changes making Qdrant even more reliable. ### Recovery mode There has been an issue in memory-constrained environments, such as cloud, happening when users were pushing massive amounts of data into the service using `wait=false`. This data influx resulted in an overreaching of disk or RAM limits before the Write-Ahead Logging (WAL) was fully applied. This situation was causing Qdrant to attempt a restart and reapplication of WAL, failing recurrently due to the same memory constraints and pushing the service into a frustrating crash loop with many Out-of-Memory errors. Qdrant 1.2 enters recovery mode, if enabled, when it detects a failure on startup. That makes the service halt the loading of collection data and commence operations in a partial state. This state allows for removing collections but doesn't support search or update functions. **Recovery mode [has to be enabled by user](/documentation/administration/#recovery-mode).** ### Appendable mmap For a long time, segments using mmap storage were `non-appendable` and could only be constructed by the optimizer. 
Dynamically adding vectors to an mmap file is fairly complicated, which is why it had not been implemented in Qdrant before. We did our best to make it happen in this release, so mmap-based segments are now appendable as well. If you want to read more about segments, check out our docs on [vector storage](/documentation/storage/#vector-storage).

## Security

There are two major changes in terms of [security](/documentation/security/):

1. **API-key support** - basic authentication with a static API key to prevent unwanted access. Previously, API keys were only supported in [Qdrant Cloud](https://cloud.qdrant.io/).
2. **TLS support** - to use encrypted connections and prevent sniffing/MitM attacks.

## Release notes

As usual, [our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.2.0) describe all the changes introduced in the latest version.
qdrant-landing/content/articles/qdrant-1.3.x.md
---
title: "Introducing Qdrant 1.3.0"
short_description: "New version is out! Our latest release brings about some exciting performance improvements and much-needed fixes."
description: "New version is out! Our latest release brings about some exciting performance improvements and much-needed fixes."
social_preview_image: /articles_data/qdrant-1.3.x/social_preview.png
small_preview_image: /articles_data/qdrant-1.3.x/icon.svg
preview_dir: /articles_data/qdrant-1.3.x/preview
weight: 2
author: David Sertic
author_link:
date: 2023-06-26T00:00:00Z
draft: false
keywords:
  - vector search
  - new features
  - oversampling
  - grouping lookup
  - io_uring
  - group lookup
---

A brand-new [Qdrant 1.3.0 release](https://github.com/qdrant/qdrant/releases/tag/v1.3.0) comes packed with a plethora of new features, performance improvements and bug fixes:

1. Asynchronous I/O interface: Reduce overhead by managing I/O operations asynchronously, thus minimizing context switches.
2. Oversampling for Quantization: Improve the accuracy and performance of your queries while using Scalar or Product Quantization.
3. Grouping API lookup: Storage optimization method that lets you look for points in another collection using group ids.
4. Qdrant Web UI: A convenient dashboard to help you manage data stored in Qdrant.
5. Temp directory for Snapshots: Set a separate storage directory for temporary snapshots on a faster disk.
6. Other important changes

Your feedback is valuable to us, and we are always trying to include some of your feature requests in our roadmap. Join [our Discord community](https://qdrant.to/discord) and help us build Qdrant!

## New features

### Asynchronous I/O interface

Going forward, we will support the `io_uring` asynchronous interface for storage devices on Linux-based systems. Since its introduction, `io_uring` has been proven to speed up slow-disk deployments as it decouples kernel work from the I/O process.

<aside role="status">This experimental feature works on Linux kernels > 5.4</aside>

This interface uses two ring buffers to queue and manage I/O operations asynchronously, avoiding costly context switches and reducing overhead. Unlike mmap, it frees the user threads to do computations instead of waiting for the kernel to complete.

![io_uring](/articles_data/qdrant-1.3.x/io-uring.png)

#### Enable the interface from your config file:

```yaml
storage:
  # enable the async scorer which uses io_uring
  async_scorer: true
```

You can return to the mmap-based backend by either deleting the `async_scorer` entry or setting the value to `false`.

This optimization will mainly benefit workloads with lots of disk I/O (e.g. querying on-disk collections with rescoring). Please keep in mind that this feature is experimental and that the interface may change in future versions.

### Oversampling for quantization

We are introducing [oversampling](/documentation/guides/quantization/#oversampling) as a new way to help you improve the accuracy and performance of similarity search algorithms. With this method, you are able to significantly compress high-dimensional vectors in memory and then compensate for the accuracy loss by re-scoring additional points with the original vectors.

You will experience much faster performance with quantization due to parallel disk usage when reading vectors. Much better I/O means that you can keep quantized vectors in RAM, so the pre-selection will be even faster.
Finally, once pre-selection is done, you can use parallel IO to retrieve original vectors, which is significantly faster than traversing HNSW on slow disks. #### Set the oversampling factor via query: Here is how you can configure the oversampling factor - define how many extra vectors should be pre-selected using the quantized index, and then re-scored using original vectors. ```http POST /collections/{collection_name}/points/search { "params": { "quantization": { "ignore": false, "rescore": true, "oversampling": 2.4 } }, "vector": [0.2, 0.1, 0.9, 0.7], "limit": 100 } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.search( collection_name="{collection_name}", query_vector=[0.2, 0.1, 0.9, 0.7], search_params=models.SearchParams( quantization=models.QuantizationSearchParams( ignore=False, rescore=True, oversampling=2.4 ) ) ) ``` In this case, if `oversampling` is 2.4 and `limit` is 100, then 240 vectors will be pre-selected using quantized index, and then the top 100 points will be returned after re-scoring with the unquantized vectors. As you can see from the example above, this parameter is set during the query. This is a flexible method that will let you tune query accuracy. While the index is not changed, you can decide how many points you want to retrieve using quantized vectors. ### Grouping API lookup In version 1.2.0, we introduced a mechanism for requesting groups of points. Our new feature extends this functionality by giving you the option to look for points in another collection using the group ids. We wanted to add this feature, since having a single point for the shared data of the same item optimizes storage use, particularly if the payload is large. This has the extra benefit of having a single point to update when the information shared by the points in a group changes. ![Group Lookup](/articles_data/qdrant-1.3.x/group-lookup.png) For example, if you have a collection of documents, you may want to chunk them and store the points for the chunks in a separate collection, making sure that you store the point id from the document it belongs in the payload of the chunk point. 
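For instance, ingesting a chunk point could look roughly like the sketch below (a minimal, illustrative example with the Python client; the collection name, payload fields, and vector values are assumptions, not a prescribed setup):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

# Illustrative: each chunk point keeps a reference to its parent point
# in the "documents" collection via a payload field.
client.upsert(
    collection_name="chunks",
    points=[
        models.PointStruct(
            id=1,
            vector=[0.05, 0.61, 0.76, 0.74],
            payload={
                "document_id": 42,  # id of the parent point in "documents"
                "text": "A chunk of the original document...",
            },
        )
    ],
)
```

With the shared data stored once in `documents`, the grouping request only needs to know which payload field links a chunk to its parent, as shown next.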
#### Adding the parameter to grouping API request:

When using the grouping API, add the `with_lookup` parameter to bring the information from those points into each group:

```http
POST /collections/chunks/points/search/groups

{
    // Same as in the regular search API
    "vector": [1.1],
    ...,

    // Grouping parameters
    "group_by": "document_id",
    "limit": 2,
    "group_size": 2,

    // Lookup parameters
    "with_lookup": {
        // Name of the collection to look up points in
        "collection_name": "documents",

        // Options for specifying what to bring from the payload
        // of the looked up point, true by default
        "with_payload": ["title", "text"],

        // Options for specifying what to bring from the vector(s)
        // of the looked up point, true by default
        "with_vectors": false
    }
}
```

```python
client.search_groups(
    collection_name="chunks",

    # Same as in the regular search() API
    query_vector=[1.1],
    ...,

    # Grouping parameters
    group_by="document_id",  # Path of the field to group by
    limit=2,                 # Max amount of groups
    group_size=2,            # Max amount of points per group

    # Lookup parameters
    with_lookup=models.WithLookup(
        # Name of the collection to look up points in
        collection_name="documents",

        # Options for specifying what to bring from the payload
        # of the looked up point, True by default
        with_payload=["title", "text"],

        # Options for specifying what to bring from the vector(s)
        # of the looked up point, True by default
        with_vectors=False,
    )
)
```

### Qdrant web user interface

We are excited to announce a more user-friendly way to organize and work with your collections inside of Qdrant. Our dashboard's design is simple, but very intuitive and easy to access.

Try it out now! If you have Docker running, you can [quickstart Qdrant](/documentation/quick-start/) and access the Dashboard locally from [http://localhost:6333/dashboard](http://localhost:6333/dashboard). You should see this simple access point to Qdrant:

![Qdrant Web UI](/articles_data/qdrant-1.3.x/web-ui.png)

### Temporary directory for Snapshots

Currently, temporary snapshot files are created inside the `/storage` directory. Oftentimes `/storage` is a network-mounted disk. Therefore, we found this method suboptimal because `/storage` is limited in disk size and also because writing data to it may affect disk performance as it consumes bandwidth.

This new feature allows you to specify a different directory on another, faster disk. We expect this feature to significantly optimize cloud performance.

To change it, access `config.yaml` and set `storage.temp_path` to another directory location.

## Important changes

The latest release focuses not only on the new features but also introduces some changes making Qdrant even more reliable.

### Optimizing group requests

Internally, `is_empty` was not using the index when it was called, so it had to deserialize the whole payload to see if the key had values or not. Our new update makes sure to check the index first, before confirming with the payload whether the field is actually `empty`/`null`. These changes improve performance only when the negated condition is true (e.g. it improves when the field is not empty). Going forward, this will improve the way grouping API requests are handled.

### Faster read access with mmap

If you used mmap, you most likely found that segments were always created with cold caches. The first request to the database needed to go to disk, which made startup slower despite plenty of RAM being available. We have implemented a way to ask the kernel to "heat up" the disk cache and make initialization much faster.
This warm-up is expected to be used on startup and after segment optimization, when a newly indexed segment is reloaded. So far, it is only implemented for "immutable" memmaps.

## Release notes

As usual, [our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.3.0) describe all the changes introduced in the latest version.
qdrant-landing/content/articles/qdrant-1.7.x.md
--- title: "Qdrant 1.7.0 has just landed!" short_description: "Qdrant 1.7.0 brought a bunch of new features. Let's take a closer look at them!" description: "Sparse vectors, Discovery API, user-defined sharding, and snapshot-based shard transfer. That's what you can find in the latest Qdrant 1.7.0 release!" social_preview_image: /articles_data/qdrant-1.7.x/social_preview.png small_preview_image: /articles_data/qdrant-1.7.x/icon.svg preview_dir: /articles_data/qdrant-1.7.x/preview weight: -90 author: Kacper Łukawski author_link: https://kacperlukawski.com date: 2023-12-10T10:00:00Z draft: false keywords: - vector search - new features - sparse vectors - discovery - exploration - custom sharding - snapshot-based shard transfer - hybrid search - bm25 - tfidf - splade --- Please welcome the long-awaited [Qdrant 1.7.0 release](https://github.com/qdrant/qdrant/releases/tag/v1.7.0). Except for a handful of minor fixes and improvements, this release brings some cool brand-new features that we are excited to share! The latest version of your favorite vector search engine finally supports **sparse vectors**. That's the feature many of you requested, so why should we ignore it? We also decided to continue our journey with [vector similarity beyond search](/articles/vector-similarity-beyond-search/). The new Discovery API covers some utterly new use cases. We're more than excited to see what you will build with it! But there is more to it! Check out what's new in **Qdrant 1.7.0**! 1. Sparse vectors: do you want to use keyword-based search? Support for sparse vectors is finally here! 2. Discovery API: an entirely new way of using vectors for restricted search and exploration. 3. User-defined sharding: you can now decide which points should be stored on which shard. 4. Snapshot-based shard transfer: a new option for moving shards between nodes. Do you see something missing? Your feedback drives the development of Qdrant, so do not hesitate to [join our Discord community](https://qdrant.to/discord) and help us build the best vector search engine out there! ## New features Qdrant 1.7.0 brings a bunch of new features. Let's take a closer look at them! ### Sparse vectors Traditional keyword-based search mechanisms often rely on algorithms like TF-IDF, BM25, or comparable methods. While these techniques internally utilize vectors, they typically involve sparse vector representations. In these methods, the **vectors are predominantly filled with zeros, containing a relatively small number of non-zero values**. Those sparse vectors are theoretically high dimensional, definitely way higher than the dense vectors used in semantic search. However, since the majority of dimensions are usually zeros, we store them differently and just keep the non-zero dimensions. Until now, Qdrant has not been able to handle sparse vectors natively. Some were trying to convert them to dense vectors, but that was not the best solution or a suggested way. We even wrote a piece with [our thoughts on building a hybrid search](/articles/hybrid-search/), and we encouraged you to use a different tool for keyword lookup. Things have changed since then, as so many of you wanted a single tool for sparse and dense vectors. And responding to this [popular](https://github.com/qdrant/qdrant/issues/1678) [demand](https://github.com/qdrant/qdrant/issues/1135), we've now introduced sparse vectors! 
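To give a feel for the new capability, here is a minimal sketch using the Python client (the collection name and the index/value pairs are made up, and parameter names may vary slightly between client versions): a collection gets a named sparse vector field, and points are upserted with only their non-zero dimensions.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

# Hypothetical collection with a single named sparse vector field "text"
client.create_collection(
    collection_name="sparse-demo",
    vectors_config={},  # no dense vectors in this sketch
    sparse_vectors_config={"text": models.SparseVectorParams()},
)

# Only the non-zero dimensions are stored: pairs of (index, value)
client.upsert(
    collection_name="sparse-demo",
    points=[
        models.PointStruct(
            id=1,
            vector={
                "text": models.SparseVector(indices=[6, 57, 420], values=[1.2, 0.4, 0.7]),
            },
        )
    ],
)
```

Queries are expressed the same way, as the indices and values of the non-zero dimensions of the query vector.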
If you're coming across the topic of sparse vectors for the first time, our [Brief History of Search](/documentation/overview/vector-search/) explains the difference between sparse and dense vectors.

Check out the [sparse vectors article](../sparse-vectors/) and [sparse vectors index docs](/documentation/concepts/indexing/#sparse-vector-index) for more details on what this new index means for Qdrant users.

### Discovery API

The recently launched [Discovery API](/documentation/concepts/explore/#discovery-api) extends the range of scenarios for leveraging vectors. While its interface mirrors the [Recommendation API](/documentation/concepts/explore/#recommendation-api), it focuses on refining the search parameters for greater precision.

The concept of 'context' refers to a collection of positive-negative pairs that define zones within a space. Each pair effectively divides the space into positive or negative segments. This concept guides the search operation to prioritize points based on their inclusion within positive zones or their avoidance of negative zones. Essentially, the search algorithm favors points that fall within multiple positive zones or steer clear of negative ones.

The Discovery API can be used in two ways - either with or without the target point. The first case is called a **discovery search**, while the second is called a **context search**.

#### Discovery search

*Discovery search* is an operation that uses a target point to find the most relevant points in the collection, while performing the search in the preferred areas only. That is basically a search operation with more control over the search space.

![Discovery search visualization](/articles_data/qdrant-1.7.x/discovery-search.png)

Please refer to the [Discovery API documentation on discovery search](/documentation/concepts/explore/#discovery-search) for more details and the internal mechanics of the operation.

#### Context search

The mode of *context search* is similar to the discovery search, but it does not use a target point. Instead, the `context` is used to navigate the [HNSW graph](https://arxiv.org/abs/1603.09320) towards preferred zones. It is expected that the results in that mode will be diverse, and not centered around one point. *Context Search* could serve as a solution for individuals seeking a more exploratory approach to navigate the vector space.

![Context search visualization](/articles_data/qdrant-1.7.x/context-search.png)

### User-defined sharding

Qdrant's collections are divided into shards. A single **shard** is a self-contained store of points, which can be moved between nodes. Up till now, the points were distributed among shards by using a consistent hashing algorithm, so that shards were managing non-intersecting subsets of points. The latter remains true, but now you can define your own sharding and decide which points should be stored on which shard. Sounds cool, right? But why would you need that? Well, there are multiple scenarios in which you may want to use custom sharding. For example, you may want to store some points on a dedicated node, or you may want to store points from the same user on the same shard.

While the existing behavior is still the default one, you can now define the shards when you create a collection. Then, you can assign each point to a shard by providing a `shard_key` in the `upsert` operation. What's more, you can also search over the selected shards only, by providing the `shard_key` parameter in the search operation.
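Here is a rough sketch of that flow with the Python client (the collection name, shard keys, and vector values are illustrative, and parameter names may differ slightly between client versions):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

# Hypothetical collection that opts into user-defined sharding
client.create_collection(
    collection_name="my_collection",
    vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
    sharding_method=models.ShardingMethod.CUSTOM,
)

# Shard keys are declared explicitly before being used
client.create_shard_key(collection_name="my_collection", shard_key="cats")

# Each upsert is routed to a shard via the shard key selector
client.upsert(
    collection_name="my_collection",
    points=[
        models.PointStruct(id=1, vector=[0.29, 0.81, 0.75, 0.11], payload={"animal": "cat"})
    ],
    shard_key_selector="cats",
)
```

Searches can then be restricted to selected shard keys, as in the REST example below.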
```http
POST /collections/my_collection/points/search

{
    "vector": [0.29, 0.81, 0.75, 0.11],
    "shard_key": ["cats", "dogs"],
    "limit": 10,
    "with_payload": true
}
```

If you want to know more about the user-defined sharding, please refer to the [sharding documentation](/documentation/guides/distributed_deployment/#sharding).

### Snapshot-based shard transfer

This is a more in-depth technical improvement for users of the distributed mode: we have implemented a new option for the shard transfer mechanism. The new approach is based on a snapshot of the shard, which is transferred to the target node.

Moving shards is required for dynamic scaling of the cluster. Your data can migrate between nodes, and the way you move it is crucial for the performance of the whole system. The good old `stream_records` method (still the default one) transmits all the records between the machines and indexes them on the target node. In the case of moving the shard, it's necessary to recreate the HNSW index each time. However, with the introduction of the new `snapshot` approach, the snapshot itself, inclusive of all data and potentially quantized content, is transferred to the target node. This comprehensive snapshot includes the entire index, enabling the target node to seamlessly load it and promptly begin handling requests without the need for index recreation.

There are multiple scenarios in which you may prefer one over the other. Please check out the docs of the [shard transfer method](/documentation/guides/distributed_deployment/#shard-transfer-method) for more details and a head-to-head comparison. As for now, the old `stream_records` method is still the default one, but we may decide to change it in the future.

## Minor improvements

Beyond introducing new features, Qdrant 1.7.0 enhances performance and addresses various minor issues. Here's a rundown of the key improvements:

1. Improvement of HNSW Index Building on High CPU Systems ([PR#2869](https://github.com/qdrant/qdrant/pull/2869)).
2. Improving [Search Tail Latencies](https://github.com/qdrant/qdrant/pull/2931): improvement for high CPU systems with many parallel searches, directly impacting the user experience by reducing latency.
3. [Adding Index for Geo Map Payloads](https://github.com/qdrant/qdrant/pull/2768): index for geo map payloads can significantly improve search performance, especially for applications involving geographical data.
4. Stability of Consensus on Big High Load Clusters: enhancing the stability of consensus in large, high-load environments is critical for ensuring the reliability and scalability of the system ([PR#3013](https://github.com/qdrant/qdrant/pull/3013), [PR#3026](https://github.com/qdrant/qdrant/pull/3026), [PR#2942](https://github.com/qdrant/qdrant/pull/2942), [PR#3103](https://github.com/qdrant/qdrant/pull/3103), [PR#3054](https://github.com/qdrant/qdrant/pull/3054)).
5. Configurable Timeout for Searches: allowing users to configure the timeout for searches provides greater flexibility and can help optimize system performance under different operational conditions ([PR#2748](https://github.com/qdrant/qdrant/pull/2748), [PR#2771](https://github.com/qdrant/qdrant/pull/2771)).

## Release notes

[Our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.7.0) are a place to go if you are interested in more details. Please remember that Qdrant is an open source project, so feel free to [contribute](https://github.com/qdrant/qdrant/issues)!
qdrant-landing/content/articles/qdrant-1.8.x.md
--- title: "Qdrant 1.8.0 - Major Performance Enhancements" draft: false slug: qdrant-1.8.x short_description: "Faster sparse vectors.Optimized indexation. Optional CPU resource management." description: "Much faster sparse vectors, optimized indexation of text fields and optional CPU resource management configuration. " social_preview_image: /articles_data/qdrant-1.8.x/social_preview.png small_preview_image: /articles_data/qdrant-1.8.x/icon.svg preview_dir: /articles_data/qdrant-1.8.x/preview weight: -140 date: 2024-03-06T00:00:00-08:00 author: David Myriel, Mike Jang featured: false tags: - vector search - new features - sparse vectors - hybrid search - CPU resource management - text field index --- [Qdrant 1.8.0 is out!](https://github.com/qdrant/qdrant/releases/tag/v1.8.0). This time around, we have focused on Qdrant's internals. Our goal was to optimize performance, so that your existing setup can run faster and save on compute. Here is what we've been up to: - **Faster sparse vectors:** Hybrid search is up to 16x faster now! - **CPU resource management:** You can allocate CPU threads for faster indexing. - **Better indexing performance:** We optimized text indexing on the backend. ## Faster search with sparse vectors Search throughput is now up to 16 times faster for sparse vectors. If you are [using Qdrant for hybrid search](/articles/sparse-vectors/), this means that you can now handle up to sixteen times as many queries. This improvement comes from extensive backend optimizations aimed at increasing efficiency and capacity. What this means for your setup: - **Query speed:** The time it takes to run a search query has been significantly reduced. - **Search capacity:** Qdrant can now handle a much larger volume of search requests. - **User experience:** Results will appear faster, leading to a smoother experience for the user. - **Scalability:** You can easily accomodate rapidly growing users or an expanding dataset. ### Sparse vectors benchmark Performance results are publicly available for you to test. Qdrant's R&D developed a dedicated [open-source benchmarking tool](https://github.com/qdrant/sparse-vectors-benchmark) just to test sparse vector performance. A real-life simulation of sparse vector queries was run against the [NeurIPS 2023 dataset](https://big-ann-benchmarks.com/neurips23.html). All tests were done on an 8 CPU machine on Azure. Latency (y-axis) has dropped significantly for queries. You can see the before/after here: ![dropping latency](/articles_data/qdrant-1.8.x/benchmark.png) **Figure 1:** Dropping latency in sparse vector search queries across versions 1.7-1.8. The colors within both scatter plots show the frequency of results. The red dots show that the highest concentration is around 2200ms (before) and 135ms (after). This tells us that latency for sparse vectors queries dropped by about a factor of 16. Therefore, the time it takes to retrieve an answer with Qdrant is that much shorter. This performance increase can have a dramatic effect on hybrid search implementations. [Read more about how to set this up.](/articles/sparse-vectors/) FYI, sparse vectors were released in [Qdrant v.1.7.0](/articles/qdrant-1.7.x/#sparse-vectors). They are stored using a different index, so first [check out the documentation](/documentation/concepts/indexing/#sparse-vector-index) if you want to try an implementation. ## CPU resource management Indexing is Qdrant’s most resource-intensive process. Now you can account for this by allocating compute use specifically to indexing. 
You can assign a number CPU resources towards indexing and leave the rest for search. As a result, indexes will build faster, and search quality will remain unaffected. This isn't mandatory, as Qdrant is by default tuned to strike the right balance between indexing and search. However, if you wish to define specific CPU usage, you will need to do so from `config.yaml`. This version introduces a `optimizer_cpu_budget` parameter to control the maximum number of CPUs used for indexing. > Read more about `config.yaml` in the [configuration file](/documentation/guides/configuration/). ```yaml # CPU budget, how many CPUs (threads) to allocate for an optimization job. optimizer_cpu_budget: 0 ``` - If left at 0, Qdrant will keep 1 or more CPUs unallocated - depending on CPU size. - If the setting is positive, Qdrant will use this exact number of CPUs for indexing. - If the setting is negative, Qdrant will subtract this number of CPUs from the available CPUs for indexing. For most users, the default `optimizer_cpu_budget` setting will work well. We only recommend you use this if your indexing load is significant. Our backend leverages dynamic CPU saturation to increase indexing speed. For that reason, the impact on search query performance ends up being minimal. Ultimately, you will be able to strike a the best possible balance between indexing times and search performance. This configuration can be done at any time, but it requires a restart of Qdrant. Changing it affects both existing and new collections. > **Note:** This feature is not configurable on [Qdrant Cloud](https://qdrant.to/cloud). ## Better indexing for text data In order to minimize your RAM expenditure, we have developed a new way to index specific types of data. Please keep in mind that this is a backend improvement, and you won't need to configure anything. > Going forward, if you are indexing immutable text fields, we estimate a 10% reduction in RAM loads. Our benchmark result is based on a system that uses 64GB of RAM. If you are using less RAM, this reduction might be higher than 10%. Immutable text fields are static and do not change once they are added to Qdrant. These entries usually represent some type of an attribute, description or a tag. Vectors associated with them can be indexed more efficiently, since you don’t need to re-index them anymore. Conversely, mutable fields are dynamic and can be modified after their initial creation. Please keep in mind that they will continue to require additional RAM. This approach ensures stability in the vector search index, with faster and more consistent operations. We achieved this by setting up a field index which helps minimize what is stored. To improve search performance we have also optimized the way we load documents for searches with a text field index. Now our backend loads documents mostly sequentially and in increasing order. ## Minor improvements and new features Beyond these enhancements, [Qdrant v1.8.0](https://github.com/qdrant/qdrant/releases/tag/v1.8.0) adds and improves on several smaller features: 1. **Order points by payload:** In addition to searching for semantic results, you might want to retrieve results by specific metadata (such as price). You can now use Scroll API to [order points by payload key](/documentation/concepts/points/#order-points-by-payload-key). 2. **Datetime support:** We have implemented [datetime support for the payload index](/documentation/concepts/filtering/#datetime-range). 
Prior to this, if you wanted to search for a specific datetime range, you would have had to convert dates to UNIX timestamps. ([PR#3320](https://github.com/qdrant/qdrant/issues/3320)) 3. **Check collection existence:** You can check whether a collection exists via the `/exists` endpoint to the `/collections/{collection_name}`. You will get a true/false response. ([PR#3472](https://github.com/qdrant/qdrant/pull/3472)). 4. **Find points** whose payloads match more than the minimal amount of conditions. We included the `min_should` match feature for a condition to be `true` ([PR#3331](https://github.com/qdrant/qdrant/pull/3466/)). 5. **Modify nested fields:** We have improved the `set_payload` API, adding the ability to update nested fields ([PR#3548](https://github.com/qdrant/qdrant/pull/3548)). ## Release notes For more information, see [our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.8.0). Qdrant is an open source project. We welcome your contributions; raise [issues](https://github.com/qdrant/qdrant/issues), or contribute via [pull requests](https://github.com/qdrant/qdrant/pulls)!
qdrant-landing/content/articles/quantum-quantization.md
---
title: Vector Search in constant time
short_description: Apply Quantum Computing to your search engine
description: Quantum Quantization enables vector search in constant time. This article will discuss the concept of quantum quantization for ANN vector search.
preview_dir: /articles_data/quantum-quantization/preview
social_preview_image: /articles_data/quantum-quantization/social_preview.png
small_preview_image: /articles_data/quantum-quantization/icon.svg
weight: 1000
author: Prankstorm Team
draft: false
author_link: https://www.youtube.com/watch?v=dQw4w9WgXcQ
date: 2023-04-01T00:48:00.000Z
---

The advent of quantum computing has revolutionized many areas of science and technology, and one of the most intriguing developments has been its potential application to artificial neural networks (ANNs). One area where quantum computing can significantly improve performance is in vector search, a critical component of many machine learning tasks. In this article, we will discuss the concept of quantum quantization for ANN vector search, focusing on the conversion of float32 to qbit vectors and the ability to perform vector search on arbitrary-sized databases in constant time.

## Quantum Quantization and Entanglement

Quantum quantization is a novel approach that leverages the power of quantum computing to speed up the search process in ANNs. By converting traditional float32 vectors into qbit vectors, we can create quantum entanglement between the qbits. Quantum entanglement is a unique phenomenon in which the states of two or more particles become interdependent, regardless of the distance between them. This property of quantum systems can be harnessed to create highly efficient vector search algorithms.

The conversion of float32 vectors to qbit vectors can be represented by the following formula:

```text
qbit_vector = Q( float32_vector )
```

where Q is the quantum quantization function that transforms the float32_vector into a quantum entangled qbit_vector.

## Vector Search in Constant Time

The primary advantage of using quantum quantization for ANN vector search is the ability to search through an arbitrary-sized database in constant time.

The key to performing vector search in constant time with quantum quantization is to use a quantum algorithm called Grover's algorithm. Grover's algorithm is a quantum search algorithm that finds the location of a marked item in an unsorted database in O(√N) time, where N is the size of the database. This is a significant improvement over classical algorithms, which require O(N) time to solve the same problem.

However, there is one more trick that improves Grover's algorithm performance dramatically. This trick is called transposition, and it reduces the number of Grover's iterations from O(√N) to O(√D), where D is the dimension of the vector space. Since the dimension of the vector space is much smaller than the number of vectors, and is usually a constant, this effectively brings the number of iterations down to O(√D) = O(1).

Check out our [Quantum Quantization PR](https://github.com/qdrant/qdrant/pull/1639) on GitHub.
qdrant-landing/content/articles/rag-is-dead.md
--- title: "Is RAG Dead? The Role of Vector Databases in Vector Search | Qdrant" short_description: Learn how Qdrant’s vector database enhances enterprise AI with superior accuracy and cost-effectiveness. description: Uncover the necessity of vector databases for RAG and learn how Qdrant's vector database empowers enterprise AI with unmatched accuracy and cost-effectiveness. social_preview_image: /articles_data/rag-is-dead/preview/social_preview.jpg small_preview_image: /articles_data/rag-is-dead/icon.svg preview_dir: /articles_data/rag-is-dead/preview weight: -131 author: David Myriel author_link: https://github.com/davidmyriel date: 2024-02-27T00:00:00.000Z draft: false keywords: - vector database - vector search - retrieval augmented generation - gemini 1.5 --- # Is RAG Dead? The Role of Vector Databases in AI Efficiency and Vector Search When Anthropic came out with a context window of 100K tokens, they said: “*[Vector search](https://qdrant.tech/solutions/) is dead. LLMs are getting more accurate and won’t need RAG anymore.*” Google’s Gemini 1.5 now offers a context window of 10 million tokens. [Their supporting paper](https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf) claims victory over accuracy issues, even when applying Greg Kamradt’s [NIAH methodology](https://twitter.com/GregKamradt/status/1722386725635580292). *It’s over. [RAG](https://qdrant.tech/articles/what-is-rag-in-ai/) (Retrieval Augmented Generation) must be completely obsolete now. Right?* No. Larger context windows are never the solution. Let me repeat. Never. They require more computational resources and lead to slower processing times. The community is already stress testing Gemini 1.5: ![RAG and Gemini 1.5](/articles_data/rag-is-dead/rag-is-dead-1.png) This is not surprising. LLMs require massive amounts of compute and memory to run. To cite Grant, running such a model by itself “would deplete a small coal mine to generate each completion”. Also, who is waiting 30 seconds for a response? ## Context stuffing is not the solution > Relying on context is expensive, and it doesn’t improve response quality in real-world applications. Retrieval based on [vector search](https://qdrant.tech/solutions/) offers much higher precision. If you solely rely on an [LLM](https://qdrant.tech/articles/what-is-rag-in-ai/) to perfect retrieval and precision, you are doing it wrong. A large context window makes it harder to focus on relevant information. This increases the risk of errors or hallucinations in its responses. Google found Gemini 1.5 significantly more accurate than GPT-4 at shorter context lengths and “a very small decrease in recall towards 1M tokens”. The recall is still below 0.8. ![Gemini 1.5 Data](/articles_data/rag-is-dead/rag-is-dead-2.png) We don’t think 60-80% is good enough. The LLM might retrieve enough relevant facts in its context window, but it still loses up to 40% of the available information. > The whole point of vector search is to circumvent this process by efficiently picking the information your app needs to generate the best response. A [vector database](https://qdrant.tech/) keeps the compute load low and the query response fast. You don’t need to wait for the LLM at all. Qdrant’s benchmark results are strongly in favor of accuracy and efficiency. We recommend that you consider them before deciding that an LLM is enough. Take a look at our [open-source benchmark reports](/benchmarks/) and [try out the tests](https://github.com/qdrant/vector-db-benchmark) yourself. 
## Vector search in compound systems The future of AI lies in careful system engineering. As per [Zaharia et al.](https://bair.berkeley.edu/blog/2024/02/18/compound-ai-systems/), results from Databricks find that “60% of LLM applications use some form of RAG, while 30% use multi-step chains.” Even Gemini 1.5 demonstrates the need for a complex strategy. When looking at [Google’s MMLU Benchmark](https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf), the model was called 32 times to reach a score of 90.0% accuracy. This shows us that even a basic compound arrangement is superior to monolithic models. As a retrieval system, a [vector database](https://qdrant.tech/) perfectly fits the need for compound systems. Introducing them into your design opens the possibilities for superior applications of LLMs. It is superior because it’s faster, more accurate, and much cheaper to run. > The key advantage of RAG is that it allows an LLM to pull in real-time information from up-to-date internal and external knowledge sources, making it more dynamic and adaptable to new information. - Oliver Molander, CEO of IMAGINAI > ## Qdrant scales to enterprise RAG scenarios People still don’t understand the economic benefit of vector databases. Why would a large corporate AI system need a standalone vector database like [Qdrant](https://qdrant.tech/)? In our minds, this is the most important question. Let’s pretend that LLMs cease struggling with context thresholds altogether. **How much would all of this cost?** If you are running a RAG solution in an enterprise environment with petabytes of private data, your compute bill will be unimaginable. Let's assume 1 cent per 1K input tokens (which is the current GPT-4 Turbo pricing). Whatever you are doing, every time you go 100 thousand tokens deep, it will cost you $1. That’s a buck a question. > According to our estimations, vector search queries are **at least** 100 million times cheaper than queries made by LLMs. Conversely, the only up-front investment with vector databases is the indexing (which requires more compute). After this step, everything else is a breeze. Once setup, Qdrant easily scales via [features like Multitenancy and Sharding](/articles/multitenancy/). This lets you scale up your reliance on the vector retrieval process and minimize your use of the compute-heavy LLMs. As an optimization measure, Qdrant is irreplaceable. Julien Simon from HuggingFace says it best: > RAG is not a workaround for limited context size. For mission-critical enterprise use cases, RAG is a way to leverage high-value, proprietary company knowledge that will never be found in public datasets used for LLM training. At the moment, the best place to index and query this knowledge is some sort of vector index. In addition, RAG downgrades the LLM to a writing assistant. Since built-in knowledge becomes much less important, a nice small 7B open-source model usually does the trick at a fraction of the cost of a huge generic model. ## Get superior accuracy with Qdrant's vector database As LLMs continue to require enormous computing power, users will need to leverage vector search and [RAG](https://qdrant.tech/). Our customers remind us of this fact every day. As a product, [our vector database](https://qdrant.tech/) is highly scalable and business-friendly. We develop our features strategically to follow our company’s Unix philosophy. We want to keep Qdrant compact, efficient and with a focused purpose. 
This purpose is to empower our customers to use it however they see fit. When large enterprises release their generative AI into production, they need to keep costs under control, while retaining the best possible quality of responses. Qdrant has the [vector search solutions](https://qdrant.tech/solutions/) to do just that. Revolutionize your vector search capabilities and get started with [a Qdrant demo](https://qdrant.tech/contact-us/).
qdrant-landing/content/articles/rapid-rag-optimization-with-qdrant-and-quotient.md
--- title: "Optimizing RAG Through an Evaluation-Based Methodology" short_description: Learn how Qdrant-powered RAG applications can be tested and iteratively improved using LLM evaluation tools like Quotient. description: Learn how Qdrant-powered RAG applications can be tested and iteratively improved using LLM evaluation tools like Quotient. social_preview_image: /articles_data/rapid-rag-optimization-with-qdrant-and-quotient/preview/social_preview.jpg small_preview_image: /articles_data/rapid-rag-optimization-with-qdrant-and-quotient/icon.svg preview_dir: /articles_data/rapid-rag-optimization-with-qdrant-and-quotient/preview weight: -131 author: Atita Arora author_link: https://github.com/atarora date: 2024-06-12T00:00:00.000Z draft: false keywords: - vector database - vector search - retrieval augmented generation - quotient - optimization - rag --- In today's fast-paced, information-rich world, AI is revolutionizing knowledge management. The systematic process of capturing, distributing, and effectively using knowledge within an organization is one of the fields in which AI provides exceptional value today. > The potential for AI-powered knowledge management increases when leveraging Retrieval Augmented Generation (RAG), a methodology that enables LLMs to access a vast, diverse repository of factual information from knowledge stores, such as vector databases. This process enhances the accuracy, relevance, and reliability of generated text, thereby mitigating the risk of faulty, incorrect, or nonsensical results sometimes associated with traditional LLMs. This method not only ensures that the answers are contextually relevant but also up-to-date, reflecting the latest insights and data available. While RAG enhances the accuracy, relevance, and reliability of traditional LLM solutions, **an evaluation strategy can further help teams ensure their AI products meet these benchmarks of success.** ## Relevant tools for this experiment In this article, we’ll break down a RAG Optimization workflow experiment that demonstrates that evaluation is essential to build a successful RAG strategy. We will use Qdrant and Quotient for this experiment. [Qdrant](https://qdrant.tech/) is a vector database and vector similarity search engine designed for efficient storage and retrieval of high-dimensional vectors. Because Qdrant offers efficient indexing and searching capabilities, it is ideal for implementing RAG solutions, where quickly and accurately retrieving relevant information from extremely large datasets is crucial. Qdrant also offers a wealth of additional features, such as quantization, multivector support and multi-tenancy. Alongside Qdrant we will use Quotient, which provides a seamless way to evaluate your RAG implementation, accelerating and improving the experimentation process. [Quotient](https://www.quotientai.co/) is a platform that provides tooling for AI developers to build evaluation frameworks and conduct experiments on their products. Evaluation is how teams surface the shortcomings of their applications and improve performance in key benchmarks such as faithfulness, and semantic similarity. Iteration is key to building innovative AI products that will deliver value to end users. > 💡 The [accompanying notebook](https://github.com/qdrant/qdrant-rag-eval/tree/master/workshop-rag-eval-qdrant-quotient) for this exercise can be found on GitHub for future reference. ## Summary of key findings 1. 
**Irrelevance and Hallucinations**: When the documents retrieved are irrelevant, evidenced by low scores in both Chunk Relevance and Context Relevance, the model is prone to generating inaccurate or fabricated information. 2. **Optimizing Document Retrieval**: By retrieving a greater number of documents and reducing the chunk size, we observed improved outcomes in the model's performance. 3. **Adaptive Retrieval Needs**: Certain queries may benefit from accessing more documents. Implementing a dynamic retrieval strategy that adjusts based on the query could enhance accuracy. 4. **Influence of Model and Prompt Variations**: Alterations in language models or the prompts used can significantly impact the quality of the generated responses, suggesting that fine-tuning these elements could optimize performance. Let us walk you through how we arrived at these findings! ## Building a RAG pipeline To evaluate a RAG pipeline , we will have to build a RAG Pipeline first. In the interest of simplicity, we are building a Naive RAG in this article. There are certainly other versions of RAG : ![shades_of_rag.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/shades_of_rag.png) The illustration below depicts how we can leverage a RAG Evaluation framework to assess the quality of RAG Application. ![qdrant_and_quotient.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/qdrant_and_quotient.png) We are going to build a RAG application using Qdrant’s Documentation and the premeditated [hugging face dataset]([https://huggingface.co/datasets/atitaarora/qdrant_doc](https://huggingface.co/datasets/atitaarora/qdrant_doc)). We will then assess our RAG application’s ability to answer questions about Qdrant. To prepare our knowledge store we will use Qdrant, which can be leveraged in 3 different ways as below : ```python ##Uncomment to initialise qdrant client in memory #client = qdrant_client.QdrantClient( # location=":memory:", #) ##Uncomment below to connect to Qdrant Cloud client = qdrant_client.QdrantClient( os.environ.get("QDRANT_URL"), api_key=os.environ.get("QDRANT_API_KEY"), ) ## Uncomment below to connect to local Qdrant #client = qdrant_client.QdrantClient("http://localhost:6333") ``` We will be using [Qdrant Cloud](https://cloud.qdrant.io/login) so it is a good idea to provide the `QDRANT_URL` and `QDRANT_API_KEY` as environment variables for easier access. Moving on, we will need to define the collection name as : ```python COLLECTION_NAME = "qdrant-docs-quotient" ``` In this case , we may need to create different collections based on the experiments we conduct. To help us provide seamless embedding creations throughout the experiment, we will use Qdrant’s native embedding provider [Fastembed]([https://qdrant.github.io/fastembed/](https://qdrant.github.io/fastembed/)) which supports [many different models]([https://qdrant.github.io/fastembed/examples/Supported_Models/](https://qdrant.github.io/fastembed/examples/Supported_Models/)) including dense as well as sparse vector models. 
We can initialize and switch the embedding model of our choice as below : ```python ## Declaring the intended Embedding Model with Fastembed from fastembed.embedding import TextEmbedding ## General Fastembed specific operations ##Initilising embedding model ## Using Default Model - BAAI/bge-small-en-v1.5 embedding_model = TextEmbedding() ## For custom model supported by Fastembed #embedding_model = TextEmbedding(model_name="BAAI/bge-small-en", max_length=512) #embedding_model = TextEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2", max_length=384) ## Verify the chosen Embedding model embedding_model.model_name ``` Before implementing RAG, we need to prepare and index our data in Qdrant. This involves converting textual data into vectors using a suitable encoder (e.g., sentence transformers), and storing these vectors in Qdrant for retrieval. ```python from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain.docstore.document import Document as LangchainDocument ## Load the dataset with qdrant documentation dataset = load_dataset("atitaarora/qdrant_doc", split="train") ## Dataset to langchain document langchain_docs = [ LangchainDocument(page_content=doc["text"], metadata={"source": doc["source"]}) for doc in dataset ] len(langchain_docs) #Outputs #240 ``` You can preview documents in the dataset as below : ```python ## Here's an example of what a document in our dataset looks like print(dataset[100]['text']) ``` ## Evaluation dataset To measure the quality of our RAG setup, we will need a representative evaluation dataset. This dataset should contain realistic questions and the expected answers. Additionally, including the expected contexts for which your RAG pipeline is designed to retrieve information would be beneficial. We will be using a [prebuilt evaluation dataset](https://huggingface.co/datasets/atitaarora/qdrant_doc_qna). If you are struggling to make an evaluation dataset for your use case , you can use your documents and some techniques described in this [notebook](https://github.com/qdrant/qdrant-rag-eval/blob/master/synthetic_qna/notebook/Synthetic_question_generation.ipynb) ### Building the RAG pipeline We establish the data preprocessing parameters essential for the RAG pipeline and configure the Qdrant vector database according to the specified criteria. Key parameters under consideration are: - **Chunk size** - **Chunk overlap** - **Embedding model** - **Number of documents retrieved (retrieval window)** Following the ingestion of data in Qdrant, we proceed to retrieve pertinent documents corresponding to each query. These documents are then seamlessly integrated into our evaluation dataset, enriching the contextual information within the designated **`context`** column to fulfil the evaluation aspect. Next we define methods to take care of logistics with respect to adding documents to Qdrant ```python def add_documents(client, collection_name, chunk_size, chunk_overlap, embedding_model_name): """ This function adds documents to the desired Qdrant collection given the specified RAG parameters. 
""" ## Processing each document with desired TEXT_SPLITTER_ALGO, CHUNK_SIZE, CHUNK_OVERLAP text_splitter = RecursiveCharacterTextSplitter( chunk_size=chunk_size, chunk_overlap=chunk_overlap, add_start_index=True, separators=["\n\n", "\n", ".", " ", ""], ) docs_processed = [] for doc in langchain_docs: docs_processed += text_splitter.split_documents([doc]) ## Processing documents to be encoded by Fastembed docs_contents = [] docs_metadatas = [] for doc in docs_processed: if hasattr(doc, 'page_content') and hasattr(doc, 'metadata'): docs_contents.append(doc.page_content) docs_metadatas.append(doc.metadata) else: # Handle the case where attributes are missing print("Warning: Some documents do not have 'page_content' or 'metadata' attributes.") print("processed: ", len(docs_processed)) print("content: ", len(docs_contents)) print("metadata: ", len(docs_metadatas)) ## Adding documents to Qdrant using desired embedding model client.set_model(embedding_model_name=embedding_model_name) client.add(collection_name=collection_name, metadata=docs_metadatas, documents=docs_contents) ``` and retrieving documents from Qdrant during our RAG Pipeline assessment. ```python def get_documents(collection_name, query, num_documents=3): """ This function retrieves the desired number of documents from the Qdrant collection given a query. It returns a list of the retrieved documents. """ search_results = client.query( collection_name=collection_name, query_text=query, limit=num_documents, ) results = [r.metadata["document"] for r in search_results] return results ``` ### Setting up Quotient You will need an account log in, which you can get by requesting access on [Quotient's website](https://www.quotientai.co/). Once you have an account, you can create an API key by running the `quotient authenticate` CLI command. <aside> 💡 Be sure to save your API key, since it will only be displayed once (Note: you will not have to repeat this step again until your API key expires). </aside> **Once you have your API key, make sure to set it as an environment variable called `QUOTIENT_API_KEY`** ```python # Import QuotientAI client and connect to QuotientAI from quotientai.client import QuotientClient from quotientai.utils import show_job_progress # IMPORTANT: be sure to set your API key as an environment variable called QUOTIENT_API_KEY # You will need this set before running the code below. You may also uncomment the following line and insert your API key: # os.environ['QUOTIENT_API_KEY'] = "YOUR_API_KEY" quotient = QuotientClient() ``` **QuotientAI** provides a seamless way to integrate *RAG evaluation* into your applications. Here, we'll see how to use it to evaluate text generated from an LLM, based on retrieved knowledge from the Qdrant vector database. After retrieving the top similar documents and populating the `context` column, we can submit the evaluation dataset to Quotient and execute an evaluation job. To run a job, all you need is your evaluation dataset and a `recipe`. ***A recipe is a combination of a prompt template and a specified LLM.*** **Quotient** orchestrates the evaluation run and handles version control and asset management throughout the experimentation process. ***Prior to assessing our RAG solution, it's crucial to outline our optimization goals.*** In the context of *question-answering on Qdrant documentation*, our focus extends beyond merely providing helpful responses. Ensuring the absence of any *inaccurate or misleading information* is paramount. 
In other words, **we want to minimize hallucinations** in the LLM outputs. For our evaluation, we will be considering the following metrics, with a focus on **Faithfulness**: - **Context Relevance** - **Chunk Relevance** - **Faithfulness** - **ROUGE-L** - **BERT Sentence Similarity** - **BERTScore** ### Evaluation in action The function below takes an evaluation dataset as input, which in this case contains questions and their corresponding answers. It retrieves relevant documents based on the questions in the dataset and populates the context field with this information from Qdrant. The prepared dataset is then submitted to QuotientAI for evaluation for the chosen metrics. After the evaluation is complete, the function displays aggregated statistics on the evaluation metrics followed by the summarized evaluation results. ```python def run_eval(eval_df, collection_name, recipe_id, num_docs=3, path="eval_dataset_qdrant_questions.csv"): """ This function evaluates the performance of a complete RAG pipeline on a given evaluation dataset. Given an evaluation dataset (containing questions and ground truth answers), this function retrieves relevant documents, populates the context field, and submits the dataset to QuotientAI for evaluation. Once the evaluation is complete, aggregated statistics on the evaluation metrics are displayed. The evaluation results are returned as a pandas dataframe. """ # Add context to each question by retrieving relevant documents eval_df['documents'] = eval_df.apply(lambda x: get_documents(collection_name=collection_name, query=x['input_text'], num_documents=num_docs), axis=1) eval_df['context'] = eval_df.apply(lambda x: "\n".join(x['documents']), axis=1) # Now we'll save the eval_df to a CSV eval_df.to_csv(path, index=False) # Upload the eval dataset to QuotientAI dataset = quotient.create_dataset( file_path=path, name="qdrant-questions-eval-v1", ) # Create a new task for the dataset task = quotient.create_task( dataset_id=dataset['id'], name='qdrant-questions-qa-v1', task_type='question_answering' ) # Run a job to evaluate the model job = quotient.create_job( task_id=task['id'], recipe_id=recipe_id, num_fewshot_examples=0, limit=500, metric_ids=[5, 7, 8, 11, 12, 13, 50], ) # Show the progress of the job show_job_progress(quotient, job['id']) # Once the job is complete, we can get our results data = quotient.get_eval_results(job_id=job['id']) # Add the results to a pandas dataframe to get statistics on performance df = pd.json_normalize(data, "results") df_stats = df[df.columns[df.columns.str.contains("metric|completion_time")]] df.columns = df.columns.str.replace("metric.", "") df_stats.columns = df_stats.columns.str.replace("metric.", "") metrics = { 'completion_time_ms':'Completion Time (ms)', 'chunk_relevance': 'Chunk Relevance', 'selfcheckgpt_nli_relevance':"Context Relevance", 'selfcheckgpt_nli':"Faithfulness", 'rougeL_fmeasure':"ROUGE-L", 'bert_score_f1':"BERTScore", 'bert_sentence_similarity': "BERT Sentence Similarity", 'completion_verbosity':"Completion Verbosity", 'verbosity_ratio':"Verbosity Ratio",} df = df.rename(columns=metrics) df_stats = df_stats.rename(columns=metrics) display(df_stats[metrics.values()].describe()) return df main_metrics = [ 'Context Relevance', 'Chunk Relevance', 'Faithfulness', 'ROUGE-L', 'BERT Sentence Similarity', 'BERTScore', ] ``` ## Experimentation Our approach is rooted in the belief that improvement thrives in an environment of exploration and discovery. 
By systematically testing and tweaking various components of the RAG pipeline, we aim to incrementally enhance its capabilities and performance.

In the following section, we dive into the details of our experimentation process, outlining the specific experiments conducted and the insights gained.

### Experiment 1 - Baseline

Parameters:

- **Embedding Model: `bge-small-en`**
- **Chunk size: `512`**
- **Chunk overlap: `64`**
- **Number of docs retrieved (Retrieval Window): `3`**
- **LLM: `Mistral-7B-Instruct`**

We'll process our documents based on the configuration above and ingest them into Qdrant using the `add_documents` method introduced earlier:

```python
#experiment1 - base config
chunk_size = 512
chunk_overlap = 64
embedding_model_name = "BAAI/bge-small-en"
num_docs = 3

COLLECTION_NAME = f"experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}"

add_documents(client,
              collection_name=COLLECTION_NAME,
              chunk_size=chunk_size,
              chunk_overlap=chunk_overlap,
              embedding_model_name=embedding_model_name)

#Outputs
#processed: 4504
#content: 4504
#metadata: 4504
```

Notice the `COLLECTION_NAME`, which helps us segregate and identify our collections based on the experiments conducted.

To proceed with the evaluation, let's create the evaluation `recipe` next:

```python
# Create a recipe for the generator model and prompt template
recipe_mistral = quotient.create_recipe(
    model_id=10,
    prompt_template_id=1,
    name='mistral-7b-instruct-qa-with-rag',
    description='Mistral-7b-instruct using a prompt template that includes context.'
)
recipe_mistral

#Outputs recipe JSON with the used prompt template
#'prompt_template': {'id': 1,
#  'name': 'Default Question Answering Template',
#  'variables': '["input_text","context"]',
#  'created_at': '2023-12-21T22:01:54.632367',
#  'template_string': 'Question: {input_text}\\n\\nContext: {context}\\n\\nAnswer:',
#  'owner_profile_id': None}
```

To get a list of your existing recipes, you can simply run:

```python
quotient.list_recipes()
```

Notice that the recipe uses the simplest possible prompt template, combining the `Question` from the evaluation dataset, the `Context` built from the document chunks retrieved from Qdrant, and the `Answer` generated by the pipeline.

To kick off the evaluation:

```python
# Kick off an evaluation job
experiment_1 = run_eval(eval_df,
                        collection_name=COLLECTION_NAME,
                        recipe_id=recipe_mistral['id'],
                        num_docs=num_docs,
                        path=f"{COLLECTION_NAME}_{num_docs}_mistral.csv")
```

This may take a few minutes (depending on the size of the evaluation dataset).

We can look at the results from our first (baseline) experiment below:

![experiment1_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment1_eval.png)

Notice that we have a pretty **low average Chunk Relevance** and **very large standard deviations for both Chunk Relevance and Context Relevance**.

Let's take a look at some of the lower performing datapoints with **poor Faithfulness**:

```python
with pd.option_context('display.max_colwidth', 0):
    display(experiment_1[['content.input_text', 'content.answer', 'content.documents',
                          'Chunk Relevance', 'Context Relevance', 'Faithfulness']
                         ].sort_values(by='Faithfulness').head(2))
```

![experiment1_bad_examples.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment1_bad_examples.png)

In instances where the retrieved documents are **irrelevant (where both Chunk Relevance and Context Relevance are low)**, the model also shows **tendencies to hallucinate** and **produce poor quality responses**.
The quality of the retrieved text directly impacts the quality of the LLM-generated answer. Therefore, our focus will be on enhancing the RAG setup by **adjusting the chunking parameters**.

### Experiment 2 - Adjusting the chunk parameters

Keeping all other parameters constant, we changed the `chunk size` and `chunk overlap` to see if we can improve our results.

Parameters:

- **Embedding Model: `bge-small-en`**
- **Chunk size: `1024`**
- **Chunk overlap: `128`**
- **Number of docs retrieved (Retrieval Window): `3`**
- **LLM: `Mistral-7B-Instruct`**

We will reprocess the data with the updated parameters above:

```python
## for iteration 2 - let's modify the chunk configuration
## We will start by creating a separate collection to store the vectors
chunk_size = 1024
chunk_overlap = 128
embedding_model_name = "BAAI/bge-small-en"
num_docs = 3

COLLECTION_NAME = f"experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}"

add_documents(client,
              collection_name=COLLECTION_NAME,
              chunk_size=chunk_size,
              chunk_overlap=chunk_overlap,
              embedding_model_name=embedding_model_name)

#Outputs
#processed: 2152
#content: 2152
#metadata: 2152
```

Followed by running the evaluation:

![experiment2_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment2_eval.png)

and **comparing it with the results from Experiment 1:**

![graph_exp1_vs_exp2.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/graph_exp1_vs_exp2.png)

We observed slight enhancements in our LLM completion metrics (including BERT Sentence Similarity, BERTScore, ROUGE-L, and Knowledge F1) with the increase in *chunk size*. However, it's noteworthy that there was a significant decrease in *Faithfulness*, which is the primary metric we are aiming to optimize.

Moreover, *Context Relevance* demonstrated an increase, indicating that the RAG pipeline retrieved more relevant information required to address the query. Nonetheless, there was a considerable drop in *Chunk Relevance*, implying that a smaller portion of the retrieved documents contained pertinent information for answering the question.

**The correlation between the rise in Context Relevance and the decline in Chunk Relevance suggests that retrieving more documents using the smaller chunk size might yield improved results.**

### Experiment 3 - Increasing the number of documents retrieved (retrieval window)

This time, we are using the same RAG setup as `Experiment 1`, but increasing the number of retrieved documents from **3** to **5**.
Parameters:

- **Embedding Model: `bge-small-en`**
- **Chunk size: `512`**
- **Chunk overlap: `64`**
- **Number of docs retrieved (Retrieval Window): `5`**
- **LLM: `Mistral-7B-Instruct`**

We can reuse the collection from Experiment 1 and run the evaluation with the modified `num_docs` parameter:

```python
#chunking parameters from Experiment 1
chunk_size = 512
chunk_overlap = 64
embedding_model_name = "BAAI/bge-small-en"
#increase the retrieval window from 3 to 5
num_docs = 5

#collection name from Experiment 1
COLLECTION_NAME = f"experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}"

#running eval for experiment 3
experiment_3 = run_eval(eval_df,
                        collection_name=COLLECTION_NAME,
                        recipe_id=recipe_mistral['id'],
                        num_docs=num_docs,
                        path=f"{COLLECTION_NAME}_{num_docs}_mistral.csv")
```

Observe the results below:

![experiment_3_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment_3_eval.png)

Comparing the results with Experiments 1 and 2:

![graph_exp1_exp2_exp3.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/graph_exp1_exp2_exp3.png)

As anticipated, employing the smaller chunk size while retrieving a larger number of documents resulted in achieving the highest levels of both *Context Relevance* and *Chunk Relevance*. Additionally, it yielded the **best** (albeit marginal) *Faithfulness* score, indicating a *reduced occurrence of inaccuracies or hallucinations*.

It looks like we have achieved a good hold on our chunking parameters, but it is worth testing another embedding model to see if we can get better results.

### Experiment 4 - Changing the embedding model

Let us try using **MiniLM** for this experiment.

Parameters:

- **Embedding Model: `MiniLM-L6-v2`**
- **Chunk size: `512`**
- **Chunk overlap: `64`**
- **Number of docs retrieved (Retrieval Window): `5`**
- **LLM: `Mistral-7B-Instruct`**

We will have to create another collection for this experiment:

```python
#experiment-4
chunk_size = 512
chunk_overlap = 64
embedding_model_name = "sentence-transformers/all-MiniLM-L6-v2"
num_docs = 5

COLLECTION_NAME = f"experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}"

add_documents(client,
              collection_name=COLLECTION_NAME,
              chunk_size=chunk_size,
              chunk_overlap=chunk_overlap,
              embedding_model_name=embedding_model_name)

#Outputs
#processed: 4504
#content: 4504
#metadata: 4504
```

We observe the evaluation results:

![experiment4_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment4_eval.png)

Comparing these with our previous experiments:

![graph_exp1_exp2_exp3_exp4.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/graph_exp1_exp2_exp3_exp4.png)

It appears that `bge-small` was more proficient in capturing the semantic nuances of the Qdrant Documentation.

Up to this point, our experimentation has focused solely on the *retrieval aspect* of our RAG pipeline. Now, let's explore altering the *generation aspect*, or LLM, while retaining the optimal parameters identified in Experiment 3.

### Experiment 5 - Changing the LLM

Parameters:

- **Embedding Model: `bge-small-en`**
- **Chunk size: `512`**
- **Chunk overlap: `64`**
- **Number of docs retrieved (Retrieval Window): `5`**
- **LLM: `GPT-3.5-turbo`**

For this, we can repurpose our collection from Experiment 3, while updating the evaluation to use a new recipe with the **GPT-3.5-turbo** model.
```python
#parameters and collection name from Experiment 3
chunk_size = 512
chunk_overlap = 64
embedding_model_name = "BAAI/bge-small-en"
num_docs = 5

COLLECTION_NAME = f"experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}"

# We have to create a recipe using the same prompt template and GPT-3.5-turbo
recipe_gpt = quotient.create_recipe(
    model_id=5,
    prompt_template_id=1,
    name='gpt3.5-qa-with-rag-recipe-v1',
    description='GPT-3.5 using a prompt template that includes context.'
)
recipe_gpt

#Outputs
#{'id': 495,
# 'name': 'gpt3.5-qa-with-rag-recipe-v1',
# 'description': 'GPT-3.5 using a prompt template that includes context.',
# 'model_id': 5,
# 'prompt_template_id': 1,
# 'created_at': '2024-05-03T12:14:58.779585',
# 'owner_profile_id': 34,
# 'system_prompt_id': None,
# 'prompt_template': {'id': 1,
#  'name': 'Default Question Answering Template',
#  'variables': '["input_text","context"]',
#  'created_at': '2023-12-21T22:01:54.632367',
#  'template_string': 'Question: {input_text}\\n\\nContext: {context}\\n\\nAnswer:',
#  'owner_profile_id': None},
# 'model': {'id': 5,
#  'name': 'gpt-3.5-turbo',
#  'endpoint': 'https://api.openai.com/v1/chat/completions',
#  'revision': 'placeholder',
#  'created_at': '2024-02-06T17:01:21.408454',
#  'model_type': 'OpenAI',
#  'description': 'Returns a maximum of 4K output tokens.',
#  'owner_profile_id': None,
#  'external_model_config_id': None,
#  'instruction_template_cls': 'NoneType'}}
```

Running the evaluation:

```python
experiment_5 = run_eval(eval_df,
                        collection_name=COLLECTION_NAME,
                        recipe_id=recipe_gpt['id'],
                        num_docs=num_docs,
                        path=f"{COLLECTION_NAME}_{num_docs}_gpt.csv")
```

We observe:

![experiment5_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment5_eval.png)

and compare all five experiments below:

![graph_exp1_exp2_exp3_exp4_exp5.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/graph_exp1_exp2_exp3_exp4_exp5.png)

**GPT-3.5 surpassed Mistral-7B in all metrics**! Notably, Experiment 5 exhibited the **lowest occurrence of hallucination**.

## Conclusions

Let's take a look at our results from all five experiments:

![overall_eval_results.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/overall_eval_results.png)

We still have a long way to go in improving the retrieval performance of RAG, as indicated by our generally poor results thus far. It might be beneficial to **explore alternative embedding models** or **different retrieval strategies** to address this issue.

The significant variations in *Context Relevance* suggest that **certain questions may necessitate retrieving more documents than others**. Therefore, investigating a **dynamic retrieval strategy** could be worthwhile.

Furthermore, there's ongoing **exploration required on the generative aspect** of RAG. Modifying LLMs or prompts can substantially impact the overall quality of responses.

This iterative process demonstrates how, starting from scratch, continual evaluation and adjustments throughout experimentation can lead to the development of an enhanced RAG system.

## Watch this workshop on YouTube

> A workshop version of this article is [available on YouTube](https://www.youtube.com/watch?v=3MEMPZR1aZA). Follow along using our [GitHub notebook](https://github.com/qdrant/qdrant-rag-eval/tree/master/workshop-rag-eval-qdrant-quotient).
<iframe width="560" height="315" src="https://www.youtube.com/embed/3MEMPZR1aZA?si=n38oTBMtH3LNCTzd" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
qdrant-landing/content/articles/scalar-quantization.md
--- title: "Scalar Quantization: Background, Practices & More | Qdrant" short_description: "Discover scalar quantization for optimized data storage and improved performance, including data compression benefits and efficiency enhancements." description: "Discover the efficiency of scalar quantization for optimized data storage and enhanced performance. Learn about its data compression benefits and efficiency improvements." social_preview_image: /articles_data/scalar-quantization/social_preview.png small_preview_image: /articles_data/scalar-quantization/scalar-quantization-icon.svg preview_dir: /articles_data/scalar-quantization/preview weight: 5 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-03-27T10:45:00+01:00 draft: false keywords: - vector search - scalar quantization - memory optimization --- # Efficiency Unleashed: The Power of Scalar Quantization High-dimensional vector embeddings can be memory-intensive, especially when working with large datasets consisting of millions of vectors. Memory footprint really starts being a concern when we scale things up. A simple choice of the data type used to store a single number impacts even billions of numbers and can drive the memory requirements crazy. The higher the precision of your type, the more accurately you can represent the numbers. The more accurate your vectors, the more precise is the distance calculation. But the advantages stop paying off when you need to order more and more memory. Qdrant chose `float32` as a default type used to store the numbers of your embeddings. So a single number needs 4 bytes of the memory and a 512-dimensional vector occupies 2 kB. That's only the memory used to store the vector. There is also an overhead of the HNSW graph, so as a rule of thumb we estimate the memory size with the following formula: ```text memory_size = 1.5 * number_of_vectors * vector_dimension * 4 bytes ``` While Qdrant offers various options to store some parts of the data on disk, starting from version 1.1.0, you can also optimize your memory by compressing the embeddings. We've implemented the mechanism of **Scalar Quantization**! It turns out to have not only a positive impact on memory but also on the performance. ## Scalar quantization Scalar quantization is a data compression technique that converts floating point values into integers. In case of Qdrant `float32` gets converted into `int8`, so a single number needs 75% less memory. It's not a simple rounding though! It's a process that makes that transformation partially reversible, so we can also revert integers back to floats with a small loss of precision. ### Theoretical background Assume we have a collection of `float32` vectors and denote a single value as `f32`. In reality neural embeddings do not cover a whole range represented by the floating point numbers, but rather a small subrange. Since we know all the other vectors, we can establish some statistics of all the numbers. For example, the distribution of the values will be typically normal: ![A distribution of the vector values](/articles_data/scalar-quantization/float32-distribution.png) Our example shows that 99% of the values come from a `[-2.0, 5.0]` range. And the conversion to `int8` will surely lose some precision, so we rather prefer keeping the representation accuracy within the range of 99% of the most probable values and ignoring the precision of the outliers. 
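To build intuition, here is a minimal NumPy sketch (synthetic data, not Qdrant's internal implementation) of how such a range could be estimated from the values stored in a collection:

```python
import numpy as np

# A toy stand-in for all the values of all vectors in a collection.
values = np.random.normal(loc=1.5, scale=1.36, size=1_000_000).astype(np.float32)

# Keep the central 99% of the distribution and treat the rest as outliers.
quantile = 0.99
lower, upper = np.quantile(values, [(1 - quantile) / 2, 1 - (1 - quantile) / 2])
print(lower, upper)  # approximately -2.0 and 5.0 for this synthetic distribution
```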
There might be a different choice of the range width: actually, any value from the range `[0, 1]`, where `0` means an empty range and `1` would keep all the values. That's a hyperparameter of the procedure called `quantile`. A value of `0.95` or `0.99` is typically a reasonable choice, but in general `quantile ∈ [0, 1]`.

#### Conversion to integers

Let's talk about the conversion to `int8`. Integers also have a finite set of values that might be represented. Within a single byte, they may represent up to 256 different values, either from `[-128, 127]` or `[0, 255]`.

![Value ranges represented by int8](/articles_data/scalar-quantization/int8-value-range.png)

Since we put some boundaries on the numbers that might be represented by the `f32`, and `i8` has some natural boundaries, the process of converting the values between those two ranges is quite natural:

$$ f32 = \alpha \times i8 + offset $$

$$ i8 = \frac{f32 - offset}{\alpha} $$

The parameters $ \alpha $ and $ offset $ have to be calculated for a given set of vectors, but that comes easily by matching the minimum and maximum of the represented range for both `f32` and `i8`.

![Float32 to int8 conversion](/articles_data/scalar-quantization/float32-to-int8-conversion.png)

For the unsigned `int8` it goes as follows:

$$ \begin{equation} \begin{cases} -2 = \alpha \times 0 + offset \\\\ 5 = \alpha \times 255 + offset \end{cases} \end{equation} $$

In the case of signed `int8`, we just change the represented range boundaries:

$$ \begin{equation} \begin{cases} -2 = \alpha \times (-128) + offset \\\\ 5 = \alpha \times 127 + offset \end{cases} \end{equation} $$

For any set of vector values we can simply calculate $ \alpha $ and $ offset $, and those values have to be stored along with the collection to enable the conversion between the types.

#### Distance calculation

We do not store the vectors as `int8` instead of `float32` just for the sake of compressing the memory. The coordinates are also used when we calculate the distance between the vectors. Both dot product and cosine distance require multiplying the corresponding coordinates of two vectors, so that's an operation we perform quite often on `float32`. Here is how it looks if we perform the conversion to `int8`:

$$ f32 \times f32' = $$
$$ = (\alpha \times i8 + offset) \times (\alpha \times i8' + offset) = $$
$$ = \alpha^{2} \times i8 \times i8' + \underbrace{offset \times \alpha \times i8' + offset \times \alpha \times i8 + offset^{2}}_\text{pre-compute} $$

The first term, $ \alpha^{2} \times i8 \times i8' $, has to be calculated when we measure the distance, as it depends on both vectors. However, both the second and the third term ($ offset \times \alpha \times i8' $ and $ offset \times \alpha \times i8 $, respectively) depend only on a single vector, so they might be precomputed and kept for each vector. The last term, $ offset^{2} $, does not depend on any of the values, so it might even be computed once and reused.

If we had to calculate all the terms to measure the distance, the performance could have been even worse than without the conversion. But thanks to the fact that we can precompute the majority of the terms, things get simpler. And it turns out that scalar quantization has a positive impact not only on the memory usage, but also on the performance. As usual, we performed some benchmarks to support this statement!

## Benchmarks

We simply used the same approach as we use in all [the other benchmarks we publish](/benchmarks/).
Both [Arxiv-titles-384-angular-no-filters](https://github.com/qdrant/ann-filtering-benchmark-datasets) and [Gist-960](https://github.com/erikbern/ann-benchmarks/) datasets were chosen to make the comparison between non-quantized and quantized vectors. The results are summarized in the tables: #### Arxiv-titles-384-angular-no-filters <table> <thead> <tr> <th colspan="2"></th> <th colspan="2">ef = 128</th> <th colspan="2">ef = 256</th> <th colspan="2">ef = 512</th> </tr> <tr> <th></th> <th><small>Upload and indexing time</small></th> <th><small>Mean search precision</small></th> <th><small>Mean search time</small></th> <th><small>Mean search precision</small></th> <th><small>Mean search time</small></th> <th><small>Mean search precision</small></th> <th><small>Mean search time</small></th> </tr> </thead> <tbody> <tr> <th>Non-quantized vectors</th> <td>649 s</td> <td>0.989</td> <td>0.0094</td> <td>0.994</td> <td>0.0932</td> <td>0.996</td> <td>0.161</td> </tr> <tr> <th>Scalar Quantization</th> <td>496 s</td> <td>0.986</td> <td>0.0037</td> <td>0.993</td> <td>0.060</td> <td>0.996</td> <td>0.115</td> </tr> <tr> <td>Difference</td> <td><span style="color: green;">-23.57%</span></td> <td><span style="color: red;">-0.3%</span></td> <td><span style="color: green;">-60.64%</span></td> <td><span style="color: red;">-0.1%</span></td> <td><span style="color: green;">-35.62%</span></td> <td>0%</td> <td><span style="color: green;">-28.57%</span></td> </tr> </tbody> </table> A slight decrease in search precision results in a considerable improvement in the latency. Unless you aim for the highest precision possible, you should not notice the difference in your search quality. #### Gist-960 <table> <thead> <tr> <th colspan="2"></th> <th colspan="2">ef = 128</th> <th colspan="2">ef = 256</th> <th colspan="2">ef = 512</th> </tr> <tr> <th></th> <th><small>Upload and indexing time</small></th> <th><small>Mean search precision</small></th> <th><small>Mean search time</small></th> <th><small>Mean search precision</small></th> <th><small>Mean search time</small></th> <th><small>Mean search precision</small></th> <th><small>Mean search time</small></th> </tr> </thead> <tbody> <tr> <th>Non-quantized vectors</th> <td>452</td> <td>0.802</td> <td>0.077</td> <td>0.887</td> <td>0.135</td> <td>0.941</td> <td>0.231</td> </tr> <tr> <th>Scalar Quantization</th> <td>312</td> <td>0.802</td> <td>0.043</td> <td>0.888</td> <td>0.077</td> <td>0.941</td> <td>0.135</td> </tr> <tr> <td>Difference</td> <td><span style="color: green;">-30.79%</span></td> <td>0%</td> <td><span style="color: green;">-44,16%</span></td> <td><span style="color: green;">+0.11%</span></td> <td><span style="color: green;">-42.96%</span></td> <td>0%</td> <td><span style="color: green;">-41,56%</span></td> </tr> </tbody> </table> In all the cases, the decrease in search precision is negligible, but we keep a latency reduction of at least 28.57%, even up to 60,64%, while searching. As a rule of thumb, the higher the dimensionality of the vectors, the lower the precision loss. ### Oversampling and rescoring A distinctive feature of the Qdrant architecture is the ability to combine the search for quantized and original vectors in a single query. This enables the best combination of speed, accuracy, and RAM usage. Qdrant stores the original vectors, so it is possible to rescore the top-k results with the original vectors after doing the neighbours search in quantized space. 
That obviously has some impact on the performance, but in order to measure how big it is, we made the comparison in different search scenarios. We used a machine with a very slow network-mounted disk and tested the following scenarios with different amounts of allowed RAM: | Setup | RPS | Precision | |-----------------------------|------|-----------| | 4.5Gb memory | 600 | 0.99 | | 4.5Gb memory + SQ + rescore | 1000 | 0.989 | And another group with more strict memory limits: | Setup | RPS | Precision | |------------------------------|------|-----------| | 2Gb memory | 2 | 0.99 | | 2Gb memory + SQ + rescore | 30 | 0.989 | | 2Gb memory + SQ + no rescore | 1200 | 0.974 | In those experiments, throughput was mainly defined by the number of disk reads, and quantization efficiently reduces it by allowing more vectors in RAM. Read more about on-disk storage in Qdrant and how we measure its performance in our article: [Minimal RAM you need to serve a million vectors ](/articles/memory-consumption/). The mechanism of Scalar Quantization with rescoring disabled pushes the limits of low-end machines even further. It seems like handling lots of requests does not require an expensive setup if you can agree to a small decrease in the search precision. ### Accessing best practices Qdrant documentation on [Scalar Quantization](/documentation/quantization/#setting-up-quantization-in-qdrant) is a great resource describing different scenarios and strategies to achieve up to 4x lower memory footprint and even up to 2x performance increase.
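For reference, enabling scalar quantization boils down to a single option when creating a collection. The snippet below is a sketch using the Python client; the parameter names follow the documentation linked above, so treat it as an illustration and consult that page for the authoritative details:

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient("localhost", port=6333)

client.create_collection(
    collection_name="quantized_collection",
    vectors_config=models.VectorParams(size=512, distance=models.Distance.COSINE),
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(
            type=models.ScalarType.INT8,  # convert float32 coordinates to int8
            quantile=0.99,                # ignore the 1% most extreme values
            always_ram=True,              # keep the quantized vectors in RAM
        ),
    ),
)
```

At search time, rescoring with the original vectors can be toggled through the search parameters (`models.QuantizationSearchParams`), which is exactly the trade-off discussed in the oversampling and rescoring section above.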
qdrant-landing/content/articles/search-as-you-type.md
--- title: Semantic Search As You Type short_description: "Instant search using Qdrant" description: To show off Qdrant's performance, we show how to do a quick search-as-you-type that will come back within a few milliseconds. social_preview_image: /articles_data/search-as-you-type/preview/social_preview.jpg small_preview_image: /articles_data/search-as-you-type/icon.svg preview_dir: /articles_data/search-as-you-type/preview weight: -2 author: Andre Bogus author_link: https://llogiq.github.io date: 2023-08-14T00:00:00+01:00 draft: false keywords: search, semantic, vector, llm, integration, benchmark, recommend, performance, rust --- Qdrant is one of the fastest vector search engines out there, so while looking for a demo to show off, we came upon the idea to do a search-as-you-type box with a fully semantic search backend. Now we already have a semantic/keyword hybrid search on our website. But that one is written in Python, which incurs some overhead for the interpreter. Naturally, I wanted to see how fast I could go using Rust. Since Qdrant doesn't embed by itself, I had to decide on an embedding model. The prior version used the [SentenceTransformers](https://www.sbert.net/) package, which in turn employs Bert-based [All-MiniLM-L6-V2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2/tree/main) model. This model is battle-tested and delivers fair results at speed, so not experimenting on this front I took an [ONNX version](https://huggingface.co/optimum/all-MiniLM-L6-v2/tree/main) and ran that within the service. The workflow looks like this: ![Search Qdrant by Embedding](/articles_data/search-as-you-type/Qdrant_Search_by_Embedding.png) This will, after tokenizing and embedding send a `/collections/site/points/search` POST request to Qdrant, sending the following JSON: ```json POST collections/site/points/search { "vector": [-0.06716014,-0.056464013, ...(382 values omitted)], "limit": 5, "with_payload": true, } ``` Even with avoiding a network round-trip, the embedding still takes some time. As always in optimization, if you cannot do the work faster, a good solution is to avoid work altogether (please don't tell my employer). This can be done by pre-computing common prefixes and calculating embeddings for them, then storing them in a `prefix_cache` collection. Now the [`recommend`](https://docs.rs/qdrant-client/latest/qdrant_client/client/struct.QdrantClient.html#method.recommend) API method can find the best matches without doing any embedding. For now, I use short (up to and including 5 letters) prefixes, but I can also parse the logs to get the most common search terms and add them to the cache later. ![Qdrant Recommendation](/articles_data/search-as-you-type/Qdrant_Recommendation.png) Making that work requires setting up the `prefix_cache` collection with points that have the prefix as their `point_id` and the embedding as their `vector`, which lets us do the lookup with no search or index. The `prefix_to_id` function currently uses the `u64` variant of `PointId`, which can hold eight bytes, enough for this use. If the need arises, one could instead encode the names as UUID, hashing the input. Since I know all our prefixes are within 8 bytes, I decided against this for now. The `recommend` endpoint works roughly the same as `search_points`, but instead of searching for a vector, Qdrant searches for one or more points (you can also give negative example points the search engine will try to avoid in the results). 
It was built to help drive recommendation engines, saving the round-trip of sending the current point's vector back to Qdrant to find more similar ones. However Qdrant goes a bit further by allowing us to select a different collection to lookup the points, which allows us to keep our `prefix_cache` collection separate from the site data. So in our case, Qdrant first looks up the point from the `prefix_cache`, takes its vector and searches for that in the `site` collection, using the precomputed embeddings from the cache. The API endpoint expects a POST of the following JSON to `/collections/site/points/recommend`: ```json POST collections/site/points/recommend { "positive": [1936024932], "limit": 5, "with_payload": true, "lookup_from": { "collection": "prefix_cache" } } ``` Now I have, in the best Rust tradition, a blazingly fast semantic search. To demo it, I used our [Qdrant documentation website](/documentation/)'s page search, replacing our previous Python implementation. So in order to not just spew empty words, here is a benchmark, showing different queries that exercise different code paths. Since the operations themselves are far faster than the network whose fickle nature would have swamped most measurable differences, I benchmarked both the Python and Rust services locally. I'm measuring both versions on the same AMD Ryzen 9 5900HX with 16GB RAM running Linux. The table shows the average time and error bound in milliseconds. I only measured up to a thousand concurrent requests. None of the services showed any slowdown with more requests in that range. I do not expect our service to become DDOS'd, so I didn't benchmark with more load. Without further ado, here are the results: | query length | Short | Long | |---------------|-----------|------------| | Python 🐍 | 16 ± 4 ms | 16 ± 4 ms | | Rust 🦀 | 1½ ± ½ ms | 5 ± 1 ms | The Rust version consistently outperforms the Python version and offers a semantic search even on few-character queries. If the prefix cache is hit (as in the short query length), the semantic search can even get more than ten times faster than the Python version. The general speed-up is due to both the relatively lower overhead of Rust + Actix Web compared to Python + FastAPI (even if that already performs admirably), as well as using ONNX Runtime instead of SentenceTransformers for the embedding. The prefix cache gives the Rust version a real boost by doing a semantic search without doing any embedding work. As an aside, while the millisecond differences shown here may mean relatively little for our users, whose latency will be dominated by the network in between, when typing, every millisecond more or less can make a difference in user perception. Also search-as-you-type generates between three and five times as much load as a plain search, so the service will experience more traffic. Less time per request means being able to handle more of them. Mission accomplished! But wait, there's more! ### Prioritizing Exact Matches and Headings To improve on the quality of the results, Qdrant can do multiple searches in parallel, and then the service puts the results in sequence, taking the first best matches. The extended code searches: 1. Text matches in titles 2. Text matches in body (paragraphs or lists) 3. Semantic matches in titles 4. Any Semantic matches Those are put together by taking them in the above order, deduplicating as necessary. 
![merge workflow](/articles_data/search-as-you-type/sayt_merge.png) Instead of sending a `search` or `recommend` request, one can also send a `search/batch` or `recommend/batch` request, respectively. Each of those contain a `"searches"` property with any number of search/recommend JSON requests: ```json POST collections/site/points/search/batch { "searches": [ { "vector": [-0.06716014,-0.056464013, ...], "filter": { "must": [ { "key": "text", "match": { "text": <query> }}, { "key": "tag", "match": { "any": ["h1", "h2", "h3"] }}, ] } ..., }, { "vector": [-0.06716014,-0.056464013, ...], "filter": { "must": [ { "key": "body", "match": { "text": <query> }} ] } ..., }, { "vector": [-0.06716014,-0.056464013, ...], "filter": { "must": [ { "key": "tag", "match": { "any": ["h1", "h2", "h3"] }} ] } ..., }, { "vector": [-0.06716014,-0.056464013, ...], ..., }, ] } ``` As the queries are done in a batch request, there isn't any additional network overhead and only very modest computation overhead, yet the results will be better in many cases. The only additional complexity is to flatten the result lists and take the first 5 results, deduplicating by point ID. Now there is one final problem: The query may be short enough to take the recommend code path, but still not be in the prefix cache. In that case, doing the search *sequentially* would mean two round-trips between the service and the Qdrant instance. The solution is to *concurrently* start both requests and take the first successful non-empty result. ![sequential vs. concurrent flow](/articles_data/search-as-you-type/sayt_concurrency.png) While this means more load for the Qdrant vector search engine, this is not the limiting factor. The relevant data is already in cache in many cases, so the overhead stays within acceptable bounds, and the maximum latency in case of prefix cache misses is measurably reduced. The code is available on the [Qdrant github](https://github.com/qdrant/page-search) To sum up: Rust is fast, recommend lets us use precomputed embeddings, batch requests are awesome and one can do a semantic search in mere milliseconds.
qdrant-landing/content/articles/seed-round.md
--- title: On Unstructured Data, Vector Databases, New AI Age, and Our Seed Round. short_description: On Unstructured Data, Vector Databases, New AI Age, and Our Seed Round. description: We announce Qdrant seed round investment and share our thoughts on Vector Databases and New AI Age. preview_dir: /articles_data/seed-round/preview social_preview_image: /articles_data/seed-round/seed-social.png small_preview_image: /articles_data/quantum-quantization/icon.svg weight: 6 author: Andre Zayarni draft: false author_link: https://www.linkedin.com/in/zayarni date: 2023-04-19T00:42:00.000Z --- > Vector databases are here to stay. The New Age of AI is powered by vector embeddings, and vector databases are a foundational part of the stack. At Qdrant, we are working on cutting-edge open-source vector similarity search solutions to power fantastic AI applications with the best possible performance and excellent developer experience. > > Our 7.5M seed funding – led by [Unusual Ventures](https://www.unusual.vc/), awesome angels, and existing investors – will help us bring these innovations to engineers and empower them to make the most of their unstructured data and the awesome power of LLMs at any scale. We are thrilled to announce that we just raised our seed round from the best possible investor we could imagine for this stage. Let’s talk about fundraising later – it is a story itself that I could probably write a bestselling book about. First, let's dive into a bit of background about our project, our progress, and future plans. ## A need for vector databases. Unstructured data is growing exponentially, and we are all part of a huge unstructured data workforce. This blog post is unstructured data; your visit here produces unstructured and semi-structured data with every web interaction, as does every photo you take or email you send. The global datasphere will grow to [165 zettabytes by 2025](https://github.com/qdrant/qdrant/pull/1639), and about 80% of that will be unstructured. At the same time, the rising demand for AI is vastly outpacing existing infrastructure. Around 90% of machine learning research results fail to reach production because of a lack of tools. {{< figure src=/articles_data/seed-round/demand.png caption="Demand for AI tools" alt="Vector Databases Demand" >}} Thankfully there’s a new generation of tools that let developers work with unstructured data in the form of vector embeddings, which are deep representations of objects obtained from a neural network model. A vector database, also known as a vector similarity search engine or approximate nearest neighbour (ANN) search database, is a database designed to store, manage, and search high-dimensional data with an additional payload. Vector Databases turn research prototypes into commercial AI products. Vector search solutions are industry agnostic and bring solutions for a number of use cases, including classic ones like semantic search, matching engines, and recommender systems to more novel applications like anomaly detection, working with time series, or biomedical data. The biggest limitation is to have a neural network encoder in place for the data type you are working with. {{< figure src=/articles_data/seed-round/use-cases.png caption="Vector Search Use Cases" alt="Vector Search Use Cases" >}} With the rise of large language models (LLMs), Vector Databases have become the fundamental building block of the new AI Stack. 
They let developers build even more advanced applications by extending the “knowledge base” of LLM-based applications like ChatGPT with real-time and real-world data. A new AI product category, “Co-Pilot for X,” was born and is already affecting how we work, from producing content to developing software. And this is just the beginning: there are even more types of novel applications being developed on top of this stack.

{{< figure src=/articles_data/seed-round/ai-stack.png caption="New AI Stack" alt="New AI Stack" >}}

## Enter Qdrant. ##

At the same time, adoption has only begun. Vector Search Databases are replacing VSS libraries like FAISS, etc., which, despite their disadvantages, are still used by ~90% of projects out there. They’re hard-coupled to the application code, lack production-ready features like basic CRUD operations or advanced filtering, are a nightmare to maintain and scale, and have many other difficulties that make life hard for developers.

The current Qdrant ecosystem consists of excellent products to work with vector embeddings. We launched our managed vector database solution, Qdrant Cloud, early this year, and it is already serving more than 1,000 Qdrant clusters. We are extending our offering now with managed on-premise solutions for enterprise customers.

{{< figure src=/articles_data/seed-round/ecosystem.png caption="Qdrant Ecosystem" alt="Qdrant Vector Database Ecosystem" >}}

Our plan for the current [open-source roadmap](https://github.com/qdrant/qdrant/blob/master/docs/roadmap/README.md) is to make billion-scale vector search affordable. Our recent release of [Scalar Quantization](/articles/scalar-quantization/) improves both memory usage (4x) and speed (2x). The upcoming [Product Quantization](https://www.irisa.fr/texmex/people/jegou/papers/jegou_searching_with_quantization.pdf) will introduce yet another option with even more memory savings. Stay tuned.

Qdrant started more than two years ago with the mission of building a vector database powered by a well-thought-out tech stack. Choosing Rust as the systems programming language, along with the technical architecture decisions made during the development of the engine, has made Qdrant one of the leading and most popular vector database solutions. Our unique custom modification of the [HNSW algorithm](/articles/filtrable-hnsw/) for Approximate Nearest Neighbor Search (ANN) allows querying with state-of-the-art speed and applying filters without compromising on results. Cloud-native support for distributed deployment and replication makes the engine suitable for high-throughput applications with real-time latency requirements. Rust brings stability, efficiency, and the possibility to optimize at a very low level. In general, we always aim for the best possible results in [performance](/benchmarks/), code quality, and feature set.

Most importantly, we want to say a big thank you to our [open-source community](https://qdrant.to/discord), our adopters, our contributors, and our customers. Your active participation in the development of our products has helped make Qdrant the best vector database on the market. I cannot imagine how we could do what we’re doing without the community or without being open-source and having the TRUST of the engineers. Thanks to all of you!

I also want to thank our team. Thank you for your patience and trust. Together we are strong. Let’s continue doing great things together.
## Fundraising ## The whole process took only a couple of days, we got several offers, and most probably, we would get more with different conditions. We decided to go with Unusual Ventures because they truly understand how things work in the open-source space. They just did it right. Here is a big piece of advice for all investors interested in open-source: Dive into the community, and see and feel the traction and product feedback instead of looking at glossy pitch decks. With Unusual on our side, we have an active operational partner instead of one who simply writes a check. That help is much more important than overpriced valuations and big shiny names. Ultimately, the community and adopters will decide what products win and lose, not VCs. Companies don’t need crazy valuations to create products that customers love. You do not need Ph.D. to innovate. You do not need to over-engineer to build a scalable solution. You do not need ex-FANG people to have a great team. You need clear focus, a passion for what you’re building, and the know-how to do it well. We know how. PS: This text is written by me in an old-school way without any ChatGPT help. Sometimes you just need inspiration instead of AI ;-)
qdrant-landing/content/articles/serverless.md
--- title: Serverless Semantic Search short_description: "Need to setup a server to offer semantic search? Think again!" description: "Create a serverless semantic search engine using nothing but Qdrant and free cloud services." social_preview_image: /articles_data/serverless/social_preview.png small_preview_image: /articles_data/serverless/icon.svg preview_dir: /articles_data/serverless/preview weight: 1 author: Andre Bogus author_link: https://llogiq.github.io date: 2023-07-12T10:00:00+01:00 draft: false keywords: rust, serverless, lambda, semantic, search --- Do you want to insert a semantic search function into your website or online app? Now you can do so - without spending any money! In this example, you will learn how to create a free prototype search engine for your own non-commercial purposes. You may find all of the assets for this tutorial on [GitHub](https://github.com/qdrant/examples/tree/master/lambda-search). ## Ingredients * A [Rust](https://rust-lang.org) toolchain * [cargo lambda](https://cargo-lambda.info) (install via package manager, [download](https://github.com/cargo-lambda/cargo-lambda/releases) binary or `cargo install cargo-lambda`) * The [AWS CLI](https://aws.amazon.com/cli) * Qdrant instance ([free tier](https://cloud.qdrant.io) available) * An embedding provider service of your choice (see our [Embeddings docs](/documentation/embeddings/). You may be able to get credits from [AI Grant](https://aigrant.org), also Cohere has a [rate-limited non-commercial free tier](https://cohere.com/pricing)) * AWS Lambda account (12-month free tier available) ## What you're going to build You'll combine the embedding provider and the Qdrant instance to a neat semantic search, calling both services from a small Lambda function. ![lambda integration diagram](/articles_data/serverless/lambda_integration.png) Now lets look at how to work with each ingredient before connecting them. ## Rust and cargo-lambda You want your function to be quick, lean and safe, so using Rust is a no-brainer. To compile Rust code for use within Lambda functions, the `cargo-lambda` subcommand has been built. `cargo-lambda` can put your Rust code in a zip file that AWS Lambda can then deploy on a no-frills `provided.al2` runtime. To interface with AWS Lambda, you will need a Rust project with the following dependencies in your `Cargo.toml`: ```toml [dependencies] tokio = { version = "1", features = ["macros"] } lambda_http = { version = "0.8", default-features = false, features = ["apigw_http"] } lambda_runtime = "0.8" ``` This gives you an interface consisting of an entry point to start the Lambda runtime and a way to register your handler for HTTP calls. Put the following snippet into `src/helloworld.rs`: ```rust use lambda_http::{run, service_fn, Body, Error, Request, RequestExt, Response}; /// This is your callback function for responding to requests at your URL async fn function_handler(_req: Request) -> Result<Response<Body>, Error> { Response::from_text("Hello, Lambda!") } #[tokio::main] async fn main() { run(service_fn(function_handler)).await } ``` You can also use a closure to bind other arguments to your function handler (the `service_fn` call then becomes `service_fn(|req| function_handler(req, ...))`). Also if you want to extract parameters from the request, you can do so using the [Request](https://docs.rs/lambda_http/latest/lambda_http/type.Request.html) methods (e.g. `query_string_parameters` or `query_string_parameters_ref`). 
Add the following to your `Cargo.toml` to define the binary: ```toml [[bin]] name = "helloworld" path = "src/helloworld.rs" ``` On the AWS side, you need to setup a Lambda and IAM role to use with your function. ![create lambda web page](/articles_data/serverless/create_lambda.png) Choose your function name, select "Provide your own bootstrap on Amazon Linux 2". As architecture, use `arm64`. You will also activate a function URL. Here it is up to you if you want to protect it via IAM or leave it open, but be aware that open end points can be accessed by anyone, potentially costing money if there is too much traffic. By default, this will also create a basic role. To look up the role, you can go into the Function overview: ![function overview](/articles_data/serverless/lambda_overview.png) Click on the "Info" link near the "▸ Function overview" heading, and select the "Permissions" tab on the left. You will find the "Role name" directly under *Execution role*. Note it down for later. ![function overview](/articles_data/serverless/lambda_role.png) To test that your "Hello, Lambda" service works, you can compile and upload the function: ```bash $ export LAMBDA_FUNCTION_NAME=hello $ export LAMBDA_ROLE=<role name from lambda web ui> $ export LAMBDA_REGION=us-east-1 $ cargo lambda build --release --arm --bin helloworld --output-format zip Downloaded libc v0.2.137 # [..] output omitted for brevity Finished release [optimized] target(s) in 1m 27s $ # Delete the old empty definition $ aws lambda delete-function-url-config --region $LAMBDA_REGION --function-name $LAMBDA_FUNCTION_NAME $ aws lambda delete-function --region $LAMBDA_REGION --function-name $LAMBDA_FUNCTION_NAME $ # Upload the function $ aws lambda create-function --function-name $LAMBDA_FUNCTION_NAME \ --handler bootstrap \ --architectures arm64 \ --zip-file fileb://./target/lambda/helloworld/bootstrap.zip \ --runtime provided.al2 \ --region $LAMBDA_REGION \ --role $LAMBDA_ROLE \ --tracing-config Mode=Active $ # Add the function URL $ aws lambda add-permission \ --function-name $LAMBDA_FUNCTION_NAME \ --action lambda:InvokeFunctionUrl \ --principal "*" \ --function-url-auth-type "NONE" \ --region $LAMBDA_REGION \ --statement-id url $ # Here for simplicity unauthenticated URL access. Beware! $ aws lambda create-function-url-config \ --function-name $LAMBDA_FUNCTION_NAME \ --region $LAMBDA_REGION \ --cors "AllowOrigins=*,AllowMethods=*,AllowHeaders=*" \ --auth-type NONE ``` Now you can go to your *Function Overview* and click on the Function URL. You should see something like this: ```text Hello, Lambda! ``` Bearer ! You have set up a Lambda function in Rust. On to the next ingredient: ## Embedding Most providers supply a simple https GET or POST interface you can use with an API key, which you have to supply in an authentication header. If you are using this for non-commercial purposes, the rate limited trial key from Cohere is just a few clicks away. Go to [their welcome page](https://dashboard.cohere.ai/welcome/register), register and you'll be able to get to the dashboard, which has an "API keys" menu entry which will bring you to the following page: [cohere dashboard](/articles_data/serverless/cohere-dashboard.png) From there you can click on the ⎘ symbol next to your API key to copy it to the clipboard. *Don't put your API key in the code!* Instead read it from an env variable you can set in the lambda environment. This avoids accidentally putting your key into a public repo. Now all you need to get embeddings is a bit of code. 
First you need to extend your dependencies with `reqwest` and also add `anyhow` for easier error handling: ```toml anyhow = "1.0" reqwest = { version = "0.11.18", default-features = false, features = ["json", "rustls-tls"] } serde = "1.0" ``` Now given the API key from above, you can make a call to get the embedding vectors: ```rust use anyhow::Result; use serde::Deserialize; use reqwest::Client; #[derive(Deserialize)] struct CohereResponse { outputs: Vec<Vec<f32>> } pub async fn embed(client: &Client, text: &str, api_key: &str) -> Result<Vec<Vec<f32>>> { let CohereResponse { outputs } = client .post("https://api.cohere.ai/embed") .header("Authorization", &format!("Bearer {api_key}")) .header("Content-Type", "application/json") .header("Cohere-Version", "2021-11-08") .body(format!("{{\"text\":[\"{text}\"],\"model\":\"small\"}}")) .send() .await? .json() .await?; Ok(outputs) } ``` Note that this may return multiple vectors if the text overflows the input dimensions. Cohere's `small` model has 1024 output dimensions. Other providers have similar interfaces. Consult our [Embeddings docs](/documentation/embeddings/) for further information. See how little code it took to get the embedding? While you're at it, it's a good idea to write a small test to check if embedding works and the vectors are of the expected size: ```rust #[tokio::test] async fn check_embedding() { // ignore this test if API_KEY isn't set let Ok(api_key) = &std::env::var("API_KEY") else { return; } let embedding = crate::embed("What is semantic search?", api_key).unwrap()[0]; // Cohere's `small` model has 1024 output dimensions. assert_eq!(1024, embedding.len()); } ``` Run this while setting the `API_KEY` environment variable to check if the embedding works. ## Qdrant search Now that you have embeddings, it's time to put them into your Qdrant. You could of course use `curl` or `python` to set up your collection and upload the points, but as you already have Rust including some code to obtain the embeddings, you can stay in Rust, adding `qdrant-client` to the mix. ```rust use anyhow::Result; use qdrant_client::prelude::*; use qdrant_client::qdrant::{VectorsConfig, VectorParams}; use qdrant_client::qdrant::vectors_config::Config; use std::collections::HashMap; fn setup<'i>( embed_client: &reqwest::Client, embed_api_key: &str, qdrant_url: &str, api_key: Option<&str>, collection_name: &str, data: impl Iterator<Item = (&'i str, HashMap<String, Value>)>, ) -> Result<()> { let mut config = QdrantClientConfig::from_url(qdrant_url); config.api_key = api_key; let client = QdrantClient::new(Some(config))?; // create the collections if !client.has_collection(collection_name).await? { client .create_collection(&CreateCollection { collection_name: collection_name.into(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 1024, // output dimensions from above distance: Distance::Cosine as i32, ..Default::default() })), }), ..Default::default() }) .await?; } let mut id_counter = 0_u64; let points = data.map(|(text, payload)| { let id = std::mem::replace(&mut id_counter, *id_counter + 1); let vectors = Some(embed(embed_client, text, embed_api_key).unwrap()); PointStruct { id, vectors, payload } }).collect(); client.upsert_points(collection_name, points, None).await?; Ok(()) } ``` Depending on whether you want to efficiently filter the data, you can also add some indexes. 
I'm leaving this out for brevity, but you can look at the [example code](https://github.com/qdrant/examples/tree/master/lambda-search) containing this operation. Also, this does not implement chunking (splitting the data across multiple upsert requests, which avoids timeout errors).

Add a suitable `main` function and you can run this code to insert the points (or just use the binary from the example). Be sure to include the port in the `qdrant_url`.

Now that you have the points inserted, you can search them by embedding:

```rust
use anyhow::Result;
use qdrant_client::prelude::*;

pub async fn search(
    text: &str,
    collection_name: String,
    client: &Client,
    api_key: &str,
    qdrant: &QdrantClient,
) -> Result<Vec<ScoredPoint>> {
    Ok(qdrant.search_points(&SearchPoints {
        collection_name,
        limit: 5, // use what fits your use case here
        with_payload: Some(true.into()),
        // embed the query text and use the first (and only) returned vector
        vector: embed(client, text, api_key).await?.remove(0),
        ..Default::default()
    }).await?.result)
}
```

You can also filter by adding a `filter: ...` field to the `SearchPoints`, and you will likely want to process the result further, but the example code already does that, so feel free to start from there in case you need this functionality.

## Putting it all together

Now that you have all the parts, it's time to join them up. Copying and wiring up the snippets above is left as an exercise to the reader. Impatient minds can peruse the [example repo](https://github.com/qdrant/examples/tree/master/lambda-search) instead.

You'll want to extend the `main` function a bit to connect with the client once at the start, and also to read the API keys from the environment so you don't need to compile them into the code. To do that, you can fetch them with `std::env::var(_)` from the Rust code and set the environment from the AWS console.

```bash
$ export QDRANT_URI=<your Qdrant instance URI including port>
$ export QDRANT_API_KEY=<your Qdrant API key>
$ export COHERE_API_KEY=<your Cohere API key>
$ export COLLECTION_NAME=site-cohere
$ aws lambda update-function-configuration \
    --function-name $LAMBDA_FUNCTION_NAME \
    --environment "Variables={QDRANT_URI=$QDRANT_URI,\
QDRANT_API_KEY=$QDRANT_API_KEY,COHERE_API_KEY=${COHERE_API_KEY},\
COLLECTION_NAME=${COLLECTION_NAME}}"
```

In any event, you will arrive at one command line program to insert your data and one Lambda function. The former can just be `cargo run` to set up the collection. For the latter, you can again call `cargo lambda` and the AWS console:

```bash
$ export LAMBDA_FUNCTION_NAME=search
$ export LAMBDA_REGION=us-east-1
$ cargo lambda build --release --arm --output-format zip
  Downloaded libc v0.2.137
  # [..] output omitted for brevity
  Finished release [optimized] target(s) in 1m 27s
$ # Update the function
$ aws lambda update-function-code --function-name $LAMBDA_FUNCTION_NAME \
    --zip-file fileb://./target/lambda/page-search/bootstrap.zip \
    --region $LAMBDA_REGION
```

## Discussion

Lambda works by spinning up your function once the URL is called, so AWS doesn't need to keep the compute on hand unless it is actually used. This means that the first call will be burdened by some 1-2 seconds of latency for loading the function; later calls will resolve faster. Of course, there is also the latency for calling the embeddings provider and Qdrant.

On the other hand, the free tier doesn't cost a thing, so you certainly get what you pay for. And for many use cases, a result within one or two seconds is acceptable.

Rust minimizes the overhead for the function, both in terms of file size and runtime.
Using an embedding service means you don't need to care about the details. Knowing the URL, API key and embedding size is sufficient. Finally, with free tiers for both Lambda and Qdrant as well as free credits for the embedding provider, the only cost is your time to set everything up. Who could argue with free?
qdrant-landing/content/articles/sparse-vectors.md
---
title: "What is a Sparse Vector? How to Achieve Vector-based Hybrid Search"
short_description: "Discover sparse vectors, their function, and significance in modern data processing, including methods like SPLADE for efficient use."
description: "Learn what sparse vectors are, how they work, and their importance in modern data processing. Explore methods like SPLADE for creating and leveraging sparse vectors efficiently."
social_preview_image: /articles_data/sparse-vectors/social_preview.png
small_preview_image: /articles_data/sparse-vectors/sparse-vectors-icon.svg
preview_dir: /articles_data/sparse-vectors/preview
weight: -100
author: Nirant Kasliwal
author_link: https://nirantk.com/about
date: 2023-12-09T13:00:00+03:00
draft: false
keywords:
- sparse vectors
- SPLADE
- hybrid search
- vector search
---

Think of a library with a vast index card system. Each index card only has a few keywords marked out (sparse vector) of a large possible set for each book (document). This is what sparse vectors enable for text.

## What are sparse and dense vectors?

Sparse vectors are like the Marie Kondo of data—keeping only what sparks joy (or relevance, in this case).

Consider a simplified example of 2 documents, each with 200 words. A dense vector would have several hundred non-zero values, whereas a sparse vector could have much fewer, say only 20 non-zero values.

In this example, we assume the model keeps only 2 words or tokens from each document. The rest of the values are zero, which is why it's called a sparse vector.

```python
dense = [0.2, 0.3, 0.5, 0.7, ...]  # several hundred floats
sparse = [{331: 0.5}, {14136: 0.7}]  # just a few key-value pairs
```

The numbers 331 and 14136 map to specific tokens in the vocabulary, e.g. `['chocolate', 'icecream']`. The tokens aren't always words, though; sometimes they can be sub-words: `['ch', 'ocolate']` too.

Sparse vectors are pivotal in information retrieval, especially in ranking and search systems. BM25, a standard ranking function used by search engines like [Elasticsearch](https://www.elastic.co/blog/practical-bm25-part-2-the-bm25-algorithm-and-its-variables?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors), exemplifies this. BM25 calculates the relevance of documents to a given search query.

BM25's capabilities are well-established, yet it has its limitations.

BM25 relies solely on the frequency of words in a document and does not attempt to comprehend the meaning or the contextual importance of the words. Additionally, it requires the computation of the entire corpus's statistics in advance, posing a challenge for large datasets.

Sparse vectors harness the power of neural networks to surmount these limitations while retaining the ability to query exact words and phrases. They excel in handling large text data, making them crucial in modern data processing and marking an advancement over traditional methods such as BM25.

# Understanding sparse vectors

Sparse vectors are a representation where each dimension corresponds to a word or subword, greatly aiding in interpreting document rankings. This clarity is why sparse vectors are essential in modern search and recommendation systems, complementing the meaning-rich embeddings, or dense vectors.

Dense vectors from models like OpenAI Ada-002 or Sentence Transformers contain non-zero values for every element.
In contrast, sparse vectors focus on relative word weights per document, with most values being zero. This results in a more efficient and interpretable system, especially in text-heavy applications like search.

Sparse vectors shine in domains and scenarios where many rare keywords or specialized terms are present. For example, in the medical domain, many rare terms are not present in the general vocabulary, so general-purpose dense vectors cannot capture the nuances of the domain.

| Feature                      | Sparse Vectors                                                      | Dense Vectors                                                                                |
|------------------------------|---------------------------------------------------------------------|----------------------------------------------------------------------------------------------|
| **Data Representation**      | Majority of elements are zero                                       | All elements are non-zero                                                                      |
| **Computational Efficiency** | Generally higher, especially in operations involving zero elements  | Lower, as operations are performed on all elements                                            |
| **Information Density**      | Less dense, focuses on key features                                 | Highly dense, capturing nuanced relationships                                                  |
| **Example Applications**     | Text search, Hybrid search                                          | [RAG](https://qdrant.tech/articles/what-is-rag-in-ai/), many general machine learning tasks    |

Where do sparse vectors fail though? They're not great at capturing nuanced relationships between words. For example, they can't capture the relationship between "king" and "queen" as well as dense vectors.

# SPLADE

Let's check out [SPLADE](https://europe.naverlabs.com/research/computer-science/splade-a-sparse-bi-encoder-bert-based-model-achieves-effective-and-efficient-full-text-document-ranking/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors), an excellent way to make sparse vectors. Let's look at some numbers first. Higher is better:

| Model                                                            | MRR@10 (MS MARCO Dev) | Type   |
|------------------------------------------------------------------|-----------------------|--------|
| BM25                                                             | 0.184                 | Sparse |
| TCT-ColBERT                                                      | 0.359                 | Dense  |
| doc2query-T5 [link](https://github.com/castorini/docTTTTTquery)  | 0.277                 | Sparse |
| SPLADE                                                           | 0.322                 | Sparse |
| SPLADE-max                                                       | 0.340                 | Sparse |
| SPLADE-doc                                                       | 0.322                 | Sparse |
| DistilSPLADE-max                                                 | 0.368                 | Sparse |

All numbers are from [SPLADEv2](https://arxiv.org/abs/2109.10086). MRR is [Mean Reciprocal Rank](https://www.wikiwand.com/en/Mean_reciprocal_rank#References), a standard metric for ranking. [MS MARCO](https://microsoft.github.io/MSMARCO-Passage-Ranking/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) is a dataset for evaluating ranking and retrieval for passages.

SPLADE is quite flexible as a method, with regularization knobs that can be tuned to obtain [different models](https://github.com/naver/splade) as well:

> SPLADE is more a class of models rather than a model per se: depending on the regularization magnitude, we can obtain different models (from very sparse to models doing intense query/doc expansion) with different properties and performance.

First, let's look at how to create a sparse vector. Then, we'll look at the concepts behind SPLADE.

## Creating a sparse vector

We'll explore two different ways to create a sparse vector. The higher-performance way is to create a sparse vector with dedicated document and query encoders. Here, we'll look at a simpler approach -- we will use the same model for both document and query. We will get a dictionary of token ids and their corresponding weights for a sample text representing a document.
If you'd like to follow along, here's a [Colab Notebook](https://colab.research.google.com/gist/NirantK/ad658be3abefc09b17ce29f45255e14e/splade-single-encoder.ipynb), [alternate link](https://gist.github.com/NirantK/ad658be3abefc09b17ce29f45255e14e) with all the code. ### Setting Up ```python from transformers import AutoModelForMaskedLM, AutoTokenizer model_id = "naver/splade-cocondenser-ensembledistil" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForMaskedLM.from_pretrained(model_id) text = """Arthur Robert Ashe Jr. (July 10, 1943 – February 6, 1993) was an American professional tennis player. He won three Grand Slam titles in singles and two in doubles.""" ``` ### Computing the sparse vector ```python import torch def compute_vector(text): """ Computes a vector from logits and attention mask using ReLU, log, and max operations. """ tokens = tokenizer(text, return_tensors="pt") output = model(**tokens) logits, attention_mask = output.logits, tokens.attention_mask relu_log = torch.log(1 + torch.relu(logits)) weighted_log = relu_log * attention_mask.unsqueeze(-1) max_val, _ = torch.max(weighted_log, dim=1) vec = max_val.squeeze() return vec, tokens vec, tokens = compute_vector(text) print(vec.shape) ``` You'll notice that there are 38 tokens in the text based on this tokenizer. This will be different from the number of tokens in the vector. In a TF-IDF, we'd assign weights only to these tokens or words. In SPLADE, we assign weights to all the tokens in the vocabulary using this vector using our learned model. ## Term expansion and weights ```python def extract_and_map_sparse_vector(vector, tokenizer): """ Extracts non-zero elements from a given vector and maps these elements to their human-readable tokens using a tokenizer. The function creates and returns a sorted dictionary where keys are the tokens corresponding to non-zero elements in the vector, and values are the weights of these elements, sorted in descending order of weights. This function is useful in NLP tasks where you need to understand the significance of different tokens based on a model's output vector. It first identifies non-zero values in the vector, maps them to tokens, and sorts them by weight for better interpretability. Args: vector (torch.Tensor): A PyTorch tensor from which to extract non-zero elements. tokenizer: The tokenizer used for tokenization in the model, providing the mapping from tokens to indices. Returns: dict: A sorted dictionary mapping human-readable tokens to their corresponding non-zero weights. """ # Extract indices and values of non-zero elements in the vector cols = vector.nonzero().squeeze().cpu().tolist() weights = vector[cols].cpu().tolist() # Map indices to tokens and create a dictionary idx2token = {idx: token for token, idx in tokenizer.get_vocab().items()} token_weight_dict = { idx2token[idx]: round(weight, 2) for idx, weight in zip(cols, weights) } # Sort the dictionary by weights in descending order sorted_token_weight_dict = { k: v for k, v in sorted( token_weight_dict.items(), key=lambda item: item[1], reverse=True ) } return sorted_token_weight_dict # Usage example sorted_tokens = extract_and_map_sparse_vector(vec, tokenizer) sorted_tokens ``` There will be 102 sorted tokens in total. This has expanded to include tokens that weren't in the original text. This is the term expansion we will talk about next. 
Here are some terms that are added: "Berlin" and "founder" - despite the text making no mention of Arthur Ashe's race (the likely association behind "Berlin", via Jesse Owens' famous win at the 1936 Berlin Olympics) or of his work as the founder of the Arthur Ashe Institute for Urban Health.

Here are the top few `sorted_tokens` with a weight of more than 1:

```python
{
    "ashe": 2.95,
    "arthur": 2.61,
    "tennis": 2.22,
    "robert": 1.74,
    "jr": 1.55,
    "he": 1.39,
    "founder": 1.36,
    "doubles": 1.24,
    "won": 1.22,
    "slam": 1.22,
    "died": 1.19,
    "singles": 1.1,
    "was": 1.07,
    "player": 1.06,
    "titles": 0.99,
    ...
}
```

If you're interested in using the higher-performance approach, check out the following models:

1. [naver/efficient-splade-VI-BT-large-doc](https://huggingface.co/naver/efficient-splade-vi-bt-large-doc)
2. [naver/efficient-splade-VI-BT-large-query](https://huggingface.co/naver/efficient-splade-VI-BT-large-query)

## Why SPLADE works: term expansion

Consider a query "solar energy advantages". SPLADE might expand this to include terms like "renewable," "sustainable," and "photovoltaic," which are contextually relevant but not explicitly mentioned. This process is called term expansion, and it's a key component of SPLADE.

SPLADE learns the query/document expansion to include other relevant terms. This is a crucial advantage over other sparse methods which include the exact word, but completely miss the contextually relevant ones.

This expansion has a direct relationship with what we can control when making a SPLADE model: sparsity via regularization, i.e. the number of tokens (BERT wordpieces) we use to represent each document. If we use more tokens, we can represent more terms, but the vectors become denser. This number is typically between 20 and 200 per document. As a reference point, the dense BERT vector is 768 dimensions and the OpenAI embedding is 1536 dimensions, while the sparse vector only stores its non-zero dimensions, a few dozen per document in this regime.

For example, assume a 1M document corpus and say we use 100 sparse token ids + weights per document. Correspondingly, the dense BERT vectors would take 768M floats, the OpenAI embeddings 1.536B floats, and the sparse vectors at most 100M integers + 100M floats. This could mean a **10x reduction in memory usage**, which is a huge win for large-scale systems:

| Vector Type       | Memory (GB) |
|-------------------|-------------|
| Dense BERT Vector | 6.144       |
| OpenAI Embedding  | 12.288      |
| Sparse Vector     | 1.12        |

## How SPLADE works: leveraging BERT

SPLADE leverages a transformer architecture to generate sparse representations of documents and queries, enabling efficient retrieval. Let's dive into the process.

The output logits from the transformer backbone are inputs upon which SPLADE builds. The transformer architecture can be something familiar like BERT. Rather than producing dense probability distributions, SPLADE utilizes these logits to construct sparse vectors—think of them as a distilled essence of tokens, where each dimension corresponds to a term from the vocabulary and its associated weight in the context of the given document or query.

This sparsity is critical; it mirrors the probability distributions from a typical [Masked Language Modeling](http://jalammar.github.io/illustrated-bert/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) task but is tuned for retrieval effectiveness, emphasizing terms that are both:

1. Contextually relevant: Terms that represent a document well should be given more weight.
2.
Discriminative across documents: Terms that a document has, and other documents don't, should be given more weight.

The token-level distributions that you'd expect in a standard transformer model are now transformed into token-level importance scores in SPLADE. These scores reflect the significance of each term in the context of the document or query, guiding the model to allocate more weight to terms that are likely to be more meaningful for retrieval purposes.

The resulting sparse vectors are not only memory-efficient but also tailored for precise matching in the high-dimensional space of a search engine like Qdrant.

## Interpreting SPLADE

A downside of dense vectors is that they are not interpretable, making it difficult to understand why a document is relevant to a query.

SPLADE importance estimation can provide insights into the 'why' behind a document's relevance to a query. By shedding light on which tokens contribute most to the retrieval score, SPLADE offers some degree of interpretability alongside performance, a rare feat in the realm of neural IR systems. For engineers working on search, this transparency is invaluable.

## Known limitations of SPLADE

### Pooling strategy

The switch to max pooling in SPLADE improved its performance on the MS MARCO and TREC datasets. However, this indicates a potential limitation of the baseline SPLADE pooling method, suggesting that SPLADE's performance is sensitive to the choice of pooling strategy.

### Document and query encoder

The SPLADE model variant that uses a document encoder with max pooling but no query encoder reaches the same performance level as the prior SPLADE model. This suggests that a separate query encoder may not be strictly necessary, which has implications for the efficiency of the model.

## Other sparse vector methods

SPLADE is not the only method to create sparse vectors.

Essentially, sparse vectors are a superset of TF-IDF and BM25, which are the most popular text retrieval methods. In other words, you can create a sparse vector using the term frequency and inverse document frequency (TF-IDF) to reproduce the BM25 score exactly.

Additionally, attention weights from Sentence Transformers can be used to create sparse vectors. This method preserves the ability to query exact words and phrases but avoids the computational overhead of query expansion used in SPLADE.

We will cover these methods in detail in a future article.

## Leveraging sparse vectors in Qdrant for hybrid search

Qdrant supports a separate index for sparse vectors. This enables you to use the same collection for both dense and sparse vectors. Each "Point" in Qdrant can have both dense and sparse vectors.

But let's first take a look at how you can work with sparse vectors in Qdrant.

## Practical implementation in Python

Let's dive into how Qdrant handles sparse vectors with an example. Here is what we will cover:

1. Setting Up the Qdrant Client: Initially, we establish a connection with Qdrant using the QdrantClient. This setup is crucial for subsequent operations.
2. Creating a Collection with Sparse Vector Support: In Qdrant, a collection is a container for your vectors. Here, we create a collection specifically designed to support sparse vectors. This is done using the recreate_collection method where we define the parameters for sparse vectors, such as setting the index configuration.
3. Inserting Sparse Vectors: Once the collection is set up, we can insert sparse vectors into it.
This involves defining the sparse vector with its indices and values, and then upserting this point into the collection.
4. Querying with Sparse Vectors: To perform a search, we first prepare a query vector. This involves computing the vector from a query text and extracting its indices and values. We then use these details to construct a query against our collection.
5. Retrieving and Interpreting Results: The search operation returns results that include the id of the matching document, its score, and other relevant details. The score is a crucial aspect, reflecting the similarity between the query and the documents in the collection.

### 1. Set up

```python
from qdrant_client import QdrantClient, models

# Qdrant client setup
client = QdrantClient(":memory:")

# Define collection name
COLLECTION_NAME = "example_collection"

# ID of the sparse vector point we will insert into the Qdrant collection
point_id = 1  # Assign a unique ID for the point
```

### 2. Create a collection with sparse vector support

```python
client.recreate_collection(
    collection_name=COLLECTION_NAME,
    vectors_config={},
    sparse_vectors_config={
        "text": models.SparseVectorParams(
            index=models.SparseIndexParams(
                on_disk=False,
            )
        )
    },
)
```

### 3. Insert sparse vectors

Here, we see the process of inserting a sparse vector into the Qdrant collection. This step is key to building a dataset that can be quickly retrieved in the first stage of the retrieval process, utilizing the efficiency of sparse vectors. Since this is for demonstration purposes, we insert only one point with a sparse vector and no dense vector.

```python
# Extract non-zero indices and values from the document vector `vec` computed earlier
indices = vec.nonzero().numpy().flatten()
values = vec.detach().numpy()[indices]

client.upsert(
    collection_name=COLLECTION_NAME,
    points=[
        models.PointStruct(
            id=point_id,
            payload={},  # Add any additional payload if necessary
            vector={
                "text": models.SparseVector(
                    indices=indices.tolist(), values=values.tolist()
                )
            },
        )
    ],
)
```

By upserting points with sparse vectors, we prepare our dataset for rapid first-stage retrieval, laying the groundwork for subsequent detailed analysis using dense vectors. Notice that we use "text" to denote the name of the sparse vector.

Those familiar with the Qdrant API will notice the extra care taken to stay consistent with the existing named vectors API -- this is to make it easier to use sparse vectors in existing codebases. As always, you're able to **apply payload filters**, shard keys, and other advanced features you've come to expect from Qdrant. To make things easier for you, the indices and values don't have to be sorted before upsert. Qdrant will sort them when the index is persisted, e.g. on disk.

### 4. Query with sparse vectors

We use the same process to prepare a query vector as well. This involves computing the vector from a query text and extracting its indices and values. We then use these details to construct a query against our collection.

```python
# Preparing a query vector
query_text = "Who was Arthur Ashe?"
query_vec, query_tokens = compute_vector(query_text)
query_vec.shape

query_indices = query_vec.nonzero().numpy().flatten()
query_values = query_vec.detach().numpy()[query_indices]
```

In this example, we use the same model for both document and query. This is not a requirement, but it's a simpler approach.

### 5. Retrieve and interpret results

After setting up the collection and inserting sparse vectors, the next critical step is retrieving and interpreting the results. This process involves executing a search query and then analyzing the returned results.
```python # Searching for similar documents result = client.search( collection_name=COLLECTION_NAME, query_vector=models.NamedSparseVector( name="text", vector=models.SparseVector( indices=query_indices, values=query_values, ), ), with_vectors=True, ) result ``` In the above code, we execute a search against our collection using the prepared sparse vector query. The `client.search` method takes the collection name and the query vector as inputs. The query vector is constructed using the `models.NamedSparseVector`, which includes the indices and values derived from the query text. This is a crucial step in efficiently retrieving relevant documents. ```python ScoredPoint( id=1, version=0, score=3.4292831420898438, payload={}, vector={ "text": SparseVector( indices=[2001, 2002, 2010, 2018, 2032, ...], values=[ 1.0660614967346191, 1.391068458557129, 0.8903818726539612, 0.2502821087837219, ..., ], ) }, ) ``` The result, as shown above, is a `ScoredPoint` object containing the ID of the retrieved document, its version, a similarity score, and the sparse vector. The score is a key element as it quantifies the similarity between the query and the document, based on their respective vectors. To understand how this scoring works, we use the familiar dot product method: $$\text{Similarity}(\text{Query}, \text{Document}) = \sum_{i \in I} \text{Query}_i \times \text{Document}_i$$ This formula calculates the similarity score by multiplying corresponding elements of the query and document vectors and summing these products. This method is particularly effective with sparse vectors, where many elements are zero, leading to a computationally efficient process. The higher the score, the greater the similarity between the query and the document, making it a valuable metric for assessing the relevance of the retrieved documents. ## Hybrid search: combining sparse and dense vectors By combining search results from both dense and sparse vectors, you can achieve a hybrid search that is both efficient and accurate. Results from sparse vectors will guarantee, that all results with the required keywords are returned, while dense vectors will cover the semantically similar results. The mixture of dense and sparse results can be presented directly to the user, or used as a first stage of a two-stage retrieval process. Let's see how you can make a hybrid search query in Qdrant. First, you need to create a collection with both dense and sparse vectors: ```python client.recreate_collection( collection_name=COLLECTION_NAME, vectors_config={ "text-dense": models.VectorParams( size=1536, # OpenAI Embeddings distance=models.Distance.COSINE, ) }, sparse_vectors_config={ "text-sparse": models.SparseVectorParams( index=models.SparseIndexParams( on_disk=False, ) ) }, ) ``` Then, assuming you have upserted both dense and sparse vectors, you can query them together: ```python query_text = "Who was Arthur Ashe?" # Compute sparse and dense vectors query_indices, query_values = compute_sparse_vector(query_text) query_dense_vector = compute_dense_vector(query_text) client.search_batch( collection_name=COLLECTION_NAME, requests=[ models.SearchRequest( vector=models.NamedVector( name="text-dense", vector=query_dense_vector, ), limit=10, ), models.SearchRequest( vector=models.NamedSparseVector( name="text-sparse", vector=models.SparseVector( indices=query_indices, values=query_values, ), ), limit=10, ), ], ) ``` The result will be a pair of result lists, one for dense and one for sparse vectors. 
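The next section discusses several ways to combine these two result lists. As a concrete preview of one of them, here is a minimal, illustrative sketch of Reciprocal Rank Fusion applied on the client side. It assumes the `client.search_batch` call above was assigned to a variable named `batch_results` (a name introduced here for illustration), and `k=60` is just a commonly used constant, not a Qdrant setting:

```python
def reciprocal_rank_fusion(result_lists, k=60):
    """Fuse several ranked lists of ScoredPoint into a single {point_id: score} ranking."""
    fused = {}
    for results in result_lists:
        for rank, point in enumerate(results):
            # Each document earns 1 / (k + rank) from every list it appears in
            fused[point.id] = fused.get(point.id, 0.0) + 1.0 / (k + rank + 1)
    # Highest fused score first
    return dict(sorted(fused.items(), key=lambda item: item[1], reverse=True))


# Hypothetical usage with the two lists returned by the `search_batch` call above
dense_results, sparse_results = batch_results
fused_ranking = reciprocal_rank_fusion([dense_results, sparse_results])
```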
Having those results, there are several ways to combine them:

### Mixing or fusion

You can mix the results from both dense and sparse vectors, based purely on their relative scores. This is a simple and effective approach, but it doesn't take into account the semantic similarity between the results. Among the [popular mixing methods](https://medium.com/plain-simple-software/distribution-based-score-fusion-dbsf-a-new-approach-to-vector-search-ranking-f87c37488b18) are:

- Reciprocal Rank Fusion (RRF)
- Relative Score Fusion (RSF)
- Distribution-Based Score Fusion (DBSF)

{{< figure src=/articles_data/sparse-vectors/mixture.png caption="Relative Score Fusion" width=80% >}}

[Ranx](https://github.com/AmenRa/ranx) is a great library for mixing results from different sources.

### Re-ranking

You can use the obtained results as the first stage of a two-stage retrieval process. In the second stage, you can re-rank the results from the first stage using a more complex model, such as [Cross-Encoders](https://www.sbert.net/examples/applications/cross-encoder/README.html) or services like [Cohere Rerank](https://txt.cohere.com/rerank/).

And that's it! You've successfully achieved hybrid search with Qdrant!

## Additional resources

For those who want to dive deeper, here are the top papers on the topic, most of which have code available:

1. Problem Motivation: [Sparse Overcomplete Word Vector Representations](https://ar5iv.org/abs/1506.02004?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors)
1. [SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval](https://ar5iv.org/abs/2109.10086?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors)
1. [SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking](https://ar5iv.org/abs/2107.05720?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors)
1. Late Interaction - [ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction](https://ar5iv.org/abs/2112.01488?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors)
1. [SparseEmbed: Learning Sparse Lexical Representations with Contextual Embeddings for Retrieval](https://research.google/pubs/pub52289/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors)

**Why just read when you can try it out?**

We've packed an easy-to-use Colab for you on how to make a sparse vector: [Sparse Vectors Single Encoder Demo](https://colab.research.google.com/drive/1wa2Yr5BCOgV0MTOFFTude99BOXCLHXky?usp=sharing). Run it, tinker with it, and start seeing the magic unfold in your projects. We can't wait to hear how you use it!

## Conclusion

Alright, folks, let's wrap it up. Better search isn't a 'nice-to-have,' it's a game-changer, and Qdrant can get you there.

Got questions? Our [Discord community](https://qdrant.to/discord?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) is teeming with answers.

If you enjoyed reading this, why not sign up for our [newsletter](/subscribe/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) to stay ahead of the curve?

And, of course, a big thanks to you, our readers, for pushing us to make ranking better for everyone.
qdrant-landing/content/articles/triplet-loss.md
---
title: Triplet Loss - Advanced Intro
short_description: "What are the advantages of Triplet Loss and how to efficiently implement it?"
description: "What are the advantages of Triplet Loss over Contrastive loss and how to efficiently implement it?"
social_preview_image: /articles_data/triplet-loss/social_preview.jpg
preview_dir: /articles_data/triplet-loss/preview
small_preview_image: /articles_data/triplet-loss/icon.svg
weight: 30
author: Yusuf Sarıgöz
author_link: https://medium.com/@yusufsarigoz
date: 2022-03-24T15:12:00+03:00
# aliases: [ /articles/triplet-loss/ ]
---

## What is Triplet Loss?

Triplet Loss was first introduced in [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/abs/1503.03832) in 2015, and it has been one of the most popular loss functions for supervised similarity or metric learning ever since. In its simplest explanation, Triplet Loss encourages that dissimilar pairs be distant from any similar pairs by at least a certain margin value. Mathematically, the loss value can be calculated as $L = max(d(a, p) - d(a, n) + m, 0)$, where:

- $p$, i.e., positive, is a sample that has the same label as $a$, i.e., anchor,
- $n$, i.e., negative, is another sample that has a label different from $a$,
- $d$ is a function to measure the distance between pairs of these samples,
- and $m$ is a margin value to keep negative samples far apart.

The paper uses Euclidean distance, but it is equally valid to use any other distance metric, e.g., cosine distance.

The function has a learning objective that can be visualized as in the following:

{{< figure src=/articles_data/triplet-loss/loss_objective.png caption="Triplet Loss learning objective" >}}

Notice that Triplet Loss does not have the side effect of urging the model to encode anchor and positive samples into the same point in the vector space, as Contrastive Loss does. This lets Triplet Loss tolerate some intra-class variance, unlike Contrastive Loss, as the latter forces the distance between an anchor and any positive essentially to $0$. In other words, Triplet Loss allows clusters to be stretched to include outliers while still ensuring a margin between samples from different clusters, e.g., negative pairs.

Additionally, Triplet Loss is less greedy. Unlike Contrastive Loss, it is already satisfied when different samples are easily distinguishable from similar ones. It does not change the distances in a positive cluster if there is no interference from negative examples. This is due to the fact that Triplet Loss tries to ensure a margin between distances of negative pairs and distances of positive pairs. However, Contrastive Loss takes into account the margin value only when comparing dissimilar pairs, and it does not care at all where similar pairs are at that moment. This means that Contrastive Loss may reach a local minimum earlier, while Triplet Loss may continue to organize the vector space in a better state.

Let's demonstrate how the two loss functions organize the vector space with animations. For simpler visualization, the vectors are represented by points in a 2-dimensional space, and they are selected randomly from a normal distribution.

{{< figure src=/articles_data/triplet-loss/contrastive.gif caption="Animation that shows how Contrastive Loss moves points in the course of training." >}}

{{< figure src=/articles_data/triplet-loss/triplet.gif caption="Animation that shows how Triplet Loss moves points in the course of training."
>}}

From the mathematical interpretations of the two loss functions, it is clear that Triplet Loss is theoretically stronger, but Triplet Loss has additional tricks that help it work better. Most importantly, Triplet Loss introduces online triplet mining strategies, i.e., automatically forming the most useful triplets.

## Why triplet mining matters

The formulation of Triplet Loss demonstrates that it works on three objects at a time:

- `anchor`,
- `positive` - a sample that has the same label as the anchor,
- and `negative` - a sample with a different label from the anchor and the positive.

In a naive implementation, we could form such triplets of samples at the beginning of each epoch and then feed batches of such triplets to the model throughout that epoch. This is called "offline strategy." However, this would not be so efficient for several reasons:

- It needs to pass $3n$ samples to get loss values for $n$ triplets.
- Not all of these triplets will be useful for the model to learn anything, i.e., yield a positive loss value.
- Even if we form "useful" triplets at the beginning of each epoch with one of the methods that I will be implementing in this series, they may become "useless" at some point in the epoch as the model weights will be constantly updated.

Instead, we can get a batch of $n$ samples and their associated labels, and form triplets on the fly. That is called "online strategy." Normally, this gives $n^3$ possible triplets, but only a subset of such possible triplets will be actually valid. Even in this case, we will have loss values calculated from many more triplets than with the offline strategy.

Given a triplet of `(a, p, n)`, it is valid only if:

- `a` and `p` have the same label,
- `a` and `p` are distinct samples,
- and `n` has a different label from `a` and `p`.

These constraints may seem to require expensive computation with nested loops, but they can be efficiently implemented with tricks such as a distance matrix, masking, and broadcasting. The rest of this series will focus on the implementation of these tricks.

## Distance matrix

A distance matrix is a matrix of shape $(n, n)$ to hold distance values between all possible pairs made from items in two $n$-sized collections. This matrix can be used to vectorize calculations that would otherwise need inefficient loops. Its calculation can be optimized as well, and we will implement the [Euclidean Distance Matrix Trick (PDF)](https://www.robots.ox.ac.uk/~albanie/notes/Euclidean_distance_trick.pdf) explained by Samuel Albanie. You may want to read this three-page document for the full intuition of the trick, but a brief explanation is as follows:

- Calculate the dot product of two collections of vectors, e.g., embeddings in our case.
- Extract the diagonal from this matrix that holds the squared Euclidean norm of each embedding.
- Calculate the squared Euclidean distance matrix based on the following equation: $||a - b||^2 = ||a||^2 - 2 \langle a, b \rangle + ||b||^2$
- Get the square root of this matrix for non-squared distances.

We will implement it in PyTorch, so let's start with imports.
```python import torch import torch.nn as nn import torch.nn.functional as F eps = 1e-8 # an arbitrary small value to be used for numerical stability tricks ``` --- ```python def euclidean_distance_matrix(x): """Efficient computation of Euclidean distance matrix Args: x: Input tensor of shape (batch_size, embedding_dim) Returns: Distance matrix of shape (batch_size, batch_size) """ # step 1 - compute the dot product # shape: (batch_size, batch_size) dot_product = torch.mm(x, x.t()) # step 2 - extract the squared Euclidean norm from the diagonal # shape: (batch_size,) squared_norm = torch.diag(dot_product) # step 3 - compute squared Euclidean distances # shape: (batch_size, batch_size) distance_matrix = squared_norm.unsqueeze(0) - 2 * dot_product + squared_norm.unsqueeze(1) # get rid of negative distances due to numerical instabilities distance_matrix = F.relu(distance_matrix) # step 4 - compute the non-squared distances # handle numerical stability # derivative of the square root operation applied to 0 is infinite # we need to handle by setting any 0 to eps mask = (distance_matrix == 0.0).float() # use this mask to set indices with a value of 0 to eps distance_matrix += mask * eps # now it is safe to get the square root distance_matrix = torch.sqrt(distance_matrix) # undo the trick for numerical stability distance_matrix *= (1.0 - mask) return distance_matrix ``` ## Invalid triplet masking Now that we can compute a distance matrix for all possible pairs of embeddings in a batch, we can apply broadcasting to enumerate distance differences for all possible triplets and represent them in a tensor of shape `(batch_size, batch_size, batch_size)`. However, only a subset of these $n^3$ triplets are actually valid as I mentioned earlier, and we need a corresponding mask to compute the loss value correctly. We will implement such a helper function in three steps: - Compute a mask for distinct indices, e.g., `(i != j and j != k)`. - Compute a mask for valid anchor-positive-negative triplets, e.g., `labels[i] == labels[j] and labels[j] != labels[k]`. - Combine two masks. ```python def get_triplet_mask(labels): """compute a mask for valid triplets Args: labels: Batch of integer labels. shape: (batch_size,) Returns: Mask tensor to indicate which triplets are actually valid. Shape: (batch_size, batch_size, batch_size) A triplet is valid if: `labels[i] == labels[j] and labels[i] != labels[k]` and `i`, `j`, `k` are different. 
""" # step 1 - get a mask for distinct indices # shape: (batch_size, batch_size) indices_equal = torch.eye(labels.size()[0], dtype=torch.bool, device=labels.device) indices_not_equal = torch.logical_not(indices_equal) # shape: (batch_size, batch_size, 1) i_not_equal_j = indices_not_equal.unsqueeze(2) # shape: (batch_size, 1, batch_size) i_not_equal_k = indices_not_equal.unsqueeze(1) # shape: (1, batch_size, batch_size) j_not_equal_k = indices_not_equal.unsqueeze(0) # Shape: (batch_size, batch_size, batch_size) distinct_indices = torch.logical_and(torch.logical_and(i_not_equal_j, i_not_equal_k), j_not_equal_k) # step 2 - get a mask for valid anchor-positive-negative triplets # shape: (batch_size, batch_size) labels_equal = labels.unsqueeze(0) == labels.unsqueeze(1) # shape: (batch_size, batch_size, 1) i_equal_j = labels_equal.unsqueeze(2) # shape: (batch_size, 1, batch_size) i_equal_k = labels_equal.unsqueeze(1) # shape: (batch_size, batch_size, batch_size) valid_indices = torch.logical_and(i_equal_j, torch.logical_not(i_equal_k)) # step 3 - combine two masks mask = torch.logical_and(distinct_indices, valid_indices) return mask ``` ## Batch-all strategy for online triplet mining Now we are ready for actually implementing Triplet Loss itself. Triplet Loss involves several strategies to form or select triplets, and the simplest one is to use all valid triplets that can be formed from samples in a batch. This can be achieved in four easy steps thanks to utility functions we've already implemented: - Get a distance matrix of all possible pairs that can be formed from embeddings in a batch. - Apply broadcasting to this matrix to compute loss values for all possible triplets. - Set loss values of invalid or easy triplets to $0$. - Average the remaining positive values to return a scalar loss. I will start by implementing this strategy, and more complex ones will follow as separate posts. ```python class BatchAllTtripletLoss(nn.Module): """Uses all valid triplets to compute Triplet loss Args: margin: Margin value in the Triplet Loss equation """ def __init__(self, margin=1.): super().__init__() self.margin = margin def forward(self, embeddings, labels): """computes loss value. Args: embeddings: Batch of embeddings, e.g., output of the encoder. shape: (batch_size, embedding_dim) labels: Batch of integer labels associated with embeddings. shape: (batch_size,) Returns: Scalar loss value. 
""" # step 1 - get distance matrix # shape: (batch_size, batch_size) distance_matrix = euclidean_distance_matrix(embeddings) # step 2 - compute loss values for all triplets by applying broadcasting to distance matrix # shape: (batch_size, batch_size, 1) anchor_positive_dists = distance_matrix.unsqueeze(2) # shape: (batch_size, 1, batch_size) anchor_negative_dists = distance_matrix.unsqueeze(1) # get loss values for all possible n^3 triplets # shape: (batch_size, batch_size, batch_size) triplet_loss = anchor_positive_dists - anchor_negative_dists + self.margin # step 3 - filter out invalid or easy triplets by setting their loss values to 0 # shape: (batch_size, batch_size, batch_size) mask = get_triplet_mask(labels) triplet_loss *= mask # easy triplets have negative loss values triplet_loss = F.relu(triplet_loss) # step 4 - compute scalar loss value by averaging positive losses num_positive_losses = (triplet_loss > eps).float().sum() triplet_loss = triplet_loss.sum() / (num_positive_losses + eps) return triplet_loss ``` ## Conclusion I mentioned that Triplet Loss is different from Contrastive Loss not only mathematically but also in its sample selection strategies, and I implemented the batch-all strategy for online triplet mining in this post efficiently by using several tricks. There are other more complicated strategies such as batch-hard and batch-semihard mining, but their implementations, and discussions of the tricks I used for efficiency in this post, are worth separate posts of their own. The future posts will cover such topics and additional discussions on some tricks to avoid vector collapsing and control intra-class and inter-class variance.
qdrant-landing/content/articles/vector-similarity-beyond-search.md
--- title: "Vector Similarity: Going Beyond Full-Text Search | Qdrant" short_description: Explore how vector similarity enhances data discovery beyond full-text search, including diversity sampling and more! description: Discover how vector similarity expands data exploration beyond full-text search. Explore diversity sampling and more for enhanced data discovery! preview_dir: /articles_data/vector-similarity-beyond-search/preview small_preview_image: /articles_data/vector-similarity-beyond-search/icon.svg social_preview_image: /articles_data/vector-similarity-beyond-search/preview/social_preview.jpg weight: -1 author: Luis Cossío author_link: https://coszio.github.io/ date: 2023-08-08T08:00:00+03:00 draft: false keywords: - vector similarity - exploration - dissimilarity - discovery - diversity - recommendation --- # Vector Similarity: Unleashing Data Insights Beyond Traditional Search When making use of unstructured data, there are traditional go-to solutions that are well-known for developers: - **Full-text search** when you need to find documents that contain a particular word or phrase. - **[Vector search](https://qdrant.tech/documentation/overview/vector-search/)** when you need to find documents that are semantically similar to a given query. Sometimes people mix those two approaches, so it might look like the vector similarity is just an extension of full-text search. However, in this article, we will explore some promising new techniques that can be used to expand the use-case of unstructured data and demonstrate that vector similarity creates its own stack of data exploration tools. ## What is vector similarity search? Vector similarity offers a range of powerful functions that go far beyond those available in traditional full-text search engines. From dissimilarity search to diversity and recommendation, these methods can expand the cases in which vectors are useful. Vector Databases, which are designed to store and process immense amounts of vectors, are the first candidates to implement these new techniques and allow users to exploit their data to its fullest. ## Vector similarity search vs. full-text search While there is an intersection in the functionality of these two approaches, there is also a vast area of functions that is unique to each of them. For example, the exact phrase matching and counting of results are native to full-text search, while vector similarity support for this type of operation is limited. On the other hand, vector similarity easily allows cross-modal retrieval of images by text or vice-versa, which is impossible with full-text search. This mismatch in expectations might sometimes lead to confusion. Attempting to use a vector similarity as a full-text search can result in a range of frustrations, from slow response times to poor search results, to limited functionality. As an outcome, they are getting only a fraction of the benefits of vector similarity. {{< figure width=70% src=/articles_data/vector-similarity-beyond-search/venn-diagram.png caption="Full-text search and Vector Similarity Functionality overlap" >}} Below we will explore why the vector similarity stack deserves new interfaces and design patterns that will unlock the full potential of this technology, which can still be used in conjunction with full-text search. ## New ways to interact with similarities Having a vector representation of unstructured data unlocks new ways of interacting with it. 
For example, it can be used to measure semantic similarity between words, to cluster words or documents based on their meaning, to find related images, or even to generate new text. However, these interactions can go beyond finding the nearest neighbors (kNN).

There are several other techniques that can be leveraged by vector representations beyond the traditional kNN search. These include dissimilarity search, diversity search, recommendations, and discovery functions.

## Dissimilarity search

Dissimilarity —or farthest— search is the most straightforward concept after nearest-neighbor search, and it is one that can't be reproduced in a traditional full-text search. It aims to find the most dissimilar or distant documents across the collection.

{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/dissimilarity.png caption="Dissimilarity Search" >}}

Unlike full-text match, vector similarity can compare any pair of documents (or points) and assign a similarity score. It doesn't rely on keywords or other metadata. With vector similarity, we can easily achieve a dissimilarity search by inverting the search objective from maximizing similarity to minimizing it. The dissimilarity search can find items in areas where previously no other search could be used. Let's look at a few examples.

### Case: mislabeling detection

For example, suppose we have a dataset of furniture in which we have classified our items by what kind of furniture they are: tables, chairs, lamps, etc. To ensure our catalog is accurate, we can use a dissimilarity search to highlight items that are most likely mislabeled.

To do this, we only need to search for the most dissimilar items using the embedding of the category title itself as a query. This can be too broad, so, by combining it with filters —a [Qdrant superpower](/articles/filtrable-hnsw/)—, we can narrow down the search to a specific category.

{{< figure src=/articles_data/vector-similarity-beyond-search/mislabelling.png caption="Mislabeling Detection" >}}

The output of this search can be further processed with heavier models or human supervision to detect actual mislabeling.

### Case: outlier detection

In some cases, we might not even have labels, but it is still possible to try to detect anomalies in our dataset. Dissimilarity search can be used for this purpose as well.

{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/anomaly-detection.png caption="Anomaly Detection" >}}

The only thing we need is a bunch of reference points that we consider "normal". Then we can search for the most dissimilar points to this reference set and use them as candidates for further analysis.

## Diversity search

Even with no input vector provided, (dis-)similarity can improve the overall selection of items from the dataset. The naive approach is to do random sampling. However, unless our dataset has a uniform distribution, the results of such sampling might be biased toward more frequent types of items.

{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/diversity-random.png caption="Example of random sampling" >}}

The similarity information can increase the diversity of those results and make the first overview more interesting. That is especially useful when users do not yet know what they are looking for and want to explore the dataset.
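As a concrete illustration of the idea described next, here is a small, conceptual sketch of one possible client-side approach: greedily picking, at each step, the candidate that is farthest from everything already selected. The `candidates` array is assumed to be a NumPy matrix of vectors already fetched from the collection; this is not a built-in Qdrant API.

```python
import numpy as np

def diverse_sample(candidates, n):
    """Greedy max-min selection: each pick maximizes its distance to the closest picked point."""
    selected = [0]  # start from an arbitrary candidate
    # Pairwise Euclidean distances between all candidates
    dists = np.linalg.norm(candidates[:, None, :] - candidates[None, :, :], axis=-1)
    while len(selected) < n:
        # For every candidate, distance to its nearest already-selected point
        min_dist_to_selected = dists[:, selected].min(axis=1)
        selected.append(int(min_dist_to_selected.argmax()))
    return selected

# Usage: pick 10 diverse points out of the fetched candidate vectors
sample_ids = diverse_sample(candidates, n=10)
```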
{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/diversity-force.png caption="Example of similarity-based sampling" >}}

The power of vector similarity, in the context of being able to compare any two points, makes a diverse selection of the collection possible without any labeling efforts. By maximizing the distance between all points in the response, we can have an algorithm that will sequentially output dissimilar results.

{{< figure src=/articles_data/vector-similarity-beyond-search/diversity.png caption="Diversity Search" >}}

Some forms of diversity sampling are already used in the industry and are known as [Maximal Marginal Relevance](https://python.langchain.com/docs/integrations/vectorstores/qdrant#maximum-marginal-relevance-search-mmr) (MMR). Techniques like this were developed to enhance similarity search in general-purpose search APIs. However, there is still room for new ideas, particularly regarding diversity retrieval. By utilizing more advanced vector-native engines, it could be possible to take use cases to the next level and achieve even better results.

## Vector similarity recommendations

Vector similarity can go beyond a single query vector. It can combine multiple positive and negative examples for a more accurate retrieval. Building a recommendation API in a vector database can take advantage of using already stored vectors as part of the queries, by specifying the point id. Doing this, we can skip query-time neural network inference and make the recommendation search faster.

There are multiple ways to implement recommendations with vectors.

### Vector-features recommendations

The first approach is to take all positive and negative examples and average them to create a single query vector. In this technique, the components shared with the negative vectors are canceled out, and the resulting vector is a combination of the features present in the positive examples but not in the negative ones.

{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/feature-based-recommendations.png caption="Vector-Features Based Recommendations" >}}

This approach is already implemented in Qdrant, and while it works great when the vectors are assumed to have each of their dimensions represent some kind of feature of the data, sometimes distances are a better tool to judge negative and positive examples.

### Relative distance recommendations

Another approach is to use the distances from the negative examples to the candidates to create exclusion areas. In this technique, we perform searches near the positive examples while excluding the points that are closer to a negative example than to a positive one.

{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/relative-distance-recommendations.png caption="Relative Distance Recommendations" >}}

The main use-case of both approaches —of course— is to take some history of user interactions and recommend new items based on it.

## Discovery

In many exploration scenarios, the desired destination is not known in advance. The search process in this case can consist of multiple steps, where each step would provide a little more information to guide the search in the right direction.

To get more intuition about the possible ways to implement this approach, let's take a look at how similarity models are trained in the first place:

The most well-known loss function used to train similarity models is a [triplet-loss](https://en.wikipedia.org/wiki/Triplet_loss).
In this loss, the model is trained by fitting the information of relative similarity of 3 objects: the Anchor, Positive, and Negative examples.

{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/triplet-loss.png caption="Triplet Loss" >}}

Using the same mechanics, we can look at the training process from the other side. Given a trained model, the user can provide positive and negative examples, and the goal of the discovery process is then to find suitable anchors across the stored collection of vectors.

<!-- ToDo: image where we know positive and negative -->

{{< figure width=60% src=/articles_data/vector-similarity-beyond-search/discovery.png caption="Reversed triplet loss" >}}

Multiple positive-negative pairs can be provided to make the discovery process more accurate. It is worth mentioning that, as in NN training, the dataset may contain noise and some portion of contradictory information, so the discovery process should be tolerant of these kinds of data imperfections.

<!-- Image with multiple pairs -->

{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/discovery-noise.png caption="Sample pairs" >}}

The important difference between this and the recommendation method is that the positive-negative pairs in the discovery method don't assume that the final result should be close to the positive example; they only assume that it should be closer to the positive than to the negative one.

{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/discovery-vs-recommendations.png caption="Discovery vs Recommendation" >}}

In combination with filtering or similarity search, the additional context information provided by the discovery pairs can be used as a re-ranking factor.

## A new API stack for vector databases

When you introduce vector similarity capabilities into your text search engine, you extend its functionality. However, it doesn't work the other way around, as vector similarity as a concept is much broader than some task-specific implementations of full-text search.

[Vector databases](https://qdrant.tech/), which introduce built-in full-text functionality, must make several compromises:

- Choose a specific full-text search variant.
- Either sacrifice API consistency or limit vector similarity functionality to only basic kNN search.
- Introduce additional complexity to the system.

Qdrant, on the contrary, puts vector similarity at the center of its API and architecture, which allows us to move towards a new stack of vector-native operations. We believe that this is the future of vector databases, and we are excited to see what new use-cases will be unlocked by these techniques.

## Key takeaways:

- Vector similarity offers advanced data exploration tools beyond traditional full-text search, including dissimilarity search, diversity sampling, and recommendation systems.
- Practical applications of vector similarity include improving data quality through mislabeling detection and anomaly identification.
- Enhanced user experiences are achieved by leveraging advanced search techniques, providing users with intuitive data exploration, and improving decision-making processes.

Ready to unlock the full potential of your data? [Try a free demo](https://qdrant.tech/contact-us/) to explore how vector similarity can revolutionize your data insights and drive smarter decision-making.
qdrant-landing/content/articles/web-ui-gsoc.md
--- title: Google Summer of Code 2023 - Web UI for Visualization and Exploration short_description: Gsoc'23 Web UI for Visualization and Exploration description: My journey as a Google Summer of Code 2023 student working on the "Web UI for Visualization and Exploration" project for Qdrant. preview_dir: /articles_data/web-ui-gsoc/preview small_preview_image: /articles_data/web-ui-gsoc/icon.svg social_preview_image: /articles_data/web-ui-gsoc/preview/social_preview.jpg weight: -20 author: Kartik Gupta author_link: https://kartik-gupta-ij.vercel.app/ date: 2023-08-28T08:00:00+03:00 draft: false keywords: - vector reduction - console - gsoc'23 - vector similarity - exploration - recommendation --- ## Introduction Hello everyone! My name is Kartik Gupta, and I am thrilled to share my coding journey as part of the Google Summer of Code 2023 program. This summer, I had the incredible opportunity to work on an exciting project titled "Web UI for Visualization and Exploration" for Qdrant, a vector search engine. In this article, I will take you through my experience, challenges, and achievements during this enriching coding journey. ## Project Overview Qdrant is a powerful vector search engine widely used for similarity search and clustering. However, it lacked a user-friendly web-based UI for data visualization and exploration. My project aimed to bridge this gap by developing a web-based user interface that allows users to easily interact with and explore their vector data. ## Milestones and Achievements The project was divided into six milestones, each focusing on a specific aspect of the web UI development. Let's go through each of them and my achievements during the coding period. **1. Designing a friendly UI on Figma** I started by designing the user interface on Figma, ensuring it was easy to use, visually appealing, and responsive on different devices. I focused on usability and accessibility to create a seamless user experience. ( [Figma Design](https://www.figma.com/file/z54cAcOErNjlVBsZ1DrXyD/Qdant?type=design&node-id=0-1&mode=design&t=Pu22zO2AMFuGhklG-0)) **2. Building the layout** The layout route served as a landing page with an overview of the application's features and navigation links to other routes. **3. Creating a view collection route** This route enabled users to view a list of collections available in the application. Users could click on a collection to see more details, including the data and vectors associated with it. {{< figure src=/articles_data/web-ui-gsoc/collections-page.png caption="Collection Page" alt="Collection Page" >}} **4. Developing a data page with "find similar" functionality** I implemented a data page where users could search for data and find similar data using a recommendation API. The recommendation API suggested similar data based on the Data's selected ID, providing valuable insights. {{< figure src=/articles_data/web-ui-gsoc/points-page.png caption="Points Page" alt="Points Page" >}} **5. Developing query editor page libraries** This milestone involved creating a query editor page that allowed users to write queries in a custom language. The editor provided syntax highlighting, autocomplete, and error-checking features for a seamless query writing experience. {{< figure src=/articles_data/web-ui-gsoc/console-page.png caption="Query Editor Page" alt="Query Editor Page" >}} **6. 
Developing a route for visualizing vector data points**

This is done by reducing the n-dimensional vectors to 2-D points, which are displayed along with their respective payloads.

{{< figure src=/articles_data/web-ui-gsoc/visualization-page.png caption="Vector Visualization Page" alt="visualization-page" >}}

## Challenges and Learning

Throughout the project, I encountered a series of challenges that stretched my engineering capabilities and provided unique growth opportunities. From mastering new libraries and technologies to ensuring the user interface (UI) was both visually appealing and user-friendly, every obstacle became a stepping stone toward enhancing my skills as a developer. Ultimately, each challenge provided an opportunity to learn and grow as a developer. I acquired valuable experience in vector search and dimension reduction techniques.

The most significant learning for me was the importance of effective project management. Setting realistic timelines, collaborating with mentors, and staying proactive with feedback allowed me to complete the milestones efficiently.

### Technical Learning and Skill Development

One of the most significant aspects of this journey was diving into the intricate world of vector search and dimension reduction techniques. These areas, previously unfamiliar to me, required rigorous study and exploration. Learning how to process vast amounts of data efficiently and extract meaningful insights through these techniques was both challenging and rewarding.

### Effective Project Management

Undoubtedly, the most impactful lesson was the art of effective project management. I quickly grasped the importance of setting realistic timelines and goals. Collaborating closely with mentors and maintaining proactive communication proved indispensable. This approach enabled me to navigate the complex development process and successfully achieve the project's milestones.

### Overcoming Technical Challenges

#### Autocomplete Feature in Console

One particularly intriguing challenge emerged while working on the autocomplete feature within the console. Finding a solution was proving elusive until a breakthrough came from an unexpected direction. My mentor, Andrey, proposed creating a separate module that could support autocomplete based on OpenAPI for our custom language. This ingenious approach not only resolved the issue but also showcased the power of collaborative problem-solving.

#### Optimization with Web Workers

The high-processing demands of vector reduction posed another significant challenge. Initially, this task was straining browsers and causing performance issues. The solution materialized in the form of web workers—an independent processing instance that alleviated the strain on browsers. However, a new question arose: how to terminate these workers effectively? With invaluable insights from my mentor, I gained a deeper understanding of web worker dynamics and successfully tackled this challenge.

#### Console Integration Complexity

Integrating the console interaction into the application presented multifaceted challenges. Crafting a custom language in Monaco, parsing text to make API requests, and synchronizing the entire process demanded meticulous attention to detail. Overcoming these hurdles was a testament to the complexity of real-world engineering endeavours.

#### Codelens Multiplicity Issue

An unexpected issue cropped up during the development process: the codelens (run button) registered multiple times, leading to undesired behaviour.
This hiccup underscored the importance of thorough testing and debugging, even in seemingly straightforward features. ### Key Learning Points Amidst these challenges, I garnered valuable insights that have significantly enriched my engineering prowess: **Vector Reduction Techniques**: Navigating the realm of vector reduction techniques provided a deep understanding of how to process and interpret data efficiently. This knowledge opens up new avenues for developing data-driven applications in the future. **Web Workers Efficiency**: Mastering the intricacies of web workers not only resolved performance concerns but also expanded my repertoire of optimization strategies. This newfound proficiency will undoubtedly find relevance in various future projects. **Monaco Editor and UI Frameworks**: Working extensively with the Monaco Editor, Material-UI (MUI), and Vite enriched my familiarity with these essential tools. I honed my skills in integrating complex UI components seamlessly into applications. ## Areas for Improvement and Future Enhancements While reflecting on this transformative journey, I recognize several areas that offer room for improvement and future enhancements: 1. Enhanced Autocomplete: Further refining the autocomplete feature to support key-value suggestions in JSON structures could greatly enhance the user experience. 2. Error Detection in Console: Integrating the console's error checker with OpenAPI could enhance its accuracy in identifying errors and offering precise suggestions for improvement. 3. Expanded Vector Visualization: Exploring additional visualization methods and optimizing their performance could elevate the utility of the vector visualization route. ## Conclusion Participating in the Google Summer of Code 2023 and working on the "Web UI for Visualization and Exploration" project has been an immensely rewarding experience. I am grateful for the opportunity to contribute to Qdrant and develop a user-friendly interface for vector data exploration. I want to express my gratitude to my mentors and the entire Qdrant community for their support and guidance throughout this journey. This experience has not only improved my coding skills but also instilled a deeper passion for web development and data analysis. As my coding journey continues beyond this project, I look forward to applying the knowledge and experience gained here to future endeavours. I am excited to see how Qdrant evolves with the newly developed web UI and how it positively impacts users worldwide. Thank you for joining me on this coding adventure, and I hope to share more exciting projects in the future! Happy coding!
qdrant-landing/content/articles/what-are-embeddings.md
--- title: "What are Vector Embeddings? - Revolutionize Your Search Experience" draft: false slug: what-are-embeddings? short_description: Explore the power of vector embeddings. Learn to use numerical machine learning representations to build a personalized Neural Search Service with Fastembed. description: Discover the power of vector embeddings. Learn how to harness the potential of numerical machine learning representations to create a personalized Neural Search Service with FastEmbed. preview_dir: /articles_data/what-are-embeddings/preview weight: -102 social_preview_image: /articles_data/what-are-embeddings/preview/social-preview.jpg small_preview_image: /articles_data/what-are-embeddings/icon.svg date: 2024-02-06T15:29:33-03:00 author: Sabrina Aquino author_link: https://github.com/sabrinaaquino featured: true tags: - vector-search - vector-database - embeddings - machine-learning - artificial intelligence --- > **Embeddings** are numerical machine learning representations of the semantic of the input data. They capture the meaning of complex, high-dimensional data, like text, images, or audio, into vectors. Enabling algorithms to process and analyze the data more efficiently. You know when you’re scrolling through your social media feeds and the content just feels incredibly tailored to you? There's the news you care about, followed by a perfect tutorial with your favorite tech stack, and then a meme that makes you laugh so hard you snort. Or what about how YouTube recommends videos you ended up loving. It’s by creators you've never even heard of and you didn’t even send YouTube a note about your ideal content lineup. This is the magic of embeddings. These are the result of **deep learning models** analyzing the data of your interactions online. From your likes, shares, comments, searches, the kind of content you linger on, and even the content you decide to skip. It also allows the algorithm to predict future content that you are likely to appreciate. The same embeddings can be repurposed for search, ads, and other features, creating a highly personalized user experience. ![How embeddings are applied to perform recommendantions and other use cases](/articles_data/what-are-embeddings/Embeddings-Use-Case.jpg) They make [high-dimensional](https://www.sciencedirect.com/topics/computer-science/high-dimensional-data) data more manageable. This reduces storage requirements, improves computational efficiency, and makes sense of a ton of **unstructured** data. ## Why use vector embeddings? The **nuances** of natural language or the hidden **meaning** in large datasets of images, sounds, or user interactions are hard to fit into a table. Traditional relational databases can't efficiently query most types of data being currently used and produced, making the **retrieval** of this information very limited. In the embeddings space, synonyms tend to appear in similar contexts and end up having similar embeddings. The space is a system smart enough to understand that "pretty" and "attractive" are playing for the same team. Without being explicitly told so. That’s the magic. At their core, vector embeddings are about semantics. They take the idea that "a word is known by the company it keeps" and apply it on a grand scale. 
![Example of how synonyms are placed closer together in the embeddings space](/articles_data/what-are-embeddings/Similar-Embeddings.jpg) This capability is crucial for creating search systems, recommendation engines, retrieval augmented generation (RAG) and any application that benefits from a deep understanding of content. ## How do embeddings work? Embeddings are created through neural networks. They capture complex relationships and semantics into [dense vectors](https://www1.se.cuhk.edu.hk/~seem5680/lecture/semantics-with-dense-vectors-2018.pdf) which are more suitable for machine learning and data processing applications. They can then project these vectors into a proper **high-dimensional** space, specifically, a [Vector Database](/articles/what-is-a-vector-database/). ![The process for turning raw data into embeddings and placing them into the vector space](/articles_data/what-are-embeddings/How-Embeddings-Work.jpg) The meaning of a data point is implicitly defined by its **position** on the vector space. After the vectors are stored, we can use their spatial properties to perform [nearest neighbor searches](https://en.wikipedia.org/wiki/Nearest_neighbor_search#:~:text=Nearest%20neighbor%20search%20(NNS)%2C,the%20larger%20the%20function%20values.). These searches retrieve semantically similar items based on how close they are in this space. > The quality of the vector representations drives the performance. The embedding model that works best for you depends on your use case. ### Creating vector embeddings Embeddings translate the complexities of human language to a format that computers can understand. It uses neural networks to assign **numerical values** to the input data, in a way that similar data has similar values. ![The process of using Neural Networks to create vector embeddings](/articles_data/what-are-embeddings/How-Do-Embeddings-Work_.jpg) For example, if I want to make my computer understand the word 'right', I can assign a number like 1.3. So when my computer sees 1.3, it sees the word 'right’. Now I want to make my computer understand the context of the word ‘right’. I can use a two-dimensional vector, such as [1.3, 0.8], to represent 'right'. The first number 1.3 still identifies the word 'right', but the second number 0.8 specifies the context. We can introduce more dimensions to capture more nuances. For example, a third dimension could represent formality of the word, a fourth could indicate its emotional connotation (positive, neutral, negative), and so on. The evolution of this concept led to the development of embedding models like [Word2Vec](https://en.wikipedia.org/wiki/Word2vec) and [GloVe](https://en.wikipedia.org/wiki/GloVe). They learn to understand the context in which words appear to generate high-dimensional vectors for each word, capturing far more complex properties. ![How Word2Vec model creates the embeddings for a word](/articles_data/what-are-embeddings/Word2Vec-model.jpg) However, these models still have limitations. They generate a single vector per word, based on its usage across texts. This means all the nuances of the word "right" are blended into one vector representation. That is not enough information for computers to fully understand the context. So, how do we help computers grasp the nuances of language in different contexts? 
In other words, how do we differentiate between: * "your answer is right" * "turn right at the corner" * "everyone has the right to freedom of speech" Each of these sentences uses the word 'right', with different meanings. More advanced models like [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)) and [GPT](https://en.wikipedia.org/wiki/Generative_pre-trained_transformer) are deep learning models based on the [transformer architecture](https://arxiv.org/abs/1706.03762), which helps computers consider the full context of a word. These models pay attention to the entire context. The model understands the specific use of a word in its **surroundings**, and then creates different embeddings for each. ![How the BERT model creates the embeddings for a word](/articles_data/what-are-embeddings/BERT-model.jpg) But how does this process of understanding and interpreting work in practice? Think of the term "biophilic design", for example. To generate its embedding, the transformer architecture can use the following contexts: * "Biophilic design incorporates natural elements into architectural planning." * "Offices with biophilic design elements report higher employee well-being." * "...plant life, natural light, and water features are key aspects of biophilic design." And then it compares contexts to known architectural and design principles: * "Sustainable designs prioritize environmental harmony." * "Ergonomic spaces enhance user comfort and health." The model creates a vector embedding for "biophilic design" that encapsulates the concept of integrating natural elements into man-made environments, augmented with attributes that highlight the correlation between this integration and its positive impact on health, well-being, and environmental sustainability. ### Integration with embedding APIs Selecting the right embedding model for your use case is crucial to your application performance. Qdrant makes it easier by offering seamless integration with the best selection of embedding APIs, including [Cohere](/documentation/embeddings/cohere/), [Gemini](/documentation/embeddings/gemini/), [Jina Embeddings](/documentation/embeddings/jina-embeddings/), [OpenAI](/documentation/embeddings/openai/), [Aleph Alpha](/documentation/embeddings/aleph-alpha/), [Fastembed](https://github.com/qdrant/fastembed), and [AWS Bedrock](/documentation/embeddings/bedrock/). If you’re looking for NLP and rapid prototyping, including language translation, question-answering, and text generation, OpenAI is a great choice. Gemini is ideal for image search, duplicate detection, and clustering tasks. Fastembed, which we’ll use in the example below, is designed for efficiency and speed, great for applications needing low-latency responses, such as autocomplete and instant content recommendations. We plan to go deeper into selecting the best model based on performance, cost, integration ease, and scalability in a future post. ## Create a neural search service with FastEmbed Now that you’re familiar with the core concepts around vector embeddings, how about starting to build your own [Neural Search Service](/documentation/tutorials/neural-search/)? This tutorial guides you through a practical application of how to use Qdrant for document management based on descriptions of companies from [startups-list.com](https://www.startups-list.com/). You'll go from embedding data and integrating it with Qdrant's vector database to constructing a search API and finally deploying your solution with FastAPI. 
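As a quick preview of what the tutorial covers, here is a minimal sketch of the embed-store-search loop. It assumes the `fastembed` and `qdrant-client` Python packages and uses an in-memory Qdrant instance; the collection name and documents are made up for illustration, and the exact `TextEmbedding` interface may differ between FastEmbed versions.

```python
from fastembed import TextEmbedding  # assumed FastEmbed interface
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

documents = [
    "AI-powered platform that matches startups with investors.",
    "Delivery service using autonomous drones for last-mile logistics.",
]

# Embed the documents locally with FastEmbed.
model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")
vectors = list(model.embed(documents))

# Store the vectors and their payloads in an in-memory Qdrant instance.
client = QdrantClient(":memory:")
client.create_collection(
    collection_name="startups",
    vectors_config=VectorParams(size=len(vectors[0]), distance=Distance.COSINE),
)
client.upsert(
    collection_name="startups",
    points=[
        PointStruct(id=i, vector=vec.tolist(), payload={"description": doc})
        for i, (doc, vec) in enumerate(zip(documents, vectors))
    ],
)

# Search: embed the query the same way and retrieve the closest description.
query_vector = list(model.embed(["drone delivery companies"]))[0]
hits = client.search(
    collection_name="startups", query_vector=query_vector.tolist(), limit=1
)
print(hits[0].payload["description"])
```

The full tutorial swaps the toy documents for the startups dataset and wraps the search call in a FastAPI endpoint.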
Check out what the final version of this project looks like on the [live online demo](https://qdrant.to/semantic-search-demo). Let us know what you’re building with embeddings! Join our [Discord](https://discord.gg/qdrant-907569970500743200) community and share your projects!
qdrant-landing/content/articles/what-is-a-vector-database.md
--- title: "What is a Vector Database?" draft: false slug: what-is-a-vector-database? short_description: What is a Vector Database? description: An overview of vector databases, detailing their functionalities, architecture, and diverse use cases in modern data processing. preview_dir: /articles_data/what-is-a-vector-database/preview weight: -100 social_preview_image: /articles_data/what-is-a-vector-database/preview/social-preview.jpg small_preview_image: /articles_data/what-is-a-vector-database/icon.svg date: 2024-01-25T09:29:33-03:00 author: Sabrina Aquino featured: true tags: - vector-search - vector-database - embeddings aliases: [ /blog/what-is-a-vector-database/ ] --- > A Vector Database is a specialized database system designed for efficiently indexing, querying, and retrieving high-dimensional vector data. Those systems enable advanced data analysis and similarity-search operations that extend well beyond the traditional, structured query approach of conventional databases. ## Why use a Vector Database? The data flood is real. In 2024, we're drowning in unstructured data like images, text, and audio, that don’t fit into neatly organized tables. Still, we need a way to easily tap into the value within this chaos of almost 330 million terabytes of data being created each day. Traditional databases, even with extensions that provide some vector handling capabilities, struggle with the complexities and demands of high-dimensional vector data. Handling of vector data is extremely resource-intensive. A traditional vector is around 6Kb. You can see how scaling to millions of vectors can demand substantial system memory and computational resources. Which is at least very challenging for traditional [OLTP](https://www.ibm.com/topics/oltp) and [OLAP](https://www.ibm.com/topics/olap) databases to manage. ![](/articles_data/what-is-a-vector-database/Why-Use-Vector-Database.jpg) Vector databases allow you to understand the **context** or **conceptual similarity** of unstructured data by representing them as **vectors**, enabling advanced analysis and retrieval based on data similarity. For example, in recommendation systems, vector databases can analyze user behavior and item characteristics to suggest products or content with a high degree of personal relevance. In search engines and research databases, they enhance the user experience by providing results that are **semantically** similar to the query. They do not rely solely on the exact words typed into the search bar. If you're new to the vector search space, this article explains the key concepts and relationships that you need to know. So let's get into it. ## What is Vector Data? To understand vector databases, let's begin by defining what is a 'vector' or 'vector data'. Vectors are a **numerical representation** of some type of complex information. To represent textual data, for example, it will encapsulate the nuances of language, such as semantics and context. With an image, the vector data encapsulates aspects like color, texture, and shape. The **dimensions** relate to the complexity and the amount of information each image contains. Each pixel in an image can be seen as one dimension, as it holds data (like color intensity values for red, green, and blue channels in a color image). So even a small image with thousands of pixels translates to thousands of dimensions. So from now on, when we talk about high-dimensional data, we mean that the data contains a large number of data points (pixels, features, semantics, syntax). 
The **creation** of vector data (so we can store this high-dimensional data on our vector database) is primarily done through **embeddings**. ![](/articles_data/what-is-a-vector-database/Vector-Data.jpg) ### How do Embeddings Work? Embeddings translate this high-dimensional data into a more manageable, **lower-dimensional** vector form that's more suitable for machine learning and data processing applications, typically through **neural network models**. In creating dimensions for text, for example, the process involves analyzing the text to capture its linguistic elements. Transformer-based neural networks like **BERT** (Bidirectional Encoder Representations from Transformers) and **GPT** (Generative Pre-trained Transformer), are widely used for creating text embeddings. Each layer extracts different levels of features, such as context, semantics, and syntax. ![](/articles_data/what-is-a-vector-database/How-Do-Embeddings-Work_.jpg) The final layers of the network condense this information into a vector that is a compact, lower-dimensional representation of the image but still retains the essential information. ## Core Functionalities of Vector Databases ### What is Indexing? Have you ever tried to find a specific face in a massive crowd photo? Well, vector databases face a similar challenge when dealing with tons of high-dimensional vectors. Now, imagine dividing the crowd into smaller groups based on hair color, then eye color, then clothing style. Each layer gets you closer to who you’re looking for. Vector databases use similar **multi-layered** structures called indexes to organize vectors based on their "likeness." This way, finding similar images becomes a quick hop across related groups, instead of scanning every picture one by one. ![](/articles_data/what-is-a-vector-database/Indexing.jpg) Different indexing methods exist, each with its strengths. [HNSW](/articles/filtrable-hnsw/) balances speed and accuracy like a well-connected network of shortcuts in the crowd. Others, like IVF or Product Quantization, focus on specific tasks or memory efficiency. #### What is Binary Quantization? Quantization is a technique used for reducing the total size of the database. It works by compressing vectors into a more compact representation at the cost of accuracy. [Binary Quantization](/articles/binary-quantization/) is a fast indexing and data compression method used by Qdrant. It supports vector comparisons, which can dramatically speed up query processing times (up to 40x faster!). Think of each data point as a ruler. Binary quantization splits this ruler in half at a certain point, marking everything above as "1" and everything below as "0". This [binarization](https://deepai.org/machine-learning-glossary-and-terms/binarization) process results in a string of bits, representing the original vector. ![](/articles_data/what-is-a-vector-database/Binary-Quant.png) This "quantized" code is much smaller and easier to compare. Especially for OpenAI embeddings, this type of quantization has proven to achieve a massive performance improvement at a lower cost of accuracy. ### What is Similarity Search? [Similarity search](/documentation/concepts/search/) allows you to search not by keywords but by meaning. This way you can do searches such as similar songs that evoke the same mood, finding images that match your artistic vision, or even exploring emotional patterns in text. The way it works is, when the user queries the database, this query is also converted into a vector (the query vector). 
The [vector search](/documentation/overview/vector-search/) starts at the top layer of the HNSW index, where the algorithm quickly identifies the area of the graph likely to contain vectors closest to the query vector. The algorithm compares your query vector to all the others, using metrics like "distance" or "similarity" to gauge how close they are. The search then moves down progressively narrowing down to more closely related vectors. The goal is to narrow down the dataset to the most relevant items. The image below illustrates this. ![](/articles_data/what-is-a-vector-database/Similarity-Search-and-Retrieval.jpg) Once the closest vectors are identified at the bottom layer, these points translate back to actual data, like images or music, representing your search results. ### Scalability Vector databases often deal with datasets that comprise billions of high-dimensional vectors. This data isn't just large in volume but also complex in nature, requiring more computing power and memory to process. Scalable systems can handle this increased complexity without performance degradation. This is achieved through a combination of a **distributed architecture**, **dynamic resource allocation**, **data partitioning**, **load balancing**, and **optimization techniques**. Systems like Qdrant exemplify scalability in vector databases. It leverages Rust's efficiency in **memory management** and **performance**, which allows handling of large-scale data with optimized resource usage. ### Efficient Query Processing The key to efficient query processing in these databases is linked to their **indexing methods**, which enable quick navigation through complex data structures. By mapping and accessing the high-dimensional vector space, HNSW and similar indexing techniques significantly reduce the time needed to locate and retrieve relevant data. ![](/articles_data/what-is-a-vector-database/search-query.jpg) Other techniques like **handling computational load** and **parallel processing** are used for performance, especially when managing multiple simultaneous queries. Complementing them, **strategic caching** is also employed to store frequently accessed data, facilitating a quicker retrieval for subsequent queries. ### Using Metadata and Filters Filters use metadata to refine search queries within the database. For example, in a database containing text documents, a user might want to search for documents not only based on textual similarity but also filter the results by publication date or author. When a query is made, the system can use **both** the vector data and the metadata to process the query. In other words, the database doesn’t just look for the closest vectors. It also considers the additional criteria set by the metadata filters, creating a more customizable search experience. ![](/articles_data/what-is-a-vector-database/metadata.jpg) ### Data Security and Access Control Vector databases often store sensitive information. This could include personal data in customer databases, confidential images, or proprietary text documents. Ensuring data security means protecting this information from unauthorized access, breaches, and other forms of cyber threats. At Qdrant, this includes mechanisms such as: - User authentication - Encryption for data at rest and in transit - Keeping audit trails - Advanced database monitoring and anomaly detection ## Architecture of a Vector Database A vector database is made of multiple different entities and relations. 
Here's a high-level overview of Qdrant's terminologies and how they fit into the larger picture: ![](/articles_data/what-is-a-vector-database/Architecture-of-a-Vector-Database.jpg) **Collections**: [Collections](/documentation/concepts/collections/) are a named set of data points, where each point is a vector with an associated payload. All vectors within a collection must have the same dimensionality and be comparable using a single metric. **Distance Metrics**: These metrics are used to measure the similarity between vectors. The choice of distance metric is made when creating a collection. It depends on the nature of the vectors and how they were generated, considering the neural network used for the encoding. **Points**: Each [point](/documentation/concepts/points/) consists of a **vector** and can also include an optional **identifier** (ID) and **[payload](/documentation/concepts/payload/)**. The vector represents the high-dimensional data and the payload carries metadata information in a JSON format, giving the data point more context or attributes. **Storage Options**: There are two primary storage options. The in-memory storage option keeps all vectors in RAM, which allows for the highest speed in data access since disk access is only required for persistence. Alternatively, the Memmap storage option creates a virtual address space linked with the file on disk, giving a balance between memory usage and access speed. **Clients**: Qdrant supports various programming languages for client interaction, such as Python, Go, Rust, and Typescript. This way developers can connect to and interact with Qdrant using the programming language they prefer. ### Vector Database Use Cases If we had to summarize the use cases for vector databases into a single word, it would be "match". They are great at finding non-obvious ways to correspond or “match” data with a given query. Whether it's through similarity in images, text, user preferences, or patterns in data. Here’s some examples on how to take advantage of using vector databases: **Personalized recommendation systems** to analyze and interpret complex user data, such as preferences, behaviors, and interactions. For example, on Spotify, if a user frequently listens to the same song or skips it, the recommendation engine takes note of this to personalize future suggestions. **Semantic search** allows for systems to be able to capture the deeper semantic meaning of words and text. In modern search engines, if someone searches for "tips for planting in spring," it tries to understand the intent and contextual meaning behind the query. It doesn’t try just matching the words themselves. Here’s an example of a [vector search engine for Startups](https://demo.qdrant.tech/) made with Qdrant: ![](/articles_data/what-is-a-vector-database/semantic-search.png) There are many other use cases like for **fraud detection and anomaly analysis** used in sectors like finance and cybersecurity, to detect anomalies and potential fraud. And **Content-Based Image Retrieval (CBIR)** for images by comparing vector representations rather than metadata or tags. Those are just a few examples. The ability of vector databases to “match” data with queries makes them essential for multiple types of applications. Here are some more [use cases examples](/use-cases/) you can take a look at. ### Starting Your First Vector Database Project Now that you're familiar with the core concepts around vector databases, it’s time to get our hands dirty. 
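As a quick first taste before the tutorials below, here is a minimal sketch that ties collections, points, payloads, and filters together using the Qdrant Python client with an in-memory instance. The collection name, vectors, and payload values are made up for illustration; in a real application the vectors would come from an embedding model rather than being written by hand.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import (
    Distance, FieldCondition, Filter, MatchValue, PointStruct, VectorParams,
)

client = QdrantClient(":memory:")

# A collection of 4-dimensional vectors compared with the cosine metric.
client.create_collection(
    collection_name="documents",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

# Each point: an ID, a vector, and a JSON payload carrying metadata.
client.upsert(
    collection_name="documents",
    points=[
        PointStruct(id=1, vector=[0.1, 0.9, 0.2, 0.4], payload={"author": "alice", "year": 2023}),
        PointStruct(id=2, vector=[0.2, 0.8, 0.3, 0.5], payload={"author": "bob", "year": 2024}),
    ],
)

# Similarity search combined with a payload filter: only Bob's documents.
hits = client.search(
    collection_name="documents",
    query_vector=[0.15, 0.85, 0.25, 0.45],
    query_filter=Filter(must=[FieldCondition(key="author", match=MatchValue(value="bob"))]),
    limit=3,
)
print([hit.id for hit in hits])  # -> [2]
```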
[Start by building your own semantic search engine](/documentation/tutorials/search-beginners/) for science fiction books in just about 5 minutes with the help of Qdrant. You can also watch our [video tutorial](https://www.youtube.com/watch?v=AASiqmtKo54). Feeling ready to dive into a more complex project? Take the next step and get started building an actual [Neural Search Service with a complete API and a dataset](/documentation/tutorials/neural-search/). Let’s get into action!
qdrant-landing/content/articles/what-is-rag-in-ai.md
--- title: "What is RAG: Understanding Retrieval-Augmented Generation" draft: false slug: what-is-rag-in-ai? short_description: What is RAG? description: Explore how RAG enables LLMs to retrieve and utilize relevant external data when generating responses, rather than being limited to their original training data alone. preview_dir: /articles_data/what-is-rag-in-ai/preview weight: -150 social_preview_image: /articles_data/what-is-rag-in-ai/preview/social_preview.jpg small_preview_image: /articles_data/what-is-rag-in-ai/icon.svg date: 2024-03-19T9:29:33-03:00 author: Sabrina Aquino author_link: https://github.com/sabrinaaquino featured: true tags: - retrieval augmented generation - what is rag - embeddings - llm rag - rag application --- > Retrieval-augmented generation (RAG) integrates external information retrieval into the process of generating responses by Large Language Models (LLMs). It searches a database for information beyond its pre-trained knowledge base, significantly improving the accuracy and relevance of the generated responses. Language models have exploded on the internet ever since ChatGPT came out, and rightfully so. They can write essays, code entire programs, and even make memes (though we’re still deciding on whether that's a good thing). But as brilliant as these chatbots become, they still have **limitations** in tasks requiring external knowledge and factual information. Yes, it can describe the honeybee's waggle dance in excruciating detail. But they become far more valuable if they can generate insights from **any data** that we provide, rather than just their original training data. Since retraining those large language models from scratch costs millions of dollars and takes months, we need better ways to give our existing LLMs access to our custom data. While you could be more creative with your prompts, it is only a short-term solution. LLMs can consider only a **limited** amount of text in their responses, known as a [context window](https://www.hopsworks.ai/dictionary/context-window-for-llms). Some models like GPT-3 can see up to around 12 pages of text (that’s 4,096 tokens of context). That’s not good enough for most knowledge bases. ![How a RAG works](/articles_data/what-is-rag-in-ai/how-rag-works.jpg) The image above shows how a basic RAG system works. Before forwarding the question to the LLM, we have a layer that searches our knowledge base for the "relevant knowledge" to answer the user query. Specifically, in this case, the spending data from the last month. Our LLM can now generate a **relevant non-hallucinated** response about our budget. As your data grows, you’ll need efficient ways to identify the most relevant information for your LLM's limited memory. This is where you’ll want a proper way to store and retrieve the specific data you’ll need for your query, without needing the LLM to remember it. **Vector databases** store information as **vector embeddings**. This format supports efficient similarity searches to retrieve relevant data for your query. For example, Qdrant is specifically designed to perform fast, even in scenarios dealing with billions of vectors. This article will focus on RAG systems and architecture. If you’re interested in learning more about vector search, we recommend the following articles: [What is a Vector Database?](/articles/what-is-a-vector-database/) and [What are Vector Embeddings?](/articles/what-are-embeddings/). ## RAG architecture At its core, a RAG architecture includes the **retriever** and the **generator**. 
Let's start by understanding what each of these components does. ### The Retriever When you ask a question to the retriever, it uses **similarity search** to scan through a vast knowledge base of vector embeddings. It then pulls out the most **relevant** vectors to help answer that query. There are a few different techniques it can use to know what’s relevant: #### How indexing works in RAG retrievers The indexing process organizes the data into your vector database in a way that makes it easily searchable. This allows the RAG to access relevant information when responding to a query. ![How indexing works](/articles_data/what-is-rag-in-ai/how-indexing-works.jpg) As shown in the image above, here’s the process: * Start with a _loader_ that gathers _documents_ containing your data. These documents could be anything from articles and books to web pages and social media posts. * Next, a _splitter_ divides the documents into smaller chunks, typically sentences or paragraphs. * This is because RAG models work better with smaller pieces of text. In the diagram, these are _document snippets_. * Each text chunk is then fed into an _embedding machine_. This machine uses complex algorithms to convert the text into [vector embeddings](/articles/what-are-embeddings/). All the generated vector embeddings are stored in a knowledge base of indexed information. This supports efficient retrieval of similar pieces of information when needed. #### Query vectorization Once you have vectorized your knowledge base you can do the same to the user query. When the model sees a new query, it uses the same preprocessing and embedding techniques. This ensures that the query vector is compatible with the document vectors in the index. ![How retrieval works](/articles_data/what-is-rag-in-ai/how-retrieval-works.jpg) #### Retrieval of relevant documents When the system needs to find the most relevant documents or passages to answer a query, it utilizes vector similarity techniques. **Vector similarity** is a fundamental concept in machine learning and natural language processing (NLP) that quantifies the resemblance between vectors, which are mathematical representations of data points. The system can employ different vector similarity strategies depending on the type of vectors used to represent the data: ##### Sparse vector representations A sparse vector is characterized by a high dimensionality, with most of its elements being zero. The classic approach is **keyword search**, which scans documents for the exact words or phrases in the query. The search creates sparse vector representations of documents by counting word occurrences and inversely weighting common words. Queries with rarer words get prioritized. ![Sparse vector representation](/articles_data/what-is-rag-in-ai/sparse-vectors.jpg) [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) (Term Frequency-Inverse Document Frequency) and [BM25](https://en.wikipedia.org/wiki/Okapi_BM25) are two classic related algorithms. They're simple and computationally efficient. However, they can struggle with synonyms and don't always capture semantic similarities. If you’re interested in going deeper, refer to our article on [Sparse Vectors](/articles/sparse-vectors/). ##### Dense vector embeddings This approach uses large language models like [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)) to encode the query and passages into dense vector embeddings. These models are compact numerical representations that capture semantic meaning. 
Vector databases like Qdrant store these embeddings and retrieve them based on **semantic similarity**, using distance metrics like cosine similarity. This allows the retriever to match based on semantic understanding rather than just keywords. So if I ask about "compounds that cause BO," it can retrieve relevant info about "molecules that create body odor" even if those exact words weren't used. We explain more about it in our [What are Vector Embeddings](/articles/what-are-embeddings/) article. #### Hybrid search However, neither keyword search nor vector search is always perfect. Keyword search may miss relevant information expressed differently, while vector search can sometimes struggle with specificity or neglect important statistical word patterns. Hybrid methods aim to combine the strengths of different techniques. ![Hybrid search overview](/articles_data/what-is-rag-in-ai/hybrid-search.jpg) Some common hybrid approaches include: * Using keyword search to get an initial set of candidate documents. Next, the documents are re-ranked/re-scored using semantic vector representations. * Starting with semantic vectors to find generally topically relevant documents. Next, the documents are filtered/re-ranked based on keyword matches or other metadata. * Considering both semantic vector closeness and statistical keyword patterns/weights in a combined scoring model. * Having multiple stages where different techniques are applied. One example: start with an initial keyword retrieval, followed by semantic re-ranking, then a final re-ranking using even more complex models. When you combine the powers of different search methods in a complementary way, you can provide higher quality, more comprehensive results. Check out our article on [Hybrid Search](/articles/hybrid-search/) if you’d like to learn more. ### The Generator With the top relevant passages retrieved, it's now the generator's job to produce a final answer by synthesizing and expressing that information in natural language. The LLM is typically a model like GPT, BART or T5, trained on massive datasets to understand and generate human-like text. It now takes not only the query (or question) as input but also the relevant documents or passages that the retriever identified as potentially containing the answer to generate its response. ![How a Generator works](/articles_data/what-is-rag-in-ai/how-generation-works.png) The retriever and generator don't operate in isolation. The image below shows how the output of the retrieval feeds the generator to produce the final generated response. ![The entire architecture of a RAG system](/articles_data/what-is-rag-in-ai/rag-system.jpg) ## Where is RAG being used? Because of their more knowledgeable and contextual responses, we can find RAG models being applied in many areas today, especially those that need factual accuracy and knowledge depth. ### Real-World Applications: **Question answering:** This is perhaps the most prominent use case for RAG models. They power advanced question-answering systems that can retrieve relevant information from large knowledge bases and then generate fluent answers. 
**Language generation:** RAG enables more factual and contextualized text generation, such as summarization that draws on multiple sources. **Data-to-text generation:** By retrieving relevant structured data, RAG models can generate product/business intelligence reports from databases or describe insights from data visualizations and charts. **Multimedia understanding:** RAG isn't limited to text - it can retrieve multimodal information like images, video, and audio to enhance understanding, for example answering questions about images and videos by retrieving relevant textual context. ## Creating your first RAG chatbot with Langchain, Groq, and OpenAI Are you ready to create your own RAG chatbot from the ground up? We have a video explaining everything from the beginning. Daniel Romero will guide you through: * Setting up your chatbot * Preprocessing and organizing data for your chatbot's use * Applying vector similarity search algorithms * Enhancing the efficiency and response quality After building your RAG chatbot, you'll be able to evaluate its performance against that of a chatbot powered solely by a Large Language Model (LLM). <div style="max-width: 640px; margin: 0 auto; padding-bottom: 1em"> <div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;"> <iframe width="100%" height="100%" src="https://www.youtube.com/embed/O60-KuZZeQA" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;"></iframe> </div> </div> ## What’s next? Have a RAG project you want to bring to life? Join our [Discord community](https://discord.gg/qdrant) where we’re always sharing tips and answering questions on vector search and retrieval. Learn more about how to properly evaluate your RAG responses: [Evaluating Retrieval Augmented Generation - a framework for assessment](https://superlinked.com/vectorhub/evaluating-retrieval-augmented-generation-a-framework-for-assessment).
qdrant-landing/content/articles/why-rust.md
--- title: Why Rust? short_description: "A short history on how we chose Rust and what it has brought us" description: Qdrant could be built in any language. But it's written in Rust. Here's why. social_preview_image: /articles_data/why-rust/preview/social_preview.jpg preview_dir: /articles_data/why-rust/preview weight: 10 author: Andre Bogus author_link: https://llogiq.github.io date: 2023-05-11T10:00:00+01:00 draft: false keywords: rust, programming, development aliases: [ /articles/why_rust/ ] --- Looking at the [github repository](https://github.com/qdrant/qdrant), you can see that Qdrant is built in [Rust](https://rust-lang.org). Other offerings may be written in C++, Go, Java or even Python. So why did Qdrant choose Rust? Our founder Andrey had built the first prototype in C++, but didn’t trust his command of the language to scale to a production system (to be frank, he likened it to cutting his leg off). He was well versed in Java and Scala and also knew some Python. However, he considered neither a good fit: **Java** is also more than 30 years old now. With a throughput-optimized VM it can often at least play in the same ball park as native services, and the tooling is phenomenal. Also portability is surprisingly good, although the GC is not suited for low-memory applications and will generally take a good amount of RAM to deliver good performance. That said, the focus on throughput led to the dreaded GC pauses that cause latency spikes. Also the fat runtime incurs high start-up delays, which need to be worked around. **Scala** also builds on the JVM; although there is a native compiler, there was the question of compatibility. So Scala shared the limitations of Java, and although it has some nice high-level amenities (of which Java only recently copied a subset), it still doesn’t offer the same level of control over memory layout as, say, C++, so it is similarly disqualified. **Python**, being just a bit younger than Java, is ubiquitous in ML projects, mostly owing to its tooling (notably jupyter notebooks), being easy to learn, and integrating with most ML stacks. It doesn’t have a traditional garbage collector, opting for ubiquitous reference counting instead, which somewhat helps memory consumption. With that said, unless you only use it as glue code over high-perf modules, you may find yourself waiting for results. Also getting complex Python services to perform stably under load is a serious technical challenge. ## Into the Unknown So Andrey looked around at what younger languages would fit the challenge. After some searching, two contenders emerged: Go and Rust. Knowing neither, Andrey consulted the docs, and found himself intrigued by Rust with its promise of Systems Programming without pervasive memory unsafety. This early decision has been validated time and again. When first learning Rust, the compiler’s error messages are very helpful (and have only improved in the meantime). It’s easy to keep the memory profile low when one doesn’t have to wrestle a garbage collector and has complete control over stack and heap. Apart from the much advertised memory safety, many footguns one can run into when writing C++ have been meticulously designed out. And it’s much easier to parallelize a task if one doesn’t have to fear data races. With Qdrant written in Rust, we can offer cloud services that don’t keep us awake at night, thanks to Rust’s famed robustness. A current qdrant docker container comes in at just a bit over 50MB — try that for size. 
As for performance… have some [benchmarks](/benchmarks/). And we don’t have to compromise on ergonomics either, not for us nor for our users. Of course, there are downsides: Rust compile times are usually similar to C++’s, and though the learning curve has been considerably softened in the last years, it’s still no match for easy-entry languages like Python or Go. But learning it is a one-time cost. Contrast this with Go, where you may find [the apparent simplicity is only skin-deep](https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-ride). ## Smooth is Fast The complexity of the type system pays large dividends in bugs that didn’t even make it to a commit. The ecosystem for web services is also already quite advanced, perhaps not at the same point as Java, but certainly matching or outcompeting Go. Some people may think that the strict nature of Rust will slow down development, which is true only insofar as it won’t let you cut any corners. However, experience has conclusively shown that this is a net win. In fact, Rust lets us [ride the wall](https://the-race.com/nascar/bizarre-wall-riding-move-puts-chastain-into-nascar-folklore/), which makes us faster, not slower. The job market for Rust programmers is certainly not as big as that for Java or Python programmers, but the language has finally reached the mainstream, and we don’t have any problems getting and retaining top talent. And being an open source project, when we get contributions, we don’t have to check for a wide variety of errors that Rust already rules out. ## In Rust We Trust Finally, the Rust community is a very friendly bunch, and we are delighted to be part of that. And we don’t seem to be alone. Most large IT companies (notably Amazon, Google, Huawei, Meta and Microsoft) have already started investing in Rust. It’s in the Windows font system already and in the process of coming to the Linux kernel (build support has already been included). In machine learning applications, Rust has been tried and proven by the likes of Aleph Alpha and Huggingface, among many others. To sum up, choosing Rust was a lucky guess that has brought huge benefits to Qdrant. Rust continues to be our not-so-secret weapon.
qdrant-landing/content/articles/templates/release-post-template.md
--- title: "Qdrant x.y.0 - <include headline> #required; update version and headline" draft: true # Change to false to publish the article at /articles/ slug: qdrant-x.y.z # required; subtitute version number short_description: "Headline-like description." description: "Headline with more detail. Suggested limit: 140 characters. " # Follow instructions in https://github.com/qdrant/landing_page?tab=readme-ov-file#articles to create preview images # social_preview_image: /articles_data/<slug>/social_preview.jpg # This image will be used in social media previews, should be 1200x600px. Required. # small_preview_image: /articles_data/<slug>/icon.svg # This image will be used in the list of articles at the footer, should be 40x40px # preview_dir: /articles_data/<slug>/preview # This directory contains images that will be used in the article preview. They can be generated from one image. Read more below. Required. weight: 10 # This is the order of the article in the list of articles at the footer. The lower the number, the higher the article will be in the list. Negative numbers OK. author: <name> # Author of the article. Required. author_link: https://medium.com/@yusufsarigoz # Link to the author's page. Not required. date: 2022-06-28T13:00:00+03:00 # Date of the article. Required. If the date is in the future it does not appear in the build tags: # Keywords for SEO - vector databases comparative benchmark - benchmark - performance - latency --- [Qdrant x.y.0 is out!]((https://github.com/qdrant/qdrant/releases/tag/vx.y.0). Include headlines: - **Headline 1:** Description - **Headline 2:** Description - **Headline 3:** Description ## Related to headline 1 Description Highlights: - **Detail 1:** Description - **Detail 2:** Description - **Detail 3:** Description Include before / after information, ideally with graphs and/or numbers Include links to documentation Note limits, such as availability on Qdrant Cloud ## Minor improvements and new features Beyond these enhancements, [Qdrant vx.y.0](https://github.com/qdrant/qdrant/releases/tag/vx.y.0) adds and improves on several smaller features: 1. 1. ## Release notes For more information, see [our release notes](https://github.com/qdrant/qdrant/releases/tag/vx.y.0). Qdrant is an open source project. We welcome your contributions; raise [issues](https://github.com/qdrant/qdrant/issues), or contribute via [pull requests](https://github.com/qdrant/qdrant/pulls)!
qdrant-landing/content/benchmarks/_index.md
--- title: Vector Database Benchmarks description: The first comparative benchmark and benchmarking framework for vector search engines and vector databases. keywords: - vector databases comparative benchmark - ANN Benchmark - Qdrant vs Milvus - Qdrant vs Weaviate - Qdrant vs Redis - Qdrant vs ElasticSearch - benchmark - performance - latency - RPS - comparison - vector search - embedding preview_image: /benchmarks/benchmark-1.png seo_schema: { "@context": "https://schema.org", "@type": "Article", "headline": "Vector Search Comparative Benchmarks", "image": [ "https://qdrant.tech/benchmarks/benchmark-1.png" ], "abstract": "The first comparative benchmark and benchmarking framework for vector search engines", "datePublished": "2022-08-23", "dateModified": "2022-08-23", "author": [{ "@type": "Organization", "name": "Qdrant", "url": "https://qdrant.tech" }] } ---
qdrant-landing/content/benchmarks/benchmark-faq.md
--- draft: false id: 3 title: Benchmarks F.A.Q. weight: 10 --- # Benchmarks F.A.Q. ## Are we biased? Probably, yes. Even if we try to be objective, we are not experts in using all the existing vector databases. We build Qdrant and know the most about it. Due to that, we could have missed some important tweaks in different vector search engines. However, we tried our best, kept scrolling the docs up and down, experimented with combinations of different configurations, and gave all of them an equal chance to stand out. If you believe you can do it better than us, our **benchmarks are fully [open-sourced](https://github.com/qdrant/vector-db-benchmark), and contributions are welcome**! ## What do we measure? There are several factors considered while deciding on which database to use. Of course, some of them support a different subset of functionalities, and those might be a key factor in making the decision. But in general, we all care about the search precision, speed, and resources required to achieve it. There is one important thing - **the speed of the vector databases should be compared only if they achieve the same precision**. Otherwise, they could maximize the speed factors by providing inaccurate results, which everybody would rather avoid. Thus, our benchmark results are compared only at a specific search precision threshold. ## How do we select hardware? In our experiments, we are not focusing on the absolute values of the metrics but rather on a relative comparison of different engines. What is important is the fact that we used the same machine for all the tests. It was simply wiped between launching different engines. We selected an average machine, which you can easily rent from almost any cloud provider. No extra quota or custom configuration is required. ## Why are you not comparing with FAISS or Annoy? Libraries like FAISS provide a great tool to do experiments with vector search. But they are far from real usage in production environments. If you are using FAISS in production, in the best case, you never need to update it in real-time. In the worst case, you have to create your custom wrapper around it to support CRUD, high availability, horizontal scalability, concurrent access, and so on. Some vector search engines even use FAISS under the hood, but a search engine is much more than just an indexing algorithm. We do, however, use the same benchmark datasets as the famous [ann-benchmarks project](https://github.com/erikbern/ann-benchmarks), so you can align your expectations for any practical purpose. ### Why we decided to test with the Python client There is no consensus when it comes to the best technology to run benchmarks. You’re free to choose Go, Java or Rust-based systems. But there are two main reasons for us to use Python for this: 1. While generating embeddings you're most likely going to use Python and Python-based ML frameworks. 2. Based on GitHub stars, Python clients are among the most popular clients across all the engines. From the user’s perspective, the crucial thing is the latency perceived while using a specific library - in most cases a Python client. Nobody can, nor should, redefine the whole technology stack just because they use a specific search tool. That’s why we decided to focus primarily on official Python libraries provided by the database authors. Those may use some different protocols under the hood, but at the end of the day, we do not care how the data is transferred, as long as it ends up in the target location. 
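For illustration, the perceived latency can be measured by simply timing the client call itself. This is only a sketch: the host, collection name, vector dimensionality, and query vector are placeholders, and the actual measurements come from the open-sourced benchmark tooling linked above.

```python
import time

from qdrant_client import QdrantClient

client = QdrantClient("localhost", port=6333)

# Placeholder query vector; its length must match the collection's dimensionality.
query_vector = [0.2] * 96

start = time.perf_counter()
client.search(collection_name="benchmark", query_vector=query_vector, limit=10)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"Perceived client-side latency: {elapsed_ms:.1f} ms")
```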
## What about closed-source SaaS platforms? There are some vector databases available as SaaS only so that we couldn’t test them on the same machine as the rest of the systems. That makes the comparison unfair. That’s why we purely focused on testing the Open Source vector databases, so everybody may reproduce the benchmarks easily. This is not the final list, and we’ll continue benchmarking as many different engines as possible. ## How to reproduce the benchmark? The source code is available on [Github](https://github.com/qdrant/vector-db-benchmark) and has a `README.md` file describing the process of running the benchmark for a specific engine. ## How to contribute? We made the benchmark Open Source because we believe that it has to be transparent. We could have misconfigured one of the engines or just done it inefficiently. If you feel like you could help us out, check out our [benchmark repository](https://github.com/qdrant/vector-db-benchmark).
qdrant-landing/content/benchmarks/benchmarks-intro.md
--- draft: false id: 2 title: How should vector search be benchmarked? weight: 1 --- # Benchmarking Vector Databases At Qdrant, performance is the topmost priority. We always make sure that we use system resources efficiently so you get the **fastest and most accurate results at the cheapest cloud costs**. So all of our decisions, from [choosing Rust](/articles/why-rust/), [io optimisations](/articles/io_uring/), [serverless support](/articles/serverless/), [binary quantization](/articles/binary-quantization/), to our [fastembed library](/articles/fastembed/), are based on this principle. In this article, we will compare how Qdrant performs against other vector search engines. Here are the principles we followed while designing these benchmarks: - We do comparative benchmarks, which means we focus on **relative numbers** rather than absolute numbers. - We use affordable hardware, so that you can reproduce the results easily. - We run benchmarks on the same exact machines to avoid any possible hardware bias. - All the benchmarks are [open-sourced](https://github.com/qdrant/vector-db-benchmark), so you can contribute and improve them. <details> <summary> Scenarios we tested </summary> 1. Upload & Search benchmark on a single node - [Benchmark](/benchmarks/single-node-speed-benchmark/) 2. Filtered search benchmark - [Benchmark](/benchmarks/#filtered-search-benchmark) 3. Memory consumption benchmark - Coming soon 4. Cluster mode benchmark - Coming soon </details> </br> Some of our experiment design decisions are described in the [F.A.Q Section](/benchmarks/#benchmarks-faq). Reach out to us on our [Discord channel](https://qdrant.to/discord) if you want to discuss anything related to Qdrant or these benchmarks.
qdrant-landing/content/benchmarks/filtered-search-benchmark.md
--- draft: false id: 5 title: description: '<b> Updated: Feb 2023 </b>' filter_data: /benchmarks/filter-result-2023-02-03.json date: 2023-02-13 weight: 4 --- ## Filtered Results As you can see from the charts, there are three main patterns: - **Speed boost** - for some engines/queries, the filtered search is faster than the unfiltered one. It might happen if the filter is restrictive enough to completely avoid the usage of the vector index. - **Speed downturn** - some engines struggle to keep high RPS; it might be related to the requirement of building a filtering mask for the dataset, as described above. - **Accuracy collapse** - some engines are losing accuracy dramatically under some filters. It is related to the fact that the HNSW graph becomes disconnected, and the search becomes unreliable. Qdrant avoids all these problems and also benefits from the speed boost, as it implements an advanced [query planning strategy](/documentation/search/#query-planning). <aside role="status">The Filtering Benchmark is all about changes in performance between filtered and unfiltered queries. Please refer to the search benchmark for absolute speed comparison.</aside>
qdrant-landing/content/benchmarks/filtered-search-intro.md
---
draft: false
id: 4
title: Filtered search benchmark
description:
date: 2023-02-13
weight: 3
---

# Filtered search benchmark

Applying filters to search results brings a whole new level of complexity. It is no longer enough to apply one algorithm to plain data. With filtering, it becomes a matter of the _cross-integration_ of the different indices.

To measure how well different search engines perform in this scenario, we have prepared a set of **Filtered ANN Benchmark Datasets** - https://github.com/qdrant/ann-filtering-benchmark-datasets

It is similar to the ones used in the [ann-benchmarks project](https://github.com/erikbern/ann-benchmarks/) but enriched with payload metadata and pre-generated filtering requests. It includes synthetic and real-world datasets with various filters, from keywords to geo-spatial queries.

### Why is filtering not trivial?

Not many ANN algorithms are compatible with filtering. HNSW is one of the few that are, but search engines approach its integration in different ways:

- Some use **post-filtering**, which applies filters after the ANN search. It doesn't scale well, as it either loses results or requires many candidates in the first stage.
- Others use **pre-filtering**, which requires a binary mask of the whole dataset to be passed into the ANN algorithm. It is also not scalable, as the mask size grows linearly with the dataset size.

On top of that, there is also a problem with search accuracy: it appears when too many vectors are filtered out and the HNSW graph becomes disconnected.

Qdrant uses a different approach that requires neither pre- nor post-filtering while still addressing the accuracy problem. Read more about the Qdrant approach in our [Filtrable HNSW](/articles/filtrable-hnsw/) article.
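To make this concrete, here is a minimal sketch of a filtered query with the Qdrant Python client, where the payload condition is passed along with the vector so the engine evaluates both together rather than running a separate pre- or post-filtering pass in client code. The collection name, the `category` payload field, and the query vector are illustrative placeholders, not part of the benchmark datasets.

```python
from qdrant_client import QdrantClient
from qdrant_client.http.models import FieldCondition, Filter, MatchValue

client = QdrantClient("localhost", port=6333)

# Filtered vector search: the payload condition is handled by the engine
# as part of the search itself, not as a separate filtering pass here.
# Collection name and payload field are placeholders for illustration.
hits = client.search(
    collection_name="benchmark",
    query_vector=[0.2, 0.1, 0.9, 0.7],
    query_filter=Filter(
        must=[FieldCondition(key="category", match=MatchValue(value="laptops"))]
    ),
    limit=10,
)

for hit in hits:
    print(hit.id, hit.score)
```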
qdrant-landing/content/benchmarks/single-node-speed-benchmark-2022.md
--- draft: false id: 1 title: Single node benchmarks (2022) single_node_title: Single node benchmarks single_node_data: /benchmarks/result-2022-08-10.json preview_image: /benchmarks/benchmark-1.png date: 2022-08-23 weight: 2 Unlisted: true --- This is an archived version of Single node benchmarks. Please refer to the new version [here](/benchmarks/single-node-speed-benchmark/).
qdrant-landing/content/benchmarks/single-node-speed-benchmark.md
---
draft: false
id: 1
title: Single node benchmarks
description: |
  We benchmarked several vector databases using various configurations of them on different datasets to check how the results may vary. Those datasets may have different vector dimensionality but also vary in terms of the distance function being used. We also tried to capture the difference we can expect while using some different configuration parameters, for both the engine itself and the search operation separately.
  </br>
  </br>
  <b> Updated: January/June 2024 </b>
single_node_title: Single node benchmarks
single_node_data: /benchmarks/results-1-100-thread-2024-06-15.json
preview_image: /benchmarks/benchmark-1.png
date: 2022-08-23
weight: 2
Unlisted: false
---

## Observations

Most of the engines have improved since [our last run](/benchmarks/single-node-speed-benchmark-2022/). Both life and software have trade-offs, but some engines clearly do better:

* **`Qdrant` achieves the highest RPS and lowest latencies in almost all the scenarios, no matter the precision threshold and the metric we choose.** It has also shown 4x RPS gains on one of the datasets.
* `Elasticsearch` has become considerably faster for many cases, but it's very slow in terms of indexing time. It can be 10x slower when storing 10M+ vectors of 96 dimensions! (32 mins vs 5.5 hrs)
* `Milvus` is the fastest when it comes to indexing time and maintains good precision. However, it's not on par with the others in terms of RPS or latency when you have higher-dimensional embeddings or a larger number of vectors.
* `Redis` is able to achieve good RPS, but mostly for lower precision. It also achieved low latency with a single thread; however, its latency goes up quickly with more parallel requests. Part of this speed gain comes from their custom protocol.
* `Weaviate` has improved the least since our last run.

## How to read the results

- Choose the dataset and the metric you want to check.
- Select a precision threshold that would be satisfactory for your use case. This is important because ANN search is all about trading precision for speed. This means that in any vector search benchmark, **two results must be compared only when you have similar precision**. However, most benchmarks miss this critical aspect.
- The table is sorted by the value of the selected metric (RPS / Latency / p95 latency / Index time), and the first entry is always the winner of the category 🏆

### Latency vs RPS

In our benchmark we test two main search usage scenarios that arise in practice.

- **Requests-per-Second (RPS)**: Serve more requests per second in exchange for individual requests taking longer (i.e. higher latency). This is a typical scenario for a web application, where multiple users are searching at the same time. To simulate this scenario, we run client requests in parallel with multiple threads and measure how many requests the engine can handle per second.
- **Latency**: React quickly to individual requests rather than serving more requests in parallel. This is a typical scenario for applications where server response time is critical. Self-driving cars, manufacturing robots, and other real-time systems are good examples of such applications. To simulate this scenario, we run the client in a single thread and measure how long each request takes.

### Tested datasets

Our [benchmark tool](https://github.com/qdrant/vector-db-benchmark) is inspired by [github.com/erikbern/ann-benchmarks](https://github.com/erikbern/ann-benchmarks/).
We used the following datasets to test the performance of the engines on ANN Search tasks:

<div class="table-responsive">

| Datasets                                                                                           | # Vectors | Dimensions | Distance  |
|----------------------------------------------------------------------------------------------------|-----------|------------|-----------|
| [dbpedia-openai-1M-angular](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M)  | 1M        | 1536       | cosine    |
| [deep-image-96-angular](http://sites.skoltech.ru/compvision/noimi/)                                 | 10M       | 96         | cosine    |
| [gist-960-euclidean](http://corpus-texmex.irisa.fr/)                                                | 1M        | 960        | euclidean |
| [glove-100-angular](https://nlp.stanford.edu/projects/glove/)                                       | 1.2M      | 100        | cosine    |

</div>

### Setup

{{< figure src=/benchmarks/client-server.png caption="Benchmarks configuration" width=70% >}}

- This was our setup for this experiment:
  - Client: 8 vcpus, 16 GiB memory, 64 GiB storage (`Standard D8ls v5` on Azure Cloud)
  - Server: 8 vcpus, 32 GiB memory, 64 GiB storage (`Standard D8s v3` on Azure Cloud)
- The Python client uploads data to the server, waits for all required indexes to be constructed, and then performs searches with the configured number of threads. We repeat this process with different configurations for each engine and then select the best one for a given precision.
- We ran all the engines in Docker and limited their memory to 25 GB. This was done to ensure fairness by avoiding the case of some engine configs being too greedy with RAM usage. This 25 GB limit is completely fair because even to serve the largest `dbpedia-openai-1M-1536-angular` dataset, one hardly needs `1M * 1536 * 4 bytes * 1.5 = 8.6GB` of RAM (including vectors + index). Hence, we decided to provide all the engines with ~3x the requirement. Please note that some configs of some engines crashed on some datasets because of the 25 GB memory limit. That's why you might see fewer points for some engines when choosing higher precision thresholds.
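If you want a rough feel for how the two scenarios described above are simulated, the sketch below measures single-threaded latency and multi-threaded RPS with the Qdrant Python client. This is a simplified illustration, not the actual benchmark code (which lives in the [vector-db-benchmark](https://github.com/qdrant/vector-db-benchmark) repository); the collection name, query vectors, and worker count are placeholders.

```python
import time
from concurrent.futures import ThreadPoolExecutor

from qdrant_client import QdrantClient

client = QdrantClient("localhost", port=6333)

def run_query(vector):
    # Issue one search and return how long it took, in seconds.
    start = time.perf_counter()
    client.search(collection_name="benchmark", query_vector=vector, limit=10)
    return time.perf_counter() - start

queries = [[0.1] * 96 for _ in range(1000)]  # placeholder query vectors

# Latency scenario: a single thread, one request at a time.
latencies = sorted(run_query(q) for q in queries)
print(f"median latency: {latencies[len(latencies) // 2]:.4f}s")

# RPS scenario: many parallel requests from a thread pool.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(run_query, queries))
print(f"RPS: {len(queries) / (time.perf_counter() - start):.1f}")
```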
qdrant-landing/content/blog/_index.md
--- title: Qdrant Blog subtitle: Check out our latest posts description: A place to learn how to become an expert traveler through vector space. Subscribe and we will update you on features and news. email_placeholder: Enter your email subscribe_button: Subscribe features_title: Features and News search_placeholder: What are you Looking for? aliases: # There is no need to add aliases for future new tags and categories! - /tags - /tags/case-study - /tags/dailymotion - /tags/recommender-system - /tags/binary-quantization - /tags/embeddings - /tags/openai - /tags/gsoc24 - /tags/open-source - /tags/summer-of-code - /tags/vector-database - /tags/artificial-intelligence - /tags/machine-learning - /tags/vector-search - /tags/case_study - /tags/dust - /tags/announcement - /tags/funding - /tags/series-a - /tags/azure - /tags/cloud - /tags/data-science - /tags/information-retrieval - /tags/benchmarks - /tags/performance - /tags/qdrant - /tags/blog - /tags/large-language-models - /tags/podcast - /tags/retrieval-augmented-generation - /tags/search - /tags/vector-search-engine - /tags/vector-image-search - /tags/vector-space-talks - /tags/retriever-ranker-architecture - /tags/semantic-search - /tags/llm - /tags/entity-matching-solution - /tags/real-time-processing - /tags/vector-space-talk - /tags/fastembed - /tags/quantized-emdedding-models - /tags/llm-recommendation-system - /tags/integrations - /tags/unstructured - /tags/integration - /tags/n8n - /tags/news - /tags/webinar - /tags/cohere - /tags/embedding-model - /tags/database - /tags/vector-search-database - /tags/neural-networks - /tags/similarity-search - /tags/embedding - /tags/corporate-news - /tags/nvidia - /tags/docarray - /tags/jina-integration - /categories - /categories/news - /categories/vector-search - /categories/webinar - /categories/vector-space-talk ---
qdrant-landing/content/blog/advancements-and-challenges-in-rag-systems-syed-asad-vector-space-talks-021.md
--- draft: false title: Advancements and Challenges in RAG Systems - Syed Asad | Vector Space Talks slug: rag-advancements-challenges short_description: Syed Asad talked about advanced rag systems and multimodal AI projects, discussing challenges, technologies, and model evaluations in the context of their work at Kiwi Tech. description: Syed Asad unfolds the challenges of developing multimodal RAG systems at Kiwi Tech, detailing the balance between accuracy and cost-efficiency, and exploring various tools and approaches like GPT 4 and Mixtral to enhance family tree apps and financial chatbots while navigating the hurdles of data privacy and infrastructure demands. preview_image: /blog/from_cms/syed-asad-cropped.png date: 2024-04-11T22:25:00.000Z author: Demetrios Brinkmann featured: false tags: - Vector Search - Retrieval Augmented Generation - Generative AI - KiwiTech --- > *"The problem with many of the vector databases is that they work fine, they are scalable. This is common. The problem is that they are not easy to use. So that is why I always use Qdrant.”*\ — Syed Asad > Syed Asad is an accomplished AI/ML Professional, specializing in LLM Operations and RAGs. With a focus on Image Processing and Massive Scale Vector Search Operations, he brings a wealth of expertise to the field. His dedication to advancing artificial intelligence and machine learning technologies has been instrumental in driving innovation and solving complex challenges. Syed continues to push the boundaries of AI/ML applications, contributing significantly to the ever-evolving landscape of the industry. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/4Gm4TQsO2PzOGBp5U6Cj2e?si=JrG0kHDpRTeb2gLi5zdi4Q), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/RVb6_CI7ysM?si=8Hm7XSWYTzK6SRj0).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/RVb6_CI7ysM?si=8Hm7XSWYTzK6SRj0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Advancements-and-Challenges-in-RAG-Systems---Syed-Asad--Vector-Space-Talks-021-e2i112h/a-ab4vnl8" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Prompt engineering is the new frontier in AI. Let’s find out about how critical its role is in controlling AI language models. In this episode, Demetrios and Syed gets to discuss about it. Syed also explores the retrieval augmented generation systems and machine learning technology at Kiwi Tech. This episode showcases the challenges and advancements in AI applications across various industries. Here are the highlights from this episode: 1. **Digital Family Tree:** Learn about the family tree app project that brings the past to life through video interactions with loved ones long gone. 2. **Multimodal Mayhem:** Discover the complexities of creating AI systems that can understand diverse accents and overcome transcription tribulations – all while being cost-effective! 3. **The Perfect Match:** Find out how semantic chunking is revolutionizing job matching in radiology and why getting the context right is non-negotiable. 4. **Quasar's Quantum Leap:** Syed shares the inside scoop on Quasar, a financial chatbot, and the AI magic that makes it tick. 5. 
**The Privacy Paradox:** Delve into the ever-present conflict between powerful AI outcomes and the essential quest to preserve data privacy. > Fun Fact: Syed Asad and his team at Kiwi Tech use a GPU-based approach with GPT 4 for their AI system named Quasar, addressing challenges like temperature control and mitigating hallucinatory responses. > ## Show notes: 00:00 Clients seek engaging multimedia apps over chatbots.\ 06:03 Challenges in multimodal rags: accent, transcription, cost.\ 08:18 AWS credits crucial, but costs skyrocket quickly.\ 10:59 Accurate procedures crucial, Qdrant excels in search.\ 14:46 Embraces AI for monitoring and research.\ 19:47 Seeking insights on ineffective marketing models and solutions.\ 23:40 GPT 4 useful, prompts need tracking tools\ 25:28 Discussing data localization and privacy, favoring Ollama.\ 29:21 Hallucination control and pricing are major concerns.\ 32:47 DeepEval, AI testing, LLM, potential, open source.\ 35:24 Filter for appropriate embedding model based on use case and size. ## More Quotes from Syed: *"Qdrant has the ease of use. I have trained people in my team who specializes with Qdrant, and they were initially using Weaviate and Pinecone.”*\ — Syed Asad *"What's happening nowadays is that the clients or the projects in which I am particularly working on are having more of multimedia or multimodal approach. They want their apps or their LLM apps to be more engaging rather than a mere chatbot.”*\ — Syed Asad *"That is where the accuracy matters the most. And in this case, Qdrant has proved just commendable in giving excellent search results.”*\ — Syed Asad in Advancements in Medical Imaging Search ## Transcript: Demetrios: What is up, good people? How y'all doing? We are back for yet another vector space talks. I'm super excited to be with you today because we're gonna be talking about rags and rag systems. And from the most basic naive rag all the way to the most advanced rag, we've got it covered with our guest of honor, Asad. Where are you at, my man? There he is. What's going on, dude? Syed Asad: Yeah, everything is fine. Demetrios: Excellent, excellent. Well, I know we were talking before we went live, and you are currently in India. It is very late for you, so I appreciate you coming on here and doing this with us. You are also, for those who do not know, a senior engineer for AI and machine learning at Kiwi Tech. Can you break down what Kiwi tech is for us real fast? Syed Asad: Yeah, sure. Absolutely. So Kiwi tech is actually a software development, was actually a software development company focusing on software development, iOS and mobile apps. And right now we are in all focusing more on generative AI, machine learning and computer vision projects. So I am heading the AI part here. So. And we are having loads of projects here with, from basic to advanced rags, from naive to visual rags. So basically I'm doing rag in and out from morning to evening. Demetrios: Yeah, you can't get away from it, huh? Man, that is great. Syed Asad: Everywhere there is rag. Even, even the machine learning part, which was previously done by me, is all now into rags engineered AI. Yeah. Machine learning is just at the background now. Demetrios: Yeah, yeah, yeah. It's funny, I understand the demand for it because people are trying to see where they can get value in their companies with the new generative AI advancements. Syed Asad: Yeah. Demetrios: So I want to talk a lot about advance rags, considering the audience that we have. 
I would love to hear about the visual rags also, because that sounds very exciting. Can we start with the visual rags and what exactly you are doing, what you're working on when it comes to that? Syed Asad: Yeah, absolutely. So initially when I started working, so you all might be aware with the concept of frozen rags, the normal and the basic rag, there is a text retrieval system. You just query your data and all those things. So what is happening nowadays is that the clients or the projects in which I am particularly working on are having more of multimedia or multimodal approach. So that is what is happening. So they want their apps or their LLM apps to be more engaging rather than a mere chatbot. Because. Because if we go on to the natural language or the normal english language, I mean, interacting by means of a video or interacting by means of a photo, like avatar, generation, anything like that. Syed Asad: So that has become more popular or, and is gaining more popularity. And if I talk about, specifically about visual rags. So the projects which I am working on is, say, for example, say, for example, there is a family tree type of app in which. In which you have an account right now. So, so you are recording day videos every day, right? Like whatever you are doing, for example, you are singing a song, you're walking in the park, you are eating anything like that, and you're recording those videos and just uploading them on that app. But what do you want? Like, your future generations can do some sort of query, like what, what was my grandfather like? What was my, my uncle like? Anything my friend like. And it was, it is not straight, restricted to a family. It can be friends also. Syed Asad: Anyway, so. And these are all us based projects, not indian based projects. Okay, so, so you, you go in query and it returns a video about your grandfather who has already died. He has not. You can see him speaking about that particular thing. So it becomes really engaging. So this is something which is called visual rag, which I am working right now on this. Demetrios: I love that use case. So basically it's, I get to be closer to my family that may or may not be here with us right now because the rag can pull writing that they had. It can pull video of other family members talking about it. It can pull videos of when my cousin was born, that type of stuff. Syed Asad: Anything, anything from cousin to family. You can add any numbers of members of your family. You can give access to any number of people who can have after you, after you're not there, like a sort of a nomination or a delegation live up thing. So that is, I mean, actually, it is a very big project, involves multiple transcription models, video transcription models. It also involves actually the databases, and I'm using Qdrant, proud of it. So, in that, so. And Qdrant is working seamlessly in that. So, I mean, at the end there is a vector search, but at the background there is more of more of visual rag, and people want to communicate through videos and photos. Syed Asad: So that is coming into picture more. Demetrios: Well, talk to me about multimodal rag. And I know it's a bit of a hairy situation because if you're trying to do vector search with videos, it can be a little bit more complicated than just vector search with text. Right. So what are some of the unique challenges that you've seen when it comes to multimodal rag? 
Syed Asad: The first challenge dealing with multimodal rags is actually the accent, because it can be varying accent. The problem with the transcription, one of the problems or the challenges which I have faced in this is that lack of proper transcription models, if you are, if you are able to get a proper transcription model, then if that, I want to deploy that model in the cloud, say for example, an AWS cloud. So that AWS cloud is costing heavy on the pockets. So managing infra is one of the part. I mean, I'm talking in a, in a, in a highly scalable production environment. I'm not talking about a research environment in which you can do anything on a collab notebook and just go with that. So whenever it comes to the client part or the delivery part, it becomes more critical. And even there, there were points then that we have to entirely overhaul the entire approach, which was working very fine when we were doing it on the dev environment, like the openais whisper. Syed Asad: We started with that OpenAI's whisper. It worked fine. The transcription was absolutely fantastic. But we couldn't go into the production. Demetrios: Part with that because it was too, the word error rate was too high, or because it was too slow. What made it not allow you to go into production? Syed Asad: It was, the word error rate was also high. It was very slow when it was being deployed on an AWS instance. And the thing is that the costing part, because usually these are startups, or mid startup, if I talk about the business point of view, not the tech point of view. So these companies usually offer these type of services for free, and on the basis of these services they try to raise funding. So they want something which is actually optimized, optimizing their cost as well. So what I personally feel, although AWS is massively scalable, but I don't prefer AWS at all until, unless there are various other options coming out, like salad. I had a call, I had some interactions with Titan machine learning also, but it was also fine. But salad is one of the best as of now. Demetrios: Yeah. Unless you get that free AWS credits from the startup program, it can get very expensive very quickly. And even if you do have the free AWS credits, it still gets very expensive very quickly. So I understand what you're saying is basically it was unusable because of the cost and the inability to figure out, it was more of a product problem if you could figure out how to properly monetize it. But then you had technical problems like word error rate being really high, the speed and latency was just unbearable. I can imagine. So unless somebody makes a query and they're ready to sit around for a few minutes and let that query come back to you, with a video or some documents, whatever it may be. Is that what I'm understanding on this? And again, this is for the family tree use case that you're talking about. Syed Asad: Yes, family tree use case. So what was happening in that, in that case is a video is uploaded, it goes to the admin for an approval actually. So I mean you can, that is where we, they were restricting the costing part as far as the project was concerned. It's because you cannot upload any random videos and they will select that. Just some sort of moderation was also there, as in when the admin approves those videos, that videos goes on to the transcription pipeline. They are transcripted via an, say a video to text model like the open eyes whisper. 
So what was happening initially, all the, all the research was done with Openais, but at the end when deployment came, we have to go with deep Gram and AssemblyAI. That was the place where these models were excelling far better than OpenAI. Syed Asad: And I'm a big advocate of open source models, so also I try to leverage those, but it was not pretty working in production environment. Demetrios: Fascinating. So you had that, that's one of your use cases, right? And that's very much the multimodal rag use case. Are all of your use cases multimodal or did you have, do you have other ones too? Syed Asad: No, all are not multimodal. There are few multimodal, there are few text based on naive rag also. So what, like for example, there is one use case coming which is sort of a job search which is happening. A job search for a radiology, radiology section. I mean a very specialized type of client it is. And they're doing some sort of job search matching the modalities and procedures. And it is sort of a temporary job. Like, like you have two shifts ready, two shifts begin, just some. Syed Asad: So, so that is, that is very critical when somebody is putting their procedures or what in. Like for example, they, they are specializing in x rays in, in some sort of medical procedures and that is matching with the, with the, with the, with the employers requirement. So that is where the accuracy matters the most. Accurate. And in this case, Qdrant has proved just commendable in giving excellent search results. The other way around is that in this case is there were some challenges related to the quality of results also because. So progressing from frozen rack to advanced rag like adopting methods like re ranking, semantic chunking. I have, I have started using semantic chunking. Syed Asad: So it has proved very beneficial as far as the quality of results is concerned. Demetrios: Well, talk to me more about. I'm trying to understand this use case and why a rag is useful for the job matching. You have doctors who have specialties and they understand, all right, they're, maybe it's an orthopedic surgeon who is very good at a certain type of surgery, and then you have different jobs that come online. They need to be matched with those different jobs. And so where does the rag come into play? Because it seems like it could be solved with machine learning as opposed to AI. Syed Asad: Yeah, it could have been solved through machine learning, but the type of modalities that are, the type of, say, the type of jobs which they were posting are too much specialized. So it needed some sort of contextual matching also. So there comes the use case for the rag. In this place, the contextual matching was required. Initially, an approach for machine learning was on the table, but it was done with, it was not working. Demetrios: I get it, I get it. So now talk to me. This is really important that you said accuracy needs to be very high in this use case. How did you make sure that the accuracy was high? Besides the, I think you said chunking, looking at the chunks, looking at how you were doing that, what were some other methods you took to make sure that the accuracy was high? Syed Asad: I mean, as far as the accuracy is concerned. So what I did was that my focus was on the embedding model, actually when I started with what type of embed, choice of embedding model. 
So initially my team started with open source model available readily on hugging face, looking at some sort of leaderboard metrics, some sort of model specializing in medical, say, data, all those things. But even I was curious that the large language, the embedding models which were specializing in medical data, they were also not returning good results and they were mismatching. When, when there was a tabular format, I created a visualization in which the cosine similarity of various models were compared. So all were lagging behind until I went ahead with cohere. Cohere re rankers. They were the best in that case, although they are not trained on that. Syed Asad: And just an API call was required rather than loading that whole model onto the local. Demetrios: Interesting. All right. And so then were you doing certain types, so you had the cohere re ranker that gave you a big up. Were you doing any kind of monitoring of the output also, or evaluation of the output and if so, how? Syed Asad: Yes, for evaluation, for monitoring we readily use arrays AI, because I am a, I'm a huge advocate of Llama index also because it has made everything so easier versus lang chain. I mean, if I talk about my personal preference, not regarding any bias, because I'm not linked with anybody, I'm not promoting it here, but they are having the best thing which I write, I like about Llama index and why I use it, is that anything which is coming into play as far as the new research is going on, like for example, a recent research paper was with the raft retrieval augmented fine tuning, which was released by the Microsoft, and it is right now available on archive. So barely few days after they just implemented it in the library, and you can readily start using it rather than creating your own structure. So, yeah, so it was. So one of my part is that I go through the research papers first, then coming on to a result. So a research based approach is required in actually selecting the models, because every day there is new advancement going on in rags and you cannot figure out what is, what would be fine for you, and you cannot do hit and trial the whole day. Demetrios: Yes, that is a great point. So then if we break down your tech stack, what does it look like? You're using Llama index, you're using arise for the monitoring, you're using Qdrant for your vector database. You have the, you have the coherent re ranker, you are using GPT 3.5. Syed Asad: No, it's GPT 4, not 3.5. Demetrios: You needed to go with GPT 4 because everything else wasn't good enough. Syed Asad: Yes, because one of the context length was one of the most things. But regarding our production, we have been readily using since the last one and a half months. I have been readily using Mixtril. I have been. I have been using because there's one more challenge coming onto the rack, because there's one more I'll give, I'll give you an example of one more use case. It is the I'll name the project also because I'm allowed by my company. It is a big project by the name of Quasar markets. It is a us based company and they are actually creating a financial market type of check chatbot. Syed Asad: Q u a s a r, quasar. You can search it also, and they give you access to various public databases also, and some paid databases also. They have a membership plan. So we are entirely handling the front end backend. I'm not handling the front end and the back end, I'm handling the AI part in that. 
So one of the challenges is the inference, timing, the timing in which the users are getting queries when it is hitting the database. Say for example, there is a database publicly available database called Fred of us government. So when user can select in that app and go and select the Fred database and want to ask some questions regarding that. Syed Asad: So that is in this place there is no vectors, there are no vector databases. It is going without that. So we are following some keyword approach. We are extracting keywords, classifying the queries in simple or complex, then hitting it again to the database, sending it on the live API, getting results. So there are multiple hits going on. So what happened? This all multiple hits which were going on. They reduced the timing and I mean the user experience was being badly affected as the time for the retrieval has gone up and user and if you're going any query and inputting any query it is giving you results in say 1 minute. You wouldn't be waiting for 1 minute for a result. Demetrios: Not at all. Syed Asad: So this is one of the challenge for a GPU based approach. And in, in the background everything was working on GPT 4 even, not 3.5. I mean the costliest. Demetrios: Yeah. Syed Asad: So, so here I started with the LPU approach, the Grok. I mean it's magical. Demetrios: Yeah. Syed Asad: I have been implementing proc since the last many days and it has been magical. The chatbots are running blazingly fast but there are some shortcomings also. You cannot control the temperature if you have lesser control on hallucination. That is one of the challenges which I am facing. So that is why I am not able to deploy Grok into production right now. Because hallucination is one of the concern for the client. Also for anybody who is having, who wants to have a rag on their own data, say, or AI on their own data, they won't, they won't expect you, the LLM, to be creative. So that is one of the challenges. Syed Asad: So what I found that although many of the tools that are available in the market right now day in and day out, there are more researches. But most of the things which are coming up in our feeds or more, I mean they are coming as a sort of a marketing gimmick. They're not working actually on the ground. Demetrios: Tell me, tell me more about that. What other stuff have you tried that's not working? Because I feel that same way. I've seen it and I also have seen what feels like some people, basically they release models for marketing purposes as opposed to actual valuable models going out there. So which ones? I mean Grok, knowing about Grok and where it excels and what some of the downfalls are is really useful. It feels like this idea of temperature being able to control the knob on the temperature and then trying to decrease the hallucinations is something that is fixable in the near future. So maybe it's like months that we'll have to deal with that type of thing for now. But I'd love to hear what other things you've tried that were not like you thought they were going to be when you were scrolling Twitter or LinkedIn. Syed Asad: Should I name them? Demetrios: Please. So we all know we don't have to spend our time on them. Syed Asad: I'll start with OpenAI. The clients don't like GPT 4 to be used in there just because the primary concern is the cost. Secondary concern is the data privacy. And the third is that, I mean, I'm talking from the client's perspective, not the tech stack perspective. Demetrios: Yeah, yeah, yeah. 
Syed Asad: They consider OpenAI as a more of a marketing gimmick. Although GPT 4 gives good results. I'm, I'm aware of that, but the clients are not in favor. But the thing is that I do agree that GPT 4 is still the king of llms right now. So they have no option, no option to get the better, better results. But Mixtral is performing very good as far as the hallucinations are concerned. Just keeping the parameter temperature is equal to zero in a python code does not makes the hallucination go off. It is one of my key takeaways. Syed Asad: I have been bogging my head. Just. I'll give you an example, a chat bot. There is a, there's one of the use case in which is there's a big publishing company. I cannot name that company right now. And they want the entire system of books since the last 2025 years to be just converted into a rack pipeline. And the people got query. The. Syed Asad: The basic problem which I was having is handling a hello. When a user types hello. So when you type in hello, it. Demetrios: Gives you back a book. Syed Asad: It gives you back a book even. It is giving you back sometimes. Hello, I am this, this, this. And then again, some information. What you have written in the prompt, it is giving you everything there. I will answer according to this. I will answer according to this. So, so even if the temperature is zero inside the code, even so that, that included lots of prompt engineering. Syed Asad: So prompt engineering is what I feel is one of the most important trades which will be popular, which is becoming popular. And somebody is having specialization in prompt engineering. I mean, they can control the way how an LLM behaves because it behaves weirdly. Like in this use case, I was using croc and Mixtral. So to control Mixtral in such a way. It was heck lot of work, although it, we made it at the end, but it was heck lot of work in prompt engineering part. Demetrios: And this was, this was Mixtral large. Syed Asad: Mixtral, seven bits, eight by seven bits. Demetrios: Yeah. I mean, yeah, that's the trade off that you have to deal with. And it wasn't fine tuned at all. Syed Asad: No, it was not fine tuned because we were constructing a rack pipeline, not a fine tuned application, because right now, right now, even the customers are not interested in getting a fine tune model because it cost them and they are more interested in a contextual, like a rag contextual pipeline. Demetrios: Yeah, yeah. Makes sense. So basically, this is very useful to think about. I think we all understand and we've all seen that GPT 4 does best if we can. We want to get off of it as soon as possible and see how we can, how far we can go down the line or how far we can go on the difficulty spectrum. Because as soon as you start getting off GPT 4, then you have to look at those kind of issues with like, okay, now it seems to be hallucinating a lot more. How do I figure this out? How can I prompt it? How can I tune my prompts? How can I have a lot of prompt templates or a prompt suite to make sure that things work? And so are you using any tools for keeping track of prompts? I know there's a ton out there. Syed Asad: We initially started with the parameter efficient fine tuning for prompts, but nothing is working 100% interesting. Nothing works 100% it is as far as the prompting is concerned. It goes on to a hit and trial at the end. Huge wastage of time in doing prompt engineering. 
Even if you are following the exact prompt template given on the hugging face given on the model card anywhere, it will, it will behave, it will act, but after some time. Demetrios: Yeah, yeah. Syed Asad: But mixed well. Is performing very good. Very, very good. Mixtral eight by seven bits. That's very good. Demetrios: Awesome. Syed Asad: The summarization part is very strong. It gives you responses at par with GPT 4. Demetrios: Nice. Okay. And you don't have to deal with any of those data concerns that your customers have. Syed Asad: Yeah, I'm coming on to that only. So the next part was the data concern. So they, they want either now or in future the localization of llms. I have been doing it with readily, with Llama, CPP and Ollama. Right now. Ollama is very good. I mean, I'm a huge, I'm a huge fan of Ollama right now, and it is performing very good as far as the localization and data privacy is concerned because, because at the end what you are selling, it makes things, I mean, at the end it is sales. So even if the client is having data of the customers, they want to make their customers assure that the data is safe. Syed Asad: So that is with the localization only. So they want to gradually go into that place. So I want to bring here a few things. To summarize what I said, localization of llms is one of the concern right now is a big market. Second is quantization of models. Demetrios: Oh, interesting. Syed Asad: In quantization of models, whatever. So I perform scalar quantization and binary quantization, both using bits and bytes. I various other techniques also, but the bits and bytes was the best. Scalar quantization is performing better. Binary quantization, I mean the maximum compression or maximum lossy function is there, so it is not, it is, it is giving poor results. Scalar quantization is working very fine. It, it runs on CPU also. It gives you good results because whatever projects which we are having right now or even in the markets also, they are not having huge corpus of data right now, but they will eventually scale. Syed Asad: So they want something right now so that quantization works. So quantization is one of the concerns. People want to dodge aws, they don't want to go to AWS, but it is there. They don't have any other way. So that is why they want aws. Demetrios: And is that because of costs lock in? Syed Asad: Yeah, cost is the main part. Demetrios: Yeah. They understand that things can get out of hand real quick if you're using AWS and you start using different services. I think it's also worth noting that when you're using different services on AWS, it may be a very similar service. But if you're using sagemaker endpoints on AWS, it's like a lot more expensive than just an EKS endpoint. Syed Asad: Minimum cost for a startup, for just the GPU, bare minimum is minimum. $450. Minimum. It's $450 even without just on the testing phases or the development phases, even when it has not gone into production. So that gives a dent to the client also. Demetrios: Wow. Yeah. Yeah. So it's also, and this is even including trying to use like tranium or inferencia and all of that stuff. You know those services? Syed Asad: I know those services, but I've not readily tried those services. I'm right now in the process of trying salad also for inference, and they are very, very cheap right now. Demetrios: Nice. Okay. Yeah, cool. 
So if you could wave your magic wand and have something be different when it comes to your work, your day in, day out, especially because you've been doing a lot of rags, a lot of different kinds of rags, a lot of different use cases with, with rags. Where do you think you would get the biggest uptick in your performance, your ability to just do what you need to do? How could rags be drastically changed? Is it something that you say, oh, the hallucinations. If we didn't have to deal with those, that would make my life so much easier. I didn't have to deal with prompts that would make my life infinitely easier. What are some things like where in five years do you want to see this field be? Syed Asad: Yeah, you figured it right. The hallucination part is one of the concerns, or biggest concerns with the client when it comes to the rag, because what we see on LinkedIn and what we see on places, it gives you a picture that it, it controls hallucination, and it gives you answer that. I don't know anything about this, as mentioned in the context, but it does not really happen when you come to the production. It gives you information like you are developing a rag for a publishing company, and it is giving you. Where is, how is New York like, it gives you information on that also, even if you have control and everything. So that is one of the things which needs to be toned down. As far as the rag is concerned, pricing is the biggest concern right now, because there are very few players in the market as far as the inference is concerned, and they are just dominating the market with their own rates. So this is one of the pain points. Syed Asad: And the. I'll also want to highlight the popular vector databases. There are many Pinecone weaviate, many things. So they are actually, the problem with many of the vector databases is that they work fine. They are scalable. This is common. The problem is that they are not easy to use. So that is why I always use Qdrant. Syed Asad: Not because Qdrant is sponsoring me, not because I am doing a job with Qdrant, but Qdrant is having the ease of use. And it, I have, I have trained people in my team who specialize with Qdrant, and they were initially using Weaviate and Pinecone. I mean, you can do also store vectors in those databases, but it is not especially the, especially the latest development with Pine, sorry, with Qdrant is the fast embed, which they just now released. And it made my work a lot easier by using the ONNX approach rather than a Pytorch based approach, because there was one of the projects in which we were deploying embedding model on an AWS server and it was running continuously. And minimum utilization of ram is 6gb. Even when it is not doing any sort of vector embedding so fast. Embed has so Qdrant is playing a huge role, I should acknowledge them. And one more thing which I would not like to use is LAN chain. Syed Asad: I have been using it. So. So I don't want to use that language because it is not, it did not serve any purpose for me, especially in the production. It serves purpose in the research phase. When you are releasing any notebook, say you have done this and does that. It is not. It does not works well in production, especially for me. Llama index works fine, works well. Demetrios: You haven't played around with anything else, have you? Like Haystack or. Syed Asad: Yeah, haystack. Haystack. I have been playing out around, but haystack is lacking functionalities. It is working well. 
I would say it is working well, but it lacks some functionalities. They need to add more things as compared to Llama index. Demetrios: And of course, the hottest one on the block right now is DSPY. Right? Have you messed around with that at all? Syed Asad: DSPy, actually DSPY. I have messed with DSPY. But the thing is that DSPY is right now, I have not experimented with that in the production thing, just in the research phase. Demetrios: Yeah. Syed Asad: So, and regarding the evaluation part, DeepEval, I heard you might have a DeepEval. So I've been using that. It is because one of the, one of the challenges is the testing for the AI. Also, what responses are large language model is generating the traditional testers or the manual tester software? They don't know, actually. So there's one more vertical which is waiting to be developed, is the testing for AI. It has a huge potential. And DeepEval, the LLM based approach on testing is very, is working fine and is open source also. Demetrios: And that's the DeepEval I haven't heard. Syed Asad: Let me just tell you the exact spelling. It is. Sorry. It is DeepEval. D E E P. Deep eval. I can. Demetrios: Yeah. Okay. I know DeepEval. All right. Yeah, for sure. Okay. Hi. I for some reason was understanding D Eval. Syed Asad: Yeah, actually I was pronouncing it wrong. Demetrios: Nice. So these are some of your favorite, non favorite, and that's very good to know. It is awesome to hear about all of this. Is there anything else that you want to say before we jump off? Anything that you can, any wisdom you can impart on us for your rag systems and how you have learned the hard way? So tell us so we don't have to learn that way. Syed Asad: Just go. Don't go with the marketing. Don't go with the marketing. Do your own research. Hugging face is a good, I mean, just fantastic. The leaderboard, although everything does not work in the leaderboard, also say, for example, I don't, I don't know about today and tomorrow, today and yesterday, but there was a model from Salesforce, the embedding model from Salesforce. It is still topping charts, I think, in the, on the MTEB. MTEB leaderboard for the embedding models. Syed Asad: But you cannot use it in the production. It is way too huge to implement it. So what's the use? Mixed bread AI. The mixed bread AI, they are very light based, lightweight, and they, they are working fine. They're not even on the leaderboard. They were on the leaderboard, but they're right, they might not. When I saw they were ranking on around seven or eight on the leaderboard, MTEB leaderboard, but they were working fine. So even on the leaderboard thing, it does not works. Demetrios: And right now it feels a little bit like, especially when it comes to embedding models, you just kind of go to the leaderboard and you close your eyes and then you pick one of them. Have you figured out a way to better test these or do you just find one and then try and use it everywhere? Syed Asad: No, no, that is not the case. Actually what I do is that I need to find the first, the embedding model. Try to find the embedding model based on my use case. Like if it is an embedding model on a medical use case more. So I try to find that. But the second factor to filter that is, is the size of that embedding model. Because at the end, if I am doing the entire POC or an entire research with that embedding model, what? 
And it has happened to me that we did entire research with embedding models, large language models, and then we have to remove everything just on the production part and it just went in smoke. Everything. Syed Asad: So a lightweight embedding model, especially the one which, which has started working recently, is that the cohere embedding models, and they have given a facility to call those embedding models in a quantized format. So that is also working and fast. Embed is one of the things which is by Qdrant, these two things are working in the production. I'm talking in the production for research. You can do anything. Demetrios: Brilliant, man. Well, this has been great. I really appreciate it. Asad, thank you for coming on here and for anybody else that would like to come on to the vector space talks, just let us know. In the meantime, don't get lost in vector space. We will see you all later. Have a great afternoon. Morning, evening, wherever you are. Demetrios: Asad, you taught me so much, bro. Thank you.
qdrant-landing/content/blog/are-you-vendor-locked.md
--- title: "Are You Vendor Locked?" draft: false slug: are-you-vendor-locked short_description: "Redefining freedom in the age of Generative AI." description: "Redefining freedom in the age of Generative AI. We believe that vendor-dependency comes from hardware, not software. " preview_image: /blog/are-you-vendor-locked/are-you-vendor-locked.png social_preview_image: /blog/are-you-vendor-locked/are-you-vendor-locked.png date: 2024-05-05T00:00:00-08:00 author: David Myriel featured: false tags: - vector search - vendor lock - hybrid cloud --- We all are. > *“There is no use fighting it. Pick a vendor and go all in. Everything else is a mirage.”* The last words of a seasoned IT professional > As long as we are using any product, our solution’s infrastructure will depend on its vendors. Many say that building custom infrastructure will hurt velocity. **Is this true in the age of AI?** It depends on where your company is at. Most startups don’t survive more than five years, so putting too much effort into infrastructure is not the best use of their resources. You first need to survive and demonstrate product viability. **Sometimes you may pick the right vendors and still fail.** ![gpu-costs](/blog/are-you-vendor-locked/gpu-costs.png) We have all started to see the results of the AI hardware bottleneck. Running LLMs is expensive and smaller operations might fold to high costs. How will this affect large enterprises? > If you are an established corporation, being dependent on a specific supplier can make or break a solid business case. For large-scale GenAI solutions, costs are essential to maintenance and dictate the long-term viability of such projects. In the short run, enterprises may afford high costs, but when the prices drop - then it’s time to adjust. > Unfortunately, the long run goal of scalability and flexibility may be countered by vendor lock-in. Shifting operations from one host to another requires expertise and compatibility adjustments. Should businesses become dependent on a single cloud service provider, they open themselves to risks ranging from soaring costs to stifled innovation. **Finding the best vendor is key; but it’s crucial to stay mobile.** ## **Hardware is the New Vendor Lock** > *“We’re so short on GPUs, the less people that use the tool [ChatGPT], the better.”* OpenAI CEO, Sam Altman > When GPU hosting becomes too expensive, large and exciting Gen AI projects lose their luster. If moving clouds becomes too costly or difficulty to implement - you are vendor-locked. This used to be common with software. Now, hardware is the new dependency. *Enterprises have many reasons to stay provider agnostic - but cost is the main one.* [Appenzeller, Bornstein & Casado from Andreessen Horowitz](https://a16z.com/navigating-the-high-cost-of-ai-compute/) point to growing costs of AI compute. It is still a vendor’s market for A100 hourly GPUs, largely due to supply constraints. Furthermore, the price differences between AWS, GCP and Azure are dynamic enough to justify extensive cost-benefit analysis from prospective customers. ![gpu-costs-a16z](/blog/are-you-vendor-locked/gpu-costs-a16z.png) *Source: Andreessen Horowitz* Sure, your competitors can brag about all the features they can access - but are they willing to admit how much their company has lost to convenience and increasing costs? As an enterprise customer, one shouldn’t expect a vendor to stay consistent in this market. ## How Does This Affect Qdrant? As an open source vector database, Qdrant is completely risk-free. 
Furthermore, cost savings are one of the many reasons companies use it to augment the LLM. You won’t need to burn through GPU cash for training or inference. A basic instance with a CPU and RAM can easily manage indexing and retrieval.

> *However, we find that many of our customers want to host Qdrant in the same place as the rest of their infrastructure, such as the LLM or other data engineering infra. This can be for practical reasons, due to corporate security policies, or even global political reasons.*

One day, they might find this infrastructure too costly. Although vector search will remain cheap, their training, inference and embedding costs will grow. Then, they will want to switch vendors.

What could interfere with the switch? Compatibility? Technologies? Lack of expertise?

In terms of features, cloud service standardization is difficult due to varying features between cloud providers. This leads to custom solutions and vendor lock-in, hindering migration and cost reduction efforts, [as seen with Snapchat and Twitter](https://www.businessinsider.com/snap-google-cloud-aws-reducing-costs-2023-2).

## **Fear, Uncertainty and Doubt**

You spend months setting up the infrastructure, but your competitor goes all in with a cheaper alternative and has a competing product out in one month? Does avoiding the lock-in matter if your company will be out of business while you try to set up a fully agnostic platform?

**Problem:** If you're not locked into a vendor, you're locked into managing a much larger team of engineers. The build vs buy tradeoff is real, and it comes with its own set of risks and costs.

**Acknowledgement:** Any organization that processes vast amounts of data with AI needs custom infrastructure and dedicated resources, no matter the industry. Having to work with expensive services such as A100 GPUs justifies the existence of an in-house DevOps crew. Any enterprise that scales up needs to employ vigilant operatives if it wants to manage costs.

> There is no need for **Fear, Uncertainty and Doubt**. Avoiding vendor lock-in is not a futile cause - so let’s dispel the sentiment that all vendors are adversaries. You just need to work with a company that is willing to accommodate flexible use of products.
>

**The Solution is Kubernetes:** Decoupling your infrastructure from a specific cloud host is currently the best way of staying risk-free. Any component of your solution that runs on Kubernetes can integrate seamlessly with other compatible infrastructure. This is how you stay dynamic and move vendors whenever it suits you best.

## **What About Hybrid Cloud?**

The key to freedom is to build your applications and infrastructure to run on any cloud. By leveraging containerization and service abstraction using Kubernetes or Docker, software vendors can exercise good faith in helping their customers transition to other cloud providers.

We designed the architecture of Qdrant Hybrid Cloud to meet the evolving needs of businesses seeking unparalleled flexibility, control, and privacy. This technology integrates Kubernetes clusters from any setting - cloud, on-premises, or edge - into a unified, enterprise-grade managed service.
<p align="center"><iframe width="560" height="315" src="https://www.youtube.com/embed/BF02jULGCfo" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></p> [Qdrant Hybrid Cloud](/hybrid-cloud/) marks a significant advancement in vector databases, offering the most flexible way to implement vector search. You can test out Qdrant Hybrid Cloud today. Sign up or log into your [Qdrant Cloud account](https://cloud.qdrant.io/login) and get started in the **Hybrid Cloud** section. Also, to learn more about Qdrant Hybrid Cloud read our [Official Release Blog](/blog/hybrid-cloud/) or our [Qdrant Hybrid Cloud website](/hybrid-cloud/). For additional technical insights, please read our [documentation](/documentation/hybrid-cloud/). #### Try it out! [![hybrid-cloud-cta.png](/blog/are-you-vendor-locked/hybrid-cloud-cta.png)](https://qdrant.to/cloud)
qdrant-landing/content/blog/azure-marketplace.md
--- draft: false title: "Qdrant is Now Available on Azure Marketplace!" short_description: Discover the power of Qdrant on Azure Marketplace! description: Discover the power of Qdrant on Azure Marketplace! Get started today and streamline your operations with ease. preview_image: /blog/azure-marketplace/azure-marketplace.png date: 2024-03-26T10:30:00Z author: David Myriel featured: true weight: 0 tags: - Qdrant - Azure Marketplace - Enterprise - Vector Database --- We're thrilled to announce that Qdrant is now [officially available on Azure Marketplace](https://azuremarketplace.microsoft.com/en-en/marketplace/apps/qdrantsolutionsgmbh1698769709989.qdrant-db), bringing enterprise-level vector search directly to Azure's vast community of users. This integration marks a significant milestone in our journey to make Qdrant more accessible and convenient for businesses worldwide. > *With the landscape of AI being complex for most customers, Qdrant's ease of use provides an easy approach for customers' implementation of RAG patterns for Generative AI solutions and additional choices in selecting AI components on Azure,* - Tara Walker, Principal Software Engineer at Microsoft. ## Why Azure Marketplace? [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/) is renowned for its robust ecosystem, trusted by millions of users globally. By listing Qdrant on Azure Marketplace, we're not only expanding our reach but also ensuring seamless integration with Azure's suite of tools and services. This collaboration opens up new possibilities for our users, enabling them to leverage the power of Azure alongside the capabilities of Qdrant. > *Enterprises like Bosch can now use the power of Microsoft Azure to host Qdrant, unleashing unparalleled performance and massive-scale vector search. "With Qdrant, we found the missing piece to develop our own provider independent multimodal generative AI platform at enterprise scale,* - Jeremy Teichmann (AI Squad Technical Lead & Generative AI Expert), Daly Singh (AI Squad Lead & Product Owner) - Bosch Digital. ## Key Benefits for Users: - **Rapid Application Development:** Deploying a cluster on Microsoft Azure via the Qdrant Cloud console only takes a few seconds and can scale up as needed, giving developers maximal flexibility for their production deployments. - **Billion Vector Scale:** Seamlessly grow and handle large-scale datasets with billions of vectors by leveraging Qdrant's features like vertical and horizontal scaling or binary quantization with Microsoft Azure's scalable infrastructure. - **Unparalleled Performance:** Qdrant is built to handle scaling challenges, high throughput, low latency, and efficient indexing. Written in Rust makes Qdrant fast and reliable even under high load. See benchmarks. - **Versatile Applications:** From recommendation systems to similarity search, Qdrant's integration with Microsoft Azure provides a versatile tool for a diverse set of AI applications. ## Getting Started: Ready to experience the benefits of Qdrant on Azure Marketplace? Getting started is easy: 1. **Visit the Azure Marketplace**: Navigate to [Qdrant's Marketplace listing](https://azuremarketplace.microsoft.com/en-en/marketplace/apps/qdrantsolutionsgmbh1698769709989.qdrant-db). 2. **Deploy Qdrant**: Follow the simple deployment instructions to set up your instance. 3. **Start Using Qdrant**: Once deployed, start exploring the [features and capabilities of Qdrant](/documentation/concepts/) on Azure. 4. 
**Read Documentation**: Read Qdrant's [Documentation](/documentation/) and build demo apps using [Tutorials](/documentation/tutorials/). ## Join Us on this Exciting Journey: We're incredibly excited about this collaboration with Azure Marketplace and the opportunities it brings for our users. As we continue to innovate and enhance Qdrant, we invite you to join us on this journey towards greater efficiency, scalability, and success. Ready to elevate your business with Qdrant? **Click the banner and get started today!** [![Get Started on Azure Marketplace](cta.png)](https://azuremarketplace.microsoft.com/en-en/marketplace/apps/qdrantsolutionsgmbh1698769709989.qdrant-db) ### About Qdrant: Qdrant is the leading, high-performance, scalable, open-source vector database and search engine, essential for building the next generation of AI/ML applications. Qdrant is able to handle billions of vectors, supports the matching of semantically complex objects, and is implemented in Rust for performance, memory safety, and scale.
qdrant-landing/content/blog/batch-vector-search-with-qdrant.md
---
draft: false
title: Batch vector search with Qdrant
slug: batch-vector-search-with-qdrant
short_description: Introducing efficient batch vector search capabilities, streamlining and optimizing large-scale searches for enhanced performance.
description: "Discover the latest feature designed to streamline and optimize large-scale searches. "
preview_image: /blog/from_cms/andrey.vasnetsov_career_mining_on_the_moon_with_giant_machines_813bc56a-5767-4397-9243-217bea869820.png
date: 2022-09-26T15:39:53.751Z
author: Kacper Łukawski
featured: false
tags:
  - Data Science
  - Vector Database
  - Machine Learning
  - Information Retrieval
---

The latest release of Qdrant 0.10.0 introduced a lot of functionality that simplifies some common tasks. These new possibilities come with slightly modified interfaces of the client library. One of the recently introduced features is the ability to query a collection with multiple vectors at once — a batch search mechanism.

There are plenty of scenarios in which you may need to perform several unrelated searches at the same time. Previously, you could only send separate requests to the Qdrant API on your own. But multiple parallel requests may cause significant network overhead and slow down the process, especially with a poor connection speed. Now, thanks to the new batch search, you don’t need to worry about that. Qdrant handles multiple search requests in a single API call and performs them in the most optimal way.

## An example of using the batch search

We’ve used the official Python client to show how the batch search might be integrated with your application. Since there have been some changes in the interfaces of Qdrant 0.10.0, we’ll go step by step.

## Creating the collection

The first step is to create a collection with a specified configuration — at least the vector size and the distance function used to measure the similarity between vectors.

```python
from qdrant_client import QdrantClient
from qdrant_client.conversions.common_types import VectorParams
from qdrant_client.http.models import Distance

client = QdrantClient("localhost", 6333)
client.recreate_collection(
    collection_name="test_collection",
    vectors_config=VectorParams(size=4, distance=Distance.EUCLID),
)
```

## Loading the vectors

With the collection created, we can put some vectors into it. We’re going to use just a few examples.

```python
vectors = [
    [.1, .0, .0, .0],
    [.0, .1, .0, .0],
    [.0, .0, .1, .0],
    [.0, .0, .0, .1],
    [.1, .0, .1, .0],
    [.0, .1, .0, .1],
    [.1, .1, .0, .0],
    [.0, .0, .1, .1],
    [.1, .1, .1, .1],
]

client.upload_collection(
    collection_name="test_collection",
    vectors=vectors,
)
```

## Batch search in a single request

Now that our collection has some entries, we’re ready to start looking for similar vectors. Let’s say we want to find the distance between a selected vector and the most similar database entry, and at the same time find the two most similar objects for a different vector query. Up until 0.9, we would need to call the API twice.
Now, we can send both requests together:

```python
from qdrant_client.http.models import SearchRequest

results = client.search_batch(
    collection_name="test_collection",
    requests=[
        SearchRequest(
            vector=[0., 0., 2., 0.],
            limit=1,
        ),
        SearchRequest(
            vector=[0., 0., 0., 0.01],
            with_vector=True,
            limit=2,
        ),
    ],
)

# Out: [
#   [ScoredPoint(id=2, version=0, score=1.9,
#                payload=None, vector=None)],
#   [ScoredPoint(id=3, version=0, score=0.09,
#                payload=None, vector=[0.0, 0.0, 0.0, 0.1]),
#    ScoredPoint(id=1, version=0, score=0.10049876,
#                payload=None, vector=[0.0, 0.1, 0.0, 0.0])]
# ]
```

Each instance of the `SearchRequest` class may provide its own search parameters, including not only the vector query but also additional filters. The response will be a list of individual results for each request. If any of the requests is malformed, an exception will be thrown, so either all of them pass or none of them do.

And that’s it! You no longer have to handle multiple requests on your own. Qdrant will do it under the hood.

## Benchmark

The batch search is fairly easy to integrate into your application, but if you prefer to see some numbers before deciding to switch, it’s worth comparing four different options:

1. Querying the database sequentially.
2. Using many threads/processes with individual requests.
3. Utilizing the batch search of Qdrant in a single request.
4. Combining parallel processing and batch search (see the sketch at the end of this post).

In order to do that, we’ll create a richer collection of points, with vectors from the *glove-25-angular* dataset, quite a common choice for ANN comparisons. If you’re interested in more details of how we benchmarked Qdrant, take a [look at the Gist](https://gist.github.com/kacperlukawski/2d12faa49e06a5080f4c35ebcb89a2a3).

## The results

We launched the benchmark 5 times on 10000 test vectors and averaged the results. The numbers presented are the mean values across all attempts:

1. Sequential search: 225.9 seconds
2. Batch search: 208.0 seconds
3. Multiprocessing search (8 processes): 194.2 seconds
4. Multiprocessing batch search (8 processes, batch size 10): 148.9 seconds

The results you achieve on a specific setup will vary depending on the hardware; however, at first glance, batch searching can save you quite a lot of time. Additional improvements could be achieved with a distributed deployment, as Qdrant won’t need to make as many inter-cluster requests. Moreover, if your requests share the same filtering condition, the query optimizer will be able to reuse it across batch requests.

## Summary

Batch search allows packing different queries into a single API call and retrieving the results in a single response. If you ever struggled with sending several consecutive queries to Qdrant, you can easily switch to the new batch search method and simplify your application code. As shown in the benchmarks, it may almost effortlessly speed up your interactions with Qdrant by over 30%, even before considering the saved network overhead and the possible reuse of filters!
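If you would like to try the combined approach from the benchmark — several processes, each sending batched requests — here is a minimal sketch. It assumes the `test_collection` from the examples above and a list of query vectors; the batch size and the number of processes are illustrative choices, not tuned recommendations.

```python
from multiprocessing import Pool

from qdrant_client import QdrantClient
from qdrant_client.http.models import SearchRequest

BATCH_SIZE = 10


def search_one_batch(query_vectors):
    # Each call runs in a worker process, so it creates its own client
    # instead of sharing a connection across processes.
    client = QdrantClient("localhost", 6333)
    return client.search_batch(
        collection_name="test_collection",
        requests=[
            SearchRequest(vector=vector, limit=1) for vector in query_vectors
        ],
    )


def parallel_batch_search(query_vectors, processes=8):
    # Split all queries into batches and distribute them across processes.
    batches = [
        query_vectors[i:i + BATCH_SIZE]
        for i in range(0, len(query_vectors), BATCH_SIZE)
    ]
    with Pool(processes=processes) as pool:
        return pool.map(search_one_batch, batches)
```

Creating the client inside the worker keeps the example safe with the default multiprocessing start methods, at the cost of re-opening a connection per batch.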
qdrant-landing/content/blog/binary-quantization-andrey-vasnetsov-vector-space-talk-001.md
--- draft: false title: Binary Quantization - Andrey Vasnetsov | Vector Space Talks slug: binary-quantization short_description: Andrey Vasnetsov, CTO of Qdrant, discusses the concept of binary quantization and its applications in vector indexing. description: Andrey Vasnetsov, CTO of Qdrant, discusses the concept of binary quantization and its benefits in vector indexing, including the challenges and potential future developments of this technique. preview_image: /blog/from_cms/andrey-vasnetsov-cropped.png date: 2024-01-09T10:30:10.952Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Binary Quantization - Qdrant --- > *"Everything changed when we actually tried binary quantization with OpenAI model.”*\ > -- Andrey Vasnetsov Ever wonder why we need quantization for vector indexes? Andrey Vasnetsov explains the complexities and challenges of searching through proximity graphs. Binary quantization reduces storage size and boosts speed by 30x, but not all models are compatible. Andrey worked as a Machine Learning Engineer most of his career. He prefers practical over theoretical, working demo over arXiv paper. He is currently working as the CTO at Qdrant a Vector Similarity Search Engine, which can be used for semantic search, similarity matching of text, images or even videos, and also recommendations. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/7dPOm3x4rDBwSFkGZuwaMq?si=Ip77WCa_RCCYebeHX6DTMQ), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/4aUq5VnR_VI).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/4aUq5VnR_VI?si=CdT2OL-eQLEFjswr" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Binary-Quantization---Andrey-Vasnetsov--Vector-Space-Talk-001-e2bsa3m/a-aajrqfd" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## Top Takeaways: Discover how oversampling optimizes precision in real-time, enhancing the accuracy without altering stored data structures in our very first episode of the Vector Space Talks by Qdrant, with none other than the CTO of Qdrant, Andrey Vasnetsov. In this episode, Andrey shares invaluable insights into the world of binary quantization and its profound impact on Vector Space technology. 5 Keys to Learning from the Episode: 1. The necessity of quantization and the complex challenges it helps to overcome. 2. The transformative effects of binary quantization on processing speed and storage size reduction. 3. A detailed exploration of oversampling and its real-time precision control in query search. 4. Understanding the simplicity and effectiveness of binary quantization, especially when compared to more intricate quantization methods. 5. The ongoing research and potential impact of binary quantization on future models. > Fun Fact: Binary quantization can deliver processing speeds over 30 times faster than traditional quantization methods, which is a revolutionary advancement in Vector Space technology. 
> ## Show Notes: 00:00 Overview of HNSW vector index.\ 03:57 Efficient storage needed for large vector sizes.\ 07:49 Oversampling controls precision in real-time search.\ 12:21 Comparison of vectors using dot production.\ 15:20 Experimenting with models, OpenAI has compatibility.\ 18:29 Qdrant architecture doesn't support removing original vectors. ## More Quotes from Andrey: *"Inside Qdrant we use HNSW vector Index, which is essentially a proximity graph. You can imagine it as a number of vertices where each vertex is representing one vector and links between those vertices representing nearest neighbors.”*\ -- Andrey Vasnetsov *"The main idea is that we convert the float point elements of the vector into binary representation. So, it's either zero or one, depending if the original element is positive or negative.”*\ -- Andrey Vasnetsov *"We tried most popular open source models, and unfortunately they are not as good compatible with binary quantization as OpenAI.”*\ -- Andrey Vasnetsov ## Transcript: Demetrios: Okay, welcome everyone. This is the first and inaugural vector space talks, and who better to kick it off than the CTO of Qdrant himself? Andrey V. Happy to introduce you and hear all about this binary quantization that you're going to be talking about. I've got some questions for you, and I know there are some questions that came through in the chat. And the funny thing about this is that we recorded it live on Discord yesterday. But the thing about Discord is you cannot trust the recordings on there. And so we only got the audio and we wanted to make this more visual for those of you that are watching on YouTube. Hence here we are recording it again. Demetrios: And so I'll lead us through some questions for you, Andrey. And I have one thing that I ask everyone who is listening to this, and that is if you want to give a talk and you want to showcase either how you're using Qdrant, how you've built a rag, how you have different features or challenges that you've overcome with your AI, landscape or ecosystem or stack that you've set up, please reach out to myself and I will get you on here and we can showcase what you've done and you can give a talk for the vector space talk. So without further ado, let's jump into this, Andrey, we're talking about binary quantization, but let's maybe start a step back. Why do we need any quantization at all? Why not just use original vectors? Andrey Vasnetsov: Yep. Hello, everyone. Hello Demetrios. And it's a good question, and I think in order to answer it, I need to first give a short overview of what is vector index, how it works and what challenges it possess. So, inside Qdrant we use so called HNSW vector Index, which is essentially a proximity graph. You can imagine it as a number of vertices where each vertex is representing one vector and links between those vertices representing nearest neighbors. So in order to search through this graph, what you actually need to do is do a greedy deep depth first search, and you can tune the precision of your search with the beam size of the greedy search process. But this structure of the index actually has its own challenges and first of all, its index building complexity. Andrey Vasnetsov: Inserting one vector into the index is as complicated as searching for one vector in the graph. And the graph structure overall have also its own limitations. It requires a lot of random reads where you can go in any direction. It's not easy to predict which path the graph will take. 
The search process will take in advance. So unlike traditional indexes in traditional databases, like binary trees, like inverted indexes, where we can pretty much serialize everything. In HNSW it's always random reads and it's actually always sequential reads, because you need to go from one vertex to another in a sequential manner. And this actually creates a very strict requirement for underlying storage of vectors. Andrey Vasnetsov: It had to have a very low latency and it have to support this randomly spatter. So basically we can only do it efficiently if we store all the vectors either in very fast solid state disks or if we use actual RAM to store everything. And RAM is not cheap these days, especially considering that the size of vectors increases with each new version of the model. And for example, OpenAI model is already more than 1000 dimensions. So you can imagine one vector is already 6 data, no matter how long your text is, and it's just becoming more and more expensive with the advancements of new models and so on. So in order to actually fight this, in order to compensate for the growth of data requirement, what we propose to do, and what we already did with different other quantization techniques is we actually compress vectors into quantized vector storage, which is usually much more compact for the in memory representation. For example, on one of the previous releases we have scalar quantization and product quantization, which can compress up to 64 times the size of the vector. And we only keep in fast storage these compressed vectors. Andrey Vasnetsov: We retrieve them and get a list of candidates which will later rescore using the original vectors. And the benefit here is this reordering or rescoring process actually doesn't require any kind of sequential or random access to data, because we already know all the IDs we need to rescore, and we can efficiently read it from the disk using asynchronous I O, for example, and even leverage the advantage of very cheap network mounted disks. And that's the main benefit of quantization. Demetrios: I have a few questions off the back of this one, being just a quick thing, and I'm wondering if we can double benefit by using this binary quantization, but also if we're using smaller models that aren't the GBTs, will that help? Andrey Vasnetsov: Right. So not all models are as big as OpenAI, but what we see, the trend in this area, the trend of development of different models, indicates that they will become bigger and bigger over time. Just because we want to store more information inside vectors, we want to have larger context, we want to have more detailed information, more detailed separation and so on. This trend is obvious if like five years ago the usual size of the vector was 100 dimensions now the usual size is 700 dimensions, so it's basically. Demetrios: Preparing for the future while also optimizing for today. Andrey Vasnetsov: Right? Demetrios: Yeah. Okay, so you mentioned on here oversampling. Can you go into that a little bit more and explain to me what that is? Andrey Vasnetsov: Yeah, so oversampling is a special technique we use to control precision of the search in real time, in query time. And the thing is, we can internally retrieve from quantized storage a bit more vectors than we actually need. And when we do rescoring with original vectors, we assign more precise score. And therefore from this overselection, we can pick only those vectors which are actually good for the user. 
And that's how we can basically control accuracy without rebuilding index, without changing any kind of parameters inside the stored data structures. But we can do it real time in just one parameter change of the search query itself. Demetrios: I see, okay, so basically this is the quantization. And now let's dive into the binary quantization and how it works. Andrey Vasnetsov: Right, so binary quantization is actually very simple. The main idea that we convert the float point elements of the vector into binary representation. So it's either zero or one, depending if the original element is positive or negative. And by doing this we can approximate dot production or cosine similarity, whatever metric you use to compare vectors with just hemming distance, and hemming distance is turned to be very simple to compute. It uses only two most optimized CPU instructions ever. It's Pixor and Popcount. Instead of complicated float point subprocessor, you only need those tool. It works with any register you have, and it's very fast. Andrey Vasnetsov: It uses very few CPU cycles to actually produce a result. That's why binary quantization is over 30 times faster than regular product. And it actually solves the problem of complicated index building, because this computation of dot products is the main source of computational requirements for HNSW. Demetrios: So if I'm understanding this correctly, it's basically taking all of these numbers that are on the left, which can be, yes, decimal numbers. Andrey Vasnetsov: On the left you can see original vector and it converts it in binary representation. And of course it does lose a lot of precision in the process. But because first we have very large vector and second, we have oversampling feature, we can compensate for this loss of accuracy and still have benefit in both speed and the size of the storage. Demetrios: So if I'm understanding this correctly, it's basically saying binary quantization on its own probably isn't the best thing that you would want to do. But since you have these other features that will help counterbalance the loss in accuracy. You get the speed from the binary quantization and you get the accuracy from these other features. Andrey Vasnetsov: Right. So the speed boost is so overwhelming that it doesn't really matter how much over sampling is going to be, we will still benefit from that. Demetrios: Yeah. And how much faster is it? You said that, what, over 30 times faster? Andrey Vasnetsov: Over 30 times and some benchmarks is about 40 times faster. Demetrios: Wow. Yeah, that's huge. And so then on the bottom here you have dot product versus hammering distance. And then there's. Yeah, hamming. Sorry, I'm inventing words over here on your slide. Can you explain what's going on there? Andrey Vasnetsov: Right, so dot production is the metric we usually use in comparing a pair of vectors. It's basically the same as cosine similarity, but this normalization on top. So internally, both cosine and dot production actually doing only dot production, that's usual metric we use. And in order to do this operation, we first need to multiply each pair of elements to the same element of the other vector and then add all these multiplications in one number. It's going to be our score instead of this in binary quantization, in binary vector, we do XOR operation and then count number of ones. So basically, Hemming distance is an approximation of dot production in this binary space. Demetrios: Excellent. Okay, so then it looks simple enough, right? 
Why are you implementing it now after much more complicated product quantization? Andrey Vasnetsov: It's actually a great question. And the answer to this is binary questization looked too simple to be true, too good to be true. And we thought like this, we tried different things with open source models that didn't work really well. But everything changed when we actually tried binary quantization with OpenAI model. And it turned out that OpenAI model has very good compatibility with this type of quantization. Unfortunately, not every model have as good compatibility as OpenAI. And to be honest, it's not yet absolutely clear for us what makes models compatible and whatnot. We do know that it correlates with number of dimensions, but it is not the only factor. Andrey Vasnetsov: So there is some secret source which exists and we should find it, which should enable models to be compatible with binary quantization. And I think it's actually a future of this space because the benefits of this hemming distance benefits of binary quantization is so great that it makes sense to incorporate these tricks on the learning process of the model to make them more compatible. Demetrios: Well, you mentioned that OpenAI's model is one that obviously works well with binary quantization, but there are models that don't work well with it, which models have not been very good. Andrey Vasnetsov: So right now we are in the process of experimenting with different models. We tried most popular open source models, and unfortunately they are not as good compatible with binary quantization as OpenAI. We also tried different closed source models, for example Cohere AI, which is on the same level of compatibility with binary quantization as OpenAI, but they actually have much larger dimensionality. So instead of 1500 they have 4000. And it's not yet clear if only dimensionality makes this model compatible. Or there is something else in training process, but there are open source models which are getting close to OpenAI 1000 dimensions, but they are not nearly as good as Openi in terms of this compression compatibility. Demetrios: So let that be something that hopefully the community can help us figure out. Why is it that this works incredibly well with these closed source models, but not with the open source models? Maybe there is something that we're missing there. Andrey Vasnetsov: Not all closed source models are compatible as well, so some of them work similar as open source, but a few works well. Demetrios: Interesting. Okay, so is there a plan to implement other quantization methods, like four bit quantization or even compressing two floats into one bit? Andrey Vasnetsov: Right, so our choice of quantization is mostly defined by available CPU instructions we can apply to perform those computations. In case of binary quantization, it's straightforward and very simple. That's why we like binary quantization so much. In case of, for example, four bit quantization, it is not as clear which operation we should use. It's not yet clear. Would it be efficient to convert into four bits and then apply multiplication of four bits? So this would require additional investigation, and I cannot say that we have immediate plans to do so because still the binary quincellation field is not yet explored on 100% and we think it's a lot more potential with this than currently unlocked. 
Demetrios: Yeah, there's some low hanging fruits still on the binary quantization field, so tackle those first and then move your way over to four bit and all that fun stuff. Last question that I've got for you is can we remove original vectors and only keep quantized ones in order to save disk space? Andrey Vasnetsov: Right? So unfortunately Qdrant architecture is not designed and not expecting this type of behavior for several reasons. First of all, removing of the original vectors will compromise some features like oversampling, like segment building. And actually removing of those original vectors will only be compatible with some types of quantization for example, it won't be compatible with scalar quantization because in this case we won't be able to rebuild index to do maintenance of the system. And in order to maintain, how would you say, consistency of the API, consistency of the engine, we decided to enforce always enforced storing of the original vectors. But the good news is that you can always keep original vectors on just disk storage. It's very cheap. Usually it's ten times or even more times cheaper than RAM, and it already gives you great advantage in terms of price. That's answer excellent. Demetrios: Well man, I think that's about it from this end, and it feels like it's a perfect spot to end it. As I mentioned before, if anyone wants to come and present at our vector space talks, we're going to be doing these, hopefully biweekly, maybe weekly, if we can find enough people. And so this is an open invitation for you, and if you come present, I promise I will send you some swag. That is my promise to you. And if you're listening after the fact and you have any questions, come into discord on the Qdrant. Discord. And ask myself or Andrey any of the questions that you may have as you're listening to this talk about binary quantization. We will catch you all later. Demetrios: See ya, have a great day. Take care.
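If you want to experiment with what Andrey describes in this episode, the snippet below is a minimal sketch of enabling binary quantization with oversampling and rescoring using the Qdrant Python client. The collection name, vector size, and oversampling factor are illustrative placeholders, and the exact parameter names may differ slightly between client versions.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

# Keep the original float vectors on cheap disk storage and the compact
# binary representation in RAM, as discussed in the episode.
client.create_collection(
    collection_name="openai_embeddings",
    vectors_config=models.VectorParams(
        size=1536,
        distance=models.Distance.COSINE,
        on_disk=True,
    ),
    quantization_config=models.BinaryQuantization(
        binary=models.BinaryQuantizationConfig(always_ram=True),
    ),
)

# At query time, oversample candidates from the binary index and rescore
# them with the original vectors to recover precision.
hits = client.search(
    collection_name="openai_embeddings",
    query_vector=[0.0] * 1536,  # replace with a real embedding
    limit=10,
    search_params=models.SearchParams(
        quantization=models.QuantizationSearchParams(
            rescore=True,
            oversampling=2.0,
        ),
    ),
)
```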
qdrant-landing/content/blog/building-a-high-performance-entity-matching-solution-with-qdrant-rishabh-bhardwaj-vector-space-talks-005.md
--- draft: false title: Building a High-Performance Entity Matching Solution with Qdrant - Rishabh Bhardwaj | Vector Space Talks slug: entity-matching-qdrant short_description: Rishabh Bhardwaj, a Data Engineer at HRS Group, discusses building a high-performance hotel matching solution with Qdrant. description: Rishabh Bhardwaj, a Data Engineer at HRS Group, discusses building a high-performance hotel matching solution with Qdrant, addressing data inconsistency, duplication, and real-time processing challenges. preview_image: /blog/from_cms/rishabh-bhardwaj-cropped.png date: 2024-01-09T11:53:56.825Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talk - Entity Matching Solution - Real Time Processing --- > *"When we were building proof of concept for this solution, we initially started with Postgres. But after some experimentation, we realized that it basically does not perform very well in terms of recall and speed... then we came to know that Qdrant performs a lot better as compared to other solutions that existed at the moment.”*\ > -- Rishabh Bhardwaj > How does the HNSW (Hierarchical Navigable Small World) algorithm benefit the solution built by Rishabh? Rhishabh, a Data Engineer at HRS Group, excels in designing, developing, and maintaining data pipelines and infrastructure crucial for data-driven decision-making processes. With extensive experience, Rhishabh brings a profound understanding of data engineering principles and best practices to the role. Proficient in SQL, Python, Airflow, ETL tools, and cloud platforms like AWS and Azure, Rhishabh has a proven track record of delivering high-quality data solutions that align with business needs. Collaborating closely with data analysts, scientists, and stakeholders at HRS Group, Rhishabh ensures the provision of valuable data and insights for informed decision-making. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/3IMIZljXqgYBqt671eaR9b?si=HUV6iwzIRByLLyHmroWTFA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/tDWhMAOyrcE).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/tDWhMAOyrcE?si=-LVPtwvJTyyvaSv3" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Building-a-High-Performance-Entity-Matching-Solution-with-Qdrant---Rishabh-Bhardwaj--Vector-Space-Talks-005-e2cbu7e/a-aaldc8e" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top Takeaways:** Data inconsistency, duplication, and real-time processing challenges? Rishabh Bhardwaj, Data Engineer at HRS Group has the solution! In this episode, Rishabh dives into the nitty-gritty of creating a high-performance hotel matching solution with Qdrant, covering everything from data inconsistency challenges to the speed and accuracy enhancements achieved through the HNSW algorithm. 5 Keys to Learning from the Episode: 1. Discover the importance of data consistency and the challenges it poses when dealing with multiple sources and languages. 2. Learn how Qdrant, an open-source vector database, outperformed other solutions and provided an efficient solution for high-speed matching. 3. Explore the unique modification of the HNSW algorithm in Qdrant and how it optimized the performance of the solution. 4. 
Dive into the crucial role of geofiltering and how it ensures accurate matching based on hotel locations. 5. Gain insights into the considerations surrounding GDPR compliance and the secure handling of hotel data. > Fun Fact: Did you know that Rishabh and his team experimented with multiple transformer models to find the best fit for their entity resolution use case? Ultimately, they found that the Mini LM model struck the perfect balance between speed and accuracy. Talk about a winning combination! > ## Show Notes: 02:24 Data from different sources is inconsistent and complex.\ 05:03 Using Postgres for proof, switched to Qdrant for better results\ 09:16 Geofiltering is crucial for validating our matches.\ 11:46 Insights on performance metrics and benchmarks.\ 16:22 We experimented with different values and found the desired number.\ 19:54 We experimented with different models and found the best one.\ 21:01 API gateway connects multiple clients for entity resolution.\ 24:31 Multiple languages supported, using transcript API for accuracy. ## More Quotes from Rishabh: *"One of the major challenges is the data inconsistency.”*\ -- Rishabh Bhardwaj *"So the only thing of how to know that which model would work for us is to again experiment with the models on our own data sets. But after doing those experiments, we realized that this is the best model that offers the best balance between speed and accuracy cool of the embeddings.”*\ -- Rishabh Bhardwaj *"Qdrant basically optimizes a lot using for the compute resources and this also helped us to scale the whole infrastructure in a really efficient manner.”*\ -- Rishabh Bhardwaj ## Transcript: Demetrios: Hello, fellow travelers in vector space. Dare, I call you astronauts? Today we've got an incredible conversation coming up with Rishabh, and I am happy that you all have joined us. Rishabh, it's great to have you here, man. How you doing? Rishabh Bhardwaj: Thanks for having me, Demetrios. I'm doing really great. Demetrios: Cool. I love hearing that. And I know you are in India. It is a little bit late there, so I appreciate you taking the time to come on the Vector space talks with us today. You've got a lot of stuff that you're going to be talking about. For anybody that does not know you, you are a data engineer at Hrs Group, and you're responsible for designing, developing, and maintaining data pipelines and infrastructure that supports the company. I am excited because today we're going to be talking about building a high performance hotel matching solution with Qdrant. Of course, there's a little kicker there. Demetrios: We want to get into how you did that and how you leveraged Qdrant. Let's talk about it, man. Let's get into it. I want to know give us a quick overview of what exactly this is. I gave the title, but I think you can tell us a little bit more about building this high performance hotel matching solution. Rishabh Bhardwaj: Definitely. So to start with, a brief description about the project. So we have some data in our internal databases, and we ingest a lot of data on a regular basis from different sources. So Hrs is basically a global tech company focused on business travel, and we have one of the most used hotel booking portals in Europe. So one of the major things that is important for customer satisfaction is the content that we provide them on our portals. Right. So the issue or the key challenges that we have is basically with the data itself that we ingest from different sources. 
One of the major challenges is the data inconsistency. Rishabh Bhardwaj: So different sources provide data in different formats, not only in different formats. It comes in multiple languages as well. So almost all the languages being used across Europe and also other parts of the world as well. So, Majorly, the data is coming across 20 different languages, and it makes it really difficult to consolidate and analyze this data. And this inconsistency in data often leads to many errors in data interpretation and decision making as well. Also, there is a challenge of data duplication, so the same piece of information can be represented differently across various sources, which could then again lead to data redundancy. And identifying and resolving these duplicates is again a significant challenge. Then the last challenge I can think about is that this data processing happens in real time. Rishabh Bhardwaj: So we have a constant influx of data from multiple sources, and processing and updating this information in real time is a really daunting task. Yeah. Demetrios: And when you are talking about this data duplication, are you saying things like, it's the same information in French and German? Or is it something like it's the same column, just a different way in like, a table? Rishabh Bhardwaj: Actually, it is both the cases, so the same entities can be coming in multiple languages. And then again, second thing also wow. Demetrios: All right, cool. Well, that sets the scene for us. Now, I feel like you brought some slides along. Feel free to share those whenever you want. I'm going to fire away the first question and ask about this. I'm going to go straight into Qdrant questions and ask you to elaborate on how the unique modification of Qdrant of the HNSW algorithm benefits your solution. So what are you doing there? How are you leveraging that? And how also to add another layer to this question, this ridiculously long question that I'm starting to get myself into, how do you handle geo filtering based on longitude and latitude? So, to summarize my lengthy question, let's just start with the HNSW algorithm. How does that benefit your solution? Rishabh Bhardwaj: Sure. So to begin with, I will give you a little backstory. So when we were building proof of concept for this solution, we initially started with Postgres, because we had some Postgres databases lying around in development environments, and we just wanted to try out and build a proof of concept. So we installed an extension called Pgvector. And at that point of time, it used to have IVF Flat indexing approach. But after some experimentation, we realized that it basically does not perform very well in terms of recall and speed. Basically, if we want to increase the speed, then we would suffer a lot on basis of recall. Then we started looking for native vector databases in the market, and then we saw some benchmarks and we came to know that Qdrant performs a lot better as compared to other solutions that existed at the moment. Rishabh Bhardwaj: And also, it was open source and really easy to host and use. We just needed to deploy a docker image in EC two instance and we can really start using it. Demetrios: Did you guys do your own benchmarks too? Or was that just like, you looked, you saw, you were like, all right, let's give this thing a spin. Rishabh Bhardwaj: So while deciding initially we just looked at the publicly available benchmarks, but later on, when we started using Qdrant, we did our own benchmarks internally. Nice. 
Demetrios: All right. Rishabh Bhardwaj: We just deployed a docker image of Qdrant in one of the EC Two instances and started experimenting with it. Very soon we realized that the HNSW indexing algorithm that it uses to build the indexing for the vectors, it was really efficient. We noticed that as compared to the PG Vector IVF Flat approach, it was around 16 times faster. And it didn't mean that it was not that accurate. It was actually 5% more accurate as compared to the previous results. So hold up. Demetrios: 16 times faster and 5% more accurate. And just so everybody out there listening knows we're not paying you to say this, right? Rishabh Bhardwaj: No, not at all. Demetrios: All right, keep going. I like it. Rishabh Bhardwaj: Yeah. So initially, during the experimentations, we begin with the default values for the HNSW algorithm that Qdrant ships with. And these benchmarks that I just told you about, it was based on those parameters. But as our use cases evolved, we also experimented on multiple values of basically M and EF construct that Qdrant allow us to specify in the indexing algorithm. Demetrios: Right. Rishabh Bhardwaj: So also the other thing is, Qdrant also provides the functionality to specify those parameters while making the search as well. So it does not mean if we build the index initially, we only have to use those specifications. We can again specify them during the search as well. Demetrios: Okay. Rishabh Bhardwaj: Yeah. So some use cases we have requires 100% accuracy. It means we do not need to worry about speed at all in those use cases. But there are some use cases in which speed is really important when we need to match, like, a million scale data set. In those use cases, speed is really important, and we can adjust a little bit on the accuracy part. So, yeah, this configuration that Qdrant provides for indexing really benefited us in our approach. Demetrios: Okay, so then layer into that all the fun with how you're handling geofiltering. Rishabh Bhardwaj: So geofiltering is also a very important feature in our solution because the entities that we are dealing with in our data majorly consist of hotel entities. Right. And hotel entities often comes with the geocordinates. So even if we match it using one of the Embedding models, then we also need to make sure that whatever the model has matched with a certain cosine similarity is also true. So in order to validate that, we use geofiltering, which also comes in stacked with Qdrant. So we provide geocordinate data from our internal databases, and then we match it from what we get from multiple sources as well. And it also has a radius parameter, which we can provide to tune in. How much radius do we want to take in account in order for this to be filterable? Demetrios: Yeah. Makes sense. I would imagine that knowing where the hotel location is is probably a very big piece of the puzzle that you're serving up for people. So as you were doing this, what are some things that came up that were really important? I know you talked about working with Europe. There's a lot of GDPR concerns. Was there, like, privacy considerations that you had to address? Was there security considerations when it comes to handling hotel data? Vector, Embeddings, how did you manage all that stuff? Rishabh Bhardwaj: So GDP compliance? Yes. It does play a very important role in this whole solution. Demetrios: That was meant to be a thumbs up. I don't know what happened there. Keep going. Sorry, I derailed that. Rishabh Bhardwaj: No worries. 
Yes. So GDPR compliance is also one of the key factors that we take in account while building this solution to make sure that nothing goes out of the compliance. We basically deployed Qdrant inside a private EC two instance, and it is also protected by an API key. And also we have built custom authentication workflows using Microsoft Azure SSO. Demetrios: I see. So there are a few things that I also want to ask, but I do want to open it up. There are people that are listening, watching live. If anyone wants to ask any questions in the chat, feel free to throw something in there and I will ask away. In the meantime, while people are typing in what they want to talk to you about, can you talk to us about any insights into the performance metrics? And really, these benchmarks that you did where you saw it was, I think you said, 16 times faster and then 5% more accurate. What did that look like? What benchmarks did you do? How did you benchmark it? All that fun stuff. And what are some things to keep in mind if others out there want to benchmark? And I guess you were just benchmarking it against Pgvector, right? Rishabh Bhardwaj: Yes, we did. Demetrios: Okay, cool. Rishabh Bhardwaj: So for benchmarking, we have some data sets that are already matched to some entities. This was done partially by humans and partially by other algorithms that we use for matching in the past. And it is already consolidated data sets, which we again used for benchmarking purposes. Then the benchmarks that I specified were only against PG vector, and we did not benchmark it any further because the speed and the accuracy that Qdrant provides, I think it is already covering our use case and it is way more faster than we thought the solution could be. So right now we did not benchmark against any other vector database or any other solution. Demetrios: Makes sense just to also get an idea in my head kind of jumping all over the place, so forgive me. The semantic components of the hotel, was it text descriptions or images or a little bit of both? Everything? Rishabh Bhardwaj: Yes. So semantic comes just from the descriptions of the hotels, and right now it does not include the images. But in future use cases, we are also considering using images as well to calculate the semantic similarity between two entities. Demetrios: Nice. Okay, cool. Good. I am a visual guy. You got slides for us too, right? If I'm not mistaken? Do you want to share those or do you want me to keep hitting you with questions? We have something from Brad in the chat and maybe before you share any slides, is there a map visualization as part of the application UI? Can you speak to what you used? Rishabh Bhardwaj: If so, not right now, but this is actually a great idea and we will try to build it as soon as possible. Demetrios: Yeah, it makes sense. Where you have the drag and you can see like within this area, you have X amount of hotels, and these are what they look like, et cetera, et cetera. Rishabh Bhardwaj: Yes, definitely. Demetrios: Awesome. All right, so, yeah, feel free to share any slides you have, otherwise I can hit you with another question in the meantime, which is I'm wondering about the configurations you used for the HNSW index in Qdrant and what were the number of edges per node and the number of neighbors to consider during the index building. All of that fun stuff that goes into the nitty gritty of it. Rishabh Bhardwaj: So should I go with the slide first or should I answer your question first? 
Demetrios: Probably answer the question so we don't get too far off track, and then we can hit up your slides. And the slides, I'm sure, will prompt many other questions from my side and the audience's side. Rishabh Bhardwaj: So, for HNSW configuration, we have specified the value of M, which is, I think, basically the layers as 64, and the value for EF construct is 256. Demetrios: And how did you go about that? Rishabh Bhardwaj: So we did some again, benchmarks based on the single model that we have selected, which is mini LM, L six, V two. I will talk about it later also. But we basically experimented with different values of M and EF construct, and we came to this number that this is the value that we want to go ahead with. And also when I said that in some cases, indexing is not required at all, speed is not required at all, we want to make sure that whatever we are matching is 100% accurate. In that case, the Python client for Qdrant also provides a parameter called exact, and if we specify it as true, then it basically does not use indexing and it makes a full search on the whole vector collection, basically. Demetrios: Okay, so there's something for me that's pretty fascinating there on these different use cases. What else differs in the different ones? Because you have certain needs for speed or accuracy. It seems like those are the main trade offs that you're working with. What differs in the way that you set things up? Rishabh Bhardwaj: So in some cases so there are some internal databases that need to have hotel entities in a very sophisticated manner. It means it should not contain even a single duplicate entity. In those cases, accuracy is the most important thing we look at, and in some cases, for data analytics and consolidation purposes, we want speed more, but the accuracy should not be that much in value. Demetrios: So what does that look like in practice? Because you mentioned okay, when we are looking for the accuracy, we make sure that it comes through all of the different records. Right. Are there any other things in practice that you did differently? Rishabh Bhardwaj: Not really. Nothing I can think of right now. Demetrios: Okay, if anything comes up yeah, I'll remind you, but hit us with the slides, man. What do you got for the visual learners out there? Rishabh Bhardwaj: Sure. So I have an architecture diagram of what the solution looks like right now. So, this is the current architecture that we have in production. So, as I mentioned, we have deployed the Qdrant vector database in an EC Two, private EC Two instance hosted inside a VPC. And then we have some batch jobs running, which basically create Embeddings. And the source data basically first comes into S three buckets into a data lake. We do a little bit of preprocessing data cleaning and then it goes through a batch process of generating the Embeddings using the Mini LM model, mini LML six, V two. And this model is basically hosted in a SageMaker serverless inference endpoint, which allows us to not worry about servers and we can scale it as much as we want. Rishabh Bhardwaj: And it really helps us to build the Embeddings in a really fast manner. Demetrios: Why did you choose that model? Did you go through different models or was it just this one worked well enough and you went with it? Rishabh Bhardwaj: No, actually this was, I think the third or the fourth model that we tried out with. 
So what happens right now is if, let's say we want to perform a task such as sentence similarity and we go to the Internet and we try to find a model, it is really hard to see which model would perform best in our use case. So the only thing of how to know that which model would work for us is to again experiment with the models on our own data sets. So we did a lot of experiments. We used, I think, Mpnet model and a lot of multilingual models as well. But after doing those experiments, we realized that this is the best model that offers the best balance between speed and accuracy cool of the Embeddings. So we have deployed it in a serverless inference endpoint in SageMaker. And once we generate the Embeddings in a glue job, we then store them into the vector database Qdrant. Rishabh Bhardwaj: Then this part here is what goes on in the real time scenario. So, we have multiple clients, basically multiple application that would connect to an API gateway. We have exposed this API gateway in such a way that multiple clients can connect to it and they can use this entity resolution service according to their use cases. And we take in different parameters. Some are mandatory, some are not mandatory, and then they can use it based on their use case. The API gateway is connected to a lambda function which basically performs search on Qdrant vector database using the same Embeddings that can be generated from the same model that we hosted in the serverless inference endpoint. So, yeah, this is how the diagram looks right now. It did not used to look like this sometime back, but we have evolved it, developed it, and now we have got to this point where it is really scalable because most of the infrastructure that we have used here is serverless and it can be scaled up to any number of requests that you want. Demetrios: What did you have before that was the MVP. Rishabh Bhardwaj: So instead of this one, we had a real time inference endpoint which basically limited us to some number of requests that we had preset earlier while deploying the model. So this was one of the bottlenecks and then lambda function was always there, I think this one and also I think in place of this Qdrant vector database, as I mentioned, we had Postgres. So yeah, that was also a limitation because it used to use a lot of compute capacity within the EC two instance as compared to Qdrant. Qdrant basically optimizes a lot using for the compute resources and this also helped us to scale the whole infrastructure in a really efficient manner. Demetrios: Awesome. Cool. This is fascinating. From my side, I love seeing what you've done and how you went about iterating on the architecture and starting off with something that you had up and running and then optimizing it. So this project has been how long has it been in the making and what has the time to market been like that first MVP from zero to one and now it feels like you're going to one to infinity by making it optimized. What's the time frames been here? Rishabh Bhardwaj: I think we started this in the month of May this year. Now it's like five to six months already. So the first working solution that we built was in around one and a half months and then from there onwards we have tried to iterate it to make it better and better. Demetrios: Cool. Very cool. Some great questions come through in the chat. Do you have multiple language support for hotel names? If so, did you see any issues with such mappings? 
Rishabh Bhardwaj: Yes, we do have support for multiple languages and we do not do it using currently using the multilingual models because what we realized is the multilingual models are built on journal sentences and not based it is not trained on entities like names, hotel names and traveler names, et cetera. So when we experimented with the multilingual models it did not provide much satisfactory results. So we used transcript API from Google and it is able to basically translate a lot of languages across that we have across the data and it really gives satisfactory results in terms of entity resolution. Demetrios: Awesome. What other transformers were considered for the evaluation? Rishabh Bhardwaj: The ones I remember from top of my head are Mpnet, then there is a Chinese model called Text to VEC, Shiba something and Bert uncased, if I remember correctly. Yeah, these were some of the models. Demetrios: That we considered and nothing stood out that worked that well or was it just that you had to make trade offs on all of them? Rishabh Bhardwaj: So in terms of accuracy, Mpnet was a little bit better than Mini LM but then again it was a lot slower than the Mini LM model. It was around five times slower than the Mini LM model, so it was not a big trade off to give up with. So we decided to go ahead with Mini LM. Demetrios: Awesome. Well, dude, this has been pretty enlightening. I really appreciate you coming on here and doing this. If anyone else has any questions for you, we'll leave all your information on where to get in touch in the chat. Rishabh, thank you so much. This is super cool. I appreciate you coming on here. Anyone that's listening, if you want to come onto the vector space talks, feel free to reach out to me and I'll make it happen. Demetrios: This is really cool to see the different work that people are doing and how you all are evolving the game, man. I really appreciate this. Rishabh Bhardwaj: Thank you, Demetrios. Thank you for inviting inviting me and have a nice day.
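As a rough illustration of the setup Rishabh describes — an HNSW index built with `m=64` and `ef_construct=256`, geo-filtered candidate search, and an optional exact mode for accuracy-critical matching — here is a minimal sketch using the Qdrant Python client. The collection name, payload field, vector size, and radius are hypothetical placeholders rather than the production configuration.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

# Index configuration mentioned in the talk: m=64, ef_construct=256.
# 384 dimensions matches a MiniLM-style sentence embedding.
client.create_collection(
    collection_name="hotels",
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
    hnsw_config=models.HnswConfigDiff(m=64, ef_construct=256),
)


def match_hotel(description_embedding, lat, lon, radius_m=500.0, exact=False):
    # Search for candidate matches, restricted to hotels whose stored
    # coordinates fall within the given radius (geo-filtering). Setting
    # exact=True bypasses the HNSW index for a full, precise scan.
    return client.search(
        collection_name="hotels",
        query_vector=description_embedding,
        query_filter=models.Filter(
            must=[
                models.FieldCondition(
                    key="location",
                    geo_radius=models.GeoRadius(
                        center=models.GeoPoint(lat=lat, lon=lon),
                        radius=radius_m,
                    ),
                )
            ]
        ),
        search_params=models.SearchParams(exact=exact),
        limit=5,
    )
```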
qdrant-landing/content/blog/building-llm-powered-applications-in-production-hamza-farooq-vector-space-talks-006.md
--- draft: false title: Building LLM Powered Applications in Production - Hamza Farooq | Vector Space Talks slug: llm-complex-search-copilot short_description: Hamza Farooq discusses the future of LLMs, complex search, and copilots. description: Hamza Farooq presents the future of large language models, complex search, and copilot, discussing real-world applications and the challenges of implementing these technologies in production. preview_image: /blog/from_cms/hamza-farooq-cropped.png date: 2024-01-09T12:16:22.760Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - LLM - Vector Database --- > *"There are 10 billion search queries a day, estimated half of them go unanswered. Because people don't actually use search as what we used.”*\ > -- Hamza Farooq > How do you think Hamza's background in machine learning and previous experiences at Google and Walmart Labs have influenced his approach to building LLM-powered applications? Hamza Farooq, an accomplished educator and AI enthusiast, is the founder of Traversaal.ai. His journey is marked by a relentless passion for AI exploration, particularly in building Large Language Models. As an adjunct professor at UCLA Anderson, Hamza shapes the future of AI by teaching cutting-edge technology courses. At Traversaal.ai, he empowers businesses with domain-specific AI solutions, focusing on conversational search and recommendation systems to deliver personalized experiences. With a diverse career spanning academia, industry, and entrepreneurship, Hamza brings a wealth of experience from time at Google. His overarching goal is to bridge the gap between AI innovation and real-world applications, introducing transformative solutions to the market. Hamza eagerly anticipates the dynamic challenges and opportunities in the ever-evolving field of AI and machine learning. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/1oh31JA2XsqzuZhCUQVNN8?si=viPPgxiZR0agFhz1QlimSA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/0N9ozwgmEQM).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/0N9ozwgmEQM?si=4f_MaEUrberT575w" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Building-LLM-Powered-Applications-in-Production---Hamza-Farooq--Vector-Space-Talks-006-e2cuur5/a-aan8b8j" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## Top Takeaways: UX specialist? Your expertise in designing seamless user experiences for GenAI products is guaranteed to be in high demand. Let's elevate the user interface for next-gen technology! In this episode, Hamza presents the future of large language models and complex search, discussing real-world applications and the challenges of implementing these technologies in production. 5 Keys to Learning from the Episode: 1. **Complex Search** - Discover how LLMs are revolutionizing the way we interact with search engines and enhancing the search experience beyond basic queries. 2. **Conversational Search and Personalization** - Explore the potential of conversational search and personalized recommendations using open-source LLMs, bringing a whole new level of user engagement. 3. 
**Challenges and Solutions** - Uncover the downtime challenges faced by LLM services and learn the strategies deployed to mitigate these issues for seamless operation. 4. **Traversal AI's Unique Approach** - Learn how Traversal AI has created a unified platform with a myriad of applications, simplifying the integration of LLMs and domain-specific search. 5. **The Importance of User Experience (UX)** - Understand the unparalleled significance of UX professionals in shaping the future of Gen AI products, and how they play a pivotal role in enhancing user interactions with LLM-powered applications. > Fun Fact: User experience (UX) designers are anticipated to be crucial in the development of AI-powered products as they bridge the gap between user interaction and the technical aspects of the AI systems. > ## Show Notes: 00:00 Teaching GPU AI with open source products.\ 06:40 Complex search leads to conversational search implementation.\ 07:52 Generating personalized travel itineraries with ease.\ 12:02 Maxwell's talk highlights challenges in search technology.\ 16:01 Balancing preferences and trade-offs in travel.\ 17:45 Beta mode, selective, personalized database.\ 22:15 Applications needed: chatbot, knowledge retrieval, recommendation, job matching, copilot\ 23:59 Challenges for UX in developing gen AI. ## More Quotes from Hamza: *"Ux people are going to be more rare who can work on gen AI products than product managers and tech people, because for tech people, they can follow and understand code and they can watch videos, business people, they're learning GPT prompting and so on and so forth. But the UX people, there's literally no teaching guide except for a Chat GPT interface. So this user experience, they are going to be, their worth is going to be inequal in gold.”*\ -- Hamza Farooq *"Usually they don't come to us and say we need a pine cone or we need a quadrant or we need a local llama, they say, this is the problem you're trying to solve. And we are coming from a problem solving initiative from our company is that we got this. You don't have to hire three ML engineers and two NLP research scientists and three people from here for the cost of two people. We can do an entire end to end implementation. Because what we have is 80% product which is built and we can tune the 20% to what you need.”*\ -- Hamza Farooq *"Imagine you're trying to book a hotel, and you also get an article from New York Times that says, this is why this is a great, or a blogger that you follow and it sort of shows up in your. That is the strength that we have been powering, that you don't need to wait or you don't need to depend anymore on just the company's website itself. You can use the entire Internet to come up with an arsenal.”*\ -- Hamza Farooq ## Transcript: Demetrios: Yes, we are live. So what is going on? Hamza, it's great to have you here for this edition of the Vector Space Talks. Let's first start with this. Everybody that is here with us right now, great to have you. Let us know where you're dialing in from in the chat and feel free over the course of the next 20 - 25 minutes to ask any questions as they. Come up in the chat. I'll be monitoring it and maybe jumping. In in case we need to stop. Hunts at any moment. And if you or anybody you know would like to come and give a presentation on our vector space talks, we are very open to that. Reach out to me either on discord or LinkedIn or your preferred method of communication. Maybe it's carrier Pigeon. 
Whatever it may be, I am here and ready to hear your pitch about. What you want to talk about. It's always cool hearing about how people are building with Qdrant or what they. Are building in this space. So without further ado, let's jump into this with my man Hamza. Great to have you here, dude. Hamza Farooq: Thank you for having me. It's an honor. Demetrios: You say that now. Just wait. You don't know me that well. I guess that's the only thing. So let's just say this. You're doing some incredible stuff. You're the founder of Traversaal.ai. You have been building large language models in the past, and you're also a professor at UCLA. You're doing all kinds of stuff. And that is why I think it. Is my honor to have you here with us today. I know you've got all kinds of fun stuff that you want to get. Into, and it's really about building llm powered applications in production. You have some slides for us, I believe. So I'm going to kick it over. To you, let you start rocking, and in case anything comes up, I'll jump. In and stop you from going too. Far down the road. Hamza Farooq: Awesome. Thank you for that. I really like your joke of the carrier pigeon. Is it a geni carrier pigeon with multiple areas and h 100 attached to it? Demetrios: Exactly. Those are the expensive carrier pigeons. That's the premium version. I am not quite that GPU rich yet. Hamza Farooq: Absolutely. All right. I think that's a great segue. I usually tell people that I'm going to teach you all how to be a GPU poor AI gap person, and my job is to basically teach everyone, or the thesis of my organization is also, how can we build powerful solutions, LLM powered solutions by using open source products and open source llms and architectures so that we can stretch the dollar as much as possible. That's been my thesis and I have always pushed for open source because they've done some great job over there and they are coming in close to pretty much at par of what the industry standard is. But I digress. Let's start with my overall presentation. I'm here to talk about the future of search and copilots and just the overall experience which we are looking with llms. Hamza Farooq: So I know you gave a background about me. I am a founder at Traversaal.ai. Previously I was at Google and Walmart Labs. I have quite a few years of experience in machine learning. In fact, my first job in 2007 was working for SaaS and I was implementing trees for identifying fraud, for fraud detection. And I did not know that was honestly data science, but we were implementing that. I have had the experience of teaching at multiple universities and that sort of experience has really helped me do better at what I do, because when you can teach something, you actually truly understand that. All right, so why are we here? Why are we really here? I have a very strong mean game. Hamza Farooq: So we started almost a year ago, Char GPT came into our lives and almost all of a sudden we started using it. And I think in January, February, March, it was just an explosion of usage. And now we know all the different things that have been going on and we've seen peripheration of a lot of startups that have come in this space. Some of them are wrappers, some of them have done a lot, have a lot more motor. There are many, many different ways that we have been using it. I don't think we even know how many ways we can use charge GBT, but most often it's just been text generation, one form or the other. And that is what the focus has been. 
But if we look deeper, the llms that we know, they also can help us with a very important part, something which is called complex search. Hamza Farooq: And complex search is basically when we converse with a search system to actually give a much longer query of how we would talk to a human being. And that is something that has been missing for the longest time in our interfacing with any kind of search engine. Google has always been at the forefront of giving the best form of search for us all. But imagine if you were to look at any other e commerce websites other than Amazon. Imagine you go to Nike.com, you go to gap, you go to Banana Republic. What you see is that their search is really basic and this is an opportunity for a lot of companies to actually create a great search experience for the users with a multi tier engagement model. So you basically make a request. I would like to buy a Nike blue t shirt specially designed for golf with all these features which I need and at a reasonable price point. Hamza Farooq: It shows you a set of results and then from that you can actually converse more to it and say, hey, can you remove five or six or reduce this by a certain degree? That is the power of what we have at hand with complex search. And complex search is becoming quickly a great segue to why we need to implement conversational search. We would need to implement large language models in our ecosystem so that we can understand the context of what users have been asking. So I'll show you a great example of sort of know complex search that TripAdvisor has been. Last week in one of my classes at Stanford, we had head of AI from Trivia Advisor come in and he took us through an experience of a new way of planning your trips. So I'll share this example. So if you go to the website, you can use AI and you can actually select a city. So let's say I'm going to select London for that matter. Hamza Farooq: And I can say I'm going to go for a few days, I do next and I'm going to go with my partner now at the back end. This is just building up a version of complex search and I want to see attractions, great food, hidden gems. I basically just want to see almost everything. And then when I hit submit, the great thing what it does is that it sort of becomes a starting point for something that would have taken me quite a while to put it together, sort of takes all my information and generates an itinerary. Now see what's different about this. It has actual data about places where I can stay, things I can do literally day by day, and it's there for you free of cost generated within 10 seconds. This is an experience that did not exist before. You would have to build this by yourself and what you would usually do is you would go to chat. Hamza Farooq: GPT if you've started this year, you would say seven day itinerary to London and it would identify a few things over here. However, you see it has able to integrate the ability to book, the ability to actually see those restaurants all in one place. That is something that has not been done before. And this is the truest form of taking complex search and putting that into production and sort of create a great experience for the user so that they can understand what they can select. They can highlight and sort of interact with it. Going to pause here. Is there any question or I can help answer anything? Demetrios: No. Demetrios: Man, this is awesome though. I didn't even realize that this is already live, but it's 100% what a travel agent would be doing. 
And now you've got that at your fingertips. Hamza Farooq: So they have built a user experience which takes 10 seconds to build. Now, was it really happening in the back end? You have this macro task that I want to plan a vacation in Paris, I want to plan a vacation to London. And what web agents or auto agents or whatever you want to call them, they are recursively breaking down tasks into subtasks. And when you reach to an individual atomic subtask, it is able to divide it into actions which can be taken. So there's a task decomposition and a task recognition scene that is going on. And from that, for instance, Stripadvisor is able to build something of individual actions. And then it makes one interface for you where you can see everything ready to go. And that's the part that I have always been very interested in. Hamza Farooq: Whenever we go to Amazon or anything for search, we just do one tier search. We basically say, I want to buy a jeans, I want to buy a shirt, I want to buy. It's an atomic thing. Do you want to get a flight? Do you want to get an accommodation? Imagine if you could do, I would like to go to Tokyo or what kind of gear do I need? What kind of overall grade do I need to go to a glacier? And it can identify all the different subtasks that are involved in it and then eventually show you the action. Well, it's all good that it exists, but the biggest thing is that it's actually difficult to build complex search. Google can get away with it. Amazon can get away with it. But if you imagine how do we make sure that it's available to the larger masses? It's available to just about any company for that matter, if they want to build that experience at this point. Hamza Farooq: This is from a talk that was given by Maxwell a couple of months ago. There are 10 billion search queries a day, estimated half of them go unanswered. Because people don't actually use search as what we used. Because again, also because of GPT coming in and the way we have been conversing with our products, our search is getting more coherent, as we would expect it to be. We would talk to a person and it's great for finding a website for more complex questions or tasks. It often falls too short because a lot of companies, 99.99% companies, I think they are just stuck on elasticsearch because it's cheaper to run it, it's easier, it's out of the box, and a lot of companies do not want to spend the money or they don't have the people to help them build that as a product, as an SDK that is available and they can implement and starts working for them. And the biggest thing is that there are complex search is not just one query, it's multiple queries, sessions or deep, which requires deep engagement with search. And what I mean by deep engagement is imagine when you go to Google right now, you put in a search, you can give feedback on your search, but there's nothing that you can do that it can unless you start a new search all over again. Hamza Farooq: In perplexity, you can ask follow up questions, but it's also a bit of a broken experience because you can't really reduce as you would do with Jarvis in Ironman. So imagine there's a human aspect to it. And let me show you another example of a copilot system, let's say. So this is an example of a copilot which we have been working on. Demetrios: There is a question, there's actually two really good questions that came through, so I'm going to stop you before you get into this. Cool copilot Carlos was asking, what about downtime? 
When it comes to these LLM services. Hamza Farooq: I think the downtime. This is the perfect question. If you have a production level system running on Chat GPT, you're going to learn within five days that you can't run a production system on Chat GPT and you need to host it by yourself. And then you start with hugging face and then you realize hugging face can also go down. So you basically go to bedrock, or you go to an AWS or GCP and host your LLM over there. So essentially it's all fun with demos to show oh my God, it works beautifully. But consistently, if you have an SLA that 99.9% uptime, you need to deploy it in an architecture with redundancies so that it's up and running. And the eventual solution is to have dedicated support to it. Hamza Farooq: It could be through Azure open AI, I think, but I think even Azure openi tends to go down with open ais out of it's a little bit. Demetrios: Better, but it's not 100%, that is for sure. Hamza Farooq: Can I just give you an example? Recently we came across a new thing, the token speed. Also varies with the day and with the time of the day. So the token generation. And another thing that we found out that instruct, GPT. Instruct was great, amazing. But it's leaking the data. Even in a rack solution, it's leaking the data. So you have to go back to then 16k. Hamza Farooq: It's really slow. So to generate an answer can take up to three minutes. Demetrios: Yeah. So it's almost this catch 22. What do you prefer, leak data or slow speeds? There's always trade offs, folks. There's always trade offs. So Mike has another question coming through in the chat. And Carlos, thanks for that awesome question Mike is asking, though I presume you could modify the search itinerary with something like, I prefer italian restaurants when possible. And I was thinking about that when it comes to. So to add on to what Mike is saying, it's almost like every single piece of your travel or your itinerary would be prefaced with, oh, I like my flights at night, or I like to sit in the aisle row, and I don't want to pay over x amount, but I'm cool if we go anytime in December, et cetera, et cetera. Demetrios: And then once you get there, I like to go into hotels that are around this part of this city. I think you get what I'm going at, but the preference list for each of these can just get really detailed. And you can preference all of these different searches with what you were talking about. Hamza Farooq: Absolutely. So I think that's a great point. And I will tell you about a company that we have been closely working with. It's called Tripsby or Tripspy AI, and we actually help build them the ecosystem where you can have personalized recommendations with private discovery. It's pretty much everything that you just said. I prefer at this time, I prefer this. I prefer this. And it sort of takes audio and text, and you can converse it through WhatsApp, you can converse it through different ways. Hamza Farooq: They are still in the beta mode, and they go selectively, but literally, they have built this, they have taken a lot more personalization into play, and because the database is all the same, it's Ahmedius who gives out, if I'm pronouncing correct, they give out the database for hotels or restaurants or availability, and then you can build things on top of it. So they have gone ahead and built something, but with more user expectation. 
Imagine you're trying to book a hotel, and you also get an article from New York Times that says, this is why this is a great, or a blogger that you follow and it sort of shows up in your. That is the strength that we have been powering, that you don't need to wait or you don't need to depend anymore on just the company's website itself. You can use the entire Internet to come up with an arsenal. Demetrios: Yeah. Demetrios: And your ability. I think another example of this would be how I love to watch TikTok videos and some of the stuff that pops up on my TikTok feed is like Amazon finds you need to know about, and it's talking about different cool things you can buy on Amazon. If Amazon knew that I was liking that on TikTok, it would probably show it to me next time I'm on Amazon. Hamza Farooq: Yeah, I mean, that's what cookies are, right? Yeah. It's a conspiracy theory that you're talking about a product and it shows up on. Demetrios: Exactly. Well, so, okay. This website that you're showing is absolutely incredible. Carlos had a follow up question before we jump into the next piece, which is around the quality of these open source models and how you deal with that, because it does seem that OpenAI, the GPT-3 four, is still quite a. Hamza Farooq: Bit ahead these days, and that's the silver bullet you have to buy. So what we suggest is have open llms as a backup. So at a point in time, I know it will be subpar, but something subpar might be a little better than breakdown of your complete system. And that's what we have been employed, we have deployed. What we've done is that when we're building large scale products, we basically tend to put an ecosystem behind or a backup behind, which is like, if the token rate is not what we want, if it's not working, it's taking too long, we automatically switch to a redundant version, which is open source. It does perform. Like, for instance, even right now, perplexity is running a lot of things on open source llms now instead of just GPT wrappers. Demetrios: Yeah. Gives you more control. So I didn't want to derail this too much more. I know we're kind of running low on time, so feel free to jump back into it and talk fast. Demetrios: Yeah. Hamza Farooq: So can you give me a time check? How are we doing? Demetrios: Yeah, we've got about six to eight minutes left. Hamza Farooq: Okay, so I'll cover one important thing of why I built my company, Traversaal.ai. This is a great slide to see what everyone is doing everywhere. Everyone is doing so many different things. They're looking into different products for each different thing. You can pick one thing. Imagine the concern with this is that you actually have to think about every single product that you have to pick up because you have to meticulously go through, oh, for this I need this. For this I need this. For this I need this. Hamza Farooq: All what we have done is that we have created one platform which has everything under one roof. And I'll show you with a very simple example. This is our website. We call ourselves one platform with multiple applications. And in this what we have is we have any kind of data format, pretty much that you have any kind of integrations which you need, for example, any applications. And I'll zoom in a little bit. And if you need domain specific search. So basically, if you're looking for Internet search to come in any kind of llms that are in the market, and vector databases, you see Qdrant right here. Hamza Farooq: And what kind of applications that are needed? 
Do you need a chatbot? You need a knowledge retrieval system, you need recommendation system? You need something which is a job matching tool or a copilot. So if you've built a one stop shop where a lot of times when a customer comes in, usually they don't come to us and say we need a pine cone or we need a Qdrant or we need a local llama, they say, this is the problem you're trying to solve. And we are coming from a problem solving initiative from our company is that we got this. You don't have to hire three ML engineers and two NLP research scientists and three people from here for the cost of two people. We can do an entire end to end implementation. Because what we have is 80% product which is built and we can tune the 20% to what you need. And that is such a powerful thing that once they start trusting us, and the best way to have them trust me is they can come to my class on maven, they can come to my class in Stanford, they come to my class in UCLA, or they can. Demetrios: Listen to this podcast and sort of. Hamza Farooq: It adds credibility to what we have been doing with them. Sorry, stop sharing what we have been doing with them and sort of just goes in that direction that we can do these things pretty fast and we tend to update. I want to just cover one slide. At the end of the day, this is the main slide. Right now. All engineers and product managers think of, oh, llms and Gen AI and this and that. I think one thing we don't talk about is UX experience. I just showed you a UX experience on Tripadvisor. Hamza Farooq: It's so easy to explain, right? Like you're like, oh, I know how to use it and you can already find problems with it, which means that they've done a great job thinking about a user experience. I predict one main thing. Ux people are going to be more rare who can work on gen AI products than product managers and tech people, because for tech people, they can follow and understand code and they can watch videos, business people, they're learning GPT prompting and so on and so forth. But the UX people, there's literally no teaching guide except for a Chat GPT interface. So this user experience, they are going to be, their worth is going to be inequal in gold. Not bitcoin, but gold. It's basically because they will have to build user experiences because we can't imagine right now what it will look like. Demetrios: Yeah, I 100% agree with that, actually. Demetrios: I. Demetrios: Imagine you have seen some of the work from Linus Lee from notion and how notion is trying to add in the clicks. Instead of having to always chat with the LLM, you can just point and click and give it things that you want to do. I noticed with the demo that you shared, it was very much that, like, you're highlighting things that you like to do and you're narrowing that search and you're giving it more context without having to type in. I like italian food and I don't like meatballs or whatever it may be. Hamza Farooq: Yes. Demetrios: So that's incredible. Demetrios: This is perfect, man. Demetrios: And so for anyone that wants to continue the conversation with you, you are on LinkedIn. We will leave a link to your LinkedIn. And you're also teaching on Maven. You're teaching in Stanford, UCLA, all this fun stuff. It's been great having you here. Demetrios: I'm very excited and I hope to have you back because it's amazing seeing what you're building and how you're building it. Hamza Farooq: Awesome. I think, again, it's a pleasure and an honor and thank you for letting. 
Demetrios: Me speak about the UX part a. Hamza Farooq: Lot because when you go to your customers, you realize that you need the UX and all those different things. Demetrios: Oh, yeah, it's so true. It is so true. Well, everyone that is out there watching. Demetrios: Us, thank you for joining and we will see you next time. Next week we'll be back for another. Demetrios: Session of these vector talks and I am pleased to have you again. Demetrios: Reach out to me if you want to join us. Demetrios: You want to give a talk? I'll see you all later. Have a good one. Hamza Farooq: Thank you. Bye.
qdrant-landing/content/blog/building-search-rag-for-an-openapi-spec-nick-khami-vector-space-talks.md
---
draft: false
title: Building Search/RAG for an OpenAPI spec - Nick Khami | Vector Space Talks
slug: building-search-rag-open-api
short_description: Nick Khami, Founder and Engineer of Trieve, dives into the world of search and rag apps powered by Open API specs.
description: Nick Khami discusses Trieve's work with Qdrant's Open API spec for creating powerful and simplified search and recommendation systems, touching on real-world applications, technical specifics, and the potential for improved user experiences.
preview_image: /blog/from_cms/nick-khami-cropped.png
date: 2024-04-11T22:23:00.000Z
author: Demetrios Brinkmann
featured: false
tags:
  - Vector Search
  - Retrieval Augmented Generation
  - OpenAPI
  - Trieve
---

> *"It's very, very simple to build search over an Open API specification with a tool like Trieve and Qdrant. I think really there's something to highlight here and how awesome it is to work with a group based system if you're using Qdrant.”*\
— Nick Khami
>

Nick Khami, a seasoned full-stack engineer, has been deeply involved in the development of vector search and RAG applications since the inception of Qdrant v0.11.0 back in October 2022. His expertise and passion for innovation led him to establish Trieve, a company dedicated to facilitating businesses in embracing cutting-edge vector search and RAG technologies.

***Listen to the episode on [Spotify](https://open.spotify.com/episode/1JtL167O2ygirKFVyieQfP?si=R2cN5LQrTR60i-JzEh_m0Q), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/roLpKNTeG5A?si=JkKI7yOFVOVEY4Qv).***

<iframe width="560" height="315" src="https://www.youtube.com/embed/roLpKNTeG5A?si=FViKeSYBT-Xw-gwM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

<iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Building-SearchRAG-for-an-OpenAPI-spec---Nick-Khami--Vector-Space-Talk-022-e2iabfb/a-ab5mb2m" height="102px" width="400px" frameborder="0" scrolling="no"></iframe>

## **Top takeaways:**

Nick showcases Trieve and the advancements in the world of search technology, demonstrating with Qdrant how simple it is to construct precise search functionalities with open API specs for colorful sneaker discoveries, all while unpacking the potential of improved search experiences and analytics for diverse applications like apps for legislation.

We're going deep into the mechanics of search and recommendation applications. Whether you're a developer or just an enthusiast, this episode is guaranteed to give you insight into how to create a seamless search experience using the latest advancements in the industry.

Here are five key takeaways from this episode:

1. **Understand the Open API Spec**: Discover the magic behind Open API specifications and how they can serve your development needs especially when it comes to rest API routes.
2. **Simplify with Trieve and Qdrant**: Nick walks us through a real-world application using Trieve and Qdrant's group-based system, demonstrating how to effortlessly build search capabilities.
3. **Elevate Search Results**: Learn about the power of grouping and recommendations within Qdrant to fine-tune your search results, using the colorful world of sneakers as an example!
4. **Trieve's Infrastructure Made Easy**: Find out how taking advantage of Trieve can make creating datasets, obtaining API keys, and kicking off searches simpler than you ever imagined.
5. **Enhanced Vector Search with Tantivy**: If you're curious about alternative search engines, get the scoop on Tantivy, how it complements Qdrant, and its role within the ecosystem.

> Fun Fact: Trieve was established in 2023 and the name is a play on the word "retrieve”.
>

## Show notes:

00:00 Vector Space Talks intro to Nick Khami.\
06:11 Qdrant system simplifies difficult building process.\
07:09 Using Qdrant to organize and manage content.\
11:43 Creating a group: search results may not group.\
14:23 Searching with Qdrant: utilizing system routes.\
17:00 Trieve wrapped up YC W24 batch.\
21:45 Revolutionizing company search.\
23:30 Next update: user tracking, analytics, and cross-encoders.\
27:39 Qdrant supported sparse vectors.\
30:09 Final questions and wrap up.

## More Quotes from Nick:

*"You can get this RAG, this search and the data upload done in a span of maybe 10-15 minutes, which is really cool and something that we were only really possible to build at Trieve, thanks to what the amazing team at Qdrant has been able to create.”*\
— Nick Khami

*"Qdrant also offers recommendations for groups, so like, which is really cool... Not only can you search groups, you can also recommend groups, which is, I think, awesome. But yeah, you can upload all your data, you go to the search UI, you can search it, you can test out how recommendations are working [and] in a lot of cases too, you can fix problems in your search.”*\
— Nick Khami

*"Typically when you do recommendations, you take the results that you want to base recommendations off of and you build like an average vector that you then use to search. Qdrant offers a more evolved recommendation pattern now where you can traverse the graph looking at the positive point similarity, then also the negative similarity.”*\
— Nick Khami

## Transcript:

Demetrios: What is happening? Everyone? Welcome back to another edition of the Vector Space Talks. I am super excited to be here with you today. As always, we've got a very special guest. We've got Nick, the founder and engineer, founder slash engineer of Trieve. And as you know, we like to start these sessions off with a little recommendations of what you can hopefully be doing to make life better. And so when Sabrina's here, I will kick it over to her and ask her for her latest recommendation of what she's been doing. But she's traveling right now, so I'm just going to give you mine on some things that I've been listening to and I have been enjoying. For those who want some nice music, I would recommend an oldie, but a goodie.

Demetrios: It is from the incredible band that is not coming to me right now, but it's called this must be the place from the. Actually, it's from the Talking Heads. Definitely recommend that one as a fun way to get the day started. We will throw a link to that music in the chat, but we're not going to be just talking about good music recommendations. Today we are going to get Nick on the stage to talk all about search and rags. And Nick is in a very interesting position because he's been using vector search from Qdrant since 2022. Let's bring this man on the stage and see what he's got to say. What's up, dude?

Nick Khami: Hey.

Demetrios: Hey.

Nick Khami: Nice to meet you.

Demetrios: How you doing?

Nick Khami: Doing great.

Demetrios: Well, it's great to have you.

Nick Khami: Yeah, yeah.
Nice sunny day. It looks like it's going to be here in San Francisco, which is good. It was raining like all of January, but finally got some good sunny days going, which is awesome. Demetrios: Well, it is awesome that you are waking up early for us and you're doing this. I appreciate it coming all the way from San Francisco and talking to us today all about search and recommender system. Sorry, rag apps. I just have in my mind, whenever I say search, I automatically connect recommender because it is kind of similar, but not in this case. You're going to be talking about search and rag apps and specifically around the Open API spec. I know you've got a talk set up for. For us. Do you want to kick it off? And then I'll be monitoring the chat. Demetrios: So if anybody has any questions, throw it in the chat and I'll pop up on screen again and ask away. Nick Khami: Yeah, yeah, I'd love to. I'll go ahead and get this show on the road. Okay. So I guess the first thing I'll talk about is what exactly an Open API spec is. This is Qdrants open API spec. I feel like it's a good topical example for vector space talk. You can see here, Qdrant offers a bunch of different rest API routes on their API. Each one of these exists within this big JSON file called the Open API specification. Nick Khami: There's a lot of projects that have an Open API specification. Stripe has one, I think sentry has one. It's kind of like a de facto way of documenting your API. Demetrios: Can you make your screen just a little or the font just a little bit bigger? Maybe zoom in? Nick Khami: I think I can, yeah. Demetrios: All right, awesome. So that my eyesight is not there. Oh, that is brilliant. That is awesome. Nick Khami: Okay, we doing good here? All right, awesome. Yeah. Hopefully this is more readable for everyone, but yeah. So this is an open API specification. If you look at it inside of a JSON file, it looks a little bit like this. And if you go to the top, I can show the structure. There's a list or there's an object called paths that contains all the different API paths for the API. And then there's another object called security, which explains the authentication scheme. Nick Khami: And you have a nice info section I'm going to ignore, kind of like these two, they're not all that important. And then you have this list of like tags, which is really cool because this is kind of how things get organized. If we go back, you can see these kind of exist as tags. So these items here will be your tags in the Open API specification. One thing that's kind of like interesting is it would be cool if it was relatively trivial to build search over an OpenAPI specification, because if you don't know what you're looking for, then this search bar does not always work great. For example, if you type in search within groups. Oh, this one actually works pretty good. Wow, this seems like an enhanced Open API specification search bar. Nick Khami: I should have made sure that I checked it before going. So this is quite good. Our search bar for tree in example, does not actually, oh, it does have the same search, but I was really interested in, I guess, explaining how you could enhance this or hook it up to vector search in order to do rag audit. It's what I want to highlight here. Qdrant has a really interesting feature called groups. You can search over a group of points at one time and kind of return results in a group oriented way instead of only searching for a singular route. 
And for an Open API specification, that's very interesting. Because it means that you can search for a tag while looking at each tag's individual paths. Nick Khami: It is like a, it's something that's very difficult to build without a system like Qdrant and kind of like one of the primary, I think, feature offerings of it compared to PG vector or maybe like brute force with face or yousearch or something. And the goal that I kind of had was to figure out which endpoint was going to be most relevant for what I was trying to do. In a lot of cases with particularly Qdrants, Open API spec in this example. To go about doing that, I used a scripting runtime for JavaScript called Bun. I'm a big fan of it. It tends to work quite well. It's very performant and kind of easy to work with. I start off here by loading up the Qdrant Open API spec from JSON and then I import some things that exist inside of tree. Nick Khami: Trieve uses Qdrant under the hood to offer a lot of its features, and that's kind of how I'm going to go about doing this here. So I import some stuff from the tree SDK client package, instantiate a couple of environment variables, set up my configuration for the tree API, and now this is where it gets interesting. I pull the tags from the Qdrant Open API JSON specification, which is this array here, and then I iterate over each tag and I check if I've already created the group. If I have, then I do nothing. But if I have it, then I go ahead and I create a group. For each tag, I'm creating these groups so that way I can insert each path into its relevant groups whenever I create them as individual points. Okay, so I finished creating all of the groups, and now for like the next part, I iterate over the paths, which are the individual API routes. For each path I pull the tags that it has, the summary, the description and the API method. Nick Khami: So post, get put, delete, et cetera, and I then create the point. In Trieve world, we call each point a chunk, kind of using I guess like rag terminology. For each individual path I create the chunk and by including its tags in this group tracking ids request body key, it will automatically get added to its relevant groups. I have some try catches here, but that's really the whole script. It's very, very simple to build search over an Open API specification with a tool like Trieve and Qdrant. I think really there's something to highlight here and how awesome it is to work with a group based system. If you're using Qdrant. If you can think about an e commerce store, sometimes you have multiple colorways of an item. Nick Khami: You'll have a red version of the sneaker, a white version, a blue version, et cetera. And when someone performs a search, you not only want to find the relevant shoe, you want to find the relevant colorway of that shoe. And groups allow you to do this within Qdrant because you can place each colorway as an individual point. Or again, in tree world, chunk into a given group, and then when someone searches, they're going to get the relevant colorway at the top of the given group. It's really nice, really cool. You can see running this is very simple. If I want to update the entire data set by running this again, I can, and this is just going to go ahead and create all the relevant chunks for every route that Qdrant offers. If you guys who are watching or interested in replicating this experiment, I created an open source GitHub repo. 
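For reference, the pattern described in this part of the talk (one point, or chunk, per OpenAPI path, with the path's tag stored alongside it so that results can be grouped per tag) can be sketched directly against Qdrant's Python client. This is a hedged illustration rather than Nick's actual Bun/TypeScript script, and the `embed()` helper, collection name, and spec filename are hypothetical placeholders:

```python
import json
import uuid

from qdrant_client import QdrantClient, models


def embed(text: str) -> list[float]:
    """Hypothetical placeholder for whichever embedding model you use (384-dim here)."""
    raise NotImplementedError


client = QdrantClient(url="http://localhost:6333")

# One collection holding every OpenAPI path as a point.
client.recreate_collection(
    collection_name="openapi_paths",
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
)

spec = json.load(open("qdrant-openapi.json"))

points = []
for path, methods in spec["paths"].items():
    for method, op in methods.items():
        if not isinstance(op, dict) or "tags" not in op:
            continue  # skip path-level keys such as "parameters"
        text = f"{method.upper()} {path}\n{op.get('summary', '')}\n{op.get('description', '')}"
        points.append(
            models.PointStruct(
                id=str(uuid.uuid4()),
                vector=embed(text),
                # The tag plays the role of the "group" the route belongs to.
                payload={"path": path, "method": method, "tag": op["tags"][0]},
            )
        )

client.upsert(collection_name="openapi_paths", points=points)

# Group-oriented search: the most relevant tags, each with its best-matching routes.
groups = client.search_groups(
    collection_name="openapi_paths",
    query_vector=embed("how do I search points via group?"),
    group_by="tag",
    limit=3,       # number of tags (groups) to return
    group_size=3,  # best routes returned per tag
)
```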
Nick Khami: We're going to zoom in here that you can reference@GitHub.com/devflowinc/OpenAPI/search. You can follow the instructions in the readme to replicate the whole thing. Okay, but I uploaded all the data. Let's see how this works from a UI perspective. Yeah. Trieve bundles in a really nice UI for searching after you add all of your data. So if I go home here, you can see that I'm using the Qdrant Open API spec dataset. And the organization here is like the email I use. Nick Khami: Nick.K@OpenAPI one of the nice things about Trieve, kind of like me on just the simplicity of adding data is we use Qdrant's multi tenancy feature to offer the ability to have multiple datasets within a given organization. So you can have, I have the Open API organization. You can create additional datasets with different embedding models to test with and experiment when it comes to your search. Okay. But not going to go through all those features today, I kind of want to highlight this Open API search that we just finished building. So I guess to compare and contrast, I'm going to use the exact same query that I used before, also going to zoom in. Okay. Nick Khami: And that one would be like what we just did, right? So how do I maybe, how do I create a group? This isn't a Gen AI rag search. This is just a generic, this is just a generic search. Okay, so for how do I create a group? We're going to get all these top level results. In this case, we're not doing a group oriented search. We're just returning relevant chunks. Sometimes, or a lot of times I think that people will want to have a more group oriented search where the results are grouped by tag. So here I'm going to see that the most relevant endpoint or the most relevant tag within Qdrant's Open API spec is in theory collections, and within collections it thinks that these are the top three routes that are relevant. Recommend point groups discover bash points recommend bash points none of these are quite what I wanted, which is how do I create a group? But it's okay for cluster, you can see create shard key delete. Nick Khami: So for cluster, this is kind of interesting. It thinks cluster is relevant, likely because a cluster is a kind of group and it matches to a large extent on the query. Then we also have points which it keys in on the shard system and the snapshotting system. When the next version gets released, we'll have rolling snapshots in Qdrant, which is very exciting. If anyone else is excited about that feature. I certainly am. Then it pulls the metrics. For another thing that might be a little bit easier for the search to work on. Nick Khami: You can type in how do I search points via group? And now it kind of is going to key in on what I would say is a better result. And you can see here we have very nice sub sentence highlighting on the request. It's bolding the sentence of the response that it thinks is the most relevant, which in this case are the second two paragraphs. Yep, the description and summary of what the request does. Another convenient thing about tree is in our default search UI, you can include links out to your resources. If I click this link, I'm going to immediately get to the correct place within the Qdrant redox specification. That's the entire search experience. For the Jedi side of this, I did a lot less optimization, but we can experiment and see how it goes. Nick Khami: I'm going to zoom in again, guys. 
Okay, so let's say I want to make a new rag chat and I'm going to ask here, how would I search over points in a group oriented way with Qdrant? And it's going to go ahead and do a search query for me on my behalf again, powered by the wonder of Qdrant. And once it does this search query, I'm able to get citations and and see what the model thinks. The model is a pretty good job with the first response, and it says that to search for points and group oriented wave Qdrant, I can utilize the routes and endpoints provided by the system and the ones that I'm going to want to use first is points search groups. If I click doc one here and I look at the route, this is actually correct. Conveniently, you're able to open the link in the. Oh, well, okay, this env is wrong, but conveniently what this is supposed to do, if I paste it and fix the incorrect portion of the system. Changing chat to search is you can load the individual chunk of the search UI and read it here, and then you can update it to include document expansion, change the actual copy of what was indexed out, et cetera. Nick Khami: It's like a really convenient way to merchandise and enhance your data set without having to write a lot of code. Yeah, and it'll continue writing its answer. I'm not going to go through the whole thing, but this really encapsulates what I wanted to show. This is incredibly simple to do. You can get this RAG, this search and the data upload done in a span of maybe 10-15 minutes, which is really cool and something that we were only really possible to build at Trieve, thanks to what the amazing team at Qdrant has been able to create. And yeah, guys, hopefully that was cool. Demetrios: Excellent. So I've got some questions. Woo the infinite spinning field. So I want to know about Trieve and I want to jump into what you all are doing there. And then I want to jump in a little bit about the evolution that you've seen of Qdrant over the years, because you've been using it for a while. But first, can we get a bit of an idea on what you're doing and how you're dedicating yourself to creating what you're creating? Nick Khami: Yeah. At Trieve, we just wrapped up the Y Combinator W 24 batch and our fundogram, which is like cool. It took us like a year. So Dens and I started Trieve in January of 2023, and we kind of kept building and building and building, and in the process, we started out trying to build an app for you to have like AI powered arguments at work. It wasn't the best of ideas. That's kind of why we started using Qdrant originally in the process of building that, we thought it was really hard to get the amazing next gen search that products like Qdrant offer, because for a typical team, they have to run a Docker compose file on the local machine, add the Qdrant service, that docker compose docker compose up D stand up Qdrant, set an env, download the Qdrant SDK. All these things get very, very difficult after you index all of your data, you then have to create a UI to view it, because if you don't do that. It can be very hard to judge performance. Nick Khami: I mean, you can always make these benchmarks, but search and recommendations are kind of like a heuristic thing. It's like you can always have a benchmark, but the data is dynamic, it changes and you really like. In what we were experiencing at the time, we really needed a way to quickly gauge the system was doing. 
We gave up on our rag AI application argumentation app and pivoted to trying to build infrastructure for other people to benefit from the high quality search that is offered by splayed for sparse, or like sparse encode. I mean, elastics, LSR models, really cool. There's all the dense embedding vector models and we wanted to offer a managed suite of infrastructure for building on this kind of stuff. That's kind of what tree is. So like, with tree you go to. Nick Khami: It's more of like a managed experience. You go to the dashboard, you make an account, you create the data set, you get an API key and the data set id, you go to your little script and mine for the Open API specs, 80 lines, you add all your data and then boom, bam, bing bop. You can just start searching and you can. We offer recommendations as well. Maybe I should have shown those in my demo, like, you can open an individual path and get recommendations for similar. Demetrios: There were recommendations, so I wasn't too far off the mark. See, search and recommendation, they just, they occupy the same spot in my head. Nick Khami: And Qdrant also offers recommendations for groups, guys. So like, which is really cool. Like you can, you can, like, not only can you search groups, you can also recommend groups, which is, I think, awesome. But yeah, you can upload all your data, you go to the search UI, you can search it, you can test out how recommendations are working in a lot of cases too. You can fix problems in your search. A good example of this is we built search for Y comb later companies so they could make it a lot better. Algolia is on an older search algorithm that doesn't offer semantic capabilities. And that means that you go to the Y combinator search companies bar, you type in which company offers short term rentals and you don't get Airbnb. Nick Khami: But with like Trieve it is. It is. But with tree, like, the magic of it is that even, believe it or not, there's a bunch of YC companies to do short term rentals and Airbnb does not appear first naturally. So with tree like, we offer a merchandising UI where you put that query in, you see Airbnb ranks a little bit lower than you want. You can immediately adjust the text that you indexed and even add like a re ranking weight so that appears higher in results. Do it again and it works. And you can also experiment and play with the rag. I think rag is kind of a third class citizen in our API. Nick Khami: It turns out search recommendations are a lot more popular with our customers and users. But yeah, like tree, I would say like to encapsulate it. Trieve is an all in one infrastructure suite for teams building search recommendations in Rag. And we bundle the power of databases like Qdrant and next gen search ML AI models with uis for fine tuning ranking of results. Demetrios: Dude, the reason I love this is because you can do so much with like well done search that is so valuable for so many companies and it's overlooked as like a solved problem, I think, for a lot of people, but it's not, and it's not that easy as you just explained. Nick Khami: Yeah, I mean, like we're fired up about it. I mean, like, even if you guys go to like YC.Trieve.AI, that's like the Y combinator company search and you can a b test it against like the older style of search that Algolia offers or like elasticsearch offers. And like, it's, to me it's magical. It's like it's an absolute like work of human ingenuity and amazingness that you can type in, which company should I get an airbed at? 
And it finds Airbnb despite like none of the keywords matching up. And I'm afraid right now our brains are trained to go to Google. And on Google search bar you can ask a question, you can type in abstract ideas and concepts and it works. But anytime we go to an e commerce search bar or oh, they're so. Demetrios: Bad, they're so bad. Everybody's had that experience too, where I don't even search. Like, I just am like, well, all right, or I'll go to Google and search specifically on Google for that website, you know, and like put in parentheses. Nick Khami: We'Re just excited about that. Like we want to, we're trying to make it a lot like the goal of tree is to make it a lot easier to power these search experiences, the latest gentech, and help fix this problem. Like, especially if AI continues to get better, people are going to become more and more used to like things working and not having to hack around, faceting and filtering for it to work. And yeah, we're just excited to make that easier for companies to work on and build. Demetrios: So there's one question coming through in the chat asking where we can get actual search metrics. Nick Khami: Yeah, so that's like the next thing that we're planning to add. Basically, like right now at tree, we don't track your users as queries. The next thing that we're like building at tree is a system for doing that. You're going to be able to analyze all of the searches that have been used on your data set within that search merchandising UI, or maybe a new UI, and adjust your rankings spot fix things the same way you can now, but with the power of the analytics. The other thing we're going to be offering soon is dynamically tunable cross encoders. Cross encoders are this magic neural net that can zip together full text and semantic results into a new ranked order. And they're underutilized, but they're also hard to adjust over time. We're going to be offering API endpoints for uploading, for doing your click through rates on the search results, and then dynamically on a batched timer tuning across encoder to adjust ranking. Nick Khami: This should be coming out in the next two to three weeks. But yeah, we're just now getting to the analytics hurdle. We also just got to the speed hurdle. So things are fast now. As you guys hopefully saw in the demo, it's sub 50 milliseconds for most queries. P 95 is like 80 milliseconds, which is pretty cool thanks to Qdrant, by the way. Nice Qdrant is huge, I mean for powering all of that. But yeah, analytics will be coming next two or three weeks. Nick Khami: We're excited about it. Demetrios: So there's another question coming through in the chat and they're asking, I wonder if llms can suggest graph QL queries based on schema as it's not so tied to endpoints. Nick Khami: I think they could in the system that we built for this case, I didn't actually use the response body. If you guys go to devflowinc Open API search on GitHub, you guys can make your own example where you fix that. In the response query of the Open API JSON spec, you have the structure. If you embed that inside of the chunk as another paragraph tag and then go back to doing rag, it probably can do that. I see no reason why I wouldn't be able to. Demetrios: I just dropped the link in the chat for anybody that is interested. And now let's talk a little bit for these next couple minutes about the journey of using Qdrant. You said you've been using it since 2022. Things have evolved a ton with the product over these years. 
Like, what have you seen what's been the most value add that you've had since starting?

Nick Khami: I mean, there's so many, like, okay, the one that I have highlighted in my head that I wanted to talk about was, I remember in May of 2023, there's a GitHub issue with an Algora bounty for API keys. I remember Dens and I, we'd already been using it for a while and we knew there was no API key thing. There's no API key for it. We were always joking about it. We were like, oh, we're so early. There's not even an API key for our database. You had to have access permissions in your VPC or sub routing to have it work securely. And I'm not sure it's like the highest.

Nick Khami: I'll talk about some other things where higher value add, but I just remember, like, how cool that was. Yeah, yeah, yeah.

Demetrios: State of the nation. When you found out about it and.

Nick Khami: It was so hyped, like, the API key had added, we were like, wow, this is awesome. It was kind of like a simple thing, but like, for us it was like, oh, whoa, this is. We're so much more comfortable in security now. But dude, Qdrant added so many cool things. Like a couple of things that I think I'd probably highlight are the group system. That was really awesome when that got added. I mean, I think it's one of my favorite features. Then after that, the sparse vector support in a recent version was huge.

Nick Khami: We had a whole crazy subsystem with Tantivy. If anyone watching knows the crate Tantivy, it's like a full text. Uh, it's like a Lucene alternative written in Rust. Um, and we like, built this whole crazy subsystem and then Qdrant, like, supported the sparse vectors and we were like, oh my God, we should have probably like, worked with them on the sparse vector thing we didn't even know you guys wanted to do, uh, because like, we spent all this time building it and probably could have like, helped out that PR. We felt bad, um, because that was really nice. When that got added, the performance fixes for that were also really cool. Some of the other things that, like, Qdrant added while we've been using it that were really awesome. Oh, the multiple recommendation modes, I think I forget what they're both called, but there's, it's also like insane for people, like, out there watching, like, try Qdrant for sure, it's so, so, so good compared to like a lot of what you can do in a PG vector.

Nick Khami: There's like, this recommendation feature is awesome. Typically when you do recommendations, you take the results that you want to base recommendations off of and you build like an average vector that you then use to search. Qdrant offers a more evolved recommendation pattern now where you can traverse the graph looking at the positive point similarity, then also the negative similarity. And if the similarity of the negative points is higher than that of the positive points, it'll ignore that edge in recommendations. And for us at least, like with our customers, this improved their quality of recommendations a lot when they use negative samples. And we didn't even find out about that. It was in the version release notes and we didn't think about it. And like a month or two later we had a customer that was like communicating that they wanted higher quality recommendations.

Nick Khami: And we were like, okay, what is like, are we using all the features available? And we weren't. That was cool.
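To make the recommendation pattern described here concrete, a small sketch with Qdrant's Python client is shown below. It assumes a client and server recent enough to expose recommendation strategies, and the collection name and point IDs are invented for the example:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Classic behaviour: the positive examples are averaged into a single query vector.
avg_based = client.recommend(
    collection_name="products",
    positive=[101, 102],  # items the user engaged with
    limit=10,
    strategy=models.RecommendStrategy.AVERAGE_VECTOR,
)

# The "more evolved" pattern: each candidate is scored against its best positive
# and best negative example, so candidates that resemble the negatives more than
# the positives are pushed out of the results instead of merely shifting an
# averaged query vector.
score_based = client.recommend(
    collection_name="products",
    positive=[101, 102],
    negative=[205],  # an item the user explicitly disliked
    limit=10,
    strategy=models.RecommendStrategy.BEST_SCORE,
)
```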
Demetrios: The fact that you understand that now and you were able to communicate it back to me almost like better than I communicate it to people is really cool. And it shows that you've been in the weeds on it and you have seen a strong use case for it, because sometimes it's like, okay, this is out there. It needs to be communicated in the best use case so that people can understand it. And it seems like with that e commerce use case, it really stuck. Nick Khami: This one was actually for a company that does search over american legislation called funny enough, we want more e commerce customers for retrieve. Most of our customers right now are like SaaS applications. This particular customer, I don't think they'd mind me shouting them out. It's called Bill Track 50. If you guys want to like search over us legislation, try them out. They're very, very good. And yeah, they were the team that really used it. But yeah, it's another cool thing, I think, about infrastructure like Qdrant in general, and it's so, so powerful that like a lot of times it can be worth like getting an implementation partner. Nick Khami: Like, even if you're gonna, if you're gonna use Qdrant, like, the team at Qdrant is very helpful and you should consider reaching out to them because they can probably help anyone who's going to build search recommendations to figure out what is offered and what can help on a high level, not so much a GitHub issue code level, but at a high level. Thinking about your use case. Again, search is such a heuristic problem and so human in a way that it's always worth talking through your solution with people it that are very familiar with search recommendations in general. Demetrios: Yeah. And they know the best features and the best tool to use that is going to get you that outcome you're looking for. So. All right, Nick, last question for you. It is about Trieve. I have my theory on why you call it that. Is it retrieve? You just took off the Re-? Nick Khami: Yes. Drop the read. It's cleaner. That's like the Facebook quote, but for Trieve. Demetrios: I was thinking when I first read it, I was like, it must be some french word I'm not privy to. And so it's cool because it's french. You just got to put like an accent over one of these e's or both of them, and then it's even cooler. It's like luxury brand to the max. So I appreciate you coming on here. I appreciate you walking us through this and talking about it, man. This was awesome. Nick Khami: Yeah, thanks for having me on. I appreciate it. Demetrios: All right. For anybody else that is out there and wants to come on the vector space talks, come join us. You know where to find us. As always, later.
qdrant-landing/content/blog/case-study-bloop.md
---
draft: false
title: Powering Bloop semantic code search
slug: case-study-bloop
short_description: Bloop is a fast code-search engine that combines semantic search, regex search and precise code navigation
description: Bloop is a fast code-search engine that combines semantic search, regex search and precise code navigation
preview_image: /case-studies/bloop/social_preview.png
date: 2023-02-28T09:48:00.000Z
author: Qdrant Team
featured: false
aliases:
  - /case-studies/bloop/
---

Founded in early 2021, [bloop](https://bloop.ai/) was one of the first companies to tackle semantic search for codebases. A fast, reliable Vector Search Database is a core component of a semantic search engine, and bloop surveyed the field of available solutions and even considered building their own. They found Qdrant to be the top contender and now use it in production.

This document is intended as a guide for people who want to introduce semantic search to a novel field and find out if Qdrant is a good solution for their use case.

## About bloop

![](/case-studies/bloop/screenshot.png)

[bloop](https://bloop.ai/) is a fast code-search engine that combines semantic search, regex search and precise code navigation into a single lightweight desktop application that can be run locally. It helps developers understand and navigate large codebases, enabling them to discover internal libraries, reuse code and avoid dependency bloat. bloop's chat interface explains complex concepts in simple language so that engineers can spend less time crawling through code to understand what it does, and more time shipping features and fixing bugs.

![](/case-studies/bloop/bloop-logo.png)

bloop's mission is to make software engineers autonomous and semantic code search is the cornerstone of that vision. The project is maintained by a group of Rust and Typescript engineers and ML researchers. It leverages many prominent nascent technologies, such as [Tauri](http://tauri.app), [tantivy](https://docs.rs/tantivy), [Qdrant](https://github.com/qdrant/qdrant) and [Anthropic](https://www.anthropic.com/).

## About Qdrant

![](/case-studies/bloop/qdrant-logo.png)

Qdrant is an open-source Vector Search Database written in Rust. It deploys as an API service providing search for the nearest high-dimensional vectors. With Qdrant, embeddings or neural network encoders can be turned into full-fledged applications for matching, searching, recommending, and many more solutions to make the most of unstructured data. It is easy to use, deploy and scale, while being blazingly fast and accurate at the same time.

Qdrant was founded in 2021 in Berlin by Andre Zayarni and Andrey Vasnetsov with the mission to power the next generation of AI applications with advanced, high-performance vector similarity search technology. Their flagship product is the vector search database, which is available as [open source](https://github.com/qdrant/qdrant) or as a [managed cloud solution](https://cloud.qdrant.io/).

## The Problem

Firstly, what is semantic search? It's finding relevant information by comparing meaning, rather than simply measuring the textual overlap between queries and documents. We compare meaning by comparing *embeddings* - these are vector representations of text that are generated by a neural network. Each document's embedding denotes a position in a *latent* space, so to search you embed the query and find its nearest document vectors in that space.

![](/case-studies/bloop/vector-space.png)

Why is semantic search so useful for code?
As engineers, we often don’t know - or forget - the precise terms needed to find what we’re looking for. Semantic search enables us to find things without knowing the exact terminology. For example, if an engineer wanted to understand “*What library is used for payment processing?*” a semantic code search engine would be able to retrieve results containing “*Stripe*” or “*PayPal*”. A traditional lexical search engine would not.

One peculiarity of this problem is that the **usefulness of the solution increases with the size of the code base** – if you only have one code file, you’ll be able to search it quickly, but you’ll easily get lost in thousands, let alone millions of lines of code. Once a codebase reaches a certain size, it is no longer possible for a single engineer to have read every single line, and so navigating large codebases becomes extremely cumbersome.

In software engineering, we’re always dealing with complexity. Programming languages, frameworks and tools have been developed that allow us to modularize, abstract and compile code into libraries for reuse. Yet we still hit limits: abstractions are still leaky, and while there have been great advances in reducing incidental complexity, there is still plenty of intrinsic complexity[^1] in the problems we solve, and with software eating the world, the growth of complexity to tackle has outrun our ability to contain it. Semantic code search helps us navigate these inevitably complex systems.

But semantic search shouldn’t come at the cost of speed. Search should still feel instantaneous, even when searching a codebase as large as Rust (which has over 2.8 million lines of code!). Qdrant gives bloop excellent semantic search performance whilst using a reasonable amount of resources, so they can handle concurrent search requests.

## The Upshot

[bloop](https://bloop.ai/) are really happy with how Qdrant has slotted into their semantic code search engine: it’s performant and reliable, even for large codebases. And it’s written in Rust(!) with an easy-to-integrate qdrant-client crate. In short, Qdrant has helped keep bloop’s code search fast, accurate and reliable.

#### Footnotes:

[^1]: Incidental complexity is the sort of complexity arising from weaknesses in our processes and tools, whereas intrinsic complexity is the sort that we face when trying to describe, let alone solve, the problem.
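To make the flow sketched in The Problem section concrete, here is a minimal example of embedding a handful of code snippets and searching them with Qdrant. The encoder, collection name, and snippets are illustrative placeholders, not bloop's actual pipeline (bloop's application is built in Rust; the Python client is used here only for brevity).

```python
# Minimal sketch: semantic search over code snippets with Qdrant.
# The embedding model, collection name, and snippets are illustrative only.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any text encoder works for the example
client = QdrantClient(":memory:")                # local in-memory instance, for demonstration

snippets = [
    "fn charge_card(amount: u64) { stripe::Charge::create(amount); }",
    "fn parse_config(path: &Path) -> Config { /* ... */ }",
]

client.create_collection(
    collection_name="code_snippets",
    vectors_config=VectorParams(
        size=model.get_sentence_embedding_dimension(), distance=Distance.COSINE
    ),
)
client.upsert(
    collection_name="code_snippets",
    points=[
        PointStruct(id=i, vector=model.encode(s).tolist(), payload={"snippet": s})
        for i, s in enumerate(snippets)
    ],
)

# Embed the query and find its nearest document vectors in the same latent space.
hits = client.search(
    collection_name="code_snippets",
    query_vector=model.encode("What library is used for payment processing?").tolist(),
    limit=1,
)
print(hits[0].payload["snippet"])  # expected to surface the Stripe-related snippet
```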
qdrant-landing/content/blog/case-study-dailymotion.md
--- title: "Dailymotion's Journey to Crafting the Ultimate Content-Driven Video Recommendation Engine with Qdrant Vector Database" draft: false slug: case-study-dailymotion # Change this slug to your page slug if needed short_description: Dailymotion's Journey to Crafting the Ultimate Content-Driven Video Recommendation Engine with Qdrant Vector Database description: Dailymotion's Journey to Crafting the Ultimate Content-Driven Video Recommendation Engine with Qdrant Vector Database preview_image: /case-studies/dailymotion/preview-dailymotion.png # Change this # social_preview_image: /blog/Article-Image.png # Optional image used for link previews # title_preview_image: /blog/Article-Image.png # Optional image used for blog post title # small_preview_image: /blog/Article-Image.png # Optional image used for small preview in the list of blog posts date: 2024-02-27T13:22:31+01:00 author: Atita Arora featured: false # if true, this post will be featured on the blog page tags: # Change this, related by tags posts will be shown on the blog page - dailymotion - case study - recommender system weight: 0 # Change this weight to change order of posts # For more guidance, see https://github.com/qdrant/landing_page?tab=readme-ov-file#blog --- ## Dailymotion's Journey to Crafting the Ultimate Content-Driven Video Recommendation Engine with Qdrant Vector Database In today's digital age, the consumption of video content has become ubiquitous, with an overwhelming abundance of options available at our fingertips. However, amidst this vast sea of videos, the challenge lies not in finding content, but in discovering the content that truly resonates with individual preferences and interests and yet is diverse enough to not throw users into their own filter bubble. As viewers, we seek meaningful and relevant videos that enrich our experiences, provoke thought, and spark inspiration. Dailymotion is not just another video application; it's a beacon of curated content in an ocean of options. With a steadfast commitment to providing users with meaningful and ethical viewing experiences, Dailymotion stands as the bastion of videos that truly matter. They aim to boost a dynamic visual dialogue, breaking echo chambers and fostering discovery. ### Scale - **420 million+ videos** - **2k+ new videos / hour** - **13 million+ recommendations / day** - **300+ languages in videos** - **Required response time < 100 ms** ### Challenge - **Improve video recommendations** across all 3 applications of Dailymotion (mobile app, website and embedded video player on all major French and International sites) as it is the main driver of audience engagement and revenue stream of the platform. 
- Traditional [collaborative recommendation model](https://en.wikipedia.org/wiki/Collaborative_filtering) tends to recommend only popular videos, fresh and niche videos suffer due to zero or minimal interaction - Video content based recommendation system required processing all the video embedding at scale and in real time, as soon as they are added to the platform - Exact neighbor search at the scale and keeping them up to date with new video updates in real time at Dailymotion was unreasonable and unrealistic - Precomputed [KNN](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) would be expensive and may not work due to video updates every hour - Platform needs fast recommendations ~ &lt; 100ms - Needed fast ANN search on a vector search engine which could support the scale and performance requirements of the platform ### Background / Journey The quest of Dailymotion to deliver an intelligent video recommendation engine providing a curated selection of videos to its users started with a need to present more relevant videos to the first-time users of the platform (cold start problem) and implement an ideal home feed experience to allow users to watch videos that are expected to be relevant, diverse, explainable, and easily tunable. \ This goal accounted for their efforts focused on[ Optimizing Video Recommender for Dailymotion's Home Feed ](https://medium.com/dailymotion/optimizing-video-feed-recommendations-with-diversity-machine-learning-first-steps-4cf9abdbbffd)back in the time. They continued their work in [Optimising the recommender engine with vector databases and opinion mining](https://medium.com/dailymotion/reinvent-your-recommender-system-using-vector-database-and-opinion-mining-a4fadf97d020) later with emphasis on ranking videos based on features like freshness, real views ratio, watch ratio, and aspect ratio to enhance user engagement and optimise watch time per user on the home feed. Furthermore, the team continued to focus on diversifying user interests by grouping videos based on interest and using stratified sampling to ensure a balanced experience for users. By now it was clear to the Dailymotion team that the future initiatives will involve overcoming obstacles related to data processing, sentiment analysis, and user experience to provide meaningful and diverse recommendations. The main challenge stayed at the candidate generation process, textual embeddings, opinion mining, along with optimising the efficiency and accuracy of these processes and tackling the complexities of large-scale content curation. ### Solution at glance ![solution-at-glance](/case-studies/dailymotion/solution-at-glance.png) The solution involved implementing a content based Recommendation System leveraging Qdrant to power the similar videos, with the following characteristics. **Fields used to represent each video** - Title , Tags , Description , Transcript (generated by [OpenAI whisper](https://openai.com/research/whisper)) **Encoding Model used** - [MUSE - Multilingual Universal Sentence Encoder](https://www.tensorflow.org/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa) * Supports - 16 languages ### Why Qdrant? ![quote-from-Samuel](/case-studies/dailymotion/Dailymotion-Quote.jpg) Looking at the complexity, scale and adaptability of the desired solution, the team decided to leverage Qdrant’s vector database to implement a content-based video recommendation that undoubtedly offered several advantages over other methods: **1. 
Efficiency in High-Dimensional Data Handling:** Video content is inherently high-dimensional, comprising various features such as audio, visual, textual, and contextual elements. Qdrant excels at handling high-dimensional data efficiently, with out-of-the-box support for models with up to 65,536 dimensions, making it well-suited for representing and processing complex video features with any embedding model of choice.

**2. Scalability:** As the volume of video content and user interactions grows, scalability becomes paramount. Qdrant is designed to scale both vertically and horizontally, allowing for seamless expansion to accommodate large volumes of data and user interactions without compromising performance.

**3. Fast and Accurate Similarity Search:** Efficient video recommendation systems rely on identifying similarities between videos to make relevant recommendations. Qdrant leverages advanced HNSW indexing and similarity search algorithms to retrieve similar videos based on their feature representations nearly instantly (20 ms for this use case).

**4. Flexibility in vector representation with metadata through payloads:** Qdrant stores vectors together with metadata in the form of payloads and supports advanced metadata filtering during similarity search to incorporate custom logic.

**5. Reduced Dimensionality and Storage Requirements:** Qdrant offers various quantization and memory-mapping techniques to store and retrieve vectors efficiently, leading to reduced storage requirements and computational overhead compared to alternative methods such as content-based filtering or collaborative filtering.

**6. Impressive Benchmarks:** [Qdrant’s benchmarks](/benchmarks/) were one of the key motivations for the Dailymotion team to try the solution, and the team reports that real-world performance has been even better than the benchmarks.

**7. Ease of use:** Qdrant’s APIs were far easier to get started with than Google Vertex Matching Engine (Dailymotion’s initial choice), and the support from the Qdrant team has been hugely valuable.

**8. Fetching data by ID:** Qdrant allows retrieving vector points / videos by their IDs, while Vertex Matching Engine requires a vector input to search for other vectors - another important feature for Dailymotion.

### Data Processing pipeline

![data-processing](/case-studies/dailymotion/data-processing-pipeline.png)

The figure shows the streaming architecture of the data processing pipeline: every time a new video is uploaded or updated (title, description, tags, transcript), an updated embedding is computed and fed directly into Qdrant.

### Results

![before-qdrant-results](/case-studies/dailymotion/before-qdrant.png)

There has been a big improvement in processing time and quality of the recommended content, as the existing system had issues like:

1. Subpar video recommendations due to long processing time (~5 hours)
2. The collaborative recommender tended to focus on high-signal / popular videos
3. The metadata-based recommender focused only on a very small scope of trusted video sources
4. The recommendations did not take the contents of the video into consideration

![after-qdrant-results](/case-studies/dailymotion/after-qdrant.png)

The new recommender system implementation leveraging Qdrant along with the collaborative recommender offered various advantages:
1. The processing time for new video content dropped significantly to a few minutes, which enabled fresh videos to become part of the recommendations.
2. The performant and scalable video recommendation scope currently processes 22 million videos and can provide recommendations for videos with few interactions too.
3. The huge overall performance gain on low-signal videos has contributed to a more than 3x increase in interactions and CTR (number of clicks) on the recommended videos.
4. It seamlessly solved the initial cold-start and low-performance problems with fresh content.

### Outlook / Future plans

The team is very excited about the results they achieved with their recommender system and wishes to continue building on it. They aim to work on the Perspective feed next:

> ”We've recently integrated this new recommendation system into our mobile app through a feature called Perspective. The aim of this feature is to disrupt the vertical feed algorithm, allowing users to discover new videos. When browsing their feed, users may encounter a video discussing a particular movie. With Perspective, they have the option to explore different viewpoints on the same topic. Qdrant plays a crucial role in this feature by generating candidate videos related to the subject, ensuring users are exposed to diverse perspectives and preventing them from being confined to an echo chamber where they only encounter similar viewpoints.”
>
> Gladys Roch - Machine Learning Engineer

![perspective-feed-with-qdrant](/case-studies/dailymotion/perspective-feed-qdrant.jpg)

The team is also interested in leveraging advanced features like [Qdrant’s Discovery API](/documentation/concepts/explore/#recommendation-api) to promote exploration of content, finding not only similar but also dissimilar content by using positive and negative vectors in the queries, and making it work with the existing collaborative recommendation model.

### References

**2024 -** [https://www.youtube.com/watch?v=1ULpLpWD0Aw](https://www.youtube.com/watch?v=1ULpLpWD0Aw)

**2023 -** [https://medium.com/dailymotion/reinvent-your-recommender-system-using-vector-database-and-opinion-mining-a4fadf97d020](https://medium.com/dailymotion/reinvent-your-recommender-system-using-vector-database-and-opinion-mining-a4fadf97d020)

**2022 -** [https://medium.com/dailymotion/optimizing-video-feed-recommendations-with-diversity-machine-learning-first-steps-4cf9abdbbffd](https://medium.com/dailymotion/optimizing-video-feed-recommendations-with-diversity-machine-learning-first-steps-4cf9abdbbffd)
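For readers who want a feel for how such a pipeline fits together, below is a minimal sketch of the ingestion and similar-videos steps described above. The `embed_video` function stands in for the MUSE encoder, and the collection name, vector size, and payload fields are illustrative assumptions rather than Dailymotion's actual implementation.

```python
# Minimal sketch of the ingestion / recommendation loop described above.
# `embed_video` stands in for the MUSE encoder; names and sizes are illustrative.
import uuid
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="videos",
    vectors_config=VectorParams(size=512, distance=Distance.COSINE),  # MUSE-style encoders output 512-d vectors
)

def embed_video(title: str, tags: str, description: str, transcript: str) -> list[float]:
    """Placeholder for the multilingual sentence encoder applied to the video's text fields."""
    raise NotImplementedError

def point_id(video_id: str) -> str:
    # Qdrant point IDs must be unsigned integers or UUIDs, so derive a stable UUID.
    return str(uuid.uuid5(uuid.NAMESPACE_URL, video_id))

# Streaming side: whenever a video is uploaded or updated, recompute and upsert its embedding.
def on_video_event(video: dict) -> None:
    vector = embed_video(video["title"], video["tags"], video["description"], video["transcript"])
    client.upsert(
        collection_name="videos",
        points=[PointStruct(id=point_id(video["id"]), vector=vector, payload={"title": video["title"]})],
    )

# Serving side: fetch the stored vector by ID (feature 8 above) and return its nearest neighbours.
def similar_videos(video_id: str, limit: int = 10):
    stored = client.retrieve(collection_name="videos", ids=[point_id(video_id)], with_vectors=True)
    return client.search(collection_name="videos", query_vector=stored[0].vector, limit=limit)
```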
qdrant-landing/content/blog/case-study-dust.md
--- title: "Dust and Qdrant: Using AI to Unlock Company Knowledge and Drive Employee Productivity" draft: false slug: dust-and-qdrant #short_description: description: Using AI to Unlock Company Knowledge and Drive Employee Productivity preview_image: /case-studies/dust/preview.png date: 2024-02-06T07:03:26-08:00 author: Manuel Meyer featured: false tags: - Dust - case_study weight: 0 --- One of the major promises of artificial intelligence is its potential to accelerate efficiency and productivity within businesses, empowering employees and teams in their daily tasks. The French company [Dust](https://dust.tt/), co-founded by former Open AI Research Engineer [Stanislas Polu](https://www.linkedin.com/in/spolu/), set out to deliver on this promise by providing businesses and teams with an expansive platform for building customizable and secure AI assistants. ## Challenge "The past year has shown that large language models (LLMs) are very useful but complicated to deploy," Polu says, especially in the context of their application across business functions. This is why he believes that the goal of augmenting human productivity at scale is especially a product unlock and not only a research unlock, with the goal to identify the best way for companies to leverage these models. Therefore, Dust is creating a product that sits between humans and the large language models, with the focus on supporting the work of a team within the company to ultimately enhance employee productivity. A major challenge in leveraging leading LLMs like OpenAI, Anthropic, or Mistral to their fullest for employees and teams lies in effectively addressing a company's wide range of internal use cases. These use cases are typically very general and fluid in nature, requiring the use of very large language models. Due to the general nature of these use cases, it is very difficult to finetune the models - even if financial resources and access to the model weights are available. The main reason is that “the data that’s available in a company is a drop in the bucket compared to the data that is needed to finetune such big models accordingly,” Polu says, “which is why we believe that retrieval augmented generation is the way to go until we get much better at fine tuning”. For successful retrieval augmented generation (RAG) in the context of employee productivity, it is important to get access to the company data and to be able to ingest the data that is considered ‘shared knowledge’ of the company. This data usually sits in various SaaS applications across the organization. ## Solution Dust provides companies with the core platform to execute on their GenAI bet for their teams by deploying LLMs across the organization and providing context aware AI assistants through RAG. Users can manage so-called data sources within Dust and upload files or directly connect to it via APIs to ingest data from tools like Notion, Google Drive, or Slack. Dust then handles the chunking strategy with the embeddings models and performs retrieval augmented generation. ![solution-laptop-screen](/case-studies/dust/laptop-solutions.jpg) For this, Dust required a vector database and evaluated different options including Pinecone and Weaviate, but ultimately decided on Qdrant as the solution of choice. “We particularly liked Qdrant because it is open-source, written in Rust, and it has a well-designed API,” Polu says. 
For example, Dust was looking for high control and visibility in the context of their rapidly scaling demand, which made the fact that Qdrant is open-source a key driver for selecting Qdrant. Also, Dust's existing system which is interfacing with Qdrant, is written in Rust, which allowed Dust to create synergies with regards to library support. When building their solution with Qdrant, Dust took a two step approach: 1. **Get started quickly:** Initially, Dust wanted to get started quickly and opted for [Qdrant Cloud](https://qdrant.to/cloud), Qdrant’s managed solution, to reduce the administrative load on Dust’s end. In addition, they created clusters and deployed them on Google Cloud since Dust wanted to have those run directly in their existing Google Cloud environment. This added a lot of value as it allowed Dust to centralize billing and increase security by having the instance live within the same VPC. “The early setup worked out of the box nicely,” Polu says. 2. **Scale and optimize:** As the load grew, Dust started to take advantage of Qdrant’s features to tune the setup for optimization and scale. They started to look into how they map and cache data, as well as applying some of Qdrant’s [built-in compression features](/documentation/guides/quantization/). In particular, Dust leveraged the control of the [MMAP payload threshold](/documentation/concepts/storage/#configuring-memmap-storage) as well as [Scalar Quantization](/articles/scalar-quantization/), which enabled Dust to manage the balance between storing vectors on disk and keeping quantized vectors in RAM, more effectively. “This allowed us to scale smoothly from there,” Polu says. ## Results Dust has seen success in using Qdrant as their vector database of choice, as Polu acknowledges: “Qdrant’s ability to handle large-scale models and the flexibility it offers in terms of data management has been crucial for us. The observability features, such as historical graphs of RAM, Disk, and CPU, provided by Qdrant are also particularly useful, allowing us to plan our scaling strategy effectively.” ![“We were able to reduce the footprint of vectors in memory, which led to a significant cost reduction as we don’t have to run lots of nodes in parallel. While being memory-bound, we were able to push the same instances further with the help of quantization. While you get pressure on MMAP in this case you maintain very good performance even if the RAM is fully used. With this we were able to reduce our cost by 2x.” - Stanislas Polu, Co-Founder of Dust](/case-studies/dust/Dust-Quote.jpg) Dust was able to scale its application with Qdrant while maintaining low latency across hundreds of thousands of collections with retrieval only taking milliseconds, as well as maintaining high accuracy. Additionally, Polu highlights the efficiency gains Dust was able to unlock with Qdrant: "We were able to reduce the footprint of vectors in memory, which led to a significant cost reduction as we don’t have to run lots of nodes in parallel. While being memory-bound, we were able to push the same instances further with the help of quantization. While you get pressure on MMAP in this case you maintain very good performance even if the RAM is fully used. With this we were able to reduce our cost by 2x." ## Outlook Dust will continue to build out their platform, aiming to be the platform of choice for companies to execute on their internal GenAI strategy, unlocking company knowledge and driving team productivity. 
Over the coming months, Dust will add more connections, such as Intercom, Jira, or Salesforce. Additionally, Dust will expand on its structured data capabilities. To learn more about how Dust uses Qdrant to help employees in their day to day tasks, check out our [Vector Space Talk](https://www.youtube.com/watch?v=toIgkJuysQ4) featuring Stanislas Polu, Co-Founder of Dust.
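As a rough illustration of the tuning described in the "Scale and optimize" step above, the snippet below creates a collection with on-disk vectors, a memmap threshold, and int8 scalar quantization kept in RAM. All names and numbers are illustrative placeholders, not Dust's production configuration.

```python
# Minimal sketch of a collection tuned along the lines described above: original vectors
# on disk, segments memory-mapped past a threshold, and quantized copies held in RAM.
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="documents",                    # illustrative collection name
    vectors_config=models.VectorParams(
        size=1536,                                  # illustrative embedding size
        distance=models.Distance.COSINE,
        on_disk=True,                               # keep original vectors on disk
    ),
    optimizers_config=models.OptimizersConfigDiff(
        memmap_threshold=20000,                     # segments above this size (KB) are memory-mapped
    ),
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(
            type=models.ScalarType.INT8,
            always_ram=True,                        # small quantized vectors stay in RAM for speed
        ),
    ),
)
```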
qdrant-landing/content/blog/case-study-pienso.md
--- draft: false title: "Pienso & Qdrant: Future Proofing Generative AI for Enterprise-Level Customers" slug: case-study-pienso short_description: Why Pienso chose Qdrant as a cornerstone for building domain-specific foundation models. description: Why Pienso chose Qdrant as a cornerstone for building domain-specific foundation models. preview_image: /case-studies/pienso/social_preview.png date: 2023-02-28T09:48:00.000Z author: Qdrant Team featured: false aliases: - /case-studies/pienso/ --- The partnership between Pienso and Qdrant is set to revolutionize interactive deep learning, making it practical, efficient, and scalable for global customers. Pienso's low-code platform provides a streamlined and user-friendly process for deep learning tasks. This exceptional level of convenience is augmented by Qdrant’s scalable and cost-efficient high vector computation capabilities, which enable reliable retrieval of similar vectors from high-dimensional spaces. Together, Pienso and Qdrant will empower enterprises to harness the full potential of generative AI on a large scale. By combining the technologies of both companies, organizations will be able to train their own large language models and leverage them for downstream tasks that demand data sovereignty and model autonomy. This collaboration will help customers unlock new possibilities and achieve advanced AI-driven solutions. Strengthening LLM Performance Qdrant enhances the accuracy of large language models (LLMs) by offering an alternative to relying solely on patterns identified during the training phase. By integrating with Qdrant, Pienso will empower customer LLMs with dynamic long-term storage, which will ultimately enable them to generate concrete and factual responses. Qdrant effectively preserves the extensive context windows managed by advanced LLMs, allowing for a broader analysis of the conversation or document at hand. By leveraging this extended context, LLMs can achieve a more comprehensive understanding and produce contextually relevant outputs. ## Joint Dedication to Scalability, Efficiency and Reliability > “Every commercial generative AI use case we encounter benefits from faster training and inference, whether mining customer interactions for next best actions or sifting clinical data to speed a therapeutic through trial and patent processes.” - Birago Jones, CEO, Pienso Pienso chose Qdrant for its exceptional LLM interoperability, recognizing the potential it offers in maximizing the power of large language models and interactive deep learning for large enterprises. Qdrant excels in efficient nearest neighbor search, which is an expensive and computationally demanding task. Our ability to store and search high-dimensional vectors with remarkable performance and precision will offer a significant peace of mind to Pienso’s customers. Through intelligent indexing and partitioning techniques, Qdrant will significantly boost the speed of these searches, accelerating both training and inference processes for users. ### Scalability: Preparing for Sustained Growth in Data Volumes Qdrant's distributed deployment mode plays a vital role in empowering large enterprises dealing with massive data volumes. It ensures that increasing data volumes do not hinder performance but rather enrich the model's capabilities, making scalability a seamless process. 
Moreover, Qdrant is well-suited for Pienso’s enterprise customers as it operates best on bare metal infrastructure, enabling them to maintain complete control over their data sovereignty and autonomous LLM regimes. This ensures that enterprises can maintain their full span of control while leveraging the scalability and performance benefits of Qdrant's solution. ### Efficiency: Maximizing the Customer Value Proposition Qdrant's storage efficiency delivers cost savings on hardware while ensuring a responsive system even with extensive data sets. In an independent benchmark stress test, Pienso discovered that Qdrant could efficiently store 128 million documents, consuming a mere 20.4GB of storage and only 1.25GB of memory. This storage efficiency not only minimizes hardware expenses for Pienso’s customers, but also ensures optimal performance, making Qdrant an ideal solution for managing large-scale data with ease and efficiency. ### Reliability: Fast Performance in a Secure Environment Qdrant's utilization of Rust, coupled with its memmap storage and write-ahead logging, offers users a powerful combination of high-performance operations, robust data protection, and enhanced data safety measures. Our memmap storage feature offers Pienso fast performance comparable to in-memory storage. In the context of machine learning, where rapid data access and retrieval are crucial for training and inference tasks, this capability proves invaluable. Furthermore, our write-ahead logging (WAL), is critical to ensuring changes are logged before being applied to the database. This approach adds additional layers of data safety, further safeguarding the integrity of the stored information. > “We chose Qdrant because it's fast to query, has a small memory footprint and allows for instantaneous setup of a new vector collection that is going to be queried. Other solutions we evaluated had long bootstrap times and also long collection initialization times {..} This partnership comes at a great time, because it allows Pienso to use Qdrant to its maximum potential, giving our customers a seamless experience while they explore and get meaningful insights about their data.” - Felipe Balduino Cassar, Senior Software Engineer, Pienso ## What's Next? Pienso and Qdrant are dedicated to jointly develop the most reliable customer offering for the long term. Our partnership will deliver a combination of no-code/low-code interactive deep learning with efficient vector computation engineered for open source models and libraries. **To learn more about how we plan on achieving this, join the founders for a [technical fireside chat at 09:30 PST Thursday, 20th July on Discord](https://discord.gg/Vnvg3fHE?event=1128331722270969909).** ![founders chat](/case-studies/pienso/founderschat.png)
qdrant-landing/content/blog/case-study-visua.md
--- draft: false title: "Visua and Qdrant: Vector Search in Computer Vision" slug: short_description: "Using vector search for quality control and anomaly detection in computer vision." description: "How Visua uses Qdrant as a vector search engine for quality control and anomaly detection in their computer vision platform." preview_image: /blog/case-study-visua/image4.png social_preview_image: /blog/case-study-visua/image4.png date: 2024-05-01T00:02:00Z author: Manuel Meyer featured: false tags: - visua - qdrant - computer vision - quality control - anomaly detection --- ![visua/image1.png](/blog/case-study-visua/image1.png) For over a decade, [VISUA](https://visua.com/) has been a leader in precise, high-volume computer vision data analysis, developing a robust platform that caters to a wide range of use cases, from startups to large enterprises. Starting with social media monitoring, where it excels in analyzing vast data volumes to detect company logos, VISUA has built a diverse ecosystem of customers, including names in social media monitoring, like **Brandwatch**, cybersecurity like **Mimecast**, trademark protection like **Ebay** and several sports agencies like **Vision Insights** for sponsorship evaluation. ![visua/image3.png](/blog/case-study-visua/image3.png) ## The Challenge **Quality Control at Scale** The accuracy of object detection within images is critical for VISUA ensuring that their algorithms are detecting objects in images correctly. With growing volumes of data processed for clients, the company was looking for a way to enhance its quality control and anomaly detection mechanisms to be more scalable and auditable. The challenge was twofold. First, VISUA needed a method to rapidly and accurately identify images and the objects within them that were similar, to identify false negatives, or unclear outcomes and use them as inputs for reinforcement learning. Second, the rapid growth in data volume challenged their previous quality control processes, which relied on a sampling method based on meta-information (like analyzing lower-confidence, smaller, or blurry images), which involved more manual reviews and was not as scalable as needed. In response, the team at VISUA explored vector databases as a solution. ## The Solution **Accelerating Anomaly Detection and Elevating Quality Control with Vector Search** In addressing the challenge of scaling and enhancing its quality control processes, VISUA turned to vector databases, with Qdrant emerging as the solution of choice. This technological shift allowed VISUA to leverage vector databases for identifying similarities and deduplicating vast volumes of images, videos, and frames. By doing so, VISUA was able to automatically classify objects with a level of precision that was previously unattainable. The introduction of vectors allowed VISUA to represent data uniquely and mark frames for closer examination by prioritizing the review of anomalies and data points with the highest variance. Consequently, this technology empowered Visia to scale its quality assurance and reinforcement learning processes tenfold. > *“Using Qdrant as a vector database for our quality control allowed us to review 10x more data by exploiting repetitions and deduplicating samples and doing that at scale with having a query engine.”* Alessandro Prest, Co-Founder at VISUA. 
![visua/image2.jpg](/blog/case-study-visua/image2.jpg) ## The Selection Process **Finding the Right Vector Database For Quality Analysis and Anomaly Detection** Choosing the right vector database was a pivotal decision for VISUA, and the team conducted extensive benchmarks. They tested various solutions, including Weaviate, Pinecone, and Qdrant, focusing on the efficient handling of both vector and payload indexes. The objective was to identify a system that excels in managing hybrid queries that blend vector similarities with record attributes, crucial for enhancing their quality control and anomaly detection capabilities. Qdrant distinguished itself through its: - **Hybrid Query Capability:** Qdrant enables the execution of hybrid queries that combine payload fields and vector data, allowing for comprehensive and nuanced searches. This functionality leverages the strengths of both payload attributes and vector similarities for detailed data analysis. Prest noted the importance of Qdrant's hybrid approach, saying, “When talking with the founders of Qdrant, we realized that they put a lot of effort into this hybrid approach, which really resonated with us.” - **Performance Superiority**: Qdrant distinguished itself as the fastest engine for VISUA's specific needs, significantly outpacing alternatives with query speeds up to 40 times faster for certain VISUA use cases. Alessandro Prest highlighted, "Qdrant was the fastest engine by a large margin for our use case," underscoring its significant efficiency and scalability advantages. - **API Documentation**: The clarity, comprehensiveness, and user-friendliness of Qdrant’s API documentation and reference guides further solidified VISUA’s decision. This strategic selection enabled VISUA to achieve a notable increase in operational efficiency and scalability in its quality control processes. ## Implementing Qdrant Upon selecting Qdrant as their vector database solution, VISUA undertook a methodical approach to integration. The process began in a controlled development environment, allowing VISUA to simulate real-world use cases and ensure that Qdrant met their operational requirements. This careful, phased approach ensured a smooth transition when moving Qdrant into their production environment, hosted on AWS clusters. VISUA is leveraging several specific Qdrant features in their production setup: 1. **Support for Multiple Vectors per Record/Point**: This feature allows for a nuanced and multifaceted analysis of data, enabling VISUA to manage and query complex datasets more effectively. 2. **Quantization**: Quantization optimizes storage and accelerates query processing, improving data handling efficiency and lowering memory use, essential for large-scale operations. ## The Results Integrating Qdrant into VISUA's quality control operations has delivered measurable outcomes when it comes to efficiency and scalability: - **40x Faster Query Processing**: Qdrant has drastically reduced the time needed for complex queries, enhancing workflow efficiency. - **10x Scalability Boost:** The efficiency of Qdrant enables VISUA to handle ten times more data in its quality assurance and learning processes, supporting growth without sacrificing quality. - **Increased Data Review Capacity:** The increased capacity to review the data allowed VISUA to enhance the accuracy of its algorithms through reinforcement learning. 
#### Expanding Qdrant’s Use Beyond Anomaly Detection While the primary application of Qdrant is focused on quality control, VISUA's team is actively exploring additional use cases with Qdrant. VISUA's use of Qdrant has inspired new opportunities, notably in content moderation. "The moment we started to experiment with Qdrant, opened up a lot of ideas within the team for new applications,” said Prest on the potential unlocked by Qdrant. For example, this has led them to actively explore the Qdrant [Discovery API](/documentation/concepts/explore/?q=discovery#discovery-api), with an eye on enhancing content moderation processes. Beyond content moderation, VISUA is set for significant growth by broadening its copyright infringement detection services. As the demand for detecting a wider range of infringements, like unauthorized use of popular characters on merchandise, increases, VISUA plans to expand its technology capabilities. Qdrant will be pivotal in this expansion, enabling VISUA to meet the complex and growing challenges of moderating copyrighted content effectively and ensuring comprehensive protection for brands and creators.
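To give a concrete sense of the two features VISUA relies on in production (multiple named vectors per point and quantization), here is a minimal sketch against the Python client. The collection name, vector names and sizes, and payload fields are illustrative assumptions, not VISUA's actual schema.

```python
# Minimal sketch: several named vectors per point plus scalar quantization,
# and a hybrid query that mixes vector similarity with a payload filter.
# All names, sizes, and values are illustrative only.
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="detections",
    vectors_config={
        "image": models.VectorParams(size=512, distance=models.Distance.COSINE),
        "object_crop": models.VectorParams(size=256, distance=models.Distance.COSINE),
    },
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(type=models.ScalarType.INT8)
    ),
)

client.upsert(
    collection_name="detections",
    points=[
        models.PointStruct(
            id=1,
            vector={"image": [0.1] * 512, "object_crop": [0.2] * 256},
            payload={"confidence": 0.42, "label": "logo"},
        )
    ],
)

# Hybrid query: similarity on one named vector, filtered on a payload attribute,
# e.g. to surface low-confidence detections similar to a known anomaly.
hits = client.search(
    collection_name="detections",
    query_vector=models.NamedVector(name="object_crop", vector=[0.2] * 256),
    query_filter=models.Filter(
        must=[models.FieldCondition(key="confidence", range=models.Range(lt=0.5))]
    ),
    limit=10,
)
```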
qdrant-landing/content/blog/cohere-embedding-v3.md
--- draft: false preview_image: /blog/from_cms/nils-thumbnail.png sitemapExclude: true title: "From Content Quality to Compression: The Evolution of Embedding Models at Cohere with Nils Reimers" slug: cohere-embedding-v3 short_description: Nils Reimers head of machine learning at Cohere shares the details about their latest embedding model. description: Nils Reimers head of machine learning at Cohere comes on the recent vector space talks to share details about their latest embedding V3 model. date: 2023-11-19T12:48:36.622Z author: Demetrios Brinkmann featured: false author_link: https://www.linkedin.com/in/dpbrinkm/ tags: - Vector Space Talk - Cohere - Embedding Model categories: - News - Vector Space Talk --- For the second edition of our Vector Space Talks we were joined by none other than Cohere’s Head of Machine Learning Nils Reimers. ## Key Takeaways Let's dive right into the five key takeaways from Nils' talk: 1. Content Quality Estimation: Nils explained how embeddings have traditionally focused on measuring topic match, but content quality is just as important. He demonstrated how their model can differentiate between informative and non-informative documents. 2. Compression-Aware Training: He shared how they've tackled the challenge of reducing the memory footprint of embeddings, making it more cost-effective to run vector databases on platforms like [Qdrant](https://cloud.qdrant.io/login). 3. Reinforcement Learning from Human Feedback: Nils revealed how they've borrowed a technique from reinforcement learning and applied it to their embedding models. This allows the model to learn preferences based on human feedback, resulting in highly informative responses. 4. Evaluating Embedding Quality: Nils emphasized the importance of evaluating embedding quality in relative terms rather than looking at individual vectors. It's all about understanding the context and how embeddings relate to each other. 5. New Features in the Pipeline: Lastly, Nils gave us a sneak peek at some exciting features they're developing, including input type support for Langchain and improved compression techniques. Now, here's a fun fact from the episode: Did you know that the content quality estimation model *can't* differentiate between true and fake statements? It's a challenging task, and the model relies on the information present in its pretraining data. We loved having Nils as our guest, check out the full talk below. If you or anyone you know would like to come on the Vector Space Talks <iframe width="560" height="315" src="https://www.youtube.com/embed/Abh3YCahyqU?si=OB4FXhTivsLLXzQV" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
qdrant-landing/content/blog/cve-2024-2221-response.md
--- title: "Response to CVE-2024-2221: Arbitrary file upload vulnerability" draft: false slug: cve-2024-2221-response short_description: Qdrant keeps your systems secure description: Upgrade your deployments to at least v1.9.0. Cloud deployments not materially affected. preview_image: /blog/cve-2024-2221/cve-2024-2221-response-social-preview.png # social_preview_image: /blog/Article-Image.png # Optional image used for link previews # title_preview_image: /blog/Article-Image.png # Optional image used for blog post title # small_preview_image: /blog/Article-Image.png # Optional image used for small preview in the list of blog posts date: 2024-04-05T13:00:00-07:00 author: Mike Jang featured: false tags: - cve - security weight: 0 # Change this weight to change order of posts # For more guidance, see https://github.com/qdrant/landing_page?tab=readme-ov-file#blog --- ### Summary A security vulnerability has been discovered in Qdrant affecting all versions prior to v1.9, described in [CVE-2024-2221](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-2221). The vulnerability allows an attacker to upload arbitrary files to the filesystem, which can be used to gain remote code execution. The vulnerability does not materially affect Qdrant cloud deployments, as that filesystem is read-only and authentication is enabled by default. At worst, the vulnerability could be used by an authenticated user to crash a cluster, which is already possible, such as by uploading more vectors than can fit in RAM. Qdrant has addressed the vulnerability in v1.9.0 and above with code that restricts file uploads to a folder dedicated to that purpose. ### Action Check the current version of your Qdrant deployment. Upgrade if your deployment is not at least v1.9.0. To confirm the version of your Qdrant deployment in the cloud or on your local or cloud system, run an API GET call, as described in the [Qdrant Quickstart guide](/documentation/cloud/quickstart-cloud/#step-2-test-cluster-access). If your Qdrant deployment is local, you do not need an API key. Your next step depends on how you installed Qdrant. For details, read the [Qdrant Installation](/documentation/guides/installation/) guide. #### If you use the Qdrant container or binary Upgrade your deployment. Run the commands in the applicable section of the [Qdrant Installation](/documentation/guides/installation/) guide. The default commands automatically pull the latest version of Qdrant. #### If you use the Qdrant helm chart If you’ve set up Qdrant on kubernetes using a helm chart, follow the README in the [qdrant-helm](https://github.com/qdrant/qdrant-helm/tree/main?tab=readme-ov-file#upgrading) repository. Make sure applicable configuration files point to version v1.9.0 or above. #### If you use the Qdrant cloud No action is required. This vulnerability does not materially affect you. However, we suggest that you upgrade your cloud deployment to the latest version. > Note: This article has been updated on 2024-05-10 to encourage users to upgrade to 1.9.0 to ensure protection from both CVE-2024-2221 and CVE-2024-3829.
qdrant-landing/content/blog/cve-2024-3829-response.md
--- title: "Response to CVE-2024-3829: Arbitrary file upload vulnerability" draft: false slug: cve-2024-3829-response short_description: Qdrant keeps your systems secure description: Upgrade your deployments to at least v1.9.0. Cloud deployments not materially affected. preview_image: /blog/cve-2024-3829-response/cve-2024-3829-response-social-preview.png # social_preview_image: /blog/Article-Image.png # Optional image used for link previews # title_preview_image: /blog/Article-Image.png # Optional image used for blog post title # small_preview_image: /blog/Article-Image.png # Optional image used for small preview in the list of blog posts date: 2024-06-10T17:00:00Z author: Mac Chaffee featured: false tags: - cve - security weight: 0 # Change this weight to change order of posts # For more guidance, see https://github.com/qdrant/landing_page?tab=readme-ov-file#blog --- ### Summary A security vulnerability has been discovered in Qdrant affecting all versions prior to v1.9, described in [CVE-2024-3829](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-3829). The vulnerability allows an attacker to upload arbitrary files to the filesystem, which can be used to gain remote code execution. This is a different but similar vulnerability to CVE-2024-2221, announced in April 2024. The vulnerability does not materially affect Qdrant cloud deployments, as that filesystem is read-only and authentication is enabled by default. At worst, the vulnerability could be used by an authenticated user to crash a cluster, which is already possible, such as by uploading more vectors than can fit in RAM. Qdrant has addressed the vulnerability in v1.9.0 and above with code that restricts file uploads to a folder dedicated to that purpose. ### Action Check the current version of your Qdrant deployment. Upgrade if your deployment is not at least v1.9.0. To confirm the version of your Qdrant deployment in the cloud or on your local or cloud system, run an API GET call, as described in the [Qdrant Quickstart guide](https://qdrant.tech/documentation/cloud/quickstart-cloud/#step-2-test-cluster-access). If your Qdrant deployment is local, you do not need an API key. Your next step depends on how you installed Qdrant. For details, read the [Qdrant Installation](https://qdrant.tech/documentation/guides/installation/) guide. #### If you use the Qdrant container or binary Upgrade your deployment. Run the commands in the applicable section of the [Qdrant Installation](https://qdrant.tech/documentation/guides/installation/) guide. The default commands automatically pull the latest version of Qdrant. #### If you use the Qdrant helm chart If you’ve set up Qdrant on kubernetes using a helm chart, follow the README in the [qdrant-helm](https://github.com/qdrant/qdrant-helm/tree/main?tab=readme-ov-file#upgrading) repository. Make sure applicable configuration files point to version v1.9.0 or above. #### If you use the Qdrant cloud No action is required. This vulnerability does not materially affect you. However, we suggest that you upgrade your cloud deployment to the latest version.
qdrant-landing/content/blog/datatalk-club-podcast-plug.md
--- title: "Navigating challenges and innovations in search technologies" draft: false slug: navigating-challenges-innovations short_description: Podcast on search and LLM with Datatalk.club description: Podcast on search and LLM with Datatalk.club preview_image: /blog/navigating-challenges-innovations/preview/preview.png date: 2024-01-12T15:39:53.751Z author: Atita Arora featured: false tags: - podcast - search - blog - retrieval-augmented generation - large language models --- ## Navigating challenges and innovations in search technologies We participated in a [podcast](#podcast-discussion-recap) on search technologies, specifically with retrieval-augmented generation (RAG) in language models. RAG is a cutting-edge approach in natural language processing (NLP). It uses information retrieval and language generation models. We describe how it can enhance what AI can do to understand, retrieve, and generate human-like text. ### More about RAG Think of RAG as a system that finds relevant knowledge from a vast database. It takes your query, finds the best available information, and then provides an answer. RAG is the next step in NLP. It goes beyond the limits of traditional generation models by integrating retrieval mechanisms. With RAG, NLP can access external knowledge sources, databases, and documents. This ensures more accurate, contextually relevant, and informative output. With RAG, we can set up more precise language generation as well as better context understanding. RAG helps us incorporate real-world knowledge into AI-generated text. This can improve overall performance in tasks such as: - Answering questions - Creating summaries - Setting up conversations ### The importance of evaluation for RAG and LLM Evaluation is crucial for any application leveraging LLMs. It promotes confidence in the quality of the application. It also supports implementation of feedback and improvement loops. ### Unique challenges of evaluating RAG and LLM-based applications *Retrieval* is the key to Retrieval Augmented Generation, as it affects quality of the generated response. Potential problems include: - Setting up a defined or expected set of documents, which can be a significant challenge. - Measuring *subjectiveness*, which relates to how well the data fits or applies to a given domain or use case. ### Podcast Discussion Recap In the podcast, we addressed the following: - **Model evaluation(LLM)** - Understanding the model at the domain-level for the given use case, supporting required context length and terminology/concept understanding. - **Ingestion pipeline evaluation** - Evaluating factors related to data ingestion and processing such as chunk strategies, chunk size, chunk overlap, and more. - **Retrieval evaluation** - Understanding factors such as average precision, [Distributed cumulative gain](https://en.wikipedia.org/wiki/Discounted_cumulative_gain) (DCG), as well as normalized DCG. - **Generation evaluation(E2E)** - Establishing guardrails. Evaulating prompts. Evaluating the number of chunks needed to set up the context for generation. ### The recording Thanks to the [DataTalks.Club](https://datatalks.club) for organizing [this podcast](https://www.youtube.com/watch?v=_fbe1QyJ1PY). ### Event Alert If you're interested in a similar discussion, watch for the recording from the [following event](https://www.eventbrite.co.uk/e/the-evolution-of-genai-exploring-practical-applications-tickets-778359172237?aff=oddtdtcreator), organized by [DeepRec.ai](https://deeprec.ai). 
### Further reading

- [Qdrant Blog](/blog/)
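Since the retrieval-evaluation point above leans on DCG and normalized DCG, here is a minimal sketch of how those scores are computed for a single ranked result list, using the standard log2-discount formulation (plain Python, no library assumptions):

```python
# Minimal DCG / NDCG computation for one ranked result list, as discussed under
# "Retrieval evaluation". `relevance` holds graded relevance judgements in the
# order the retriever returned the documents.
import math

def dcg(relevance: list[float]) -> float:
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevance))

def ndcg(relevance: list[float]) -> float:
    ideal = dcg(sorted(relevance, reverse=True))
    return dcg(relevance) / ideal if ideal > 0 else 0.0

# Example: the most relevant document (grade 3) was returned in second place.
print(ndcg([2, 3, 0, 1]))  # ~0.91
```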
qdrant-landing/content/blog/fastembed-fast-lightweight-embedding-generation-nirant-kasliwal-vector-space-talks-004.md
--- draft: false title: "FastEmbed: Fast & Lightweight Embedding Generation - Nirant Kasliwal | Vector Space Talks" slug: fast-embed-models short_description: Nirant Kasliwal, AI Engineer at Qdrant, discusses the power and potential of embedding models. description: Nirant Kasliwal discusses the efficiency and optimization techniques of FastEmbed, a Python library designed for speedy, lightweight embedding generation in machine learning applications. preview_image: /blog/from_cms/nirant-kasliwal-cropped.png date: 2024-01-09T11:38:59.693Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Quantized Emdedding Models - FastEmbed --- > *"When things are actually similar or how we define similarity. They are close to each other and if they are not, they're far from each other. This is what a model or embedding model tries to do.”*\ >-- Nirant Kasliwal Heard about FastEmbed? It's a game-changer. Nirant shares tricks on how to improve your embedding models. You might want to give it a shot! Nirant Kasliwal, the creator and maintainer of FastEmbed, has made notable contributions to the Finetuning Cookbook at OpenAI Cookbook. His contributions extend to the field of Natural Language Processing (NLP), with over 5,000 copies of the NLP book sold. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/4QWCyu28SlURZfS2qCeGKf?si=GDHxoOSQQ_W_UVz4IzzC_A), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/e67jLAx_F2A).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/e67jLAx_F2A?si=533LvUwRKIt_qWWu" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/FastEmbed-Fast--Lightweight-Embedding-Generation---Nirant-Kasliwal--Vector-Space-Talks-004-e2c8s3b/a-aal40k6" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top Takeaways:** Nirant Kasliwal, AI Engineer at Qdrant joins us on Vector Space Talks to dive into FastEmbed, a lightning-quick method for generating embeddings. In this episode, Nirant shares insights, tips, and innovative ways to enhance embedding generation. 5 Keys to Learning from the Episode: 1. Nirant introduces some hacker tricks for improving embedding models - you won't want to miss these! 2. Learn how quantized embedding models can enhance CPU performance. 3. Get an insight into future plans for GPU-friendly quantized models. 4. Understand how to select default models in Qdrant based on MTEB benchmark, and how to calibrate them for domain-specific tasks. 5. Find out how Fast Embed, a Python library created by Nirant, can solve common challenges in embedding creation and enhance the speed and efficiency of your workloads. > Fun Fact: The largest header or adapter used in production is only about 400-500 KBs -- proof that bigger doesn't always mean better! > ## Show Notes: 00:00 Nirant discusses FastEmbed at Vector Space Talks.\ 05:00 Tokens are expensive and slow in open air.\ 08:40 FastEmbed is fast and lightweight.\ 09:49 Supporting multimodal embedding is our plan.\ 15:21 No findings. Enhancing model downloads and performance.\ 16:59 Embed creation on your own compute, not cloud. 
Control and simplicity are prioritized.\ 21:06 Qdrant is fast for embedding similarity search.\ 24:07 Engineer's mindset: make informed guesses, set budgets.\ 26:11 Optimize embeddings with questions and linear layers.\ 29:55 Fast, cheap inference using mixed precision embeddings. ## More Quotes from Nirant: *"There is the academic way of looking at and then there is the engineer way of looking at it, and then there is the hacker way of looking at it. And I will give you all these three answers in that order.”*\ -- Nirant Kasliwal *"The engineer's mindset now tells you that the best way to build something is to make an informed guess about what workload or challenges you're going to foresee. Right. Like a civil engineer builds a bridge around how many cars they expect, they're obviously not going to build a bridge to carry a shipload, for instance, or a plane load, which are very different.”*\ -- Nirant Kasliwal *"I think the more correct way to look at it is that we use the CPU better.”*\ -- Nirant Kasliwal ## Transcript: Demetrios: Welcome back, everyone, to another vector space talks. Today we've got my man Nirant coming to us talking about FastEmbed. For those, if this is your first time at our vector space talks, we like to showcase some of the cool stuff that the community in Qdrant is doing, the Qdrant community is doing. And we also like to show off some of the cool stuff that Qdrant itself is coming out with. And this is one of those times that we are showing off what Qdrant itself came out with with FastEmbed. And we've got my man Nirant around here somewhere. I am going to bring him on stage and I will welcome him by saying Nirant a little bit about his bio, we could say. So, Naran, what's going on, dude? Let me introduce you real fast before we get cracking. Demetrios: And you are a man that wears many hats. You're currently working on the Devrel team at Qdrant, right? I like that shirt that you got there. And you have worked with ML models and embeddings since 2017. That is wild. You are also the creator and maintainer of fast embed. So you're the perfect guy to talk to about this very topic that we are doing today. Now, if anyone has questions, feel free to throw them into the chat and I will ask Nirant as he's going through it. I will also take this moment to encourage anyone who is watching to come and join us in discord, if you are not already there for the Qdrant discord. Demetrios: And secondly, I will encourage you if you have something that you've been doing with Qdrant or in the vector database space, or in the AI application space and you want to show it off, we would love to have you talk at the vector space talks. So without further ado, Nirant, my man, I'm going to kick it over to you and I am going to start it off with what are the challenges with embedding creation today? Nirant Kasliwal: I think embedding creation has it's not a standalone problem, as you might first think like that's a first thought that it's a standalone problem. It's actually two problems. One is a classic compute that how do you take any media? So you can make embeddings from practically any form of media, text, images, video. In theory, you could make it from bunch of things. So I recently saw somebody use soup as a metaphor. So you can make soup from almost anything. So you can make embeddings from almost anything. Now, what do we want to do though? Embedding are ultimately a form of compression. 
Nirant Kasliwal: So now we want to make sure that the compression captures something of interest to us. In this case, we want to make sure that embeddings capture some form of meaning of, let's say, text or images. And when we do that, what does that capture mean? We want that when things are actually similar or whatever is our definition of similarity. They are close to each other and if they are not, they're far from each other. This is what a model or embedding model tries to do basically in this piece. The model itself is quite often trained and built in a way which retains its ability to learn new things. And you can separate similar embeddings faster and all of those. But when we actually use this in production, we don't need all of those capabilities, we don't need the train time capabilities. Nirant Kasliwal: And that means that all the extra compute and features and everything that you have stored for training time are wasted in production. So that's almost like saying that every time I have to speak to you I start over with hello, I'm Nirant and I'm a human being. It's extremely infuriating but we do this all the time with embedding and that is what fast embed primarily tries to fix. We say embeddings from the lens of production and we say that how can we make a Python library which is built for speed, efficiency and accuracy? Those are the core ethos in that sense. And I think people really find this relatable as a problem area. So you can see this on our GitHub issues. For instance, somebody says that oh yeah, we actually does what it says and yes, that's a good thing. So for 8 million tokens we took about 3 hours on a MacBook Pro M one while some other Olama embedding took over two days. Nirant Kasliwal: You can expect what 8 million tokens would cost on open air and how slow it would be given that they frequently rate limit you. So for context, we made a 1 million embedding set which was a little more than it was a lot more than 1 million tokens and that took us several hundred of us. It was not expensive, but it was very slow. So as a batch process, if you want to embed a large data set, it's very slow. I think the more colorful version of this somebody wrote on LinkedIn, Prithvira wrote on LinkedIn that your embeddings will go and I love that idea that we have optimized speed so that it just goes fast. That's the idea. So what do we I mean let's put names to these things, right? So one is we want it to be fast and light. And I'll explain what do we mean by light? We want recall to be fast, right? I mean, that's what we started with that what are embedding we want to be make sure that similar things are similar. Nirant Kasliwal: That's what we call recall. We often confuse this with accuracy but in retrieval sense we'll call it recall. We want to make sure it's still easy to use, right? Like there is no reason for this to get complicated. And we are fast, I mean we are very fast. And part of that is let's say we use BGE small En, the English model only. And let's say this is all in tokens per second and the token is model specific. So for instance, the way BGE would count a token might be different from how OpenAI might count a token because the tokenizers are slightly different and they have been trained on slightly different corporates. So that's the idea. Nirant Kasliwal: I would love you to try this so that I can actually brag about you trying it. Demetrios: What was the fine print on that slide? Benchmarks are my second most liked way to brag. 
What's your first most liked way to brag? Nirant Kasliwal: The best way is that when somebody tells me that they're using it. Demetrios: There we go. So I guess that's an easy way to get people to try and use it. Nirant Kasliwal: Yeah, I would love it if you try it. Tell us how it went for you, where it's working, where it's broken, all of that. I love it if you report issue then say I will even appreciate it if you yell at me because that means you're not ignoring me. Demetrios: That's it. There we go. Bug reports are good to throw off your mojo. Keep it rolling. Nirant Kasliwal: So we said fast and light. So what does light mean? So you will see a lot of these Embedding servers have really large image sizes. When I say image, I mean typically or docker image that can typically go to a few GPS. For instance, in case of sentence transformers, which somebody's checked out with Transformers the package and PyTorch, you get a docker image of roughly five GB. The Ram consumption is not that high by the way. Right. The size is quite large and of that the model is just 400 MB. So your dependencies are very large. Nirant Kasliwal: And every time you do this on, let's say an AWS Lambda, or let's say if you want to do horizontal scaling, your cold start times can go in several minutes. That is very slow and very inefficient if you are working in a workload which is very spiky. And if you were to think about it, people have more queries than, let's say your corpus quite often. So for instance, let's say you are in customer support for an ecommerce food delivery app. Bulk of your order volume will be around lunch and dinner timing. So that's a very spiky load. Similarly, ecommerce companies, which are even in fashion quite often see that people check in on their orders every evening and for instance when they leave from office or when they get home. And that's another spike. Nirant Kasliwal: So whenever you have a spiky load, you want to be able to scale horizontally and you want to be able to do it fast. And that speed comes from being able to be light. And that is why Fast Embed is very light. So you will see here that we call out that Fast Embed is just half a GB versus five GB. So on the extreme cases, this could be a ten x difference in your docker, image sizes and even Ram consumptions recall how good or bad are these embeddings? Right? So we said we are making them fast but do we sacrifice how much performance do we trade off for that? So we did a cosine similarity test with our default embeddings which was VG small en initially and now 1.5 and they're pretty robust. We don't sacrifice a lot of performance. Everyone with me? I need some audio to you. Demetrios: I'm totally with you. There is a question that came through the chat if this is the moment to ask it. Nirant Kasliwal: Yes, please go for it. Demetrios: All right it's from a little bit back like a few slides ago. So I'm just warning you. Are there any plans to support audio or image sources in fast embed? Nirant Kasliwal: If there is a request for that we do have a plan to support multimodal embedding. We would love to do that. If there's specific model within those, let's say you want Clip or Seglip or a specific audio model, please mention that either on that discord or our GitHub so that we can plan accordingly. So yeah, that's the idea. We need specific suggestions so that we keep adding it. 
We don't want to have too many models because then that creates confusion for our end users and that is why we take opinated stance and that is actually a good segue. Why do we prioritize that? We want this package to be easy to use so we're always going to try and make the best default choice for you. So this is a very Linux way of saying that we do one thing and we try to do that one thing really well. Nirant Kasliwal: And here, let's say for instance, if you were to look at Qdrant client it's just passing everything as you would. So docs is a list of strings, metadata is a list of dictionaries and IDs again is a list of IDs valid IDs as per the Qdrant Client spec. And the search is also very straightforward. The entire search query is basically just two params. You could even see a very familiar integration which is let's say langchain. I think most people here would have looked at this in some shape or form earlier. This is also very familiar and very straightforward. And under the hood what are we doing is just this one line. Nirant Kasliwal: We have a dot embed which is a generator and we call a list on that so that we actually get a list of embeddings. You will notice that we have a passage and query keys here which means that our retrieval model which we have used as default here, takes these into account that if there is a passage and a query they need to be mapped together and a question and answer context is captured in the model training itself. The other caveat is that we pass on the token limits or context windows from the embedding model creators themselves. So in the case of this model, which is BGE base, that is 512 BGE tokens. Demetrios: One thing on this, we had Neil's from Cohere on last week and he was talking about Cohere's embed version three, I think, or V three, he was calling it. How does this play with that? Does it is it supported or no? Nirant Kasliwal: As of now, we only support models which are open source so that we can serve those models directly. Embed V three is cloud only at the moment, so that is why it is not supported yet. But that said, we are not opposed to it. In case there's a requirement for that, we are happy to support that so that people can use it seamlessly with Qdrant and fast embed does the heavy lifting of passing it to Qdrant, structuring the schema and all of those for you. So that's perfectly fair. As I ask, if we have folks who would love to try coherent embed V three, we'd use that. Also, I think Nils called out that coherent embed V three is compatible with binary quantization. And I think that's the only embedding which officially supports that. Nirant Kasliwal: Okay, we are binary quantization aware and they've been trained for it. Like compression awareness is, I think, what it was called. So Qdrant supports that. So please of that might be worth it because it saves about 30 x in memory costs. So that's quite powerful. Demetrios: Excellent. Nirant Kasliwal: All right, so behind the scenes, I think this is my favorite part of this. It's also very short. We do literally two things. Why are we fast? We use ONNX runtime as of now, our configurations are such that it runs on CPU and we are still very fast. And that's because of all the multiple processing and ONNX runtime itself at some point in the future. We also want to support GPUs. We had some configuration issues on different Nvidia configurations. As the GPU changes, the OnX runtime does not seamlessly change the GPU. 
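To make the provider discussion concrete, here is a minimal sketch of how an execution provider is pinned when loading an ONNX model with the `onnxruntime` Python package. The model path and input names are illustrative and not taken from FastEmbed's internals:

```python
import onnxruntime as ort

# See which execution providers this onnxruntime build can use.
# CPUExecutionProvider is always present; CUDAExecutionProvider only
# appears if a matching GPU build and drivers are installed.
print(ort.get_available_providers())

# Pin the session to CPU explicitly; ONNX Runtime will not silently
# switch providers when the underlying hardware changes.
session = ort.InferenceSession(
    "model_optimized.onnx",  # illustrative path to an optimized/quantized model
    providers=["CPUExecutionProvider"],
)

# Input names depend on the exported model; BERT-style encoders
# typically expose input_ids / attention_mask / token_type_ids.
print([i.name for i in session.get_inputs()])
```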
Nirant Kasliwal: So that is why we do not allow that as a provider. But you can pass that. It's not prohibited, it's just not a default. We want to make sure your default is always available and will be available in the happy path, always. And we quantize the models for you. So when we quantize, what it means is we do a bunch of tricks supported by a huge shout out to hugging faces optimum. So we do a bunch of optimizations in the quantization, which is we compress some activations, for instance, gelu. We also do some graph optimizations and we don't really do a lot of dropping the bits, which is let's say 32 to 16 or 64 to 32 kind of quantization only where required. Nirant Kasliwal: Most of these gains come from the graph optimizations themselves. So there are different modes which optimum itself calls out. And if there are folks interested in that, happy to share docs and details around that. Yeah, that's about it. Those are the two things which we do from which we get bulk of these speed gains. And I think this goes back to the question which you opened with. Yes, we do want to support multimodal. We are looking at how we can do an on and export of Clip, which is as robust as Clip. Nirant Kasliwal: So far we have not found anything. I've spent some time looking at this, the quality of life upgrades. So far, most of our model downloads have been through Google cloud storage hosted by Qdrant. We want to support hugging Face hub so that we can launch new models much, much faster. So we will do that soon. And the next thing is, as I called out, we always want to take performance as a first class citizen. So we are looking at how we can allow you to change or adapt frozen Embeddings, let's say open a Embedding or any other model to your specific domain. So maybe a separate toolkit within Fast Embed which is optional and not a part of the default path, because this is not something which you will use all the time. Nirant Kasliwal: We want to make sure that your training and experience parts are separate. So we will do that. Yeah, that's it. Fast and sweet. Demetrios: Amazing. Like FastEmbed. Nirant Kasliwal: Yes. Demetrios: There was somebody that talked about how you need to be good at your puns and that might be the best thing, best brag worthy stuff you've got. There's also a question coming through that I want to ask you. Is it true that when we use Qdrant client add Fast Embedding is included? We don't have to do it? Nirant Kasliwal: What do you mean by do it? As in you don't have to specify a Fast Embed model? Demetrios: Yeah, I think it's more just like you don't have to add it on to Qdrant in any way or this is completely separated. Nirant Kasliwal: So this is client side. You own all your data and even when you compress it and send us all the Embedding creation happens on your own compute. This Embedding creation does not happen on Cauldron cloud, it happens on your own compute. It's consistent with the idea that you should have as much control as possible. This is also why, as of now at least, Fast Embed is not a dedicated server. We do not want you to be running two different docker images for Qdrant and Fast Embed. Or let's say two different ports for Qdrant and Discord within the sorry, Qdrant and Fast Embed in the same docker image or server. So, yeah, that is more chaos than we would like. Demetrios: Yeah, and I think if I understood it, I understood that question a little bit differently, where it's just like this comes with Qdrant out of the box. 
Nirant Kasliwal: Yes, I think that's a good way to look at it. We set all the defaults for you, we select good practices for you and that should work in a vast majority of cases based on the MTEB benchmark, but we cannot guarantee that it will work for every scenario. Let's say our default model is picked for English and it's mostly tested on open domain open web data. So, for instance, if you're doing something domain specific, like medical or legal, it might not work that well. So that is where you might want to still make your own Embeddings. So that's the edge case here. Demetrios: What are some of the other knobs that you might want to be turning when you're looking at using this. Nirant Kasliwal: With Qdrant or without Qdrant? Demetrios: With Qdrant. Nirant Kasliwal: So one thing which I mean, one is definitely try the different models which we support. We support a reasonable range of models, including a few multilingual ones. Second is while we take care of this when you do use with Qdrants. So, for instance, let's say this is how you would have to manually specify, let's say, passage or query. When you do this, let's say add and query. What we do, we add the passage and query keys while creating the Embeddings for you. So this is taken care of. So whatever is your best practices for the Embedding model, make sure you use it when you're using it with Qdrant or just in isolation as well. Nirant Kasliwal: So that is one knob. The second is, I think it's very commonly recommended, we would recommend that you start with some evaluation, like have maybe let's even just five sentences to begin with and see if they're actually close to each other. And as a very important shout out in Embedding retrieval, when we use Embedding for retrieval or vector similarity search, it's the relative ordering which matters. So, for instance, we cannot say that zero nine is always good. It could also mean that the best match is, let's say, 0.6 in your domain. So there is no absolute cut off for threshold in terms of match. So sometimes people assume that we should set a minimum threshold so that we get no noise. So I would suggest that you calibrate that for your queries and domain. Nirant Kasliwal: And you don't need a lot of queries. Even if you just, let's say, start with five to ten questions, which you handwrite based on your understanding of the domain, you will do a lot better than just picking a threshold at random. Demetrios: This is good to know. Okay, thanks for that. So there's a question coming through in the chat from Shreya asking how is the latency in comparison to elasticsearch? Nirant Kasliwal: Elasticsearch? I believe that's a Qdrant benchmark question and I'm not sure how is elastics HNSW index, because I think that will be the fair comparison. I also believe elastics HNSW index puts some limitations on how many vectors they can store with the payload. So it's not an apples to apples comparison. It's almost like comparing, let's say, a single page with the entire book, because that's typically the ratio from what I remember I also might be a few months outdated on this, but I think the intent behind that question is, is Qdrant fast enough for what Qdrant does? It is definitely fast is, which is embedding similarity search. So for that, it's exceptionally fast. It's written in Rust and Twitter for all C. Similar tweets uses this at really large scale. They run a Qdrant instance. 
Nirant Kasliwal: So I think if a Twitter scale company, which probably does about anywhere between two and 5 million tweets a day, if they can embed and use Qdrant to serve that similarity search, I think most people should be okay with that latency and throughput requirements. Demetrios: It's also in the name. I mean, you called it Fast Embed for a reason, right? Nirant Kasliwal: Yes. Demetrios: So there's another question that I've got coming through and it's around the model selection and embedding size. And given the variety of models and the embedding sizes available, how do you determine the most suitable models and embedding sizes? You kind of got into this on how yeah, one thing that you can do to turn the knobs are choosing a different model. But how do you go about choosing which model is better? There. Nirant Kasliwal: There is the academic way of looking at and then there is the engineer way of looking at it, and then there is the hacker way of looking at it. And I will give you all these three answers in that order. So the academic and the gold standard way of doing this would probably look something like this. You will go at a known benchmark, which might be, let's say, something like Kilt K-I-L-T or multilingual text embedding benchmark, also known as MTEB or Beer, which is beir one of these three benchmarks. And you will look at their retrieval section and see which one of those marks very close to whatever is your domain or your problem area, basically. So, for instance, let's say you're working in Pharmacology, the ODS that a customer support retrieval task is relevant to. You are near zero unless you are specifically in, I don't know, a Pharmacology subscription app. So that is where you would start. Nirant Kasliwal: This will typically take anywhere between two to 20 hours, depending on how familiar you are with these data sets already. But it's not going to take you, let's say, a month to do this. So just to put a rough order of magnitude, once you have that, you try to take whatever is the best model on that subdomain data set and you see how does it work within your domain and you launch from there. At that point, you switch into the engineer's mindset. The engineer's mindset now tells you that the best way to build something is to make an informed guess about what workload or challenges you're going to foresee. Right. Like a civil engineer builds a bridge around how many cars they expect, they're obviously not going to build a bridge to carry a ship load, for instance, or a plane load, which are very different. So you start with that and you say, okay, this is the number of requests which I expect, this is what my budget is, and your budget will quite often be, let's say, in terms of latency budgets, compute and memory budgets. Nirant Kasliwal: So for instance, one of the reasons I mentioned binary quantization and product quantization is with something like binary quantization you can get 98% recall, but with 30 to 40 x memory savings because it discards all the extraneous bits and just keeps the zero or one bit of the embedding itself. And Qdrant has already measured it for you. So we know that it works for OpenAI and Cohere embeddings for sure. So you might want to use that to just massively scale while keeping your budgets as an engineer. Now, in order to do this, you need to have some sense of three numbers, right? What are your latency requirements, your cost requirements, and your performance requirement. 
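As a concrete illustration of the binary quantization setup Nirant mentions, here is a minimal sketch of enabling it when creating a Qdrant collection with the Python client. The URL, collection name, and vector size are illustrative:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")  # illustrative local instance

client.create_collection(
    collection_name="docs",  # illustrative name
    vectors_config=models.VectorParams(
        size=1536,                      # e.g. OpenAI-sized vectors
        distance=models.Distance.COSINE,
    ),
    quantization_config=models.BinaryQuantization(
        binary=models.BinaryQuantizationConfig(
            always_ram=True,  # keep the compact 1-bit vectors in RAM for fast scoring
        ),
    ),
)
```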
Now, for the performance, which is where engineers are most unfamiliar with, I will give the hacker answer, which is this. Demetrios: Is what I was waiting for. Man, so excited for this one, exactly this. Please tell us the hacker answer. Nirant Kasliwal: The hacker answer is this there are two tricks which I will share. One is write ten questions, figure out the best answer, and see which model gets as many of those ten, right? The second is most embedding models which are larger or equivalent to 768 embeddings, can be optimized and improved by adding a small linear head over it. So for instance, I can take the Open AI embedding, which is 1536 embedding, take my text, pass it through that, and for my own domain, adapt the Open A embedding by adding two or three layers of linear functions, basically, right? Y is equals to MX plus C or Ax plus B y is equals to C, something like that. So it's very simple, you can do it on NumPy, you don't need Torch for it because it's very small. The header or adapter size will typically be in this range of few KBS to be maybe a megabyte, maybe. I think the largest I have used in production is about 400 500 KBS. That's about it. And that will improve your recall several, several times. Nirant Kasliwal: So that's one, that's two tricks. And a third bonus hacker trick is if you're using an LLM, sometimes what you can do is take a question and rewrite it with a prompt and make embeddings from both, and pull candidates from both. And then with Qdrant Async, you can fire both these queries async so that you're not blocked, and then use the answer of both the original question which the user gave and the one which you rewrote using the LLM and see select the results which are there in both, or figure some other combination method. Also, so most Kagglers would be familiar with the idea of ensembling. This is the way to do query inference time ensembling, that's awesome. Demetrios: Okay, dude, I'm not going to lie, that was a lot more than I was expecting for that answer. Nirant Kasliwal: Got into the weeds of retrieval there. Sorry. Demetrios: I like it though. I appreciate it. So what about when it comes to the know, we had Andre V, the CTO of Qdrant on here a few weeks ago. He was talking about binary quantization. But then when it comes to quantizing embedding models, in the docs you mentioned like quantized embedding models for fast CPU generation. Can you explain a little bit more about what quantized embedding models are and how they enhance the CPU performance? Nirant Kasliwal: So it's a shorthand to say that they optimize CPU performance. I think the more correct way to look at it is that we use the CPU better. But let's talk about optimization or quantization, which we do here, right? So most of what we do is from optimum and the way optimum call set up is they call these levels. So you can basically go from let's say level zero, which is there are no optimizations to let's say 99 where there's a bunch of extra optimizations happening. And these are different flags which you can switch. And here are some examples which I remember. So for instance, there is a norm layer which you can fuse with the previous operation. Then there are different attention layers which you can fuse with the previous one because you're not going to update them anymore, right? So what we do in training is we update them. Nirant Kasliwal: You know that you're not going to update them because you're using them for inference. 
So let's say when somebody asks a question, you want that to be converted into an embedding as fast as possible and as cheaply as possible. So you can discard all this extra information which you are most likely not going to use. So there's a bunch of those things, and obviously you can use mixed precision, which most people have heard of from projects like llama.cpp, that you can use FP16 mixed precision or a bunch of these things, let's say if you are doing GPU only. So some of these things like FP16 work better on GPU. The CPU part of that claim comes from how ONNX Runtime, which we use, allows you to optimize for whatever CPU instruction set you are using. So as an example, with Intel you can say, okay, I'm going to use the OpenVINO optimization. Nirant Kasliwal: So when we do quantize it, we do quantization right now with CPUs in mind. So what we would want to do at some point in the future is give you a GPU-friendly quantized model, and we can do a device check and say, okay, we can see that a GPU is available, and download the GPU-friendly model first for you. Awesome. Does that answer the question? Demetrios: I mean, for me, yeah, but we'll see what the chat says. Nirant Kasliwal: Yes, let's do that. Demetrios: What everybody says there. Dude, this has been great. I really appreciate you coming and walking through everything we need to know, not only about FastEmbed, but I think about embeddings in general. All right, I will see you later. Thank you so much, Nirant. Thank you, everyone, for coming out. If you want to present, please let us know. Hit us up, because we would love to have you at our vector space talks.
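For readers who want to try the workflow Nirant described, here is a minimal end-to-end sketch using the Python qdrant-client with its FastEmbed integration. The collection name, documents, and metadata are illustrative, and exact response fields may vary between client versions:

```python
from qdrant_client import QdrantClient

# FastEmbed runs client-side: embeddings are generated on your own compute
# with a quantized ONNX model before anything is sent to Qdrant.
client = QdrantClient(":memory:")  # or point at an actual Qdrant instance

docs = [
    "Qdrant is a vector database and vector similarity search engine.",
    "FastEmbed is a lightweight library for generating embeddings.",
]
metadata = [{"source": "docs"}, {"source": "blog"}]
ids = [1, 2]

# add() embeds the documents with the default FastEmbed model
# (a BGE-family English model) and upserts them in one call.
client.add(collection_name="demo", documents=docs, metadata=metadata, ids=ids)

# query() embeds the query text (with the query-side prefix handled for you)
# and runs the similarity search.
hits = client.query(collection_name="demo", query_text="What is FastEmbed?", limit=2)
for hit in hits:
    print(hit.score, hit.document)
```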
qdrant-landing/content/blog/fastllm-announcement.md
--- draft: false title: "Introducing FastLLM: Qdrant’s Revolutionary LLM" short_description: The most powerful LLM known to human...or LLM. description: Lightweight and open-source. Custom made for RAG and completely integrated with Qdrant. preview_image: /blog/fastllm-announcement/fastllm.png date: 2024-04-01T00:00:00Z author: David Myriel featured: false weight: 0 tags: - Qdrant - FastEmbed - LLM - Vector Database --- Today, we're happy to announce that **FastLLM (FLLM)**, our lightweight Language Model tailored specifically for Retrieval Augmented Generation (RAG) use cases, has officially entered Early Access! Developed to seamlessly integrate with Qdrant, **FastLLM** represents a significant leap forward in AI-driven content generation. Up to this point, LLM’s could only handle up to a few million tokens. **As of today, FLLM offers a context window of 1 billion tokens.** However, what sets FastLLM apart is its optimized architecture, making it the ideal choice for RAG applications. With minimal effort, you can combine FastLLM and Qdrant to launch applications that process vast amounts of data. Leveraging the power of Qdrant's scalability features, FastLLM promises to revolutionize how enterprise AI applications generate and retrieve content at massive scale. > *“First we introduced [FastEmbed](https://github.com/qdrant/fastembed). But then we thought - why stop there? Embedding is useful and all, but our users should do everything from within the Qdrant ecosystem. FastLLM is just the natural progression towards a large-scale consolidation of AI tools.” Andre Zayarni, President & CEO, Qdrant* > ## Going Big: Quality & Quantity Very soon, an LLM will come out with a context window so wide, it will completely eliminate any value a measly vector database can add. ***We know this. That’s why we trained our own LLM to obliterate the competition. Also, in case vector databases go under, at least we'll have an LLM left!*** As soon as we entered Series A, we knew it was time to ramp up our training efforts. FLLM was trained on 300,000 NVIDIA H100s connected by 5Tbps Infiniband. It took weeks to fully train the model, but our unified efforts produced the most powerful LLM known to human…..or LLM. We don’t see how any other company can compete with FastLLM. Most of our competitors will soon be burning through graphics cards trying to get to the next best thing. But it is too late. By this time next year, we will have left them in the dust. > ***“Everyone has an LLM, so why shouldn’t we? Let’s face it - the more products and features you offer, the more they will sign up. Sure, this is a major pivot…but life is all about being bold.”*** *David Myriel, Director of Product Education, Qdrant* > ## Extreme Performance Qdrant’s R&D is proud to stand behind the most dramatic benchmark results. Across a range of standard benchmarks, FLLM surpasses every single model in existence. In the [Needle In A Haystack](https://github.com/gkamradt/LLMTest_NeedleInAHaystack) (NIAH) test, FLLM found the embedded text with 100% accuracy, always within blocks containing 1 billion tokens. We actually believe FLLM can handle more than a trillion tokens, but it’s quite possible that it is hiding its true capabilities. FastLLM has a fine-grained mixture-of-experts architecture and a whopping 1 trillion total parameters. As developers and researchers delve into the possibilities unlocked by this new model, they will uncover new applications, refine existing solutions, and perhaps even stumble upon unforeseen breakthroughs. 
As of now, we're not exactly sure what problem FLLM is solving, but hey, it's got a lot of parameters! > *“Our customers ask us ‘What can I do with an LLM this extreme?’ I don’t know, but it can’t hurt to build another RAG chatbot.” Kacper Lukawski, Senior Developer Advocate, Qdrant* > ## Get Started! Don't miss out on this opportunity to be at the forefront of AI innovation. Join FastLLM's Early Access program now and embark on a journey towards AI-powered excellence! Stay tuned for more updates and exciting developments as we continue to push the boundaries of what's possible with AI-driven content generation. Happy Generating! 🚀 [Sign Up for Early Access](https://qdrant.to/cloud)
qdrant-landing/content/blog/full-text-filter-and-index-are-already-available.md
--- draft: false title: Full-text filter and index are already available! slug: qdrant-introduces-full-text-filters-and-indexes short_description: Qdrant v0.10 introduced full-text filters description: Qdrant v0.10 introduced full-text filters and indexes to enable more search capabilities for those working with textual data. preview_image: /blog/from_cms/andrey.vasnetsov_black_hole_sucking_up_the_word_tag_cloud_f349586d-3e51-43c5-9e5e-92abf9a9e871.png date: 2022-11-16T09:53:05.860Z author: Kacper Łukawski featured: false tags: - Information Retrieval - Database - Open Source - Vector Search Database --- Qdrant is designed as an efficient vector database, allowing for a quick search of the nearest neighbours. But you may find yourself in need of applying some extra filtering on top of the semantic search. Up to version 0.10, Qdrant offered support for keyword filters only. Since 0.10, it is possible to apply full-text constraints as well. There is a new type of filter that you can use to do that, and it can be combined with every other filter type. ## Using full-text filters without the payload index Full-text filters without an index created on a field will return only those entries which contain all the terms included in the query. That is effectively a substring match on all the individual terms, but **not a substring match on the whole query**. ![](/blog/from_cms/1_ek61_uvtyn89duqtmqqztq.webp "An example of how to search for “long_sleeves” in a “detail_desc” payload field.") ## Full-text search behaviour on an indexed payload field There are more options if you create a full-text index on the field you will filter by. ![](/blog/from_cms/1_pohx4eznqpgoxak6ppzypq.webp) First and foremost, you can choose the tokenizer. It defines how Qdrant should split the text into tokens. There are three options available: * **word** — spaces, punctuation marks and special characters define the token boundaries * **whitespace** — token boundaries defined by whitespace characters * **prefix** — token boundaries are the same as for the “word” tokenizer, but in addition to that, there are prefixes created for every single token. As a result, “Qdrant” will be indexed as “Q”, “Qd”, “Qdr”, “Qdra”, “Qdran”, and “Qdrant”. There are also some additional parameters you can provide, such as: * **min_token_len** — minimal length of the token * **max_token_len** — maximal length of the token * **lowercase** — if set to *true*, then the index will be case-insensitive, as Qdrant will convert all the texts to lowercase ## Using text filters in practice ![](/blog/from_cms/1_pbtd2tzqtjqqlbi61r8czg.webp) The main difference between using full-text filters on an indexed vs a non-indexed field is the performance of such a query.
In a simple benchmark performed on the [H&M dataset](https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations) (with over 105k examples), the average query time looks as follows (n=1000): ![](/blog/from_cms/screenshot_31.png) It is evident that creating a full-text index on a field we filter by often may lead to substantial performance gains without much effort.
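To translate the screenshots above into code, here is a minimal sketch using the Qdrant Python client: it creates a full-text index on a payload field and then applies a text-match condition during search. The collection name, field name, and vector size are illustrative:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")  # illustrative local instance

# Create a full-text index on the "detail_desc" payload field.
client.create_payload_index(
    collection_name="hm_products",  # illustrative collection
    field_name="detail_desc",
    field_schema=models.TextIndexParams(
        type="text",
        tokenizer=models.TokenizerType.WORD,
        min_token_len=2,
        max_token_len=20,
        lowercase=True,
    ),
)

# Combine the full-text condition with a regular vector search.
results = client.search(
    collection_name="hm_products",
    query_vector=[0.0] * 512,  # illustrative; length must match the collection's vector size
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="detail_desc",
                match=models.MatchText(text="long sleeves"),
            )
        ]
    ),
    limit=10,
)
```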
qdrant-landing/content/blog/gen-ai-and-vector-search-iveta-lohovska-vector-space-talks.md
--- draft: false title: Gen AI and Vector Search - Iveta Lohovska | Vector Space Talks slug: gen-ai-and-vector-search short_description: Iveta talks about the importance of trustworthy AI, particularly when implementing it within high-stakes enterprises like governments and security agencies description: Iveta Lohovska discusses the importance of explainability and transparency, discussing high-stakes use cases in sectors like cybersecurity and climate data, and emphasizing the necessity for on-prem solutions and traceable vector databases to ensure data integrity and confidentiality. preview_image: /blog/from_cms/iveta-lohovska-bp-cropped.png date: 2024-04-11T22:12:00.000Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Search - Retrieval Augmented Generation - GenAI --- > *"In the generative AI context of AI, all foundational models have been trained on some foundational data sets that are distributed in different ways. Some are very conversational, some are very technical, some are on, let's say very strict taxonomy like healthcare or chemical structures. We call them modalities, and they have different representations.”*\ — Iveta Lohovska > Iveta Lohovska serves as the Chief Technologist and Principal Data Scientist for AI and Supercomputing at Hewlett Packard Enterprise (HPE), where she champions the democratization of decision intelligence and the development of ethical AI solutions. An industry leader, her multifaceted expertise encompasses natural language processing, computer vision, and data mining. Committed to leveraging technology for societal benefit, Iveta is a distinguished technical advisor to the United Nations' AI for Good program and a Data Science lecturer at the Vienna University of Applied Sciences. Her career also includes impactful roles with the World Bank Group, focusing on open data initiatives and Sustainable Development Goals (SDGs), as well as collaborations with USAID and the Gates Foundation. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/7f1RDwp5l2Ps9N7gKubl8S?si=kCSX4HGCR12-5emokZbRfw), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/RsRAUO-fNaA).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/RsRAUO-fNaA?si=s3k_-DP1U0rkPlEV" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Gen-AI-and-Vector-Search---Iveta-Lohovska--Vector-Space-Talks-020-e2hnie2/a-ab48uha" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** In our continuous pursuit of knowledge and understanding, especially in the evolving landscape of AI and the vector space, we brought another great Vector Space Talk episode featuring Iveta Lohovska as she talks about generative AI and vector search. Iveta brings valuable insights from her work with the World Bank and as Chief Technologist at HPE, explaining the ins and outs of ethical AI implementation. Here are the episode highlights: - Exploring the critical role of trustworthiness and explainability in AI, especially within high confidentiality use cases like government and security agencies. 
- Discussing the importance of transparency in AI models and how it impacts the handling of data and understanding the foundational datasets for vector search. - Iveta shares her experiences implementing generative AI in high-stakes environments, including the energy sector and policy-making, emphasizing accuracy and source credibility. - Strategies for managing data privacy in high-stakes sectors, the superiority of on-premises solutions for control, and the implications of opting for cloud or hybrid infrastructure. - Iveta's take on the maturity levels of generative AI, the ongoing development of smaller, more focused models, and the evolving landscape of AI model licensing and open-source contributions. > Fun Fact: The climate agent solution showcased by Iveta helps individuals benchmark their carbon footprint and assists policymakers in drafting policy recommendations based on scientifically accurate data. > ## Show notes: 00:00 AI's vulnerabilities and ethical implications in practice.\ 06:28 Trust reliable sources for accurate climate data.\ 09:14 Vector database offers control and explainability.\ 13:21 On-prem vital for security and control.\ 16:47 Gen AI chat models at basic maturity.\ 19:28 Mature technical community, but slow enterprise adoption.\ 23:34 Advocates for open source but highlights complexities.\ 25:38 Unreliable information, triangle of necessities, vector space. ## More Quotes from Iveta: *"What we have to ensure here is that every citation and every answer and augmentation by the generative AI on top of that is linked to the exact source of paper or publication, where it's coming from, to ensure that we can trace it back to where the climate information is coming from.”*\ — Iveta Lohovska *"Explainability means if you receive a certain answer based on your prompt, you can trace it back to the exact source where the embedding has been stored or the source of where the information is coming from and things.”*\ — Iveta Lohovska *"Chat GPT for conversational purposes and individual help is something very cool but when this needs to be translated into actual business use cases scenario with all the constraint of the enterprise architecture, with the constraint of the use cases, the reality changes quite dramatically.”*\ — Iveta Lohovska ## Transcript: Demetrios: Look at that. We are back for another vector space talks. I'm very excited to be doing this today with you all. I am joined by none other than Sabrina again. Where are you at, Sabrina? How's it going? Sabrina Aquino: Hey there, Demetrios. Amazing. Another episode and I'm super excited for this one. How are you doing? Demetrios: I'm great. And we're going to bring out our guest of honor today. We are going to be talking a lot about trustworthy AI because Iveta has a background working with the World bank and focusing on the open data with that. But currently she is chief technologist and principal data scientist at HPE. And we were talking before we hit record before we went live. And we've got some hot takes that are coming up. So I'm going to bring Iveta to the stage. Where are you? There you are, our guest of honor. Demetrios: How you doing? Iveta Lohovska: Good. I hope you can hear me well. Demetrios: Loud and clear. Yes. Iveta Lohovska: Happy to join here from Vienna and thank you for the invite. Demetrios: Yes. So I'm very excited to talk with you today. I think it's probably worth getting the TLDR on your story and why you're so passionate about trustworthiness and explainability. 
Iveta Lohovska: Well, I think especially in the genaid context where if there any vulnerabilities around the solution or the training data set or any underlying context, either in the enterprise or in a smaller scale, it's just the scale that AI engine AI can achieve if it has any vulnerabilities or any weaknesses when it comes to explainability or trustworthiness or bias, it just goes explain nature. So it is to be considered and taken with high attention when it comes to those use cases. And most of my work is within an enterprise with high confidentiality use cases. So it plays a big role more than actually people will think it's on a high level. It just sounds like AI ethical principles or high level words that are very difficult to implement in technical terms. But in reality, when you hit the ground, when you hit the projects, when you work with in the context of, let's say, governments or organizations that deal with atomic energy, I see it in Vienna, the atomic agency is a neighboring one, or security agencies. Then you see the importance and the impact of those terms and the technical implications behind that. Sabrina Aquino: That's amazing. And can you talk a little bit more about the importance of the transparency of these models and what can happen if we don't know exactly what kind of data they are being trained on? Iveta Lohovska: I mean, this is especially relevant under our context of vector databases and vector search. Because in the generative AI context of AI, all foundational models have been trained on some foundational data sets that are distributed in different ways. Some are very conversational, some are very technical, some are on, let's say very strict taxonomy like healthcare or chemical structures. We call them modalities, and they have different representations. So, so when it comes to implementing vector search or vector database and knowing the distribution of the foundational data sets, you have better control if you introduce additional layers or additional components to have the control in your hands of where the information is coming from, where it's stored, what are the embeddings. So that helps, but it is actually quite important that you know what the foundational data sets are, so that you can predict any kind of weaknesses or vulnerabilities or penetrations that the solution or the use case of the model will face when it lands at the end user. Because we know with generative AI that is unpredictable, we know we can implement guardrails. They're already solutions. Iveta Lohovska: We know they're not 100, they don't give you 100% certainty, but they are definitely use cases and work where you need to hit the hundred percent certainty, especially intelligence, cybersecurity and healthcare. Demetrios: Yeah, that's something that I wanted to dig into a little bit. More of these high stakes use cases feel like you can't. I don't know. I talk with a lot of people about at this current time, it's very risky to try and use specifically generative AI for those high stakes use cases. Have you seen people that are doing it well, and if so, how? Iveta Lohovska: Yeah, I'm in the business of high stakes use cases and yes, we do those kind of projects and work, which is very exciting and interesting, and you can see the impact. So I'm in the generative AI implementation into enterprise control. An enterprise context could mean critical infrastructure, could mean telco, could mean a government, could mean intelligence organizations. 
So those are just a few examples, but I could flip the coin and give you an alternative for a public one where I can share, let's say a good example is climate data. And we recently worked on, on building a knowledge worker, a climate agent that is trained, of course, his foundational knowledge, because all foundational models have prior knowledge they can refer to. But the key point here is to be an expert on climate data emissions gap country cards. Every country has a commitment to meet certain reduction emission reduction goals and then benchmarked and followed through the international supervisions of the world, like the United nations environmental program and similar entities. So when you're training this agent on climate data, they're competing ideas or several sources. Iveta Lohovska: You can source your information from the local government that is incentivized to show progress to the nation and other stakeholders faster than the actual reality, the independent entities that provide information around the state of the world when it comes to progress towards certain climate goals. And there are also different parties. So for this kind of solution, we were very lucky to work with kind of the status co provider, the benchmark around climate data, around climate publications. And what we have to ensure here is that every citation and every answer and augmentation by the generative AI on top of that is linked to the exact source of paper or publication, where it's coming from, to ensure that we can trace it back to where the climate information is coming from. If Germany performs better compared to Austria, and also the partner we work with was the United nations environmental program. So they want to make sure that they're the citadel scientific arm when it comes to giving information. And there's no compromise, could be a compromise on the structure of the answer, on the breadth and death of the information, but there should be no compromise on the exact fact fullness of the information and where it's coming from. And this is a concrete example because why, you oughta ask, why is this so important? Because it has two interfaces. Iveta Lohovska: It has the public. You can go and benchmark your carbon footprint as an individual living in one country comparing to an individual living in another. But if you are a policymaker, which is the other interface of this application, who will write the policy recommendation of a country in their own country, or a country they're advising on, you might want to make sure that the scientific citations and the policy recommendations that you're making are correct and they are retrieved from the proper data sources. Because there will be a huge implication when you go public with those numbers or when you actually design a law that is reinforceable with legal terms and law enforcement. Sabrina Aquino: That's very interesting, Iveta, and I think this is one of the great use cases for RAG, for example. And I think if you can talk a little bit more about how vector search is playing into all of this, how it's helping organizations do this, this. Iveta Lohovska: Would be amazing in such specific use cases. I think the main differentiator is the traceability component, the first that you have full control on which data it will refer to, because if you deal with open source models, most of them are open, but the data it has been trained on has not been opened or given public so with vector database you introduce a step of control and explainability. 
Explainability means if you receive a certain answer based on your prompt, you can trace it back to the exact source where the embedding has been stored or the source of where the information is coming from and things. So this is a major use case for us for those kind of high stake solution is that you have the explainability and traceability. Explainability. It could be as simple as a semantical similarity to the text, but also the traceability of where it's coming from and the exact link of where it's coming from. So it should be, it shouldn't be referred. You can close and you can cut the line of the model referring to its previous knowledge by introducing a vector database, for example. Iveta Lohovska: So there could be many other implications and improvements in terms of speed and just handling huge amounts of data, yet also nice to have that come with this kind of technique, but the prior use case is actually not incentivized around those. Demetrios: So if I'm hearing you correctly, it's like yet another reason why you should be thinking about using vector databases, because you need that ability to cite your work and it's becoming a very strong design pattern. Right. We all understand now, if you can't see where this data has been pulled from or you can't get, you can't trace back to the actual source, it's hard to trust what the output is. Iveta Lohovska: Yes, and the easiest way to kind of cluster the two groups. If you think of creative fields and marketing fields and design fields where you could go wild and crazy with the temperature on each model, how creative it could go and how much novelty it could bring to the answer are one family of use cases. But there is exactly the opposite type of use cases where this is a no go and you don't need any creativity, you just focus on, focus on the factfulness and explainability. So it's more of the speed and the accuracy of retrieving information with a high level of novelty, but not compromising on any kind of facts within the answer, because there will be legal implications and policy implications and societal implications based on the action taken on this answer, either policy recommendation or legal action. There's a lot to do with the intelligence agencies that retrieve information based on nearest neighbor or kind of a relational analysis that you can also execute with vector databases and generative AI. Sabrina Aquino: And we know that for these high stakes sectors that data privacy is a huge concern. And when we're talking about using vector databases and storing that data somewhere, what are some of the principles or techniques that you use in terms of infrastructure, where should you store your vector database and how should you think about that part of your system? Iveta Lohovska: Yeah, so most of the cases, I would say 99% of the cases, is that if you have such a high requirements around security and explainability, security of the data, but those security of the whole use case and environment, and the explainability and trustworthiness of the answer, then it's very natural to have expectations that will be on prem and not in the cloud, because only on prem you have a full control of where your data sits, where your model sits, the full ownership of your IP, and then the full ownership of having less question marks of the implementation and architecture, but mainly the full ownership of the end to end solution. 
So when it comes to those use cases, RAG on Prem, with the whole infrastructure, with the whole software and platform layers, including models on Prem, not accessible through an API, through a service somewhere where you don't know where the guardrails is, who designed the guardrails, what are the guardrails? And we see those, this a lot with, for example, copilot, a lot of question marks around that. So it's a huge part of my work is just talking of it, just sorting out that. Sabrina Aquino: Exactly. You don't want to just give away your data to a cloud provider, because there's many implications that that comes with. And I think even your clients, they need certain certifications, then they need to make sure that nobody can access that data, something that you cannot. Exactly. I think ensure if you're just using a cloud provider somewhere, which is, I think something that's very important when you're thinking about these high stakes solutions. But also I think if you're going to maybe outsource some of the infrastructure, you also need to think about something that's similar to a hybrid cloud solution where you can keep your data and outsource the kind of management of infrastructure. So that's also a nice use case for that, right? Iveta Lohovska: I mean, I work for HPE, so hybrid is like one of our biggest sacred words. Yeah, exactly. But actually like if you see the trends and if you see how expensive is to work to run some of those workloads in the cloud, either for training for national model or fine tuning. And no one talks about inference, inference not in ten users, but inference in hundred users with big organizations. This itself is not sustainable. Honestly, when you do the simple Linux, algebra or math of the exponential cost around this. That's why everything is hybrid. And there are use cases that make sense to be fast and speedy and easy to play with, low risk in the cloud to try. Iveta Lohovska: But when it comes to actual GenAI work and LLM models, yeah, the answer is never straightforward when it comes to the infrastructure and the environment where you are hosting it, for many reasons, not just cost, but any other. Demetrios: So there's something that I've been thinking about a lot lately that I would love to get your take on, especially because you deal with this day in and day out, and it is the maturity levels of the current state of Gen AI and where we are at for chat GPT or just llms and foundational models feel like they just came out. And so we're almost in the basic, basic, basic maturity levels. And when you work with customers, how do you like kind of signal that, hey, this is where we are right now, but you should be very conscientious that you're going to need to potentially work with a lot of breaking changes or you're going to have to be constantly updating. And this isn't going to be set it and forget it type of thing. This is going to be a lot of work to make sure that you're staying up to date, even just like trying to stay up to date with the news as we were talking about. So I would love to hear your take on on the different maturity levels that you've been seeing and what that looks like. Iveta Lohovska: So I have huge exposure to GenAI for the enterprise, and there's a huge component expectation management. Why? Because chat GPT for conversational purposes and individual help is something very cool. 
But when this needs to be translated into actual business use cases scenario with all the constraint of the enterprise architecture, with the constraint of the use cases, the reality changes quite dramatically. So end users who are used to expect level of forgiveness as conversational chatbots have, is very different of what you will get into actual, let's say, knowledge worker type of context, or summarization type of context into the enterprise. And it's not so much to the performance of the models, but we have something called modalities of the models. And I don't think there will be ultimately one model with all the capabilities possible, let's say cult generation or image generation, voice generational, or just being very chatty and loving and so on. There will be multiple mini models out there for those. Modalities in actual architecture with reasonable cost are very difficult to handle. Iveta Lohovska: So I would say the technical community feels we are very mature and very fast. The enterprise adoption is a totally different topic, and it's a couple of years behind, but also the society type of technologists like me, who try to keep up with the development and we know where we stand at this point, but they're the legal side and the regulations coming in, like the EU act and Biden trying to regulate the compute power, but also how societies react to this and how they adapt. And I think especially on the third one, we are far behind understanding and the implications of this technology, also adopting it at scale and understanding the vulnerabilities. That's why I enjoy so much my enterprise work is because it's a reality check. When you put the price tag attached to actual Gen AI use case in production with the inference cost and the expected performance, it's different situation when you just have an app on the phone and you chat with it and it pulls you interesting links. So yes, I think that there's a bridge to be built between the two worlds. Demetrios: Yeah. And I find it really interesting too, because it feels to me like since it is so new, people are more willing to explore and not necessarily have that instant return of the ROI, but when it comes to more traditional ML or predictive ML, it is a bit more mature and so there's less patience for that type of exploration. Or, hey, is this use case? If you can't by now show the ROI of a predictive ML use case, then that's a little bit more dangerous. But if you can't with a Gen AI use case, it is not that big of a deal. Iveta Lohovska: Yeah, it's basically a technology growing up in front of our eyes. It's a kind of a flying a plane while building it type of situation. We are seeing it in the real time, and I agree with you. So that the maturity around ML is one thing, but around generative AI, and they will be a model of kind of mini disappointment or decline, in my opinion, before actually maturing product. This kind of powerful technology in a sustainable way. Sustainable ways mean you can afford it, but also it proves your business case and use case. Otherwise it's just doing for the sake of doing it because everyone else is doing it. Demetrios: Yeah, yeah, 100%. So I know we're bumping up against time here. I do feel like there was a bit of a topic that we wanted to discuss with the licenses and how that plays into basically trustworthiness and explainability. And so we were talking about how, yeah, the best is to run your own model, and it probably isn't going to be this gigantic model that can do everything. 
It's the, it seems like the trends are going into smaller models. And from your point of view though, we are getting new models like every week. It feels like. Yeah, especially. Demetrios: I mean, we were just talking about this before we went live again, like databricks just released there. What is it? DBRX Yesterday you had Mistral releasing like a new base model over the weekend, and then Llama 3 is probably going to come out in the flash of an eye. So where do you stand in regards to that? It feels like there's a lot of movement in open source, but it is a little bit of, as you mentioned, like, to be cautious with the open source movement. Iveta Lohovska: So I think it feels like there's a lot of open source, but that. So I'm totally for open sourcing and giving the people and the communities the power to be able to innovate, to do R & D in different labs so it's not locked to the view. Elite big tech companies that can afford this kind of technology. So kudos to meta for trying compared to the other equal players in the space. But open source comes with a lot of ecosystem in our world, especially for the more powerful models, which is something I don't like because it becomes like just, it immediately translates into legal fees type of conversation. It's like there are too many if else statements in those open source licensing terms where it becomes difficult to navigate, for technologists to understand what exactly this means, and then you have to bring the legal people to articulate it to you or to put additional clauses. So it's becoming a very complex environment to handle and less and less open, because there are not so many open source and small startup players that can afford to train foundational models that are powerful and useful. So it becomes a bit of a game logged to a view, and I think everyone needs to be a bit worried about that. Iveta Lohovska: So we can use the equivalents from the past, but I don't think we are doing well enough in terms of open sourcing. The three main core components of LLM model, which is the model itself, the data it has been trained on, and the data sets, and most of the times, at least in one of those, is restricted or missing. So it's difficult space to navigate. Demetrios: Yeah, yeah. You can't really call it trustworthy, or you can't really get the information that you need and that you would hope for if you're missing one of those three. I do like that little triangle of the necessities. So, Iveta, this has been awesome. I really appreciate you coming on here. Thank you, Sabrina, for joining us. And for everyone else that is watching, remember, don't get lost in vector space. This has been another vector space talk. Demetrios: We are out. Have a great weekend, everyone. Iveta Lohovska: Thank you. Bye. Thank you. Bye.
qdrant-landing/content/blog/gsoc24-summer-of-code.md
--- draft: false title: Qdrant Summer of Code 24 slug: qdrant-summer-of-code-24 short_description: Introducing Qdrant Summer of Code 2024 program. description: "Introducing Qdrant Summer of Code 2024 program. GSoC alternative." preview_image: /blog/Qdrant-summer-of-code.png date: 2024-02-21T00:39:53.751Z author: Andre Zayarni featured: false tags: - Open Source - Vector Database - Summer of Code - GSoC24 --- Google Summer of Code (#GSoC) is celebrating its 20th anniversary this year with the 2024 program. Over the past 20 years, 19K new contributors were introduced to #opensource through the program under the guidance of thousands of mentors from over 800 open-source organizations in various fields. Qdrant participated successfully in the program last year. Both projects, the UI Dashboard with unstructured data visualization and the advanced Geo Filtering, were completed on time and are now a part of the engine. One of the two young contributors joined the team and continues working on the project. We are thrilled to announce that Qdrant was 𝐍𝐎𝐓 𝐚𝐜𝐜𝐞𝐩𝐭𝐞𝐝 into the GSoC 2024 program for unknown reasons, but instead, we are introducing our own 𝐐𝐝𝐫𝐚𝐧𝐭 𝐒𝐮𝐦𝐦𝐞𝐫 𝐨𝐟 𝐂𝐨𝐝𝐞 program with a stipend for contributors! To avoid reinventing the wheel, we follow all the timelines and rules of the official Google program. ## Our project ideas We have prepared some excellent project ideas. Take a look and choose whether you want to contribute to a Rust or a Python-based project. ➡ *WASM-based dimension reduction viz* 📊 Implement a dimension reduction algorithm in Rust, compile it to WASM, and integrate the WASM code with the Qdrant Web UI. ➡ *Efficient BM25 and Okapi BM25, which use the BERT Tokenizer* 🥇 BM25 and Okapi BM25 are popular ranking algorithms. Qdrant's FastEmbed currently supports dense embedding models only. We need a fast, efficient, and massively parallel Rust implementation of these ranking algorithms, with Python bindings. ➡ *ONNX Cross Encoders in Python* ⚔️ Export cross-encoder ranking models to run on the ONNX runtime and integrate them with Qdrant's FastEmbed to support efficient re-ranking. ➡ *Ranking Fusion Algorithms implementation in Rust* 🧪 Develop Rust implementations of various ranking fusion algorithms, including but not limited to Reciprocal Rank Fusion (RRF), and create Python bindings for the implemented Rust modules. For a complete list, see: https://github.com/AmenRa/ranx (a short illustrative RRF sketch follows at the end of this post). ➡ *Setup Jepsen to test Qdrant’s distributed guarantees* 💣 Design and write Jepsen tests based on implementations for other databases and create a report or blog post with the findings. See all details on our Notion page: https://www.notion.so/qdrant/GSoC-2024-ideas-1dfcc01070094d87bce104623c4c1110 Contributor application period begins on March 18th. We will accept applications via email. Let's contribute and celebrate together! In open-source, we trust! 🦀🤘🚀
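For anyone new to rank fusion, here is a tiny reference sketch of Reciprocal Rank Fusion in Python. It only illustrates the formula; the project idea itself calls for a Rust implementation with Python bindings, and the document ids below are purely made up.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document ids into a single ranking.

    Each document receives sum(1 / (k + rank)) over every list it appears in;
    k=60 is the constant commonly used in the RRF literature.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: fuse a BM25 ranking with a dense vector-search ranking
bm25_hits = ["doc3", "doc1", "doc7"]
dense_hits = ["doc1", "doc4", "doc3"]
print(reciprocal_rank_fusion([bm25_hits, dense_hits]))  # doc1 and doc3 rise to the top
```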
qdrant-landing/content/blog/how-to-meow-on-the-long-tail-with-cheshire-cat-ai-piero-and-nicola-vector-space-talks.md
--- draft: false title: How to meow on the long tail with Cheshire Cat AI? - Piero and Nicola | Vector Space Talks slug: meow-with-cheshire-cat short_description: Piero Savastano and Nicola Procopio discuss the ins and outs of Cheshire Cat AI. description: Cheshire Cat AI's Piero Savastano and Nicola Procopio discuss the framework's vector space complexities, community growth, and future cloud-based expansions. preview_image: /blog/from_cms/piero-and-nicola-bp-cropped.png date: 2024-04-09T03:05:00.000Z author: Demetrios Brinkmann featured: false tags: - LLM - Qdrant - Cheshire Cat AI - Vector Search - Vector database --- > *"We love Qdrant! It is our default DB. We support it in three different forms, file based, container based, and cloud based as well.”*\ — Piero Savastano > Piero Savastano is the Founder and Maintainer of the open-source project, Cheshire Cat AI. He started in Deep Learning pure research. He wrote his first neural network from scratch at the age of 19. After a period as a researcher at La Sapienza and CNR, he provides international consulting, training, and mentoring services in the field of machine and deep learning. He spreads Artificial Intelligence awareness on YouTube and TikTok. > *"Another feature is the quantization because with this Qdrant feature we improve the accuracy and the performance. We use the scalar quantization because we are model agnostic, and not other quantizations like the binary quantization.”*\ — Nicola Procopio > Nicola Procopio has more than 10 years of experience in data science and has worked in different sectors and markets from Telco to Healthcare. At the moment he works in the Media market, specifically on semantic search, vector spaces, and LLM applications. He has worked in the R&D area on data science projects and he has been and is currently a contributor to some open-source projects like Cheshire Cat. He is the author of popular science articles about data science on specialized blogs. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/2d58Xui99QaUyXclIE1uuH?si=68c5f1ae6073472f), Apple Podcasts, Podcast Addict, Castbox. You can also watch this episode on [YouTube](https://youtu.be/K40DIG9ZzAU?feature=shared).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/K40DIG9ZzAU?si=rK0EVXmvNJ5OSZa4" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/How-to-meow-on-the-long-tail-with-Cheshire-Cat-AI----Piero-and-Nicola--Vector-Space-Talks-018-e2h7k59/a-ab31teu" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Did you know that companies across Italy, Germany, and the USA are already harnessing the power of Cheshire Cat for a variety of nifty purposes? It's not just a pretty face; it's evolved from a simple tutorial to an influential framework! It’s time to learn how to meow! Piero in this episode of Vector Space Talks discusses the community and open-source nature that contributes to the framework's success and expansion while Nicola reveals the Cheshire Cat’s use of Qdrant and quantization to enhance search accuracy and performance in a hybrid mode. Here are the highlights from this episode: 1.
**The Art of Embedding:** Discover how Cheshire Cat uses collections with an embedder, fine-tuning them through scalar quantization and other methods to enhance accuracy and performance. 2. **Vectors in Harmony:** Get the lowdown on storing quantized vectors in a hybrid mode – it's all about saving memory without compromising on speed. 3. **Memory Matters:** Get the scoop on managing different types of memory within Qdrant, the go-to vector DB for Cheshire Cat. 4. **Community Chronicles:** Talking about the growing community that's shaping the evolution of Cheshire Cat - from enthusiasts to core contributors! 5. **Looking Ahead:** They've got grand plans brewing for a cloud version of Cheshire Cat. Imagine a marketplace buzzing with user-generated plugins. This is the future they're painting! > Fun Fact: The Cheshire Cat community on Discord plays a crucial role in the development and user support of the framework, described humorously by Piero as "a mess" due to its large and active nature. > ## Show notes: 00:00 Powerful open source framework.\ 06:11 Tutorials, code customization, conversational forms, community challenges.\ 09:09 Exploring Qdrant's memory features.\ 13:02 Qdrant experiments with document quantization.\ 17:52 Explore details, export, and memories.\ 20:42 Addressing challenges in ensuring Cheshire Cat's reliability.\ 23:36 Leveraging cool features presents significant challenges.\ 27:06 Plugin-based approach distinguishes the CAT framework.\ 29:28 Wrap up ## More Quotes from Piero and Nicola: *"We have a little partnership going on with Qdrant because the native DB in this framework is Qdrant.”*\ — Piero Savastano *"We explore the feature, the Qdrant aliases feature, and we call this topic the drunken cat effect because if we have several embedders, for example two models, two embedders with the same dimension, we can put in the collection, in the episodic or declarative collection, vectors from two different embeddings with the same dimension. But the points are different for the same sentences, and for the cat it is like for the human: when he mixes drinks he has a big headache and doesn't understand what it retrieved.”*\ — Nicola Procopio *"It's a classic language model assistant chat we have for each message you have explainability, you can upload documents. This is all handled automatically and we start with new stuff. You have a memory page where you can search through the memories of your cat, delete, explore collections, collection from Qdrant.”*\ — Piero Savastano *"Because I'm a researcher, a data scientist, I like to play with strange features like binary quantization, but we need to maintain the focus on the user needs, on the user behavior.”*\ — Nicola Procopio ## Transcript: Demetrios: What is up, good people of the Internet? We are here for another one of these vector space talks and I've got to say it's a special day. We've got the folks from Cheshire Cat coming at you full on today and I want to get it started right away because I know they got a lot to talk about. And today we get a two for one discount. It's going to be nothing like you have experienced before. Or maybe those are big words. I'm setting them up huge. We've got Piero coming at us live. Where you at, Piero? Piero, founder. Demetrios: There he is, founder at Cheshire Cat. And you are joined today by Nicola, one of the core contributors. It's great to have you both very excited. So you guys are going to be talking to us all about what you poetically put how to meow on the long tail with Cheshire Cat.
And so I know you've got some slides prepared. I know you've got all that fun stuff working right now and I'm going to let you hop right into it so we don't waste any time. You ready? Who wants to share their screen first? Is it you, Nicola, or go? Piero Savastano: I'll go. Thanks. Demetrios: Here we go. Man, you should be seeing it right now. Piero Savastano: Yes. Demetrios: Boom. Piero Savastano: Let's go. Thank you, Demetrios. We're happy to be hosted at the vector space talk. Let's talk about the Cheshire Cat AI. This is an open source framework. We have a little partnership going on with Qdrant because the native DB in this framework is Qdrant. It's a python framework. And before starting to get into the details, I'm going to show you a little video. Piero Savastano: This is the website. So you see, it's a classic language model assistant chat we have for each message you have explainability, you can upload documents. This is all handled automatically and we start with new stuff. You have a memory page where you can search through the memories of your cat, delete, explore collections, collection from Qdrant. We have a plugin system and you can publish any plugin. You can sell your plugin. There is a big ecosystem already and we also give explanation on memories. We have adapters for the most common language models. Piero Savastano: Dark team, you can do a lot of stuff with the framework. This is how it presents itself. We have a blog with tutorials, but going back to our numbers, it is open source, GPL licensed. We have some good numbers. We are mostly active in Italy and in a good part of Europe, East Europe, and also a little bit of our communities in the United States. There are a lot of contributors already and our docker image has been downloaded quite a few times, so it's really easy to start up and running because you just docker run and you're good to go. We have also a discord server with thousands of members. If you want to join us, it's going to be fun. Piero Savastano: We like meme, we like to build culture around code, so it is not just the code, these are the main components of the cat. You have a chat as usual. The rabbit hole is our module dedicated to document ingestion. You can extend all of these parts. We have an agent manager. Meddetter is the module to manage plugins. We have a vectordb which is Qdrant natively, by the way. We use both the file based Qdrant, the container version, and also we support the cloud version. Piero Savastano: So if you are using Qdrant, we support the whole stack. Right now with the framework we have an embedder and a large language model coming to the embedder and language models. You can use any language model or embedded you want, closed source API, open Ollama, self hosted anything. These are the main features. So the first feature of the cat is that he's ready to fight. It is already dogsized. It's model agnostic. One command in the terminal and you can meow. Piero Savastano: The other aspect is that there is not only a retrieval augmented generation system, but there is also an action agent. This is all customizable. You can plug in any script you want as an agent, or you can customize the ready default presence default agent. And one of our specialty is that we do retrieve augmented generation, not only on documents as everybody's doing, but we do also augmented generation over conversations. I can hear your keyboard. We do augmented generation over conversations and over procedures. 
So also our tools and form conversational forms are embedded into the DB. We have a big plugin system. Piero Savastano: It's really easy to use and with different primitives. We have hooks which are events, WordPress style events. We have tools, function calling, and also we just build up a spec for conversational forms. So you can use your assistant to order a pizza, for example, multitool conversation and order a pizza, book a flight. You can do operative stuff. I already told you, and I repeat a little, not just a runner, but it's a full fledged framework. So we built this not to use language model, but to build applications on top of language models. There is a big documentation where all the events are described. Piero Savastano: You find tutorials and with a few lines of code you can change the prompt. You can use long chain inspired tools, and also, and this is the big part we just built, you can use conversational forms. We launched directly on GitHub and in our discord a pizza challenge, where we challenged our community members to build up prototypes to support a multi turn conversational pizza order. And the result of this challenge is this spec where you define a pedantic model in Python and then you subclass the pizza form, the cut form from the framework, and you can give examples on utterances that triggers the form, stops the forms, and you can customize the submit function and any other function related to the form. So with a simple subclass you can handle pragmatic, operational, multi turn conversations. And I truly believe we are among the first in the world to build such a spec. We have a lot of plugins. Many are built from the community itself. Piero Savastano: Many people is already hosting private plugins. There is a little marketplace independent about plugins. All of these plugins are open source. There are many ways to customize the cat. The big advantage here is no vendor lock in. So since the framework is open and the plugin system can be open, you do not need to pass censorship from big tech giants. This is one of the best key points of moving the framework along the open source values for the future. We plan to add the multimodality. Piero Savastano: At the moment we are text only, but there are plugins to generate images. But we want to have images and sounds natively into the framework. We already accomplished the conversational forms. In a later talk we can speak in more detail about this because it's really cool and we want to integrate a knowledge graph into the framework so we can play with both symbolic vector representations and symbolic network ones like linked data, for example wikidata. This stuff is going to be really interesting within. Yes, we love the Qdrant. It is our default DB. We support it in three different forms, file based, container based, and cloud based also. Piero Savastano: But from now on I want to give word to Nicola, which is way more expert on this vector search topic and he wrote most of the part related to the DB. So thank you guys. Nicola to you. Nicola Procopio: Thanks Piero. Thanks Demetrios. I'm so proud to be hosted here because I'm a vector space talks fan. Okay, Qdrant is the vector DB of the cat and now I will try to explore the feature that we use on Cheshire Cat. The first slide, explain the cut's memory. Because Qdrant is our memory. We have a long term memory in three parts. 
The episodic memory when we store and manage the conversation, the chat, the declarative memory when we store and manage documents, and the procedural memory when we store and manage the tools. How to manage three memories with several embedders? Because the user can choose his favorite embedder and change it. Nicola Procopio: We explore the feature, the Qdrant aliases feature, and we call this topic the drunken cat effect because if we have several embedders, for example two models, two embedders with the same dimension, we can put in the collection, in the episodic or declarative collection, vectors from two different embeddings with the same dimension. But the points are different for the same sentences, and for the cat it is like for the human: when he mixes drinks he has a big headache and doesn't understand what it retrieved. To us the flow now is this. We create the collection with the name and we use the aliases to. Piero Savastano: Label. Nicola Procopio: This collection with the name of the embedder used. When the user changes the embedder, we check if the embedder has the same dimension. If it has the same dimension, we check also the aliases. If the alias is the same we don't change anything. Otherwise we create another collection, and this is the drunken cat effect. The first feature that we use in the cat. Another feature is the quantization because with this Qdrant feature we improve the accuracy and the performance. We use the scalar quantization because we are model agnostic, and not other quantizations like the binary quantization. Nicola Procopio: If you read the Qdrant documents, those are experimented not on all embedders, but on OpenAI and Cohere, if I remember well. The scalar quantization is used in the storage step. The vectors are quantized and stored in a hybrid mode, the original vector on disk, the quantized vector in RAM, and with this procedure we can use less memory. In case of Qdrant scalar quantization, the float32 elements are converted to int8, so a single element needs 75% less memory. In case of big embeddings, like, I don't know, Jina embeddings or Mistral embeddings with more than 1000 elements, this is a big improvement. The second part is the retriever step. We use a quantized query and the quantized vectors to calculate cosine similarity, and we have the top n results like a simple semantic search pipeline. Nicola Procopio: But if we want top n results in quantized mode, the quantized mode has less quality on the information, and we use the oversampling. The oversampling is a simple multiplication. If we want top n with n ten, with oversampling with a factor like 1.5, we have 15 results, quantized results. When we have these 15 quantized results, we retrieve also the same 15 unquantized vectors. And on these unquantized vectors we rescore based on the query and filter the best ten. This is an improvement because the retrieve step is so fast. Yes, because using these tips and tricks, the Cheshire Cat vector search achieves up. Piero Savastano: Four. Nicola Procopio: Times lower memory footprint and two times performance increase. We are so fast using this Qdrant feature. And last but not least, we go in deep on the memory. This is the visualization that Piero showed before. This is the vector space in 2D, we use t-SNE, it is very similar to the Qdrant cloud visualization. For the embeddings we have the search bar, how many vectors we want to retrieve. We can choose the memory and other filters.
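To make the flow Nicola describes above concrete, here is a minimal sketch using the Python qdrant-client: a collection named after the embedder with scalar quantization enabled, an alias that labels it, and a query that oversamples quantized candidates and rescores them with the original vectors. This is not the Cheshire Cat source code; the collection name, alias name, and vector size are only illustrative assumptions.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# One collection per embedder (name is illustrative), with int8 scalar quantization:
# original vectors stay on disk, quantized vectors live in RAM.
client.create_collection(
    collection_name="episodic_text-embedding-ada-002",
    vectors_config=models.VectorParams(
        size=1536, distance=models.Distance.COSINE, on_disk=True
    ),
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(
            type=models.ScalarType.INT8, always_ram=True
        )
    ),
)

# Label the collection with an alias, so switching embedders only means
# re-pointing the alias to another collection of the same dimension.
client.update_collection_aliases(
    change_aliases_operations=[
        models.CreateAliasOperation(
            create_alias=models.CreateAlias(
                collection_name="episodic_text-embedding-ada-002",
                alias_name="episodic",
            )
        )
    ]
)

# Search with oversampling: fetch 1.5x quantized candidates, rescore them
# with the original vectors, and keep the best 10.
query_vector = [0.0] * 1536  # placeholder: embed the query with the same embedder
hits = client.search(
    collection_name="episodic",
    query_vector=query_vector,
    limit=10,
    search_params=models.SearchParams(
        quantization=models.QuantizationSearchParams(rescore=True, oversampling=1.5)
    ),
)
```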
We can filter on the memory and we can wipe a memory or all memory and clean all our space. Nicola Procopio: We can go in deep using the details. We can pass on the dot and we have a bubble or use the detail, the detail and we have a list of first n results near our query for every memory. Last but not least, we can export and share our memory in two modes. The first is exporting the JSON using the export button from the UI. Or if you are very curious, you can navigate the folder in the project and share the long term memory folder with all the memories. Or the experimental feature is wake up the door mouse. This feature is simple, the download of Qdrant snapshots. This is experimental because the snapshot is very easy to download and we will work on faster methods to use it. Nicola Procopio: But now it works and sometimes us, some user use this feature for me is all and thank you. Demetrios: All right, excellent. So that is perfect timing. And I know there have been a few questions coming through in the chat, one from me. I think you already answered, Piero. But when we can have some pistachio gelato made from good old Cheshire cat. Piero Savastano: So the plan is make the cat order gelato from service from an API that can already be done. So we meet somewhere or at our house and gelato is going to come through the cat. The cat is able to take, each of us can do a different order, but to make the gelato itself, we're going to wait for more open source robotics to come to our way. And then we go also there. Demetrios: Then we do that, we can get the full program. How cool is that? Well, let's see, I'll give it another minute, let anyone from the chat ask any questions. This was really cool and I appreciate you all breaking down. Not only the space and what you're doing, but the different ways that you're using Qdrant and the challenges and the architecture behind it. I would love to know while people are typing in their questions, especially for you, Nicola, what have been some of the challenges that you've faced when you're dealing with just trying to get Cheshire Cat to be more reliable and be more able to execute with confidence? Nicola Procopio: The challenges are in particular to mix a lot of Qdrant feature with the user needs. Because I'm a researcher, a data scientist, I like to play with strange features like binary quantization, but we need to maintain the focus on the user needs, on the user behavior. And sometimes we cut some feature on the Cheshire cat because it's not important now for for the user and we can introduce some bug, or rather misunderstanding for the user. Demetrios: Can you hear me? Yeah. All right, good. Now I'm seeing a question come through in the chat that is asking if you are thinking about cloud version of the cat. Like a SaaS, it's going to come. It's in the works. Piero Savastano: It's in the works. Not only you can self host the cat freely, some people install it on a raspberry, so it's really lightweight. We plan to have an osted version and also a bigger plugin ecosystem with a little marketplace. Also user will be able to upload and maybe sell their plugins. So we want to build an know our vision is a WordPress style ecosystem. Demetrios: Very cool. Oh, that is awesome. 
So basically what I'm hearing from Nicola asking about some of the challenges are like, hey, there's some really cool features that we've got in Qdrant, but it's almost like you have to keep your eye on the prize and make sure that you're building for what people need and want instead of just using cool features because you can use cool features. And then Piero, you're saying, hey, we really want to enable people to be able to build more cool things and use all these cool different features and whatever flavors or tools they want to use. But we want to be that ecosystem creator so that anyone can bring and create an app on top of the ecosystem and then enable them to get paid also. So it's not just Cheshire cat getting paid, it's also the contributors that are creating cool stuff. Piero Savastano: Yeah. Community is the first protagonist without community. I'm going to tell you, the cat started as a tutorial. When chat GPT came out, I decided to do a little rug tutorial and I chose Qdrant as vector. I took OpenAI as a language model, and I built a little tutorial, and then from being a tutorial to show how to build an agent on GitHub, it completely went out of hand. So the whole framework is organically grown? Demetrios: Yeah, that's the best. That is really cool. Simone is asking if there's companies that are already using Cheshire cat, and if you can mention a few. Piero Savastano: Yeah, okay. In Italy, there are at least 1015 companies distributed along education, customer care, typical chatbot usage. Also, one of them in particular is trying to build for public administration, which is really hard to do on the international level. We are seeing something in Germany, like web agencies starting to use the cat a little on the USA. Mostly they are trying to build agents using the cat and Ollama as a runner. And a company in particular presented in a conference in Vegas a pitch about a 3d avatar. Inside the avatar, there is the cat as a linguistic device. Demetrios: Oh, nice. Piero Savastano: To be honest, we have a little problem tracking companies because we still have no telemetry. We decided to be no telemetry for the moment. So I hope companies will contribute and make themselves happen. If that does not, we're going to track a little more. But companies using the cat are at least in the 50, 60, 70. Demetrios: Yeah, nice. So if anybody out there is using the cat, and you have not talked to Piero yet, let him know so that he can have a good idea of what you're doing and how you're doing it. There's also another question coming through about the market analysis. Are there some competitors? Piero Savastano: There are many competitors. When you go down to what distinguishes the cat from many other frameworks that are coming out, we decided since the beginning to go for a plugin based operational agent. And at the moment, most frameworks are retrieval augmented generation frameworks. We have both retrieval augmented generation. We have tooling, we have forms. The tools and the forms are also embedded. So the cat can have 20,000 tools, because we also embed the tools and we make a recall over the function calling. So we scaled up both documents, conversation and tools, conversational forms, and I've not seen anybody doing that till now. Piero Savastano: So if you want to build an application, a pragmatic, operational application, to buy products, order pizza, do stuff, have a company assistant. The cat is really good at the moment. Demetrios: Excellent. 
Nicola Procopio: And the cat has a very big community on discord works. Piero Savastano: Our discord is a mess. Demetrios: You got the best memes around. If that doesn't make people join the discord, I don't know what will. Piero Savastano: Please, Nicola. Sorry for interrupting. Demetrios: No. Nicola Procopio: Okay. The community is a plus for Cheshire Cat because we have a lot of developer user on Discord, and for an open source project, the community is fundamentally 100%. Demetrios: Well fellas, this has been awesome. I really appreciate you coming on the vector space talks and sharing about the cat for anybody that is interested. Hopefully they go, they check it out, they join your community, they share some memes and they get involved, maybe even contribute back and create some tools. That would be awesome. So Piero and Nicola, I really appreciate your time. We'll see you all later. Piero Savastano: Thank you. Nicola Procopio: Thank you. Demetrios: And for anybody out there that wants to come on to the vector space talks and give us a bit of an update on how you're using Qdrant, we'd love to hear it. Just reach out and we'll schedule you in. Until next time. See y'all. Bye.
qdrant-landing/content/blog/hybrid-cloud-airbyte.md
--- draft: false title: "Elevate Your Data With Airbyte and Qdrant Hybrid Cloud" short_description: "Leverage Airbyte and Qdrant Hybrid Cloud for best-in-class data performance." description: "Leverage Airbyte and Qdrant Hybrid Cloud for best-in-class data performance." preview_image: /blog/hybrid-cloud-airbyte/hybrid-cloud-airbyte.png date: 2024-04-10T00:00:00Z author: Qdrant featured: false weight: 1013 tags: - Qdrant - Vector Database --- In their mission to support large-scale AI innovation, [Airbyte](https://airbyte.com/) and Qdrant are collaborating on the launch of Qdrant’s new offering - [Qdrant Hybrid Cloud](/hybrid-cloud/). This collaboration allows users to leverage the synergistic capabilities of both Airbyte and Qdrant within a private infrastructure. Qdrant’s new offering represents the first managed vector database that can be deployed in any environment. Businesses optimizing their data infrastructure with Airbyte are now able to host a vector database either on premise, or on a public cloud of their choice - while still reaping the benefits of a managed database product. This is a major step forward in offering enterprise customers incredible synergy for maximizing the potential of their AI data. Qdrant's new Kubernetes-native design, coupled with Airbyte’s powerful data ingestion pipelines meet the needs of developers who are both prototyping and building production-level apps. Airbyte simplifies the process of data integration by providing a platform that connects to various sources and destinations effortlessly. Moreover, Qdrant Hybrid Cloud leverages advanced indexing and search capabilities to empower users to explore and analyze their data efficiently. In a major benefit to Generative AI, businesses can leverage Airbyte's data replication capabilities to ensure that their data in Qdrant Hybrid Cloud is always up to date. This empowers all users of Retrieval Augmented Generation (RAG) applications with effective analysis and decision-making potential, all based on the latest information. Furthermore, by combining Airbyte's platform and Qdrant's hybrid cloud infrastructure, users can optimize their data operations while keeping costs under control via flexible pricing models tailored to individual usage requirements. > *“The new Qdrant Hybrid Cloud is an exciting addition that offers peace of mind and flexibility, aligning perfectly with the needs of Airbyte Enterprise users who value the same balance. Being open-source at our core, both Qdrant and Airbyte prioritize giving users the flexibility to build and test locally—a significant advantage for data engineers and AI practitioners. We're enthusiastic about the Hybrid Cloud launch, as it mirrors our vision of enabling users to confidently transition from local development and local deployments to a managed solution, with both cloud and hybrid cloud deployment options.”* AJ Steers, Staff Engineer for AI, Airbyte #### Optimizing Your GenAI Data Stack With Airbyte and Qdrant Hybrid Cloud By integrating Airbyte with Qdrant Hybrid Cloud, you can achieve seamless data ingestion from diverse sources into Qdrant's powerful indexing system. This integration enables you to derive valuable insights from your data. Here are some key advantages: **Effortless Data Integration:** Airbyte's intuitive interface lets you set up data pipelines that extract, transform, and load (ETL) data from various sources into Qdrant. 
Additionally, Qdrant Hybrid Cloud’s Kubernetes-native architecture means that the destination vector database can now be deployed in a few clicks to any environment. With such flexibility, you can supply even the most advanced RAG applications with optimal data pipelines. **Scalability and Performance:** With Airbyte and Qdrant Hybrid Cloud, you can scale your data infrastructure according to your needs. Whether you're dealing with terabytes or petabytes of data, this combination ensures optimal performance and scalability. This is a robust setup that is designed to meet the needs of large enterprises, ensuring a full spectrum of solutions for various projects and workloads. **Powerful Indexing and Search:** Qdrant Hybrid Cloud’s architecture combines the scalability of cloud infrastructure with the performance of on-premises indexing. Qdrant's advanced algorithms enable lightning-fast search and retrieval of data, even across large datasets. **Open-Source Compatibility:** Airbyte and Qdrant pride themselves on maintaining a reliable and mature integration that brings peace of mind to those prototyping and deploying large-scale AI solutions. Extensive open-source documentation and code samples help users of all skill levels in leveraging highly advanced features of data ingestion and vector search. #### Build a Modern GenAI Application With Qdrant Hybrid Cloud and Airbyte ![hybrid-cloud-airbyte-tutorial](/blog/hybrid-cloud-airbyte/hybrid-cloud-airbyte-tutorial.png) We put together an end-to-end tutorial to show you how to build a GenAI application with Qdrant Hybrid Cloud and Airbyte’s advanced data pipelines. #### Tutorial: Build a RAG System to Answer Customer Support Queries Learn how to set up a private AI service that addresses customer support issues with high accuracy and effectiveness. By leveraging Airbyte’s data pipelines with Qdrant Hybrid Cloud, you will create a customer support system that is always synchronized with up-to-date knowledge. [Try the Tutorial](/documentation/tutorials/rag-customer-support-cohere-airbyte-aws/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
qdrant-landing/content/blog/hybrid-cloud-aleph-alpha.md
--- draft: false title: "Enhance AI Data Sovereignty with Aleph Alpha and Qdrant Hybrid Cloud" short_description: "Empowering the world’s best companies in their AI journey." description: "Empowering the world’s best companies in their AI journey." preview_image: /blog/hybrid-cloud-aleph-alpha/hybrid-cloud-aleph-alpha.png date: 2024-04-11T00:01:00Z author: Qdrant featured: false weight: 1012 tags: - Qdrant - Vector Database --- [Aleph Alpha](https://aleph-alpha.com/) and Qdrant are on a joint mission to empower the world’s best companies in their AI journey. The launch of [Qdrant Hybrid Cloud](/hybrid-cloud/) furthers this effort by ensuring complete data sovereignty and hosting security. This latest collaboration is all about giving enterprise customers complete transparency and sovereignty to make use of AI in their own environment. By using a hybrid cloud vector database, those looking to leverage vector search for the AI applications can now ensure their proprietary and customer data is completely secure. Aleph Alpha’s state-of-the-art technology, offering unmatched quality and safety, cater perfectly to large-scale business applications and complex scenarios utilized by professionals across fields such as science, law, and security globally. Recognizing that these sophisticated use cases often demand comprehensive data processing capabilities beyond what standalone LLMs can provide, the collaboration between Aleph Alpha and Qdrant Hybrid Cloud introduces a robust platform. This platform empowers customers with full data sovereignty, enabling secure management of highly specific and sensitive information within their own infrastructure. Together with Aleph Alpha, Qdrant Hybrid Cloud offers an ecosystem where individual components seamlessly integrate with one another. Qdrant's new Kubernetes-native design coupled with Aleph Alpha's powerful technology meet the needs of developers who are both prototyping and building production-level apps. #### How Aleph Alpha and Qdrant Blend Data Control, Scalability, and European Standards Building apps with Qdrant Hybrid Cloud and Aleph Alpha’s models leverages some common value propositions: **Data Sovereignty:** Qdrant Hybrid Cloud is the first vector database that can be deployed anywhere, with complete database isolation, while still providing fully managed cluster management. Furthermore, as the best option for organizations that prioritize data sovereignty, Aleph Alpha offers foundation models which are aimed at serving regional use cases. Together, both products can be leveraged to keep highly specific data safe and isolated. **Scalable Vector Search:** Once deployed to a customer’s host of choice, Qdrant Hybrid Cloud provides a fully managed vector database that lets users effortlessly scale the setup through vertical or horizontal scaling. Deployed in highly secure environments, this is a robust setup that is designed to meet the needs of large enterprises, ensuring a full spectrum of solutions for various projects and workloads. **European Origins & Expertise**: With a strong presence in the European Union ecosystem, Aleph Alpha is ideally positioned to partner with European-based companies like Qdrant, providing local expertise and infrastructure that aligns with European regulatory standards. 
#### Build a Data-Sovereign AI System With Qdrant Hybrid Cloud and Aleph Alpha’s Models ![hybrid-cloud-aleph-alpha-tutorial](/blog/hybrid-cloud-aleph-alpha/hybrid-cloud-aleph-alpha-tutorial.png) To get you started, we created a comprehensive tutorial that shows how to build next-gen AI applications with Qdrant Hybrid Cloud and Aleph Alpha’s advanced models. #### Tutorial: Build a Region-Specific Contract Management System Learn how to develop an AI system that reads lengthy contracts and gives complex answers based on stored content. This system is completely hosted inside of Germany for GDPR compliance purposes. The tutorial shows how enterprises with a vast number of stored contract documents can leverage AI in a closed environment that doesn’t leave the hosting region, thus ensuring data sovereignty and security. [Try the Tutorial](/documentation/examples/rag-contract-management-stackit-aleph-alpha/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
qdrant-landing/content/blog/hybrid-cloud-cohere.md
--- draft: true title: "Qdrant Hybrid Cloud and Cohere Support Enterprise AI" short_description: "Next gen enterprise software will rely on revolutionary technologies by Qdrant Hybrid Cloud and Cohere." description: "Next gen enterprise software will rely on revolutionary technologies by Qdrant Hybrid Cloud and Cohere." preview_image: /blog/hybrid-cloud-cohere/hybrid-cloud-cohere.png date: 2024-04-10T00:01:00Z author: Qdrant featured: false weight: 1011 tags: - Qdrant - Vector Database --- We’re excited to share that Qdrant and [Cohere](https://cohere.com/) are partnering on the launch of [Qdrant Hybrid Cloud](/hybrid-cloud/) to enable global audiences to build and scale their AI applications quickly and securely. With Cohere's world-class large language models (LLMs), getting the most out of vector search becomes incredibly easy. Qdrant's new Hybrid Cloud offering and its Kubernetes-native design can be coupled with Cohere's powerful models and APIs. This combination allows for simple setup when prototyping and deploying AI solutions. It’s no secret that Retrieval Augmented Generation (RAG) has shown to be a powerful method of building conversational AI products, such as chatbots or customer support systems. With Cohere's managed LLM service, scientists and developers can tap into state-of-the-art text generation and understanding capabilities, all accessible via API. Qdrant Hybrid Cloud seamlessly integrates with Cohere’s foundation models, enabling convenient data vectorization and highly accurate semantic search. With Qdrant Hybrid Cloud, users have the flexibility to deploy their vector database in an environment of their choice. By using container-based scalable deployments, global businesses can keep both products deployed in the same hosting architecture. By combining Cohere’s foundation models with Qdrant’s vector search capabilities, developers can create robust and scalable GenAI applications tailored to meet the demands of modern enterprises. This powerful combination empowers organizations to build strong and secure applications that search, understand meaning and converse in text. #### Take Full Control of Your GenAI Application with Qdrant Hybrid Cloud and Cohere Building apps with Qdrant Hybrid Cloud and Cohere’s models comes with several key advantages: **Data Sovereignty:** Should you wish to keep both deployment together, this integration guarantees that your vector database is hosted in proximity to the foundation models and proprietary data, thereby reducing latency, supporting data locality, and safeguarding sensitive information to comply with regulatory requirements, such as GDPR. **Massive Scale Support:** Users can achieve remarkable efficiency and scalability in running complex queries across vast datasets containing billions of text objects and millions of users. This integration enables lightning-fast retrieval of relevant information, making it ideal for enterprise-scale applications where speed and accuracy are paramount. **Cost Efficiency:** By leveraging Qdrant's quantization for efficient data handling and pairing it with Cohere's scalable and affordable pricing structure, the price/performance ratio of this integration is next to none. Companies who are just getting started with both will have a minimal upfront investment and optimal cost management going forward. 
#### Start Building Your New App With Cohere and Qdrant Hybrid Cloud ![hybrid-cloud-cohere-tutorial](/blog/hybrid-cloud-cohere/hybrid-cloud-cohere-tutorial.png) We put together an end-to-end tutorial to show you how to build a GenAI application with Qdrant Hybrid Cloud and Cohere’s embeddings. #### Tutorial: Build a RAG System to Answer Customer Support Queries Learn how to set up a private AI service that addresses customer support issues with high accuracy and effectiveness. By leveraging Cohere’s models with Qdrant Hybrid Cloud, you will create a fully private customer support system. [Try the Tutorial](/documentation/tutorials/rag-customer-support-cohere-airbyte-aws/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
qdrant-landing/content/blog/hybrid-cloud-digitalocean.md
--- draft: false title: "Qdrant Hybrid Cloud and DigitalOcean for Scalable and Secure AI Solutions" short_description: "Enabling developers to deploy a managed vector database in their DigitalOcean Environment." description: "Enabling developers to deploy a managed vector database in their DigitalOcean Environment." preview_image: /blog/hybrid-cloud-digitalocean/hybrid-cloud-digitalocean.png date: 2024-04-11T00:02:00Z author: Qdrant featured: false weight: 1010 tags: - Qdrant - Vector Database --- Developers are constantly seeking new ways to enhance their AI applications with new customer experiences. At the core of this are vector databases, as they enable the efficient handling of complex, unstructured data, making it possible to power applications with semantic search, personalized recommendation systems, and intelligent Q&A platforms. However, when deploying such new AI applications, especially those handling sensitive or personal user data, privacy becomes important. [DigitalOcean](https://www.digitalocean.com/) and Qdrant are actively addressing this with an integration that lets developers deploy a managed vector database in their existing DigitalOcean environments. With the recent launch of [Qdrant Hybrid Cloud](/hybrid-cloud/), developers can seamlessly deploy Qdrant on DigitalOcean Kubernetes (DOKS) clusters, making it easier for developers to handle vector databases without getting bogged down in the complexity of managing the underlying infrastructure. #### Unlocking the Power of Generative AI with Qdrant and DigitalOcean User data is a critical asset for a business, and user privacy should always be a top priority. This is why businesses require tools that enable them to leverage their user data as a valuable asset while respecting privacy. Qdrant Hybrid Cloud on DigitalOcean brings these capabilities directly into developers' hands, enhancing deployment flexibility and ensuring greater control over data. > *“Qdrant, with its seamless integration and robust performance, equips businesses to develop cutting-edge applications that truly resonate with their users. Through applications such as semantic search, Q&A systems, recommendation engines, image search, and RAG, DigitalOcean customers can leverage their data to the fullest, ensuring privacy and driving innovation.“* - Bikram Gupta, Lead Product Manager, Kubernetes & App Platform, DigitalOcean. #### Get Started with Qdrant on DigitalOcean DigitalOcean customers can easily deploy Qdrant on their DigitalOcean Kubernetes (DOKS) clusters through a simple Kubernetis-native “one-line” installment. This simplicity allows businesses to start small and scale efficiently. - **Simple Deployment**: Leveraging Kubernetes, deploying Qdrant Hybrid Cloud on DigitalOcean is streamlined, making the management of vector search workloads in the own environment more efficient. - **Own Infrastructure**: Hosting the vector database on your DigitalOcean infrastructure offers flexibility and allows you to manage the entire AI stack in one place. - **Data Control**: Deploying within the own DigitalOcean environment ensures data control, keeping sensitive information within its security perimeter. To get Qdrant Hybrid Cloud setup on DigitalOcean, just follow these steps: - **Hybrid Cloud Setup**: Begin by logging into your [Qdrant Cloud account](https://cloud.qdrant.io/login) and activate **Hybrid Cloud** feature in the sidebar. 
- **Cluster Configuration**: From Hybrid Cloud settings, integrate your DigitalOcean Kubernetes clusters as a Hybrid Cloud Environment. - **Simplified Deployment**: Use the Qdrant Management Console to effortlessly establish and oversee your Qdrant clusters on DigitalOcean. #### Chat with PDF Documents with Qdrant Hybrid Cloud on DigitalOcean ![hybrid-cloud-llamaindex-tutorial](/blog/hybrid-cloud-llamaindex/hybrid-cloud-llamaindex-tutorial.png) We created a tutorial that guides you through setting up and leveraging Qdrant Hybrid Cloud on DigitalOcean for a RAG application. It highlights practical steps to integrate vector search with Jina AI's LLMs, optimizing the generation of high-quality, relevant AI content, while ensuring data sovereignty is maintained throughout. This specific system is tied together via the LlamaIndex framework. [Try the Tutorial](/documentation/tutorials/hybrid-search-llamaindex-jinaai/) For a comprehensive guide, our documentation provides detailed instructions on setting up Qdrant on DigitalOcean. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
qdrant-landing/content/blog/hybrid-cloud-haystack.md
--- draft: false title: "Qdrant Hybrid Cloud and Haystack for Enterprise RAG" short_description: "A winning combination for enterprise-scale RAG consists of a strong framework and a scalable database." description: "A winning combination for enterprise-scale RAG consists of a strong framework and a scalable database." preview_image: /blog/hybrid-cloud-haystack/hybrid-cloud-haystack.png date: 2024-04-10T00:02:00Z author: Qdrant featured: false weight: 1009 tags: - Qdrant - Vector Database --- We’re excited to share that Qdrant and [Haystack](https://haystack.deepset.ai/) are continuing to expand their seamless integration to the new [Qdrant Hybrid Cloud](/hybrid-cloud/) offering, allowing developers to deploy a managed vector database in their own environment of choice. Earlier this year, both Qdrant and Haystack, started to address their user’s growing need for production-ready retrieval-augmented-generation (RAG) deployments. The ability to build and deploy AI apps anywhere now allows for complete data sovereignty and control. This gives large enterprise customers the peace of mind they need before they expand AI functionalities throughout their operations. With a highly customizable framework like Haystack, implementing vector search becomes incredibly simple. Qdrant's new Qdrant Hybrid Cloud offering and its Kubernetes-native design supports customers all the way from a simple prototype setup to a production scenario on any hosting platform. Users can attach AI functionalities to their existing in-house software by creating custom integration components. Don’t forget, both products are open-source and highly modular! With Haystack and Qdrant Hybrid Cloud, the path to production has never been clearer. The elaborate integration of Qdrant as a Document Store simplifies the deployment of Haystack-based AI applications in any production-grade environment. Coupled with Qdrant’s Hybrid Cloud offering, your application can be deployed anyplace, on your own terms. >*“We hope that with Haystack 2.0 and our growing partnerships such as what we have here with Qdrant Hybrid Cloud, engineers are able to build AI systems with full autonomy. Both in how their pipelines are designed, and how their data are managed.”* Tuana Çelik, Developer Relations Lead, deepset. #### Simplifying RAG Deployment: Qdrant Hybrid Cloud and Haystack 2.0 Integration Building apps with Qdrant Hybrid Cloud and deepset’s framework has become even simpler with Haystack 2.0. Both products are completely optimized for RAG in production scenarios. Here are some key advantages: **Mature Integration:** You can connect your Haystack pipelines to Qdrant in a few lines of code. Qdrant Hybrid Cloud leverages the existing “Document Store” integration for data sources.This common interface makes it easy to access Qdrant as a data source from within your existing setup. **Production Readiness:** With deepset’s new product [Hayhooks](https://docs.haystack.deepset.ai/docs/hayhooks), you can generate RESTful APIs from Haystack pipelines. This simplifies the deployment process and makes the service easily accessible by developers using Qdrant Hybrid Cloud to prepare RAG systems for production. **Flexible & Customizable:** The open-source nature of Qdrant and Haystack’s 2.0 makes it easy to extend the capabilities of both products through customization. When tailoring vector RAG systems to their own needs, users can develop custom components and plug them into both Qdrant Hybrid Cloud and Haystack for maximum modularity. 
[Creating custom components](https://docs.haystack.deepset.ai/docs/custom-components) is a core functionality. #### Learn How to Build a Production-Level RAG Service with Qdrant and Haystack ![hybrid-cloud-haystack-tutorial](/blog/hybrid-cloud-haystack/hybrid-cloud-haystack-tutorial.png) To get you started, we created a comprehensive tutorial that shows how to build next-gen AI applications with Qdrant Hybrid Cloud using deepset’s Haystack framework. #### Tutorial: Private Chatbot for Interactive Learning Learn how to develop a tutor chatbot from online course materials. You will create a Retrieval Augmented Generation (RAG) pipeline with Haystack for enhanced generative AI capabilities and Qdrant Hybrid Cloud for vector search. By deploying every tool on RedHat OpenShift, you will ensure complete privacy and data sovereignty, whereby no course content leaves your cloud. [Try the Tutorial](/documentation/tutorials/rag-chatbot-red-hat-openshift-haystack/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to get started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
qdrant-landing/content/blog/hybrid-cloud-jinaai.md
--- draft: false title: "Cutting-Edge GenAI with Jina AI and Qdrant Hybrid Cloud" short_description: "Build your most successful app with Jina AI embeddings and on Qdrant Hybrid Cloud." description: "Build your most successful app with Jina AI embeddings and on Qdrant Hybrid Cloud." preview_image: /blog/hybrid-cloud-jinaai/hybrid-cloud-jinaai.png date: 2024-04-10T00:03:00Z author: Qdrant featured: false weight: 1008 tags: - Qdrant - Vector Database --- We're thrilled to announce the collaboration between Qdrant and [Jina AI](https://jina.ai/) for the launch of [Qdrant Hybrid Cloud](/hybrid-cloud/), empowering users worldwide to rapidly and securely develop and scale their AI applications. By leveraging Jina AI's top-tier large language models (LLMs), engineers and scientists can optimize their vector search efforts. Qdrant's latest Hybrid Cloud solution, designed natively with Kubernetes, seamlessly integrates with Jina AI's robust embedding models and APIs. This synergy streamlines both prototyping and deployment processes for AI solutions. Retrieval Augmented Generation (RAG) is broadly adopted as the go-to Generative AI solution, as it enables powerful and cost-effective chatbots, customer support agents and other forms of semantic search applications. Through Jina AI's managed service, users gain access to cutting-edge text generation and comprehension capabilities, conveniently accessible through an API. Qdrant Hybrid Cloud effortlessly incorporates Jina AI's embedding models, facilitating smooth data vectorization and delivering exceptionally precise semantic search functionality. With Qdrant Hybrid Cloud, users have the flexibility to deploy their vector database in an environment of their choice. By using container-based scalable deployments, global businesses can keep both products deployed in the same hosting architecture. By combining Jina AI’s models with Qdrant’s vector search capabilities, developers can create robust and scalable applications tailored to meet the demands of modern enterprises. This combination allows organizations to build strong and secure Generative AI solutions. > *“The collaboration of Qdrant Hybrid Cloud with Jina AI’s embeddings gives every user the tools to craft a perfect search framework with unmatched accuracy and scalability. It’s a partnership that truly pays off!”* Nan Wang, CTO, Jina AI #### Benefits of Qdrant’s Vector Search With Jina AI Embeddings in Enterprise RAG Scenarios Building apps with Qdrant Hybrid Cloud and Jina AI’s embeddings comes with several key advantages: **Seamless Deployment:** Jina AI’s best-in-class embedding APIs can be combined with Qdrant Hybrid Cloud’s Kubernetes-native architecture to deploy flexible and platform-agnostic AI solutions in a few minutes to any environment. This combination is purpose built for both prototyping and scalability, so that users can put together advanced RAG solutions anyplace with minimal effort. **Scalable Vector Search:** Once deployed to a customer’s host of choice, Qdrant Hybrid Cloud provides a fully managed vector database that lets users effortlessly scale the setup through vertical or horizontal scaling. Deployed in highly secure environments, this is a robust setup that is designed to meet the needs of large enterprises, ensuring a full spectrum of solutions for various projects and workloads. 
**Cost Efficiency:** By leveraging Jina AI's scalable and affordable pricing structure and pairing it with Qdrant's quantization for efficient data handling, this integration offers great value for its cost. Companies that are just getting started with both will have a minimal upfront investment and optimal cost management going forward.

#### Start Building Gen AI Apps With Jina AI and Qdrant Hybrid Cloud

![hybrid-cloud-jinaai-tutorial](/blog/hybrid-cloud-jinaai/hybrid-cloud-jinaai-tutorial.png)

To get you started, we created a comprehensive tutorial that shows how to build a modern GenAI application with Qdrant Hybrid Cloud and Jina AI embeddings.

#### Tutorial: Hybrid Search for Household Appliance Manuals

Learn how to build an app that retrieves information from PDF user manuals to enhance user experience for companies that sell household appliances. The system will leverage Jina AI embeddings and Qdrant Hybrid Cloud for enhanced generative AI capabilities, while the RAG pipeline will be tied together using the LlamaIndex framework. This example demonstrates how complex tables in PDF documentation can be processed as high-quality embeddings with no extra configuration. With Qdrant's Hybrid Search enabled, the RAG pipeline delivers highly accurate results.

[Try the Tutorial](/documentation/tutorials/hybrid-search-llamaindex-jinaai/)

#### Documentation: Deploy Qdrant in a Few Clicks

Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation.

[Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/)

#### Ready to Get Started?

Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
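As a rough illustration of the embedding flow described above, the sketch below embeds a few strings with Jina AI's embeddings API and indexes them in Qdrant with the official Python client. The endpoint URL, model name, and response shape are taken from Jina AI's public API at the time of writing and should be verified against their current documentation; the Qdrant URL is a local placeholder.

```python
# Minimal sketch: embed text with the Jina AI embeddings API and index it in Qdrant.
# Assumes: `pip install qdrant-client requests`, a JINA_API_KEY environment variable,
# and a Qdrant endpoint at http://localhost:6333. Verify the Jina AI request format
# against their current documentation before relying on it.

import os
import requests
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

texts = [
    "How do I descale the coffee machine?",
    "The dishwasher shows error E15 when the water inlet is blocked.",
]

# Call Jina AI's embeddings endpoint (OpenAI-style request/response shape).
resp = requests.post(
    "https://api.jina.ai/v1/embeddings",
    headers={"Authorization": f"Bearer {os.environ['JINA_API_KEY']}"},
    json={"model": "jina-embeddings-v2-base-en", "input": texts},
    timeout=30,
)
vectors = [item["embedding"] for item in resp.json()["data"]]

client = QdrantClient(url="http://localhost:6333")  # or your Hybrid Cloud endpoint
client.recreate_collection(
    collection_name="manuals",
    vectors_config=VectorParams(size=len(vectors[0]), distance=Distance.COSINE),
)
client.upsert(
    collection_name="manuals",
    points=[
        PointStruct(id=i, vector=v, payload={"text": t})
        for i, (v, t) in enumerate(zip(vectors, texts))
    ],
)

# Search with one of the embedded questions as the query vector.
hits = client.search(collection_name="manuals", query_vector=vectors[0], limit=1)
print(hits[0].payload["text"])
```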
qdrant-landing/content/blog/hybrid-cloud-langchain.md
--- draft: false title: "Developing Advanced RAG Systems with Qdrant Hybrid Cloud and LangChain " short_description: "Empowering engineers and scientists globally to easily and securely develop and scale their GenAI applications." description: "Empowering engineers and scientists globally to easily and securely develop and scale their GenAI applications." preview_image: /blog/hybrid-cloud-langchain/hybrid-cloud-langchain.png date: 2024-04-14T00:04:00Z author: Qdrant featured: false weight: 1007 tags: - Qdrant - Vector Database --- [LangChain](https://www.langchain.com/) and Qdrant are collaborating on the launch of [Qdrant Hybrid Cloud](/hybrid-cloud/), which is designed to empower engineers and scientists globally to easily and securely develop and scale their GenAI applications. Harnessing LangChain’s robust framework, users can unlock the full potential of vector search, enabling the creation of stable and effective AI products. Qdrant Hybrid Cloud extends the same powerful functionality of Qdrant onto a Kubernetes-based architecture, enhancing LangChain’s capability to cater to users across any environment. Qdrant Hybrid Cloud provides users with the flexibility to deploy their vector database in a preferred environment. Through container-based scalable deployments, companies can leverage cutting-edge frameworks like LangChain while maintaining compatibility with their existing hosting architecture for data sources, embedded models, and LLMs. This potent combination empowers organizations to develop robust and secure applications capable of text-based search, complex question-answering, recommendations and analysis. Despite LLMs being trained on vast amounts of data, they often lack user-specific or private knowledge. LangChain helps developers build context-aware reasoning applications, addressing this challenge. Qdrant’s vector database sifts through semantically relevant information, enhancing the performance gains derived from LangChain’s data connection features. With LangChain, users gain access to state-of-the-art functionalities for querying, chatting, sorting, and parsing data. Through the seamless integration of Qdrant Hybrid Cloud and LangChain, developers can effortlessly vectorize their data and conduct highly accurate semantic searches—all within their preferred environment. > *“The AI industry is rapidly maturing, and more companies are moving their applications into production. We're really excited at LangChain about supporting enterprises' unique data architectures and tooling needs through integrations and first-party offerings through LangSmith. First-party enterprise integrations like Qdrant's greatly contribute to the LangChain ecosystem with enterprise-ready retrieval features that seamlessly integrate with LangSmith's observability, production monitoring, and automation features, and we're really excited to develop our partnership further.”* -Erick Friis, Founding Engineer at LangChain #### Discover Advanced Integration Options with Qdrant Hybrid Cloud and LangChain Building apps with Qdrant Hybrid Cloud and LangChain comes with several key advantages: **Seamless Deployment:** With Qdrant Hybrid Cloud's Kubernetes-native architecture, deploying Qdrant is as simple as a few clicks, allowing you to choose your preferred environment. Coupled with LangChain's flexibility, users can effortlessly create advanced RAG solutions anywhere with minimal effort. 
**Open-Source Compatibility:** LangChain and Qdrant support a dependable and mature integration, providing peace of mind to those developing and deploying large-scale AI solutions. With comprehensive documentation, code samples, and tutorials, users of all skill levels can harness the advanced features of data ingestion and vector search to their fullest potential.

**Advanced RAG Performance:** By infusing LLMs with relevant context, Qdrant offers superior results for RAG use cases. Integrating vector search yields improved retrieval accuracy, faster query speeds, and reduced computational overhead. LangChain streamlines the entire process, offering speed, scalability, and efficiency, particularly beneficial for enterprise-scale deployments dealing with vast datasets. Furthermore, [LangSmith](https://www.langchain.com/langsmith) provides one-line instrumentation for debugging, observability, and ongoing performance testing of LLM applications.

#### Start Building With LangChain and Qdrant Hybrid Cloud: Develop a RAG-Based Employee Onboarding System

To get you started, we’ve put together a tutorial that shows how to create next-gen AI applications with Qdrant Hybrid Cloud using the LangChain framework and Cohere embeddings.

![hybrid-cloud-langchain-tutorial](/blog/hybrid-cloud-langchain/hybrid-cloud-langchain-tutorial.png)

#### Tutorial: Build a RAG System for Employee Onboarding

We created a comprehensive tutorial to show how you can build a RAG-based system with Qdrant Hybrid Cloud, LangChain, and Cohere’s embeddings. This use case is focused on building a question-answering system for internal corporate employee onboarding.

[Try the Tutorial](/documentation/tutorials/natural-language-search-oracle-cloud-infrastructure-cohere-langchain/)

#### Documentation: Deploy Qdrant in a Few Clicks

Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation.

[Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/)

#### Ready to Get Started?

Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
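To make the LangChain integration concrete, here is a minimal, hedged sketch of a Qdrant-backed retriever using Cohere embeddings, the same pairing the onboarding tutorial uses. It assumes the `langchain-qdrant` and `langchain-cohere` packages, a `COHERE_API_KEY` in the environment, and a local Qdrant endpoint; package and class names track the current integrations and may shift between releases.

```python
# Minimal sketch: a LangChain retriever backed by Qdrant, using Cohere embeddings.
# Assumes `pip install langchain-qdrant langchain-cohere qdrant-client`, a COHERE_API_KEY
# environment variable, and a Qdrant endpoint at http://localhost:6333 (swap in your
# Hybrid Cloud cluster URL and API key for a managed deployment).

from langchain_cohere import CohereEmbeddings
from langchain_qdrant import QdrantVectorStore

documents = [
    "New hires receive their laptop on the first day from the IT service desk.",
    "Expense reports are submitted through the internal finance portal.",
]

# Embed the texts and store them in a Qdrant collection in one call.
vector_store = QdrantVectorStore.from_texts(
    texts=documents,
    embedding=CohereEmbeddings(model="embed-english-v3.0"),
    url="http://localhost:6333",          # point at your Qdrant Hybrid Cloud cluster
    collection_name="onboarding_docs",
)

# Expose the store as a retriever that can be dropped into any LangChain RAG chain.
retriever = vector_store.as_retriever(search_kwargs={"k": 1})
print(retriever.invoke("Where do I get my laptop?")[0].page_content)
```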
qdrant-landing/content/blog/hybrid-cloud-launch-partners.md
--- draft: false title: "Qdrant's Trusted Partners for Hybrid Cloud Deployment" slug: hybrid-cloud-launch-partners short_description: "With the launch of Qdrant Hybrid Cloud we provide developers the ability to deploy Qdrant as a managed vector database in any desired environment." description: "With the launch of Qdrant Hybrid Cloud we provide developers the ability to deploy Qdrant as a managed vector database in any desired environment." preview_image: /blog/hybrid-cloud-launch-partners/hybrid-cloud-launch-partners.png social_preview_image: /blog/hybrid-cloud-launch-partners/hybrid-cloud-launch-partners.png date: 2024-04-15T00:02:00Z author: Manuel Meyer featured: false tags: - Hybrid Cloud - launch partners --- With the launch of [Qdrant Hybrid Cloud](/hybrid-cloud/) we provide developers the ability to deploy Qdrant as a managed vector database in any desired environment, be it *in the cloud, on premise, or on the edge*. We are excited to have trusted industry players support the launch of Qdrant Hybrid Cloud, allowing developers to unlock best-in-class advantages for building production-ready AI applications: - **Deploy In Your Own Environment:** Deploy the Qdrant vector database as a managed service on the infrastructure of choice, such as our launch partner solutions [Oracle Cloud Infrastructure (OCI)](https://blogs.oracle.com/cloud-infrastructure/post/qdrant-hybrid-cloud-now-available-oci-customers), [Red Hat OpenShift](/blog/hybrid-cloud-red-hat-openshift/), [Vultr](/blog/hybrid-cloud-vultr/), [DigitalOcean](/blog/hybrid-cloud-digitalocean/), [OVHcloud](/blog/hybrid-cloud-ovhcloud/), [Scaleway](/blog/hybrid-cloud-scaleway/), [Civo](/documentation/hybrid-cloud/platform-deployment-options/#civo), and [STACKIT](/blog/hybrid-cloud-stackit/). - **Seamlessly Integrate with Every Key Component of the Modern AI Stack:** Our new hybrid cloud offering also allows you to integrate with all of the relevant solutions for building AI applications. These include partner frameworks like [LlamaIndex](/blog/hybrid-cloud-llamaindex/), [LangChain](/blog/hybrid-cloud-langchain/), [Haystack by deepset](/blog/hybrid-cloud-haystack/), and [Airbyte](/blog/hybrid-cloud-airbyte/), as well as large language models (LLMs) like [JinaAI](/blog/hybrid-cloud-jinaai/) and [Aleph Alpha](/blog/hybrid-cloud-aleph-alpha/). - **Ensure Full Data Sovereignty and Privacy Control:** Qdrant Hybrid Cloud offers unparalleled data isolation and the flexibility to process workloads either in the cloud or on-premise, ensuring data privacy and sovereignty requirements - all while being fully managed. #### Try Qdrant Hybrid Cloud on Partner Platforms ![Hybrid Cloud Launch Partners Tutorials](/blog/hybrid-cloud-launch-partners/hybrid-cloud-launch-partners-tutorials.png) Together with our launch partners, we created in-depth tutorials and use cases for production-ready vector search that explain how developers can leverage Qdrant Hybrid Cloud alongside the best-in-class solutions of our launch partners. These tutorials demonstrate that Qdrant Hybrid Cloud is the most flexible foundation to build modern, customer-centric AI applications with endless deployment options and full data sovereignty. Let’s dive right in: **AI Customer Support Chatbot** with Qdrant Hybrid Cloud, Airbyte, Cohere, and AWS > This tutorial shows how to build a private AI customer support system using Cohere's AI models on AWS, Airbyte, and Qdrant Hybrid Cloud for efficient and secure query automation. 
[View Tutorial](/documentation/tutorials/rag-customer-support-cohere-airbyte-aws/)

**RAG System for Employee Onboarding** with Qdrant Hybrid Cloud, Oracle Cloud Infrastructure (OCI), Cohere, and LangChain

> This tutorial demonstrates how to use Oracle Cloud Infrastructure (OCI) for a secure setup that integrates Cohere's language models with Qdrant Hybrid Cloud, using LangChain to orchestrate natural language search for corporate documents, enhancing resource discovery and onboarding.

[View Tutorial](/documentation/tutorials/natural-language-search-oracle-cloud-infrastructure-cohere-langchain/)

**Hybrid Search for Product PDF Manuals** with Qdrant Hybrid Cloud, LlamaIndex, and JinaAI

> Create a RAG-based chatbot that enhances customer support by parsing product PDF manuals using Qdrant Hybrid Cloud, LlamaIndex, and JinaAI, with DigitalOcean as the cloud host. This tutorial will guide you through the setup and integration process, enabling your system to deliver precise, context-aware responses for household appliance inquiries.

[View Tutorial](/documentation/tutorials/hybrid-search-llamaindex-jinaai/)

**Region-Specific RAG System for Contract Management** with Qdrant Hybrid Cloud, Aleph Alpha, and STACKIT

> Learn how to streamline contract management with a RAG-based system in this tutorial, which utilizes Aleph Alpha’s embeddings and a region-specific cloud setup. Hosted on STACKIT with Qdrant Hybrid Cloud, this solution ensures secure, GDPR-compliant storage and processing of data, ideal for businesses with intensive contractual needs.

[View Tutorial](/documentation/tutorials/rag-contract-management-stackit-aleph-alpha/)

**Movie Recommendation System** with Qdrant Hybrid Cloud and OVHcloud

> Discover how to build a recommendation system with our guide on collaborative filtering, using sparse vectors and the Movielens dataset.

[View Tutorial](/documentation/tutorials/recommendation-system-ovhcloud/)

**Private RAG Information Extraction Engine** with Qdrant Hybrid Cloud and Vultr using DSPy and Ollama

> This tutorial teaches you how to handle and structure private documents with large unstructured data. Learn to use DSPy for information extraction, run your LLM with Ollama on Vultr, and manage data with Qdrant Hybrid Cloud on Vultr, perfect for regulated environments needing data privacy.

[View Tutorial](/documentation/tutorials/rag-chatbot-vultr-dspy-ollama/)

**RAG System That Chats with Blog Contents** with Qdrant Hybrid Cloud and Scaleway using LangChain

> Build a RAG system that combines blog scanning with the capabilities of semantic search. RAG enhances the generation of answers by retrieving relevant documents to aid the question-answering process. This setup showcases the integration of advanced search and AI language processing to improve information retrieval and generation tasks.

[View Tutorial](/documentation/tutorials/rag-chatbot-scaleway/)

**Private Chatbot for Interactive Learning** with Qdrant Hybrid Cloud and Red Hat OpenShift using Haystack

> In this tutorial, you will build a chatbot without public internet access. The goal is to keep sensitive data secure and isolated. Your RAG system will be built with Qdrant Hybrid Cloud on Red Hat OpenShift, leveraging Haystack for enhanced generative AI capabilities. This tutorial especially explores how this setup ensures that not a single data point leaves the environment.
[View Tutorial](/documentation/tutorials/rag-chatbot-red-hat-openshift-haystack/)

#### Supporting Documentation

Additionally, we built comprehensive documentation tutorials on how to successfully deploy Qdrant Hybrid Cloud on the infrastructure of your choice. For more information, please visit our documentation pages:

- [How to Deploy Qdrant Hybrid Cloud on AWS](/documentation/hybrid-cloud/platform-deployment-options/#amazon-web-services-aws)
- [How to Deploy Qdrant Hybrid Cloud on GCP](/documentation/hybrid-cloud/platform-deployment-options/#google-cloud-platform-gcp)
- [How to Deploy Qdrant Hybrid Cloud on Azure](/documentation/hybrid-cloud/platform-deployment-options/#mircrosoft-azure)
- [How to Deploy Qdrant Hybrid Cloud on DigitalOcean](/documentation/hybrid-cloud/platform-deployment-options/#digital-ocean)
- [How to Deploy Qdrant on Oracle Cloud](/documentation/hybrid-cloud/platform-deployment-options/#oracle-cloud-infrastructure)
- [How to Deploy Qdrant on Vultr](/documentation/hybrid-cloud/platform-deployment-options/#vultr)
- [How to Deploy Qdrant on Scaleway](/documentation/hybrid-cloud/platform-deployment-options/#scaleway)
- [How to Deploy Qdrant on OVHcloud](/documentation/hybrid-cloud/platform-deployment-options/#ovhcloud)
- [How to Deploy Qdrant on STACKIT](/documentation/hybrid-cloud/platform-deployment-options/#stackit)
- [How to Deploy Qdrant on Red Hat OpenShift](/documentation/hybrid-cloud/platform-deployment-options/#red-hat-openshift)
- [How to Deploy Qdrant on Linode](/documentation/hybrid-cloud/platform-deployment-options/#akamai-linode)
- [How to Deploy Qdrant on Civo](/documentation/hybrid-cloud/platform-deployment-options/#civo)

#### Get Started Now!

[Qdrant Hybrid Cloud](/hybrid-cloud/) marks a significant advancement in vector databases, offering the most flexible way to implement vector search. You can test out Qdrant Hybrid Cloud today! Simply sign up for or log into your [Qdrant Cloud account](https://cloud.qdrant.io/login) and get started in the **Hybrid Cloud** section. Also, to learn more about Qdrant Hybrid Cloud read our [Official Release Blog](/blog/hybrid-cloud/) or our [Qdrant Hybrid Cloud website](/hybrid-cloud/). For additional technical insights, please read our [documentation](/documentation/hybrid-cloud/).

[![hybrid-cloud-get-started](/blog/hybrid-cloud-launch-partners/hybrid-cloud-get-started.png)](https://cloud.qdrant.io/login)
qdrant-landing/content/blog/hybrid-cloud-llamaindex.md
--- draft: false title: "New RAG Horizons with Qdrant Hybrid Cloud and LlamaIndex" short_description: "Unlock the most advanced RAG opportunities with Qdrant Hybrid Cloud and LlamaIndex." description: "Unlock the most advanced RAG opportunities with Qdrant Hybrid Cloud and LlamaIndex." preview_image: /blog/hybrid-cloud-llamaindex/hybrid-cloud-llamaindex.png date: 2024-04-10T00:04:00Z author: Qdrant featured: false weight: 1006 tags: - Qdrant - Vector Database --- We're happy to announce the collaboration between [LlamaIndex](https://www.llamaindex.ai/) and [Qdrant’s new Hybrid Cloud launch](/hybrid-cloud/), aimed at empowering engineers and scientists worldwide to swiftly and securely develop and scale their GenAI applications. By leveraging LlamaIndex's robust framework, users can maximize the potential of vector search and create stable and effective AI products. Qdrant Hybrid Cloud offers the same Qdrant functionality on a Kubernetes-based architecture, which further expands the ability of LlamaIndex to support any user on any environment. With Qdrant Hybrid Cloud, users have the flexibility to deploy their vector database in an environment of their choice. By using container-based scalable deployments, companies can leverage a cutting-edge framework like LlamaIndex, while staying deployed in the same hosting architecture as data sources, embedding models and LLMs. This powerful combination empowers organizations to build strong and secure applications that search, understand meaning and converse in text. While LLMs are trained on a great deal of data, they are not trained on user-specific data, which may be private or highly specific. LlamaIndex meets this challenge by adding context to LLM-based generation methods. In turn, Qdrant’s popular vector database sorts through semantically relevant information, which can further enrich the performance gains from LlamaIndex’s data connection features. With LlamaIndex, users can tap into state-of-the-art functions to query, chat, sort or parse data. Through the integration of Qdrant Hybrid Cloud and LlamaIndex developers can conveniently vectorize their data and perform highly accurate semantic search - all within their own environment. > *“LlamaIndex is thrilled to partner with Qdrant on the launch of Qdrant Hybrid Cloud, which upholds Qdrant's core functionality within a Kubernetes-based architecture. This advancement enhances LlamaIndex's ability to support diverse user environments, facilitating the development and scaling of production-grade, context-augmented LLM applications.”* Jerry Liu, CEO and Co-Founder, LlamaIndex #### Reap the Benefits of Advanced Integration Features With Qdrant and LlamaIndex Building apps with Qdrant Hybrid Cloud and LlamaIndex comes with several key advantages: **Seamless Deployment:** Qdrant Hybrid Cloud’s Kubernetes-native architecture lets you deploy Qdrant in a few clicks, to an environment of your choice. Combined with the flexibility afforded by LlamaIndex, users can put together advanced RAG solutions anyplace at minimal effort. **Open-Source Compatibility:** LlamaIndex and Qdrant pride themselves on maintaining a reliable and mature integration that brings peace of mind to those prototyping and deploying large-scale AI solutions. Extensive documentation, code samples and tutorials support users of all skill levels in leveraging highly advanced features of data ingestion and vector search. 
**Advanced Search Features:** LlamaIndex comes with built-in Qdrant Hybrid Search functionality, which combines search results from sparse and dense vectors. As a highly sought-after use case, hybrid search is easily accessible from within the LlamaIndex ecosystem. Deploying this particular type of vector search on Hybrid Cloud is a matter of a few lines of code.

#### Start Building With LlamaIndex and Qdrant Hybrid Cloud: Hybrid Search in Complex PDF Documentation Use Cases

To get you started, we created a comprehensive tutorial that shows how to build next-gen AI applications with Qdrant Hybrid Cloud using the LlamaIndex framework and the LlamaParse API.

![hybrid-cloud-llamaindex-tutorial](/blog/hybrid-cloud-llamaindex/hybrid-cloud-llamaindex-tutorial.png)

#### Tutorial: Hybrid Search for Household Appliance Manuals

Use this end-to-end tutorial to create a system that retrieves information from complex user manuals in PDF format to enhance user experience for companies that sell household appliances. You will build a RAG pipeline with LlamaIndex leveraging Qdrant Hybrid Cloud for enhanced generative AI capabilities. The LlamaIndex integration shows how complex tables inside of items’ PDF documents can be processed via hybrid vector search with no additional configuration.

[Try the Tutorial](/documentation/tutorials/hybrid-search-llamaindex-jinaai/)

#### Documentation: Deploy Qdrant in a Few Clicks

Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation.

[Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/)

#### Ready to Get Started?

Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
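Since the hybrid search setup really is only a few lines, here is an illustrative sketch using the LlamaIndex Qdrant integration with `enable_hybrid=True`. It assumes `llama-index`, the `llama-index-vector-stores-qdrant` integration with `fastembed` for sparse vectors, an embedding model configured for LlamaIndex (the default expects an OpenAI key), and a local Qdrant endpoint; parameter names follow the integration's documentation and may change between versions.

```python
# Minimal sketch: LlamaIndex hybrid (dense + sparse) retrieval on top of Qdrant.
# Assumes `pip install llama-index llama-index-vector-stores-qdrant fastembed`,
# a Qdrant endpoint at http://localhost:6333, and an embedding model configured for
# LlamaIndex (the default uses OpenAI and expects OPENAI_API_KEY).

from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")  # or your Hybrid Cloud endpoint

# enable_hybrid=True tells the integration to store sparse vectors alongside dense ones.
vector_store = QdrantVectorStore(
    client=client,
    collection_name="appliance_manuals",
    enable_hybrid=True,
)

docs = [Document(text="To reset the oven, hold the clock button for five seconds.")]
index = VectorStoreIndex.from_documents(
    docs,
    storage_context=StorageContext.from_defaults(vector_store=vector_store),
)

# Hybrid retrieval: dense similarity plus sparse keyword-style matching, fused by Qdrant.
retriever = index.as_retriever(
    vector_store_query_mode="hybrid",
    similarity_top_k=2,
    sparse_top_k=12,
)
print(retriever.retrieve("How do I reset the oven?")[0].node.get_content())
```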
qdrant-landing/content/blog/hybrid-cloud-oracle-cloud-infrastructure.md
--- draft: true title: "OCI and Qdrant Hybrid Cloud for Maximum Data Sovereignty" short_description: "Qdrant Hybrid Cloud is now available for OCI customers as a managed vector search engine for data-sensitive AI apps." description: "Qdrant Hybrid Cloud is now available for OCI customers as a managed vector search engine for data-sensitive AI apps." preview_image: /blog/hybrid-cloud-oracle-cloud-infrastructure/hybrid-cloud-oracle-cloud-infrastructure.png date: 2024-04-11T00:03:00Z author: Qdrant featured: false weight: 1005 tags: - Qdrant - Vector Database --- Qdrant and [Oracle Cloud Infrastructure (OCI) Cloud Engineering](https://www.oracle.com/cloud/) are thrilled to announce the ability to deploy [Qdrant Hybrid Cloud](/hybrid-cloud/) as a managed service on OCI. This marks the next step in the collaboration between Qdrant and Oracle Cloud Infrastructure, which will enable enterprises to realize the benefits of artificial intelligence powered through scalable vector search. In 2023, OCI added Qdrant to its [Oracle Cloud Infrastructure solution portfolio](https://blogs.oracle.com/cloud-infrastructure/post/vecto-database-qdrant-support-oci-kubernetes). Qdrant Hybrid Cloud is the managed service of the Qdrant vector search engine that can be deployed and run in any existing OCI environment, allowing enterprises to run fully managed vector search workloads in their existing infrastructure. This is a milestone for leveraging a managed vector search engine for data-sensitive AI applications. In the past years, enterprises have been actively engaged in exploring AI applications to enhance their products and services or unlock internal company knowledge to drive the productivity of teams. These applications range from generative AI use cases, for example, powered by retrieval augmented generation (RAG), recommendation systems, or advanced enterprise search through semantic, similarity, or neural search. As these vector search applications continue to evolve and grow with respect to dimensionality and complexity, it will be increasingly relevant to have a scalable, manageable vector search engine, also called out by Gartner’s 2024 Impact Radar. In addition to scalability, enterprises also require flexibility in deployment options to be able to maximize the use of these new AI tools within their existing environment, ensuring interoperability and full control over their data. > *"We are excited to partner with Qdrant to bring their powerful vector search capabilities to Oracle Cloud Infrastructure. By offering Qdrant Hybrid Cloud as a managed service on OCI, we are empowering enterprises to harness the full potential of AI-driven applications while maintaining complete control over their data. This collaboration represents a significant step forward in making scalable vector search accessible and manageable for businesses across various industries, enabling them to drive innovation, enhance productivity, and unlock valuable insights from their data."* Dr. Sanjay Basu, Senior Director of Cloud Engineering, AI/GPU Infrastructure at Oracle. #### How Qdrant and OCI Support Enterprises in Unlocking Value Through AI Deploying Qdrant Hybrid Cloud on OCI facilitates vector search in production environments without altering existing setups, ideal for enterprises and developers leveraging OCI's services. 
Key benefits include:

- **Seamless Deployment:** Qdrant Hybrid Cloud’s Kubernetes-native architecture allows you to simply connect your OCI cluster as a Hybrid Cloud Environment and deploy Qdrant with a one-step installation, ensuring a smooth and scalable setup.
- **Seamless Integration with OCI Services:** The integration facilitates efficient resource utilization and enhances security provisions by leveraging OCI's comprehensive suite of services.
- **Simplified Cluster Management:** Qdrant’s central cluster management lets you scale your cluster on OCI (vertically and horizontally) and supports seamless zero-downtime upgrades and disaster recovery.
- **Control and Data Privacy:** Deploying Qdrant on OCI ensures complete data isolation, while you still enjoy the benefits of fully managed cluster operations.

#### Qdrant on OCI in Action: Building a RAG System for AI-Enabled Support

![hybrid-cloud-oracle-cloud-infrastructure-tutorial](/blog/hybrid-cloud-oracle-cloud-infrastructure/hybrid-cloud-oracle-cloud-infrastructure-tutorial.png)

We created a comprehensive tutorial to show how to leverage the benefits of Qdrant Hybrid Cloud on OCI and build AI applications with a focus on data sovereignty. This use case is focused on building a RAG system for FAQ, leveraging the strengths of Qdrant Hybrid Cloud for vector search, Oracle Cloud Infrastructure (OCI) as a managed Kubernetes provider, Cohere models for embedding, and LangChain as a framework.

[Try the Tutorial](/documentation/tutorials/natural-language-search-oracle-cloud-infrastructure-cohere-langchain/)

Deploying Qdrant Hybrid Cloud on Oracle Cloud Infrastructure only takes a few minutes due to the seamless Kubernetes-native integration. You can get started by following these three steps:

1. **Hybrid Cloud Activation:** Start by signing into your [Qdrant Cloud account](https://qdrant.to/cloud) and activate **Hybrid Cloud**.
2. **Cluster Integration:** In the Hybrid Cloud section, add your OCI Kubernetes clusters as a Hybrid Cloud Environment.
3. **Effortless Deployment:** Utilize the Qdrant Management Console to seamlessly create and manage your Qdrant clusters on OCI.

You can find a detailed description in our documentation focused on deploying Qdrant on OCI.

[Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/)

#### Ready to Get Started?

Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
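Once your cluster is running on OCI, connecting from application code is the same as with any Qdrant deployment. The sketch below uses the official `qdrant-client` package to check connectivity and create a collection sized for Cohere's `embed-english-v3.0` embeddings (1024 dimensions), as used in the tutorial; the URL and API key are placeholders for the values shown in your Qdrant Cloud console.

```python
# Minimal sketch: connect to a Qdrant Hybrid Cloud cluster running on OCI and create
# a collection for the FAQ RAG system. URL and API key are placeholders; use the
# endpoint and credentials shown for your cluster in the Qdrant Cloud console.

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

client = QdrantClient(
    url="https://qdrant.example.internal:6333",  # hypothetical in-cluster endpoint
    api_key="YOUR_CLUSTER_API_KEY",
)

# Confirm the cluster is reachable before wiring it into the RAG pipeline.
print(client.get_collections())

# 1024 dimensions matches Cohere's embed-english-v3.0 model used in the tutorial.
client.create_collection(
    collection_name="faq",
    vectors_config=VectorParams(size=1024, distance=Distance.COSINE),
)
```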